Monday Jun 21, 2010

OpenJDK 6: b20 Source Bundle Published

On June 21, 2010 the source bundle for OpenJDK 6 b20 was published.

The predominant change in this build was rebranding, replacing Sun copyrights with Oracle ones. In the HotSpot repository, this was largely accomplished by Andrew John Hughes's backport of HotSpot 17 into OpenJDK 6. Additional fixes for Zero were also applied as were backports of Nimbus and timezone changes. On the build front, the jaxp and jax-ws source bundles are not downloaded by default anymore. To allow them to be downloaded, add "ALLOW_DOWNLOADS=true" to the top-level make command.

A detailed list of all the changes is also available.

Tuesday Jun 15, 2010

Syntax Sin Tax

In various forums, recent discussions about Project Lambda have commented on, and often noted with dismay, the current syntax for lambda expressions in the initial prototype. "Don't panic!" is advice as valid for work on language evolution as for any other endeavor. Since syntax is the easiest aspect of a language change to form an opinion on, it is the aspect of language changes most susceptible to bikeshedding. While syntax is an important component of language changes, it is far from the only important component; the semantics matter too! Fixation on the syntax of a feature early in its development is premature and counterproductive. Having a prototype to gain actual experience with the feature is more valuable than continued informed analysis and commentary without working code. I believe this diagram included in a talk on the Project Coin language change process holds for language changes in Java more generally:

While proposing and commenting can be helpful, the effort required to produce a prototype is disproportionately beneficial, and the incremental effort of using the prototype has even higher leverage. Experience trumps speculation. And not all efforts lead to positive results; complaining and obstructing alone are rarely helpful contributions.

Just the engineering needed to fully deliver a language change involves many coordinated deliverables, even without including documentation, samples, and user guides. A consequence of an open style of development is that changes are pushed early, even if not often, and early changes imply the full fit and finish of a final product will of necessity not be present from the beginning. Long digressions on small issues, syntactical or otherwise, are a distraction from the other work that needs to get done.

True participation in a project means participating in the work of the project. The work of a language change involves much more than just discussing syntax. Once a prototype exists, the most helpful contribution is to use the prototype and report experiences using it.

Wednesday Jun 09, 2010

Project Coin: Inducing contributory heap pollution

US patent law defines various kinds of patent infringement, as do other jurisdictions. (I am not a lawyer! This is not legal advice! Check your local listings! Don't kill kittens! Example being used for analogy purposes only! ) One can infringe on a patent directly, say, by making, using, selling, offering to sell, or importing a patented widget without a suitable license. A computer scientist looking to infringe might (erroneously) believe the conditions for infringement can be circumvented by applying the familiar technique of adding a level of indirection. For example, one indirection would be selling 90 percent of the patented widget, leaving the end-user to complete the final 10 percent and thereby infringe. Such contributory infringement is also verboten. Likewise, providing step-by-step instructions on how to infringe the patent is outlawed as inducing infringement. Putting both techniques together, inducing contributory infringement is also disallowed.

Starting in JDK 5, a compiler must issue mandatory unchecked warnings at sites of possible heap pollution:

Java Language Specification, Third Edition — § Heap Pollution
It is possible that a variable of a parameterized type refers to an object that is not of that parameterized type. This situation is known as heap pollution. This situation can only occur if the program performed some operation that would give rise to an unchecked warning at compile-time.

One case where unchecked warnings occur is a call to a varargs method where the type of the variable argument is not reifiable, that is, where the type information for the parameter is not fully expressible at runtime due to the erasure of generics. Varargs are implemented using arrays and arrays are reified; that is, the component type of an array is stored internally and used when needed for various type checks at runtime. However, the type information stored for an array's component type cannot store the information needed to represent a non-reifiable parameterized type.

The mismatch between reified arrays being used to pass non-reified (and non-reifiable) parameterized types is the basis for the unchecked warnings when such conflicted methods are called. However, in JDK 5, only calling one of these conflicted methods causes a compile-time warning; declaring such a method doesn't lead to any similar warning. This is analogous to the compiler only warning of direct patent infringement while ignoring, or being oblivious to, indirect infringement. While the mere existence of a conflicted varargs method does not cause heap pollution per se, its existence contributes to heap pollution by providing an easy way to cause heap pollution to occur and induces heap pollution by offering the method to be called. By this reasoning, if method calls that cause heap pollution deserve a compiler warning, so do method declarations which induce contributory heap pollution.

Additionally, the warnings issued for some calls to varargs methods involving heap pollution are arguably spurious since nothing bad happens. For example, calling various useful helper varargs methods in the platform triggers unchecked warnings, including:

These three methods all iterate over the varargs array pulling out the elements in turn and processing them. If the varargs array is constructed by the compiler using proper type inference, the bodies of the methods won't experience any ClassCastExceptions due to handling of the array's elements. Currently, to eliminate the warnings associated with calling these methods, each call site needs a @SuppressWarnings("unchecked") annotation.
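As a minimal, self-contained sketch of how heap pollution arises through a varargs method, consider the hypothetical helper below (the class and method names are illustrative, not from the JDK). Calling it draws an unchecked warning from javac, and here the warning is deserved: the method stores into the varargs array, so a later read fails.

```java
import java.util.Arrays;
import java.util.List;

public class HeapPollutionDemo {
    // Hypothetical varargs method with a non-reifiable parameter type;
    // every call site gets an unchecked warning from the compiler.
    static List<String> firstOf(List<String>... lists) {
        Object[] array = lists;          // legal: arrays are covariant
        array[0] = Arrays.asList(42);    // heap pollution: no ArrayStoreException,
                                         // since the runtime component type is just List
        return lists[0];                 // returns a List<Integer> as a List<String>
    }

    // Returns true if the compiler-inserted cast fails when an element is read.
    static boolean triggersClassCastException() {
        List<String> polluted = firstOf(Arrays.asList("hello"));
        try {
            String s = polluted.get(0);  // implicit checkcast to String fails here
            return s == null;            // not reached
        } catch (ClassCastException expected) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(triggersClassCastException()); // true
    }
}
```

The helper methods in the platform differ from this sketch in exactly the property that matters: they only read from the varargs array, so their call-site warnings are noise.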

To address these usability issues with varargs, Project Coin accepted simplifying varargs as one of the project's changes. The initial prototype version of this feature, pushed by Maurizio, has several parts:

  • A new mandatory compiler warning is generated on declaration sites of problematic varargs methods that are able to induce contributory heap pollution.

  • The ability to suppress those mandatory warnings at a declaration site using an @SuppressWarnings("varargs") annotation. The warnings may also be suppressed using the -Xlint:-varargs option to the compiler.

  • If the @SuppressWarnings("varargs") annotation is used on a problematic varargs method declaration, the unchecked warnings at call sites of that method are also suppressed.
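As a hedged sketch of what such a well-behaved declaration looks like: the prototype's @SuppressWarnings("varargs") mechanism eventually evolved into the @Documented @SafeVarargs annotation that shipped in JDK 7, which is used below; the concat helper itself is illustrative, not a platform method.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class SafeConcat {
    // The annotation asserts, as part of the method's contract, that nothing
    // bad happens to the compiler-constructed array: the body only reads
    // elements and never stores into the array.
    @SafeVarargs
    static <T> List<T> concat(List<? extends T>... lists) {
        List<T> result = new ArrayList<>();
        for (List<? extends T> l : lists) {
            result.addAll(l);   // read-only use of the varargs array
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> all = concat(Arrays.asList("a"), Arrays.asList("b", "c"));
        System.out.println(all); // [a, b, c]
    }
}
```

With the annotation present, neither the declaration nor call sites such as the one in main produce varargs-related warnings.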

This prototype will allow experience to be gained with the algorithms to detect and suppress the new mandatory warnings. However, the annotation used to suppress the warnings should be part of the varargs method's contract, denoting that when a compiler-constructed array is passed to the method nothing bad will happen, for a suitable definition of nothing bad. Therefore, an @Documented annotation type needs to be used for this purpose, and SuppressWarnings is not @Documented. Additionally, the suppressing annotation for varargs should also be @Inherited so the method implementation restrictions are passed on to subclasses.

Subsequent design discussions about a new annotation type with the properties in question to suppress the varargs warnings, as well as about the criteria for the annotation to be correctly applied, can occur on the Project Coin mailing list.

Thursday Jun 03, 2010

Shaped to a T

Last week I attended a talk by John Hennessy on the "Future of Research Universities" (video). The talk ranged over many subjects, from distinctions between graduate and undergraduate learning, to the difference between training and education, and distinguishing properties of Silicon Valley.

Toward the end of the hour, Hennessy noted the importance of "T-shaped" researchers for effective collaboration: deep enough to contribute in their own field, but with enough breadth to work with colleagues in other areas. Applying the venerable computer science technique of recursion gives a T-shaped fractal all the way down, analogous to Koch's curve:

T-shaped all the way down.

Having a well-balanced combination of depth and breadth strikes me as being important in many other endeavours too. For example, I was reminded of the different stability versus progress trade-offs that developers need to be sensitive to when working on the JDK, such as the benefits of changes in a given area versus the impact on the rest of the release.

Monday May 10, 2010

JavaOne 2010 Talks Accepted!

I was happy to be notified today that I have two JavaOne talks accepted for this year. The first is a session on Project Coin: Small Language Changes for JDK 7 in the core Java platform track. When I spoke at JavaOne 2009 about Project Coin, the feature selection wasn't yet finalized and I covered some of the general concerns that influence language evolution and spoke on feature selection methodology. Now that additional Coin features are becoming available in JDK 7 builds, this year I expect to focus more on experiences implementing and using the new features.

My second accepted talk is a BOF on Patents, Copyrights, and TMs: An Intellectual Property Primer for Engineers over in the Java Frontier track. I've wanted to put together a talk on this topic for several years to condense what I've learned from taking a few classes on intellectual property and filing (and eventually getting issued) several patents.

See you in San Francisco in a few short months!

Friday May 07, 2010

Draft of Restarted "OpenJDK Developers' Guide" available for discussion

I've been working on a restarted version of the "OpenJDK Developers' Guide" and I have a draft far enough along for general discussion. The content of the existing guide is primarily logistical and procedural in nature; in time, I plan to migrate this information to a JDK 7 specific page because many of the details are release-specific. The new guide is more conceptual and once completed is intended to be able to last for several releases without major updating.

The table of contents of draft version 0.775 is:

The full draft is available from here.

The compatibility sections are currently more fully developed than the ones about developing a change. (Long-time readers of this blog will be familiar with earlier versions of some of the material.)

All levels of feedback are welcome, from correcting typos, to stylistic suggestions, to proposals for new sections. Significant feedback should be sent to the project alias for the guide.

Initially, I plan to maintain the guide as an HTML file and publish new versions as needed. Over time, guide maintenance may transition to more formal version control, such as a small Mercurial repository or a wiki.

Monday May 03, 2010

Project Coin: multi-catch and final rethrow

As alluded to as a possibility previously, I'm happy to announce that improved exception handling with multi-catch and final rethrow will be part of an upcoming JDK 7 build. Improved exception handling is joining other Project Coin features available in the repository after successful experiences with a multi-catch implementation developed by Maurizio Cimadamore.
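A small, self-contained sketch of both halves of the feature, in the form it eventually took in JDK 7 (where the explicit final modifier on the catch parameter became unnecessary); the class, method, and message names here are illustrative:

```java
import java.io.IOException;

public class MultiCatchDemo {
    // Multi-catch: one clause handles two unrelated exception types.
    static String classify(boolean io) {
        try {
            if (io) throw new IOException("I/O trouble");
            else    throw new NumberFormatException("bad number");
        } catch (IOException | NumberFormatException e) {
            return e.getMessage();
        }
    }

    // Precise rethrow: e is effectively final, so the compiler knows only
    // IOException can actually escape the try block, and the throws clause
    // can say IOException rather than Exception.
    static void rethrow() throws IOException {
        try {
            throw new IOException("rethrown");
        } catch (Exception e) {
            throw e;
        }
    }

    public static void main(String[] args) {
        System.out.println(classify(true));   // I/O trouble
        System.out.println(classify(false));  // bad number
        try {
            rethrow();
        } catch (IOException e) {
            System.out.println(e.getMessage()); // rethrown
        }
    }
}
```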

Maurizio's work also revealed and corrected a flaw in the originally proposed static analysis for the set of exceptions that can be rethrown; from the original proposal form for this feature:

[a] final catch parameter is treated as throwing precisely those exception types that

  • the try block can throw,
  • no previous catch clause handles, and
  • is a subtype of one of the types in the declaration of the catch parameter

Consider a final rethrow statement as below where the dynamic class of a thrown exception differs from the static type (due to a cast in this case):

class Neg04 {
  static class A extends Exception {}
  static class B extends Exception {}

  void test(boolean b1, boolean b2) throws B {
      try {
          if (b1) {
              throw new A();
          } else if (b2) {
              throw new B();
          } else {
              throw (Throwable)new Exception();
          }
      }
      catch (A e) {}
      catch (final Exception e) {
          throw e;
      }
      catch (Throwable t) {}
  }
}
The set of exceptions thrown by the try block is computed as {A, B, Throwable}; therefore, the set of exceptions that can be rethrown is the set of exceptions from the try block:

  1. minus A, handled by a previous catch clause, giving {B, Throwable}
  2. minus Throwable since Throwable is not a subtype of one of the types declared for the catch parameter (just Exception in this case), leaving only {B}

However, if an Exception is thrown from the try block it should be caught in the "catch(final Exception e)" clause even if the exception is cast to Throwable since catch clauses work based on the runtime class of the exceptions being thrown.

To address this, the third clause is changed to

  • is a subtype/supertype of one of the types in the declaration of the catch parameter

More formally, this clause covers computing a join over the set of thrown exceptions, eliminating subtypes. In the example above {Throwable} is computed as the set of exceptions being throwable from the try block. This is then intersected with the exceptions that can be caught by the catch block, resulting in {Exception}, a properly sound result.

Very general exception types being thrown by a try block would reduce the utility of multi-catch since only imprecise information would be available. Fortunately, from analyzing the JDK sources, throwing a statically imprecise exception seems rare, indicating multi-catch with the amended specification should still be very helpful in practice.

Today the adventurous can apply a changeset to a copy of the JDK 7 langtools repository and do a build to get a compiler supporting this feature. Otherwise, following the integration process, improved exception handling will appear in the promoted JDK 7 builds in due course.

Thursday Apr 15, 2010

OpenJDK 6: b19 regression test results

Running with the usual jtreg flags, -a -ignore:quiet in all repositories and adding -s and now also -ea -esa for langtools, the basic regression test results on Linux for OpenJDK 6 build 19 are:

  • HotSpot, 62 tests passed.

  • Langtools, 1,359 tests passed.

  • JDK, 3,261 tests passed, 28 failed, and 5 had errors.

All the HotSpot tests continue to pass and more tests were added:

0: b18-hotspot/summary.txt  pass: 24
1: b19-hotspot/summary.txt  pass: 62

0      1      Test
---    pass   compiler/5057225/
---    pass   compiler/6378821/
---    pass   compiler/6539464/
---    pass   compiler/6589834/
---    pass   compiler/6603011/
---    pass   compiler/6636138/
---    pass   compiler/6636138/
---    pass   compiler/6711117/
---    pass   compiler/6772683/
---    pass   compiler/6778657/
---    pass   compiler/6795161/
---    pass   compiler/6795465/
---    pass   compiler/6797305/
---    pass   compiler/6799693/
---    pass   compiler/6800154/
---    pass   compiler/6814842/
---    pass   compiler/6823354/
---    pass   compiler/6823453/
---    pass   compiler/6826736/
---    pass   compiler/6832293/
---    pass   compiler/6833129/
---    pass   compiler/6837011/
---    pass   compiler/6837094/
---    pass   compiler/6843752/
---    pass   compiler/6849574/
---    pass   compiler/6851282/
---    pass   compiler/6852078/
---    pass   compiler/6855164/
---    pass   compiler/6855215/
---    pass   compiler/6857159/
---    pass   compiler/6859338/
---    pass   compiler/6860469/
---    pass   compiler/6863155/
---    pass   compiler/6863420/
---    pass   compiler/6865031/
---    pass   compiler/6875866/
---    pass   compiler/6892265/
---    pass   gc/6845368/

38 differences

In langtools all the tests continue to pass and a few tests were added:

0: b18-langtools/summary.txt  pass: 1,355
1: b19-langtools/summary.txt  pass: 1,359

0      1      Test
---    pass   tools/javac/enum/
---    pass   tools/javac/processing/6511613/
---    pass   tools/javac/processing/6634138/
---    pass   tools/javac/processing/model/util/elements/

4 differences

And in jdk, many tests were backported from JDK 7 in addition to some new tests being added. Otherwise, the existing tests have generally consistent results. As was done starting with build 18, the test run below was executed outside of Sun's and Oracle's wide-area network using the following contents for the testing network configuration file:

The file location to use for the networking configuration can be set by passing a -e JTREG_TESTENV=<path-to-file> option to jtreg.

0: b18-jdk/summary.txt  pass: 3,148; fail: 19; error: 2
1: b19-jdk/summary.txt  pass: 3,261; fail: 28; error: 5

0      1      Test
---    error  java/awt/Focus/ActualFocusedWindowTest/
---    error  java/awt/Focus/ActualFocusedWindowTest/
---    error  java/awt/Focus/NonFocusableWindowTest/
---    pass   java/awt/Focus/NonFocusableWindowTest/
---    fail   java/awt/Focus/TypeAhead/
---    fail   java/awt/Insets/WindowWithWarningTest/WindowWithWarningTest.html
fail   pass   java/awt/event/KeyEvent/CorrectTime/
---    fail   java/awt/xembed/server/
---    pass   java/nio/charset/Charset/
---    pass   java/nio/charset/Charset/
---    fail   java/nio/charset/Charset/
---    pass   java/nio/charset/Charset/
---    pass   java/nio/charset/Charset/
---    pass   java/nio/charset/Charset/
---    fail   java/nio/charset/Charset/
---    pass   java/nio/charset/Charset/
---    pass   java/nio/charset/Charset/
---    pass   java/nio/charset/Charset/
---    pass   java/nio/charset/CharsetDecoder/
---    pass   java/nio/charset/CharsetDecoder/
---    pass   java/nio/charset/CharsetEncoder/
---    pass   java/nio/charset/CharsetEncoder/
---    pass   java/nio/charset/RemovingSunIO/
---    pass   java/nio/charset/RemovingSunIO/
---    pass   java/nio/charset/RemovingSunIO/
---    pass   java/nio/charset/coders/
---    pass   java/nio/charset/coders/
---    pass   java/nio/charset/coders/
---    pass   java/nio/charset/coders/
---    pass   java/nio/charset/coders/
---    pass   java/nio/charset/coders/
---    pass   java/nio/charset/coders/
---    pass   java/nio/charset/coders/
---    pass   java/nio/charset/coders/
---    pass   java/nio/charset/coders/
---    pass   java/nio/charset/coders/
---    pass   java/nio/charset/spi/
fail   pass   java/rmi/transport/pinLastArguments/
---    pass   java/text/Bidi/
---    pass   java/text/Bidi/
---    pass   java/text/Bidi/
---    pass   java/util/prefs/
---    pass   java/util/prefs/
---    pass   java/util/prefs/
---    pass   java/util/prefs/
---    pass   java/util/prefs/
---    pass   java/util/prefs/
---    pass   java/util/prefs/
---    pass   java/util/prefs/
---    pass   javax/sound/midi/Gervill/AudioFloatFormatConverter/
---    pass   javax/sound/midi/Gervill/ModelByteBufferWavetable/
---    pass   javax/sound/midi/Gervill/ModelStandardIndexedDirector/
---    pass   javax/sound/midi/Gervill/SoftChannel/
---    pass   javax/sound/midi/Gervill/SoftSynthesizer/
---    pass   javax/sound/midi/Gervill/SoftSynthesizer/
---    pass   javax/sound/midi/Gervill/SoftSynthesizer/
---    pass   javax/sound/midi/Gervill/SoftSynthesizer/
---    pass   javax/sound/midi/Gervill/SoftTuning/
---    pass   javax/swing/JColorChooser/
---    pass   javax/swing/JColorChooser/
---    pass   javax/swing/JColorChooser/
---    pass   javax/swing/JColorChooser/
---    pass   javax/swing/JColorChooser/
---    pass   javax/swing/JColorChooser/
---    pass   javax/swing/border/
---    pass   javax/swing/border/
---    pass   javax/swing/border/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    fail   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    fail   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
---    pass   sun/nio/cs/
pass   ---    sun/security/ssl/com/sun/net/ssl/internal/ssl/SSLSocketImpl/
pass   ---    sun/security/ssl/javax/net/ssl/NewAPIs/
pass   ---    sun/security/ssl/javax/net/ssl/NewAPIs/SSLEngine/
pass   ---    sun/security/ssl/javax/net/ssl/NewAPIs/SSLEngine/
pass   ---    sun/security/ssl/javax/net/ssl/NewAPIs/SSLEngine/
---    fail   sun/security/ssl/sun/net/www/protocol/https/HttpsURLConnection/
---    pass   sun/security/ssl/sun/net/www/protocol/https/HttpsURLConnection/
---    pass   sun/security/ssl/sun/net/www/protocol/https/HttpsURLConnection/
---    pass   sun/security/ssl/sun/net/www/protocol/https/HttpsURLConnection/
---    fail   sun/security/ssl/sun/net/www/protocol/https/HttpsURLConnection/
---    fail   sun/security/ssl/sun/net/www/protocol/https/HttpsURLConnection/
---    fail   sun/security/ssl/sun/net/www/protocol/https/HttpsURLConnection/
---    pass   sun/security/util/Oid/

137 differences

OpenJDK 6: b19 Source Bundle Published

On April 15, 2010 the source bundle for OpenJDK 6 b19 was published.

Major changes in this build include the latest round of security fixes and, courtesy of Andrew John Hughes, several significant backports including the HotSpot 16 changes, Zero support, and regression tests previously published under open source in JDK 7. Also of note, all the langtools regression tests now pass when run with assertions enabled.

A detailed list of all the changes is also available.

Thursday Mar 25, 2010

OpenJDK 6 Processes

[The entry below is a lightly edited version of a message previously sent to the OpenJDK 6 development alias]

Since questions about OpenJDK 6 processes come up from time to time, in my role as release manager I thought it would be helpful to more fully document the current engineering practices and receive comments about them.

OpenJDK 6 is an implementation of the Java SE 6 specification valuing stability, compatibility, and security. As an implementation of the Java SE 6 specification, all changes to OpenJDK 6 must be allowable within that specification. This requirement precludes many API changes. Acceptable API changes include those permitted by the endorsed standards mechanism, such as upgrading to a newer version of a standalone technology, like a component JSR. One example of such an API change was the upgrade of JAX-WS from 2.0 to 2.1 in OpenJDK 6 build b06.

Changes allowable within the Java SE 6 specification may still be rejected for inclusion in OpenJDK 6 if the behavioral compatibility risk is judged as too large. (See previous write-ups of kinds of compatibility and compatibility regions in JDK releases.) Behavioral compatibility concerns implementation properties of the JDK. Clients of the JDK can knowingly or unknowingly come to rely upon implementation-specific behaviors not guaranteed by the specification, and care should be taken to not break such applications needlessly. In contrast, if a change is appropriate for every other JDK release train, it is generally appropriate for OpenJDK 6 too. Examples of such universal changes include security fixes and time zone information updates.

With the above caveats, bug fixes in JDK 7 that do not involve specification changes have presumptive validity for OpenJDK 6. That is, by default such fixes are assumed to be applicable to OpenJDK 6, especially if having "soaked" in JDK 7 for a time without incident. On a related point, the fixes from the stabilized HotSpot forests (for example HotSpot Express 14 or HotSpot Express 16) are suitable for bulk porting to the OpenJDK 6 HotSpot repository without review of individual bugs.

As a general guideline, if a change is applicable to both JDK 7 and OpenJDK 6, the change should be made in JDK 7 no later than the change is made in OpenJDK 6.

With the exception of security fixes, all OpenJDK 6 code review traffic should be sent to the jdk6-dev alias for consideration before a commit occurs. (For subscription instructions and to browse the archives, see the jdk6-dev list page.) All fixes require the approval of the release manager and may require additional technical review and approval at the discretion of the release manager. Security fixes are first kept confidential and applied to a private forest before being pushed to the public forest as part of the general synchronized publication of the fix to affected JDK release trains.

The master Mercurial forest of OpenJDK 6 repositories resides at:

Since there is only a single master OpenJDK 6 forest, near the end of a build period the release manager may defer otherwise acceptable changes to the start of the next build.

The schedule to tag builds of OpenJDK 6 is on an as-needed basis. The timing and feature list of a build is coordinated on the jdk6-dev alias with the IcedTea 6 project, a downstream client of OpenJDK 6. Before a build is tagged, regression and other testing is performed to verify the quality of the build.

After feedback is collected from jdk6-dev and this blog entry, the OpenJDK 6 project page will be updated with the current process information.

Alternatively, in JDK 7 there is a hierarchy of staging integration forests under the master to manage a higher rate of change (see OpenJDK Mercurial Wheel). As the rate of change in OpenJDK 6 is comparatively small, as long as the end-of-build quiet periods continue to be acceptably short, having a single master forest should be simpler than starting and managing an intermediate staging forest kept open to accepting changes at all times.

Monday Mar 15, 2010

Beware of Covariant Overriding in Interface Hierarchies

One of the changes to the Java programming language made back in JDK 5 was the introduction of covariant returns, that is, the ability in a subtype to override a method in a supertype and return a more specific type. For example,

public class A {
  public Object method() {return null;}
}

public class B extends A {
  public String method() {return "";}
}
Covariant returns can be a very handy facility to more accurately convey the type of object returned by a method. However, the feature should be used judiciously, especially in interface hierarchies. In interface hierarchies, covariant returns force constraints on the implementation classes. Such a constraint was included in the apt API modeling the Java language and was subsequently removed from the analogous portion of the standardized JSR 269 API in javax.lang.model.*.

In apt, the TypeDeclaration interface defines a method
Collection<? extends MethodDeclaration> getMethods().
In the sub-interface ClassDeclaration, the method is overridden with
Collection<MethodDeclaration> getMethods()
and in another sub-interface, AnnotationTypeDeclaration, the method is overridden with
Collection<AnnotationTypeElementDeclaration> getMethods().
Consequently, it is not possible for a single class to implement both the ClassDeclaration and AnnotationTypeDeclaration interfaces since the language specification forbids having two methods with the same name and argument types but different return types (JLSv3 §8.4.2). (This restriction does not exist at the class file level and a compiler will generate synthetic bridge methods with this property when implementing covariant returns.) If a compiler chose to use a single type to model all kinds of types (classes, enums, interfaces, annotation types), it would not be able to be directly retrofitted to implement the entirety of this apt API; wrapper objects would need to be created just to allow the interfaces to be implemented at a source level.
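The synthetic bridge methods mentioned parenthetically above can be observed directly with reflection; here is a minimal sketch reusing the A/B covariant-return example (nested, for self-containment):

```java
import java.lang.reflect.Method;

public class BridgeDemo {
    static class A { public Object method() { return null; } }
    static class B extends A { public String method() { return ""; } }

    public static void main(String[] args) {
        // B declares two methods named "method": the String-returning one
        // written in source, and a synthetic Object-returning bridge that
        // the compiler generates to implement the covariant override.
        for (Method m : B.class.getDeclaredMethods()) {
            System.out.println(m.getReturnType().getSimpleName()
                + " method(), bridge=" + m.isBridge());
        }
    }
}
```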

In contrast, in JSR 269 the root modeling interface Element defines a List<? extends Element> getEnclosedElements() method which returns all kinds of enclosed elements, from fields, to constructors, to methods. Elements of a particular type can then be extracted using a filter. This approach provides more flexibility in retrofitting the interfaces onto an existing implementation; a spectrum of implementations are possible, from a single type to represent all sorts of elements to a one-to-one correspondence of implementation types to interface types.

Note that in cases where an implementation type collapses several interface types, instanceof checks for an interface type are not necessarily useful since implementing one interface does not imply no other related interface is implemented. The Element specification warns of this possibility:

To implement operations based on the class of an Element object, either use a visitor or use the result of the getKind() method. Using instanceof is not necessarily a reliable idiom for determining the effective class of an object in this modeling hierarchy since an implementation may choose to have a single object implement multiple Element subinterfaces.

Friday Mar 12, 2010

Last Round Compiling

As of build 85 of JDK 7, bug 6634138 "Source generated in last round not compiled" has been fixed in javac. Previously, source code generated in a round of annotation processing where RoundEnvironment.processingOver() was true was not compiled. With the fix, source generated in the last round is compiled but, as intended, such source still does not undergo annotation processing since processing is over. The fix has also been applied to OpenJDK 6 build 19.

Annotation Processor SourceVersion

In annotation processing there are three distinct roles: the author of the annotation types, the author of the annotation processor, and the client of the annotations. The third role includes the responsibility to configure the compiler correctly, such as setting the source, target, and encoding options and setting the source and class file destination for annotation processing. The author of the annotation processor shares a related responsibility: properly returning the source version supported by the processor.

Most processors can be written against a particular source version and always return that source version, such as by including a @SupportedSourceVersion annotation on the processor class. In principle, the annotation processing infrastructure could tailor the view of newer-than-supported language constructs to be more compatible with existing processors. Conversely, processors have the flexibility to implement their own policies when encountering objects representing newer-than-supported structures. In brief, by extending version-specific abstract visitor classes, such as AbstractElementVisitor6 and AbstractTypeVisitor6, the visitUnknown method will be called on entities newer than the version in question.

Just as regression tests inside the JDK itself should by default follow a dual policy of accepting the default source and target settings rather than setting them explicitly like other programs, annotation processors used for testing with the JDK should generally support the latest source version and not be constrained to a particular version. This allows any issues or unexpected interactions of new features to be found more quickly and keeps the regression tests exercising the most recent code paths in the compiler.

This dual policy is now consistently implemented in the langtools regression tests as of build 85 of JDK 7 (6926699).

Thursday Mar 11, 2010

An Assertive Quality in Langtools

With a duo of fixes in JDK 7 build 85, one by Jon (6927797) and another by me (6926703), the langtools repository has reached another milestone in testing robustness: all the tests pass with assertions (-ea) and system assertions (-esa) enabled. This adds to other useful langtools testing properties, such as being able to successfully run in the speedy samevm testing mode.

Jon's fix was just updating a test so that some code would always be run with assertions disabled while my fix corrected an actual buggy assert I included in apt. Addressing such problems helps simplify analyzing test results; if there is a failure, there is a problem!

These fixes have also been applied in the forthcoming OpenJDK 6 build 19 so it too will have the same assertive testing quality.

Thursday Feb 25, 2010

Notions of Floating-Point Equality

Moving on from identity and equality of objects, different notions of equality are also surprisingly subtle in some numerical realms.

As comes up from time to time and is often surprising, the "==" operator defined by IEEE 754 and used by Java for comparing floating-point values (JLSv3 §15.21.1) is not an equivalence relation. Equivalence relations satisfy three properties, reflexivity (something is equivalent to itself), symmetry (if a is equivalent to b, b is equivalent to a), and transitivity (if a is equivalent to b and b is equivalent to c, then a is equivalent to c).

The IEEE 754 standard defines four possible mutually exclusive ordering relations between floating-point values:

  • equal

  • greater than

  • less than

  • unordered

A NaN (Not a Number) is unordered with respect to every floating-point value, including itself. This was done so that NaNs would not quietly slip by without due notice. Since (NaN == NaN) is false, the IEEE 754 "==" relation is not an equivalence relation because it is not reflexive.

An equivalence relation partitions a set into equivalence classes; each member of an equivalence class is "the same" as the other members of the class for the purposes of that equivalence relation. In terms of numerics, one would expect equivalent values to result in equivalent numerical results in all cases. Therefore, the size of the equivalence classes over floating-point values would be expected to be one; a number would only be equivalent to itself. However, in IEEE 754 there are two zeros, -0.0 and +0.0, and they compare as equal under ==. For IEEE 754 addition and subtraction, the sign of a zero argument can at most affect the sign of a zero result. That is, if the sum or difference is not zero, a zero of either sign doesn't change the result. If the sum or difference is zero and one of the arguments is zero, the other argument must be zero too:

  • -0.0 + -0.0 ⇒ -0.0

  • -0.0 + +0.0 ⇒ +0.0

  • +0.0 + -0.0 ⇒ +0.0

  • +0.0 + +0.0 ⇒ +0.0

Therefore, under addition and subtraction, both signed zeros are equivalent. However, they are not equivalent under division since 1.0/-0.0 ⇒ -∞ but 1.0/+0.0 ⇒ +∞ and -∞ and +∞ are not equivalent.¹

Despite the rationales for the IEEE 754 specification not defining == as an equivalence relation, there are legitimate cases where one needs a true equivalence relation over floating-point values, such as when writing test programs, and cases where one needs a total ordering, such as when sorting. In my numerical tests I use a method that returns true for two floating-point values x and y if:
((x == y) &&
(if x and y are both zero they have the same sign)) ||
(x and y are both NaN)
Conveniently, this is just computed by using (Double.compare(x, y) == 0). For sorting or a total order, the semantics of Double.compare are fine; NaN is treated as being the largest floating-point value, greater than positive infinity, and -0.0 < +0.0. That ordering is the total order used by java.util.Arrays.sort(double[]). In terms of semantics, it doesn't really matter where the NaNs are ordered with respect to the other values as long as they are consistently ordered that way.²

These subtleties of floating-point comparison were also germane on the Project Coin mailing list last year; the definition of floating-point equality was discussed in relation to adding support for relational operations based on a type implementing the Comparable interface. That thread also broached the complexities involved in comparing BigDecimal values.

The BigDecimal class has a natural ordering that is inconsistent with equals;³ that is, for at least some inputs bd1 and bd2, bd1.compareTo(bd2) == 0 has a different boolean value than bd1.equals(bd2). In BigDecimal, the same numerical value can have multiple representations, such as (100 × 10⁰) versus (10 × 10¹) versus (1 × 10²). These are all "the same" numerically (compareTo == 0) but are not equals with each other. Such values are not equivalent under the operations supported by BigDecimal; for example (100 × 10⁰) has a scale of 0 while (1 × 10²) has a scale of -2.⁴
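A short demonstration of this representation sensitivity (variable names mine):

```java
import java.math.BigDecimal;

public class BigDecimalEquality {
    public static void main(String... args) {
        BigDecimal hundred = new BigDecimal("100");  // unscaled value 100, scale 0
        BigDecimal oneE2   = new BigDecimal("1E+2"); // unscaled value 1, scale -2

        System.out.println(hundred.compareTo(oneE2)); // 0: numerically "the same"
        System.out.println(hundred.equals(oneE2));    // false: distinct representations
        System.out.println(hundred.scale());          // 0
        System.out.println(oneE2.scale());            // -2
    }
}
```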

While subtle, the different notions of numerical equality each serve a useful purpose and knowing which notion is appropriate for a given task is an important factor in writing correct programs.

¹ There are two zeros in IEEE 754 because there are two infinities. Another way to extend the real numbers to include infinity is to have a single (unsigned) projective infinity. In such a system, there is only one conceptual zero. Early x87 chips before IEEE 754 was standardized had support for both signed (affine) and projective infinities. Each style of infinity is more convenient for some kinds of computations.

² Besides the equivalence relation offered by Double.compare(x, y), another equivalence relation can be induced by either of the bitwise conversion routines, Double.doubleToLongBits or Double.doubleToRawLongBits. The former collapses all bit patterns that encode a NaN value into a single canonical NaN bit pattern, while the latter can let through a platform-specific NaN value. Implementation freedoms allowed by the original IEEE 754 standard have allowed different processor families to define different conventions for NaN bit patterns.

³ I've at times considered whether it would be worthwhile to include an "@NaturalOrderingInconsistentWithEquals" annotation in the platform to flag the classes that have this quirk. Such an annotation could be used by various checkers to find potentially problematic uses of such classes in sets and maps.

⁴ Building on wording developed for the BigDecimal specification under JSR 13, when I was editor of the IEEE 754 revision, I introduced several pieces of decimal-related terminology into the draft. Those terms include preferred exponent, analogous to the preferred scale from BigDecimal, and cohort, "The set of all floating-point representations that represent a given floating-point number in a given floating-point format." Put in terms of BigDecimal, the members of a cohort would be all the BigDecimal numbers with the same numerical value, but distinct pairs of scale (negation of the exponent) and unscaled value.

Wednesday Feb 24, 2010

API Design: Identity and Equality

When designing types to be reused by others, there are reasons to favor interfaces over abstract classes. One complication of using an interface-based approach stems from defining reasonable behavior for the equals and hashCode methods, especially if different implementations are intended to play well together when used in data structures like collections, in particular if an interface type is meant to serve as the key of a map or as the element type of a set.

Some interfaces, like CharSequence, are designed to not be a usable type for a map key or an element type of a set:

[The CharSequence] interface does not refine the general contracts of the equals and hashCode methods. The result of comparing two objects that implement CharSequence is therefore, in general, undefined. Each object may be implemented by a different class, and there is no guarantee that each class will be capable of testing its instances for equality with those of the other. It is therefore inappropriate to use arbitrary CharSequence instances as elements in a set or as keys in a map.

Amongst other problems, CharSequences are not required to be immutable, so in general there are always hazards from time-of-check-to-time-of-use conditions.

Even if a type is not suitable as a map key, it can be fine as the type of the value to which a key gets mapped. Likewise, even if a type cannot serve as the element type of a set, it can often still be perfectly fine as the element type of a list.

Expanding on a slide from my JavaOne talk Tips and Tricks for Using Language Features in API Design and Implementation, for interface types intended to be used as map keys or set elements, equality can be defined in several ways. First, equality can be defined solely in terms of information retrievable from methods of the interface. Alternatively, equality can be defined in terms of information retrievable via the interface methods as well as additional information. Finally, object identity (the == relation) is always a valid definition for equals and often a good implementation choice.

An example of the first kind of equality definition is specified for annotation types:

Returns true if the specified object represents an annotation that is logically equivalent to this one. In other words, returns true if the specified object is an instance of the same annotation type as this instance, all of whose members are equal to the corresponding member of this annotation, as defined below: ...

Returns the hash code of this annotation, as defined below:
The hash code of an annotation is the sum of the hash codes of its members (including those with default values), as defined below:...

A consequence of defining equality in this manner is that the hashCode algorithm must also be specified. If it were not specified, the equals/hashCode contract would be violated since equal objects must have equal hashCodes. Therefore, different implementations of this style of interface must have enough information to implement the equals method and have a precise algorithm for hashCode.

An annotation type is a kind of interface. At runtime, dynamic proxies are used to create the core reflection objects implementing annotation types, such as the objects returned by the getAnnotation method. After a quick identity check, the equals algorithm used in the proxy sees if the annotation type of the two annotation objects is the same and then compares the results of the annotation type's methods. This indirection allows the annotation objects from core reflection to interact properly with other implementations of annotation objects. The annotation objects generated for annotation processing in apt and javac both use the same underlying implementation as core reflection. However, completely independent annotation implementations are fine too. For example, the code below

import javax.annotation.processing.*;
import javax.lang.model.SourceVersion;
import java.lang.annotation.*;
import java.lang.reflect.*;
import java.util.*;

/**
 * Demonstrate equality of different annotation implementations.
 */
@SupportedSourceVersion(SourceVersion.RELEASE_6) // annotate the class so core
                                                 // reflection supplies a proxy
public class AnnotationEqualityDemonstration {
    static class MySupportedSourceVersion implements SupportedSourceVersion {
        private final SourceVersion sourceVersion;

        private MySupportedSourceVersion(SourceVersion sourceVersion) {
            this.sourceVersion = sourceVersion;
        }

        public Class<? extends Annotation> annotationType() {
            return SupportedSourceVersion.class;
        }

        public SourceVersion value() {
            return sourceVersion;
        }

        public boolean equals(Object o) {
            if (o instanceof SupportedSourceVersion) {
                SupportedSourceVersion ssv = (SupportedSourceVersion) o;
                return ssv.value() == sourceVersion;
            }
            return false;
        }

        public int hashCode() {
            return (127 * "value".hashCode()) ^ sourceVersion.hashCode();
        }
    }

    public static void main(String... args) {
        SupportedSourceVersion reflectSSV =
            AnnotationEqualityDemonstration.class.getAnnotation(SupportedSourceVersion.class);
        SupportedSourceVersion localSSV =
            new MySupportedSourceVersion(reflectSSV.value());

        System.out.println("reflectSSV == localSSV is " +
                           (reflectSSV == localSSV));

        System.out.println("reflectSSV.equals(localSSV) is " +
                           reflectSSV.equals(localSSV));

        System.out.println("localSSV.equals(reflectSSV) is " +
                           localSSV.equals(reflectSSV));

        System.out.println("reflectSSV.getClass().equals(localSSV.getClass()) is " +
                           reflectSSV.getClass().equals(localSSV.getClass()));

        System.out.println("\nreflectSSV.hashCode() is " +
                           reflectSSV.hashCode());

        System.out.println("localSSV.hashCode()   is " +
                           localSSV.hashCode());
    }
}

when run outputs:

reflectSSV == localSSV is false
reflectSSV.equals(localSSV) is true
localSSV.equals(reflectSSV) is true
reflectSSV.getClass().equals(localSSV.getClass()) is false

reflectSSV.hashCode() is 1867635603
localSSV.hashCode()   is 1867635603

The second kind of equality definition is specified for the language modeling interfaces in the javax.lang.model.element package:

Note that the identity of an element involves implicit state not directly accessible from the element's methods, including state about the presence of unrelated types. Element objects created by different implementations of these interfaces should not be expected to be equal even if "the same" element is being modeled; this is analogous to the inequality of Class objects for the same class file loaded through different class loaders.

Inside javac, instance control is used for the implementation classes for javax.lang.model.element.Element subtypes. This allows the default pointer equality to be used and allows the hashing algorithm to not be specified. Just as you can't step in the same river twice, the identity of an Element object is tied to the context in which it is created. Operationally, one consequence of this context sensitivity is that Element objects modeling "the same" type produced during different rounds of annotation processing will not be equal even if there are equivalent methods, fields, constructors, etc. in both types in both rounds.

When independent implementations of an interface are not required to be equal to one another, the hashCode algorithm does not need to be specified, giving the implementer more flexibility. This second style of specification allows disjoint islands of implementations to be defined.

Which style of specification is more appropriate depends on how the interface type is intended to be used. Defining interoperable implementations is more difficult and limits the ability of the interface to be retrofitted onto existing types. For example, while the Element interface and other interfaces from JSR 269 were successfully implemented by classes in both javac and Eclipse, it would be impractical to expect Element objects from those disparate implementations to compare as equal. Mixin interfaces, like CharSequence and Closeable, should be cautious in defining equals behavior if the interface is intended to be widely implemented. In some cases, a mixin interface can finesse this issue by being limited to an existing type hierarchy with already defined equals and hashCode policies. For example, the Parameterizable and QualifiedNameable interfaces added to the javax.lang.model.element package in JDK 7 (6460529) are extensions to javax.lang.model.element.Element and therefore get to reuse the existing policies quoted above.

Saturday Feb 20, 2010

Everything Older is Newer Once Again

Catching up on writing about more numerical work from years past, the second article in a two-part series finished last year discusses some low-level floating-point manipulations methods I added to the platform over the course of JDKs 5 and 6. Previously, I published a blog entry reacting to the first part of the series.

JDK 6 enjoyed several numerics-related library changes. Constants for MIN_NORMAL, MIN_EXPONENT, and MAX_EXPONENT were added to the Float and Double classes. I also added to the Math and StrictMath classes the following methods for low-level manipulation of floating-point values: getExponent, copySign, nextAfter, nextUp, and scalb.

There are also overloaded methods for float arguments. In terms of the IEEE 754 standard from 1985, the methods above provide the core functionality of the recommended functions. In terms of the 2008 revision to IEEE 754, analogous functions are integrated throughout different sections of the document.

While a student at Berkeley, I wrote a tech report on algorithms I developed for an earlier implementation of these methods, an implementation written many years ago when I was a summer intern at Sun. The implementation of the recommended functions in the JDK is a refinement of the earlier work, a refinement that simplified code, added extensive and effective unit tests, and sported better performance in some cases. In part the simplifications came from not attempting to accommodate IEEE 754 features not natively supported in the Java platform, in particular rounding modes and sticky flags.

The primary purpose of these methods is to assist in the development of math libraries in Java, such as the recent pure Java implementation of floor and ceil (6908131). This expected use-case drove certain API differences with the functions sketched by IEEE 754. For example, the getExponent method simply returns the unbiased value stored in the exponent field of a floating-point value rather than doing additional processing, such as computing the exponent needed to normalize a subnormal number, additional processing called for in some flavors of the 754 logb operation. Such additional functionality can actually slow down math libraries since libraries may not benefit from the additional filtering and may actually have to undo it.
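For example, because getExponent just reads the exponent field, for zero and subnormal values it returns MIN_EXPONENT - 1 rather than a normalized logb-style result:

```java
public class GetExponentDemo {
    public static void main(String... args) {
        System.out.println(Math.getExponent(1.0)); // 0, since 1.0 = 1.0 x 2^0
        System.out.println(Math.getExponent(0.5)); // -1, since 0.5 = 1.0 x 2^-1

        // For subnormal values (and zero) the unbiased value of the raw
        // exponent field is returned; no extra normalization is performed.
        System.out.println(Math.getExponent(Double.MIN_VALUE)); // -1023
        System.out.println(Double.MIN_EXPONENT - 1);            // -1023
    }
}
```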

The Math and StrictMath specifications of copySign have a small difference: the StrictMath version always treats NaNs as having a positive sign (a sign bit of zero) while the Math version does not impose this requirement. The IEEE standard does not ascribe a meaning to the sign bit of a NaN, and different processors have different conventions for NaN representations and how they propagate. However, if the source argument is not a NaN, the two copySign methods will produce equivalent results. Therefore, even if being used in a library where the results need to be completely predictable, the faster Math version of copySign can be used as long as the source argument is known to be numerical.
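A quick check shows the agreement for numerical sign arguments (class name mine):

```java
public class CopySignDemo {
    public static void main(String... args) {
        // Only the sign of the second (sign) argument is consulted; for
        // non-NaN sign arguments Math and StrictMath produce the same result.
        System.out.println(Math.copySign(3.0, -0.0));       // -3.0
        System.out.println(StrictMath.copySign(3.0, -0.0)); // -3.0
        System.out.println(Math.copySign(-2.5, +1.0));      // 2.5
    }
}
```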

The recommended functions can also be used to solve a little floating-point puzzle: generating the interesting limit values of a floating-point format just starting with constants for 0.0 and 1.0 in that format:

  • NaN is 0.0/0.0.

  • POSITIVE_INFINITY is 1.0/0.0.

  • MAX_VALUE is nextAfter(POSITIVE_INFINITY, 0.0).

  • MIN_VALUE is nextUp(0.0).

  • MIN_NORMAL is MIN_VALUE/(nextUp(1.0)-1.0).
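The puzzle's answers can be verified mechanically (class name mine):

```java
public class LimitValues {
    public static void main(String... args) {
        double zero = 0.0;
        double one  = 1.0;

        System.out.println(Double.isNaN(zero/zero));               // true
        System.out.println(one/zero == Double.POSITIVE_INFINITY);  // true
        System.out.println(Math.nextAfter(one/zero, zero)
                           == Double.MAX_VALUE);                   // true
        System.out.println(Math.nextUp(zero) == Double.MIN_VALUE); // true
        // MIN_VALUE/(nextUp(1.0)-1.0) = 2^-1074 / 2^-52 = 2^-1022 = MIN_NORMAL
        System.out.println(Math.nextUp(zero) / (Math.nextUp(one) - one)
                           == Double.MIN_NORMAL);                  // true
    }
}
```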

Wednesday Feb 17, 2010

OpenJDK 6: b18 regression test results

Running with the usual jtreg flags, -a -ignore:quiet in all repositories and adding -s for langtools, the basic regression test results on Linux for OpenJDK 6 build 18 are:

  • HotSpot, 24 tests passed.

  • Langtools, 1,355 tests passed.

  • JDK, 3,148 tests pass, 19 fail, and 2 have errors.

All the HotSpot tests continue to pass:

0: b17-hotspot/summary.txt  pass: 24
1: b18-hotspot/summary.txt  pass: 24

No differences

In langtools all the tests continue to pass and one test was added:

0: b17-langtools/summary.txt  pass: 1,354
1: b18-langtools/summary.txt  pass: 1,355

0      1      Test
---    pass   tools/javac/

1 differences

And in jdk, a few dozen new tests were added in b18 and the existing tests have generally consistent results, with a number of long-standing test failures corrected by Pavel Tisnovsky. The test run below was executed outside of Sun's and Oracle's wide-area network using the following contents for the testing network configuration file:

The file location to use for the networking configuration can be set by passing a -e JTREG_TESTENV=Path to file option to jtreg.

0: b17-jdk/summary.txt  pass: 3,118; fail: 26
1: b18-jdk/summary.txt  pass: 3,148; fail: 19; error: 2

0      1      Test
---    pass   com/sun/java/swing/plaf/nimbus/
---    pass   com/sun/java/swing/plaf/nimbus/
---    pass   com/sun/jdi/
---    pass   com/sun/jdi/
---    pass   com/sun/jdi/
---    pass   demo/jvmti/compiledMethodLoad/
fail   pass   java/awt/Frame/DynamicLayout/
fail   pass   java/awt/Frame/MaximizedToIconified/
fail   pass   java/awt/Frame/ShownOffScreenOnWin98/
fail   pass   java/awt/Frame/UnfocusableMaximizedFrameResizablity/
---    pass   java/awt/GraphicsDevice/
fail   pass   java/awt/GridLayout/LayoutExtraGaps/
fail   pass   java/awt/Insets/
fail   pass   java/awt/KeyboardFocusmanager/TypeAhead/ButtonActionKeyTest/ButtonActionKeyTest.html
fail   pass   java/awt/KeyboardFocusmanager/TypeAhead/MenuItemActivatedTest/MenuItemActivatedTest.html
fail   pass   java/awt/KeyboardFocusmanager/TypeAhead/SubMenuShowTest/SubMenuShowTest.html
fail   pass   java/awt/KeyboardFocusmanager/TypeAhead/TestDialogTypeAhead.html
pass   fail   java/awt/Multiscreen/LocationRelativeToTest/
fail   pass   java/awt/TextArea/UsingWithMouse/SelectionAutoscrollTest.html
fail   pass   java/awt/Toolkit/ScreenInsetsTest/
pass   ---    java/awt/Window/AlwaysOnTop/
pass   fail   java/awt/Window/GrabSequence/
fail   pass   java/awt/grab/EmbeddedFrameTest1/
pass   fail   java/awt/print/PrinterJob/
---    pass   java/lang/ClassLoader/
pass   fail   java/net/ipv6tests/
pass   fail   java/nio/channels/SocketChannel/
pass   fail   java/nio/channels/SocketChannel/
pass   fail   java/nio/channels/SocketChannel/
pass   fail   java/rmi/transport/pinLastArguments/
---    pass   java/util/TimeZone/
---    pass   java/util/TimeZone/
---    pass   javax/swing/JButton/6604281/
fail   pass   javax/swing/JTextArea/
---    pass   javax/swing/Security/6657138/
---    pass   javax/swing/Security/6657138/
---    pass   javax/swing/ToolTipManager/
---    pass   javax/swing/UIManager/
---    pass   javax/swing/plaf/basic/BasicSplitPaneUI/
---    pass   javax/swing/plaf/metal/MetalBorders/
---    pass   javax/swing/plaf/metal/MetalBumps/
---    pass   javax/swing/plaf/metal/MetalInternalFrameUI/
---    pass   javax/swing/plaf/metal/MetalSliderUI/
pass   error  sun/java2d/OpenGL/
pass   fail   sun/rmi/transport/proxy/
---    pass   sun/security/provider/certpath/DisabledAlgorithms/
---    pass   sun/security/provider/certpath/DisabledAlgorithms/
---    pass   sun/security/provider/certpath/DisabledAlgorithms/
---    pass   sun/security/provider/certpath/DisabledAlgorithms/
pass   error  sun/security/ssl/javax/net/ssl/NewAPIs/
---    pass   sun/security/tools/jarsigner/
---    pass   sun/security/util/DerValue/
fail   pass   sun/tools/jhat/
fail   pass   sun/tools/native2ascii/

54 differences

OpenJDK 6: b18 Source Bundle Published

On February 16, 2010 the source bundle for OpenJDK 6 b18 was published.

Major changes in this build include the latest round of security fixes and, courtesy of Andrew John Hughes, a backport of the Nimbus look and feel from JDK 7. In addition, a new delivery model is being used for the jaxp and jax-ws components.

A detailed list of all the changes is also available.

Friday Feb 12, 2010

Finding a bug in FDLIBM pow

Writing up a piece of old work for some more Friday fun, an example of testing where the failures are likely to be led to my independent discovery of a bug in the FDLIBM pow function, one of only two bugs fixed in FDLIBM 5.3. Even back when this bug was fixed for Java some time ago (5033578), the FDLIBM library was well-established, widely used in the Java platform and elsewhere, and already thoroughly tested so I was quite proud my tests found a new problem. The next most recent change to the pow implementation was eleven years prior to the fix in 5.3.

The specification for Math.pow is involved, with over two dozen special cases listed. When setting out to write tests for this method, I re-expressed the specification in a tabular form to understand what was going on. After a few iterations reminiscent of tweaking a Karnaugh map, the table below was the result.

Special Cases for FDLIBM pow and {Math, StrictMath}.pow (value of x^y)

x \ y          –∞     –∞<y<–1  –1       –1<y<–0.0  –0.0  +0.0  +0.0<y<1  1        1<y<+∞   +∞     NaN
–∞             +0.0   f2(y)    f2(y)    f2(y)      1.0   1.0   f1(y)     f1(y)    f1(y)    +∞     NaN
–∞ < x < –1    +0.0   f3(x,y)  f3(x,y)  f3(x,y)    1.0   1.0   f3(x,y)   f3(x,y)  f3(x,y)  +∞     NaN
–1             NaN*   f3(x,y)  f3(x,y)  f3(x,y)    1.0   1.0   f3(x,y)   f3(x,y)  f3(x,y)  NaN*   NaN
–1 < x < –0.0  +∞     f3(x,y)  f3(x,y)  f3(x,y)    1.0   1.0   f3(x,y)   f3(x,y)  f3(x,y)  +0.0   NaN
–0.0           +∞     f1(y)    f1(y)    f1(y)      1.0   1.0   f2(y)     f2(y)    f2(y)    +0.0   NaN
+0.0           +∞     +∞       +∞       +∞         1.0   1.0   +0.0      +0.0     +0.0     +0.0   NaN
+0.0 < x < 1   +∞     x^y      x^y      x^y        1.0   1.0   x^y       x        x^y      +0.0   NaN
1              NaN*   1.0      1.0      1.0        1.0   1.0   1.0       1.0      1.0      NaN*   NaN
1 < x < +∞     +0.0   x^y      x^y      x^y        1.0   1.0   x^y       x        x^y      +∞     NaN
+∞             +0.0   +0.0     +0.0     +0.0       1.0   1.0   +∞        +∞       +∞       +∞     NaN
NaN            NaN    NaN      NaN      NaN        1.0   1.0   NaN       NaN      NaN      NaN    NaN

f1(y) = isOddInt(y) ? –∞ : +∞;
f2(y) = isOddInt(y) ? –0.0 : +0.0;
f3(x, y) = isEvenInt(y) ? |x|^y : (isOddInt(y) ? –|x|^y : NaN);

* Defined to be +1.0 in C99; see §F.9.4.4 of the C99 specification. Large magnitude finite floating-point numbers are all even integers (since the precision of a typical floating-point format is much less than its exponent range, a large number will be an integer times the base raised to a power). Therefore, by the reasoning of the C99 committee, pow(-1.0, ∞) was like pow(-1.0, unknown large even integer), so the result was defined to be 1.0 instead of NaN.

The ranges of arguments in each row and column are partitioned into eleven categories, ten categories of numerical values together with NaN (Not a Number). Some combinations of x and y arguments are covered by multiple clauses of the specification. A few helper functions are defined to simplify the presentation. As noted in the table, a cross-platform wrinkle is that the C99 specification, which came out after Java was first released, defined certain special cases differently than FDLIBM and Java's Math.pow.
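The helper predicates are straightforward to express; the sketch below is my own formulation rather than the actual test code:

```java
public class PowHelpers {
    // A double is a mathematical integer if it is finite and equal to
    // its floor; NaN fails the == comparison, so it is excluded too.
    static boolean isInt(double d) {
        return !Double.isInfinite(d) && Math.floor(d) == d;
    }

    // An integer is odd exactly when half of it is not an integer.
    static boolean isOddInt(double d) {
        return isInt(d) && !isInt(d / 2.0);
    }

    static boolean isEvenInt(double d) {
        return isInt(d) && isInt(d / 2.0);
    }

    public static void main(String... args) {
        System.out.println(isOddInt(3.0));      // true
        System.out.println(isEvenInt(4.0));     // true
        System.out.println(isOddInt(2.5));      // false
        System.out.println(isEvenInt(1.0e300)); // true: large doubles are even
    }
}
```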

A regression test based on this tabular representation of pow special cases is jdk/test/java/lang/Math/ The test makes sure each interesting combination in the table is probed at least once. Some combinations receive multiple probes. When an entry represents a range, the exact endpoints of the range are tested; in addition, other interesting interior points are tested too. For example, for the range 1 < x < +∞ the individual points tested are:

+1.0000000000000002, // nextAfter(+1.0, +oo)
-(double)Integer.MIN_VALUE - 1.0,
-(double)Integer.MIN_VALUE + 1.0,
 (double)Integer.MAX_VALUE + 4.0,
 (double) ((1L<<53)-1L),
 (double) ((1L<<53)),
 (double) ((1L<<53)+2L),

Besides the endpoints, the interesting interior points include points worth checking because of transitions either in the IEEE 754 double format or a 2's complement integer format.

Inputs that used to fail under this testing include a range of severities, from the almost always numerically benign error of returning a wrongly signed zero, to returning a zero when the result should be a finite nonzero value, to returning infinity for a finite result, to even returning a wrongly signed infinity!

Selected Failing Inputs

Failure for StrictMath.pow(double, double):
       For inputs -0.5                   (-0x1.0p-1) and 
                   9.007199254740991E15  (0x1.fffffffffffffp52)
       expected   -0.0                   (-0x0.0p0)
       got         0.0                   (0x0.0p0).

Failure for StrictMath.pow(double, double):
       For inputs -0.9999999999999999    (-0x1.fffffffffffffp-1) and 
                   9.007199254740991E15  (0x1.fffffffffffffp52)
       expected   -0.36787944117144233   (-0x1.78b56362cef38p-2)
       got        -0.0                   (-0x0.0p0).

Failure for StrictMath.pow(double, double):
       For inputs -1.0000000000000004    (-0x1.0000000000002p0) and 
                   9.007199254740994E15  (0x1.0000000000001p53)
       expected  54.598150033144236      (0x1.b4c902e273a58p5)
       got       0.0                     (0x0.0p0).

Failure for StrictMath.pow(double, double):
       For inputs -0.9999999999999998    (-0x1.ffffffffffffep-1) and 
                   9.007199254740992E15  (0x1.0p53)
       expected    0.13533528323661267   (0x1.152aaa3bf81cbp-3)
       got         0.0                   (0x0.0p0).

Failure for StrictMath.pow(double, double):
       For inputs -0.9999999999999998    (-0x1.ffffffffffffep-1) and 
                  -9.007199254740991E15  (-0x1.fffffffffffffp52)
       expected   -7.38905609893065      (-0x1.d8e64b8d4ddaep2)
       got        -Infinity              (-Infinity).

Failure for StrictMath.pow(double, double):
       For inputs -3.0                   (-0x1.8p1) and 
                   9.007199254740991E15  (0x1.fffffffffffffp52)
       expected   -Infinity              (-Infinity)
       got        Infinity               (Infinity).

The code changes to address the bug were fairly simple; corrections were made to extracting components of the floating-point inputs and sign information was propagated properly.

Even expertly written software can have errors and even long-used software can have unexpected problems. Estimating how often this bug in FDLIBM caused an issue is difficult; while the errors could be egregious, the inputs needed to elicit the problem were arguably unusual (even though perfectly valid mathematically). Thorough testing is a key aspect of assuring the quality of numerical software; it is also helpful for end-users to be able to examine the output of their programs to help notice problems.



