Friday Jul 16, 2010

Project Coin ARM Implementation

I'm happy to announce that starting with a prototype written by Tom Ball, Oracle's javac team has produced and pushed an implementation of the try-with-resources statement, otherwise known as ARM blocks, into JDK 7.

Today the resourceful can apply a changeset to a copy of the JDK 7 langtools repository and do a build to get a compiler supporting this feature. Otherwise, following the integration process, support for try-with-resources statements will be available in the promoted JDK 7 builds in due course.

Besides possible refinements to the semantics of the translations, there will likely be other small adjustments to the implementation of the language feature in the future. For example, the lint category named "arm" will likely be renamed to something more like "resource."

Thursday Jul 15, 2010

Project Coin: Updated ARM Spec

Starting with the Project Coin proposal for Automatic Resource Management (ARM) (Google Docs version), in consultation with Josh Bloch, Maurizio, Jon, and others, Alex and I have produced a specification for ARM blocks that is much closer to Java Language Specification (JLS) style and rigor. The specification involves changes to the existing JLS section §14.20 "The try statement," and will eventually introduce a new subsection §14.20.3 "Execution of try-with-resources," although the specification below is not partitioned as such. Non-normative comments about the specification text below appear inside "[]". Differences between the new specification and the earlier Project Coin proposal for ARM are discussed after the specification.

SYNTAX: The existing set of grammar productions for TryStatement in JLS §14.20 is augmented with:

try ResourceSpecification Block Catchesopt Finallyopt

Supporting new grammar productions are added:

ResourceSpecification:
    ( Resources )

Resources:
    Resource
    Resource ; Resources

Resource:
    VariableModifiers Type VariableDeclaratorId = Expression
    Expression

[An implication of the combined grammar is that a try statement must have at least one of a catch clause, a finally block, and a resource specification. Furthermore, it is permissible for a try statement to have exactly one of these three components. Note that it is illegal to have a trailing semi-colon in the resource specification.]

A try-with-resources statement has a resource specification that expresses resources to be automatically closed at the end of the Block. A resource specification declares one or more local variables and/or has one or more expressions, each of whose type must be a subtype of AutoCloseable or a compile-time error occurs.
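As an illustration of a resource specification declaring two resources (a sketch using the feature as it eventually shipped in JDK 7; the stream types are ordinary platform classes):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class TwoResources {
    public static void main(String[] args) throws IOException {
        byte[] data = "hello".getBytes("UTF-8");
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        // Two resources in one specification, separated by ';'.
        // Both are closed automatically, in reverse order of declaration.
        try (InputStream in = new ByteArrayInputStream(data);
             OutputStream out = sink) {
            int b;
            while ((b = in.read()) != -1) {
                out.write(b);
            }
        } // out is closed first, then in, even if the block had thrown
        System.out.println(sink.toString("UTF-8"));
    }
}
```

Both resource types here are subtypes of AutoCloseable (via Closeable), satisfying the compile-time constraint above.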

If a resource specification declares a variable, the variable must not have the same name as a variable declared earlier in the resource specification, a local variable, or parameter of the method or initializer block immediately enclosing the try statement, or a compile-time error occurs.

The scope of a variable declared in a resource specification of a try-with-resources statement (§14.20) is from the declaration rightward over the remainder of the resource specification and the entire Block associated with the try. Within the Block of the try, the name of the variable may not be redeclared as a local variable of the directly enclosing method or initializer block, nor may it be redeclared as an exception parameter of a catch clause in a try statement of the directly enclosing method or initializer block, nor may it be redeclared as a variable in the resource specification, or a compile-time error occurs. However, a variable declared in a resource specification may be shadowed (§6.3.1) anywhere inside a class declaration nested within the Block of the try.

The meaning of a try-with-resources statement with a Catches clause or Finally block is given by translation to a try-with-resources statement with no Catches clause or Finally block:

try ResourceSpecification
    Block
Catchesopt
Finallyopt

is

try {
    try ResourceSpecification
        Block
} Catchesopt
Finallyopt
In a try-with-resources statement that manages a single resource:

  • If the initialization of the resource completes abruptly because of a throw of a value V, or if the Block of the try-with-resources statement completes abruptly because of a throw of a value V and the automatic closing of the resource completes normally, then the try-with-resources statement completes abruptly because of the throw of value V.

  • If the Block of the try-with-resources statement completes abruptly because of a throw of a value V1, and the automatic closing of the resource completes abruptly because of a throw of a value V2, then the try-with-resources statement completes abruptly because of the throw of value V1, provided that V2 is an Exception. In this case, V2 is added to the suppressed exception list of V1. If V2 is an error (i.e. a Throwable that is not an Exception), then the try-with-resources statement completes abruptly because of the throw of value V2. In this case, V1 is not suppressed by V2.
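The suppression behavior described above can be observed directly. Note that the draft method name addSuppressedException used in this specification became Throwable.addSuppressed in the final JDK 7 API, with getSuppressed for retrieval; a minimal sketch:

```java
public class SuppressedDemo {
    // A resource whose close always fails, to demonstrate suppression.
    static class FailingResource implements AutoCloseable {
        @Override public void close() throws Exception {
            throw new Exception("from close");
        }
    }

    public static void main(String[] args) {
        try {
            try (FailingResource r = new FailingResource()) {
                throw new RuntimeException("from block");  // primary exception V1
            }
        } catch (Exception e) {
            // V1 propagates; V2 from close() is recorded as suppressed.
            System.out.println("primary: " + e.getMessage());
            System.out.println("suppressed: " + e.getSuppressed()[0].getMessage());
        }
    }
}
```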

In a try-with-resources statement that manages multiple resources:

  • If the initialization of a resource completes abruptly because of a throw of a value V, or if the Block of the try-with-resources statement completes abruptly because of a throw of a value V (which implies that the initialization of all resources completed normally) and the automatic closings of all resources complete normally, then the try-with-resources statement completes abruptly because of the throw of value V.

  • If the Block of the try-with-resources statement completes abruptly because of a throw of a value V1, and the automatic closings of one or more resources (that were previously successfully initialized) complete abruptly because of throws of values V2...Vn, then the try-with-resources statement completes abruptly because of the throw of a value Vi (1 ≤ i ≤ n) determined by the translation below.

The exceptions that can be thrown by a try-with-resources statement are the exceptions that can be thrown by the Block of the try-with-resources statement plus the union of the exceptions that can be thrown by the automatic closing of the resources themselves. Regardless of the number of resources managed by a try-with-resources statement, it is possible for a Catchesopt clause to catch an exception due to initialization or automatic closing of any resource.

A try-with-resources statement with a ResourceSpecification clause that declares multiple Resources is treated as if it were multiple try-with-resources statements, each of which has a ResourceSpecification clause that declares a single Resource. When a try-with-resources statement with n Resources (n > 1) is translated, the result is a try-with-resources statement with n-1 Resources. After n such translations, there are n nested try-catch-finally statements, and the overall translation is complete.

The meaning of a try-with-resources statement with a ResourceSpecification clause and no Catches clause or Finally block is given by translation to a local variable declaration and a try-catch-finally statement. During translation, if the ResourceSpecification clause declares one Resource, then the try-catch-finally statement is not a try-with-resources statement, and ResourceSpecificationtail is empty. If the ResourceSpecification clause declares n Resources, then the try-catch-finally statement is treated as if it were a try-with-resources-catch-finally statement, where ResourceSpecificationtail is a ResourceSpecification consisting of the 2nd, 3rd, ..., nth Resources in order. The translation is as follows, where the identifiers #primaryException, #t, and #suppressedException are fresh:

try ResourceSpecification
    Block

is

{
    final VariableModifiers_minus_final R #resource = Expression;
    Throwable #primaryException = null;

    try ResourceSpecificationtail
        Block
    catch (final Throwable #t) {
        #primaryException = #t;
        throw #t;
    } finally {
        if (#primaryException != null) {
            try {
                #resource.close();
            } catch (Exception #suppressedException) {
                #primaryException.addSuppressedException(#suppressedException);
            }
        } else {
            #resource.close();
        }
    }
}

If the Resource being translated declares a variable, then VariableModifiers_minus_final is the set of modifiers on the variable (except for final if present); R is the type of the variable declaration; and #resource is the name of the variable declared in the Resource.

Discussion: Resource declarations in a resource specification are implicitly final. For consistency with existing declarations that have implicit modifiers, it is legal (though discouraged) for a programmer to provide an explicit "final" modifier. By allowing non-final modifiers, annotations such as @SuppressWarnings will be preserved on the translated code. It is unlikely that the Java programming language will ever ascribe a meaning to an explicit final modifier in this location other than the traditional meaning.
[Unlike the new meaning ascribed to a final exception parameter.]

Discussion: Unlike the fresh identifier in the translation of the enhanced-for statement, the #resource variable is in scope in the Block of a try-with-resources statement.

If the Resource being translated is an Expression, then the translation includes a local variable declaration for which VariableModifiers_minus_final is empty; the type R is the type of the Expression (under the condition that the Expression is assigned to a variable of type AutoCloseable); and #resource is a fresh identifier.

Discussion: The method Throwable.addSuppressedException has a parameter of type Throwable, but the translation is such that only an Exception from #resource.close() will be passed for suppression. In the judgment of the designers of the Java programming language, an Error due to automatic closing of a resource is sufficiently serious that it should not be automatically suppressed in favor of an exception from the Block or the initialization or automatic closing of lexically rightward resources.
[However, perhaps such an Error should instead be recorded as suppressing an exception from the Block or other lexically rightward component.]

Discussion: This translation exploits the improved precision of exception analysis now triggered by the rethrow of a final exception parameter.

The reachability and definite assignment rules for the try statement with a resource specification are implicitly specified by the translations above.

Compared to the earlier proposal, this draft specification:

  • Assumes the revised supporting API with java.lang.AutoCloseable as the type indicating participation in the new language feature.

  • Changes the official grammar for a declared resource to
    VariableModifiers Type VariableDeclaratorId = Expression
    The former grammar syntactically allowed code like
    AutoCloseable a, b, c
    which would not be useful in this context.

  • Preserves modifiers on explicitly declared resources, which implies @SuppressWarnings on a resource should have the intended effect.

  • States how the exception behavior of close methods is accounted for in determining the set of exceptions a try-with-resources statement can throw.

  • Gives a more precise determination of the type used for the local variable holding a resource given as an Expression. This precision is important to allow accurate exception information to be computed.

  • Provides typing constraints so that type inference works as expected if the Expression given as a Resource in a ResourceSpecification is, say, a generic method or null.

Compiler changes implementing this revised specification remain in progress. After experience is gained with the initial implementation, I expect various changes to the feature to be contemplated:

  • Dropping support for a resource to be specified as a general Expression. Nontrivial specification and implementation complexities arise from allowing a general Expression to be used as resource. Allowing a restricted expression that was just a name may provide nearly all the additional flexibility at marginal additional implementation and specification impact.

  • Adjustments to the suppressed exception logic: in the present specification, an incoming primary exception will suppress an Exception thrown by a close method; however, if the close method throws an error, that error is propagated out without suppressing an incoming primary exception. Possible alternatives include having a primary exception in a try-with-resources statement suppress all subsequent Throwables originating in the statement and having a non-Exception thrown by a close suppress any incoming primary exception.

    These alternatives could be implemented by replacing the translated code

        try {
            #resource.close();
        } catch (Exception #suppressedException) {
            #primaryException.addSuppressedException(#suppressedException);
        }

    with

        try {
            #resource.close();
        } catch (Throwable #suppressedException) {
            #primaryException.addSuppressedException(#suppressedException);
        }

    or

        try {
            #resource.close();
        } catch (Exception #suppressedException) {
            #primaryException.addSuppressedException(#suppressedException);
        } catch (Throwable #throwable) {
            #throwable.addSuppressedException(#primaryException);
            throw #throwable;
        }

    respectively.


Tuesday Jul 06, 2010

Project Coin: Bringing it to a Close(able)

As a follow-up to the initial API changes to support automatic resource management (ARM) I wrote an annotation processor, CloseableFinder, to programmatically look for types that were candidates to be retrofitted as Closeable or AutoCloseable.

The processor issues a note for a type that has a public no-args instance method returning void whose name is "close" where the type does not already implement/extend Closeable or AutoCloseable. Based on the exceptions a close method is declared to throw, the processor outputs whether the type is a candidate to be retrofitted to just AutoCloseable or to either of Closeable and AutoCloseable. Which of Closeable and AutoCloseable is more appropriate can depend on semantics of the close method not captured in its signature. For example, Closeable.close is defined to be idempotent: repeated calls to close have no effect. If a close method is defined to not be idempotent, the type can only be correctly retrofitted to AutoCloseable without changing its specification.
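For illustration, a reflective sketch of the same candidate test (not the actual annotation processor, which inspects source-level javax.lang.model elements; LegacyResource is a hypothetical pre-ARM class):

```java
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

public class CloseableCheck {
    // Does the type declare a public no-arg instance method named "close"
    // returning void, without already being an AutoCloseable?
    static boolean isRetrofitCandidate(Class<?> type) {
        if (AutoCloseable.class.isAssignableFrom(type)) {
            return false;                 // already participates in ARM
        }
        try {
            Method close = type.getMethod("close");
            return close.getReturnType() == void.class
                && !Modifier.isStatic(close.getModifiers());
        } catch (NoSuchMethodException e) {
            return false;                 // no public close() at all
        }
    }

    static class LegacyResource {         // hypothetical pre-ARM class
        public void close() {}
    }

    public static void main(String[] args) {
        System.out.println(isRetrofitCandidate(LegacyResource.class));       // candidate
        System.out.println(isRetrofitCandidate(java.io.StringWriter.class)); // already Closeable
    }
}
```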

To use the processor, first compile it and then configure your compiler or IDE to run the processor. The processor can be compiled under JDK 6. Once compiled, it can be run either under JDK 6 or under a JDK 7 build that has the AutoCloseable interface; the processor will configure itself appropriately based on the JDK version it is running under. For javac, the command line to run the processor can look like:

javac -proc:only \
      -processor CloseableFinder \
      -processorpath Path_to_processor \
      Source_files

A thread on build-dev discusses how to run an annotation processor over the JDK sources; a larger than default heap size may be needed to process all the files in one command. When run over the JDK 7 sources, the processor finds many candidate types to be retrofitted. After consulting with the teams in question, an additional nine types were retrofitted to work with ARM, two in java.beans, two in, one in java.util, and four in javax.sound; these additional retrofittings have been pushed into JDK 7 and will appear in subsequent builds.

Besides the potential updating of JDBC at some point in the future, other significant retrofitting of JDK classes in java.* and javax.* to AutoCloseable/Closeable should not be expected. Unofficial JDK APIs in other namespaces might be examined for retrofitting in the future. The compiler changes to support the ARM language feature remain in progress.

Wednesday Jun 23, 2010

Project Coin: ARM API

The initial API changes to support the Project Coin feature automatic resource management (ARM) blocks have been pushed into JDK 7 (langtools, jdk) and will appear in subsequent builds. The corresponding compiler changes to support the actual language feature remain in progress.

The initial API work to support ARM was divided into two pieces, essential API support and retrofitting platform classes. The essential support includes:

  • A new interface java.lang.AutoCloseable which defines a single method
    void close() throws Exception

  • A new enum constant in the language model, javax.lang.model.element.ElementKind.RESOURCE_VARIABLE, for variables declared in a resource specification.

  • Methods on java.lang.Throwable to add and retrieve information about suppressed exceptions, including printing out suppressed exceptions in stack traces.

The retrofitting includes:

  • Having java.io.Closeable extend java.lang.AutoCloseable. (From a typing perspective, a subtype of AutoCloseable can be declared to throw fewer exceptions than the supertype. Therefore it is fine for the close method in AutoCloseable to throw Exception and the close method in Closeable to throw the more specific IOException. It would even be fine for the close method in a subtype of AutoCloseable to be declared to throw no exceptions at all.)

  • Adding a close method to java.nio.channels.FileLock and having FileLock implement AutoCloseable.

  • Adding Closeable as an interface implemented by

Other platform classes may be retrofitted to implement AutoCloseable or Closeable in future builds.
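The covariant narrowing of close described above can be seen in a small example; QuietResource is a hypothetical type that narrows the throws clause all the way down to nothing:

```java
import java.io.Closeable;

public class NarrowedClose {
    // A subtype may narrow close()'s throws clause: Closeable narrows
    // AutoCloseable's "throws Exception" to IOException, and this class
    // narrows further, declaring no checked exceptions at all.
    static class QuietResource implements Closeable {
        @Override public void close() {    // legal covariant narrowing
            System.out.println("closed quietly");
        }
    }

    public static void main(String[] args) {
        try (QuietResource r = new QuietResource()) {
            System.out.println("using resource");
        } // no checked exception to handle: this close() declares none
    }
}
```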

Compared to the API support in earlier versions of the ARM proposal, the top-level interface to mark participation in ARM is in package java.lang rather than its own package and, after consultation with the JDBC and graphics teams, neither java.sql.* nor java.awt.Graphics were retrofitted for ARM.

Tuesday Jun 15, 2010

Syntax Sin Tax

In various forums, recent discussions about Project Lambda have commented on, and often noted in dismay, the current syntax for lambda expressions in the initial prototype. "Don't panic!" is advice as valid for work on language evolution as on any other endeavor. Since syntax is the easiest aspect of a language change to form an opinion on, it is the aspect of language changes most susceptible to bikeshedding. While syntax is an important component of language changes, it is far from the only important component; the semantics matter too! Fixation on the syntax of a feature early in its development is premature and counterproductive. Having a prototype to gain actual experience with the feature is more valuable than continued informed analysis and commentary without working code. I believe this diagram included in a talk on the Project Coin language change process holds for language changes in Java more generally:

While proposing and commenting can be helpful, the effort required to produce a prototype is disproportionally beneficial and the incremental effort using the prototype has even higher leverage. Experience trumps speculation. And not all efforts lead to positive results; complaining and obstructing alone are rarely helpful contributions.

Just the engineering needed to fully deliver a language change involves many coordinated deliverables even without including documentation, samples and user guides. A consequence of an open style of development is that changes are pushed early, even if not often, and early changes imply the full fit and finish of a final product will of necessity not be present from the beginning. Long digressions on small issues, syntactical or otherwise, are a distraction from the other work that needs to get done.

True participation in a project means participating in the work of the project. The work of a language change involves much more than just discussing syntax. Once a prototype exists, the most helpful contribution is to use the prototype and report experiences using it.

Wednesday Jun 09, 2010

Project Coin: Inducing contributory heap pollution

US patent law defines various kinds of patent infringement, as do other jurisdictions. (I am not a lawyer! This is not legal advice! Check your local listings! Don't kill kittens! Example being used for analogy purposes only!) One can infringe on a patent directly, say, by making, using, selling, offering to sell, or importing a patented widget without a suitable license. A computer scientist looking to infringe might (erroneously) believe the conditions for infringement can be circumvented by applying the familiar technique of adding a level of indirection. For example, one indirection would be selling 90% of the patented widget, leaving the end-user to complete the final 10% and thereby infringe. Such contributory infringement is also verboten. Likewise, providing step-by-step instructions on how to infringe the patent is outlawed as inducing infringement. Putting both techniques together, inducing contributory infringement is also disallowed.

Starting in JDK 5, a compiler must issue mandatory unchecked warnings at sites of possible heap pollution:

Java Language Specification, Third Edition — § Heap Pollution
It is possible that a variable of a parameterized type refers to an object that is not of that parameterized type. This situation is known as heap pollution. This situation can only occur if the program performed some operation that would give rise to an unchecked warning at compile-time.

One case where unchecked warnings occur is a call to a varargs method where the type of the variable argument is not reifiable. That is, where the type information for the parameter is not fully expressible at runtime due to the erasure of generics. Varargs are implemented using arrays and arrays are reified; that is, the component type of an array is stored internally and used when needed for various type checks at runtime. However, the type information stored for an array's component type cannot store the information needed to represent a non-reifiable parameterized type.

The mismatch between reified arrays being used to pass non-reified (and non-reifiable) parameterized types is the basis for the unchecked warnings when such conflicted methods are called. However, in JDK 5, only calling one of these conflicted methods causes a compile-time warning; declaring such a method doesn't lead to any similar warning. This is analogous to the compiler only warning of direct patent infringement, while ignoring or being oblivious to indirect infringement. While the mere existence of a conflicted varargs method does not cause heap pollution per se, its existence contributes to heap pollution by providing an easy way to cause heap pollution to occur and induces heap pollution by offering the method to be called. By this reasoning, if method calls that cause heap pollution deserve a compiler warning, so do method declarations which induce contributory heap pollution.
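A minimal sketch of induced heap pollution along the lines described above; the class and method names are hypothetical:

```java
import java.util.Arrays;
import java.util.List;

public class HeapPollutionDemo {
    // Calling this method draws an unchecked warning: the varargs parameter
    // is implemented as an array of the non-reifiable type List<String>.
    static void polluter(List<String>... lists) {
        Object[] array = lists;           // legal: arrays are covariant
        array[0] = Arrays.asList(42);     // heap pollution; no ArrayStoreException,
                                          // since the array's reified type is just List[]
        try {
            String s = lists[0].get(0);   // compiler-inserted cast to String fails
        } catch (ClassCastException e) {
            System.out.println("ClassCastException: heap pollution detected late");
        }
    }

    public static void main(String[] args) {
        polluter(Arrays.asList("hello"));
    }
}
```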

Additionally, the warnings issued for some calls to varargs methods involving heap pollution are arguably spurious since nothing bad happens. For example, calling various useful helper varargs methods in the platform triggers unchecked warnings, including:

  • java.util.Arrays.asList
  • java.util.Collections.addAll
  • java.util.EnumSet.of

These three methods all iterate over the varargs array pulling out the elements in turn and processing them. If the varargs array is constructed by the compiler using proper type inference, the bodies of the methods won't experience any ClassCastExceptions due to handling of the array's elements. Currently, to eliminate the warnings associated with calling these methods, each call site needs a @SuppressWarnings("unchecked") annotation.

To address these usability issues with varargs, Project Coin accepted simplifying varargs as one of the project's changes. The initial prototype version of this feature, pushed by Maurizio, has several parts:

  • A new mandatory compiler warning is generated on declaration sites of problematic varargs methods that are able to induce contributory heap pollution.

  • The ability to suppress those mandatory warnings at a declaration site using an @SuppressWarnings("varargs") annotation. The warnings may also be suppressed using the -Xlint:-varargs option to the compiler.

  • If the @SuppressWarnings("varargs") annotation is used on a problematic varargs method declaration, the unchecked warnings at call sites of that method are also suppressed.

This prototype will allow experience to be gained with the algorithms to detect and suppress the new mandatory warnings. However, the annotation used to suppress the warnings should be part of the varargs method's contract, denoting that when a compiler-constructed array is passed to the method nothing bad will happen, for a suitable definition of nothing bad. Therefore, an @Documented annotation type needs to be used for this purpose and SuppressWarnings is not @Documented. Additionally, the suppressing annotation for varargs should also be @Inherited so the method implementation restrictions are passed on to subclasses.

Subsequent design discussions about the new annotation type with the properties in question to suppress the varargs warnings as well as criteria for the annotation to be correctly applied can occur on the Project Coin mailing list.
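For reference, the annotation type that eventually filled this role in JDK 7 is java.lang.SafeVarargs, which is @Documented; a sketch of its use on a read-only varargs method:

```java
import java.util.ArrayList;
import java.util.List;

public class SafeVarargsDemo {
    // @SafeVarargs asserts that the method body performs no unsafe
    // operations on its varargs array, suppressing both the
    // declaration-site and call-site warnings.
    @SafeVarargs
    static <T> List<T> listOf(T... elements) {
        List<T> result = new ArrayList<>();
        for (T e : elements) {
            result.add(e);   // elements are only read, never stored unsafely
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(listOf("a", "b", "c"));
    }
}
```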

Friday May 07, 2010

Draft of Restarted "OpenJDK Developers' Guide" available for discussion

I've been working on a restarted version of the "OpenJDK Developers' Guide" and I have a draft far enough along for general discussion. The content of the existing guide is primarily logistical and procedural in nature; in time, I plan to migrate this information to a JDK 7 specific page because many of the details are release-specific. The new guide is more conceptual and once completed is intended to be able to last for several releases without major updating.

The table of contents of draft version 0.775 is:

The full draft is available from here.

The compatibility sections are currently more fully developed than the ones about developing a change. (Long-time readers of this blog will be familiar with earlier versions of some of the material.)

All levels of feedback are welcome, from correcting typos, to stylistic suggestions, to proposals for new sections. Significant feedback should be sent to the project alias for the guide.

Initially, I plan to maintain the guide as an HTML file and publish new versions as needed. Over time, guide maintenance may transition to more formal version control, such as a small Mercurial repository or a wiki.

Monday May 03, 2010

Project Coin: multi-catch and final rethrow

As alluded to as a possibility previously, I'm happy to announce that improved exception handling with multi-catch and final rethrow will be part of an upcoming JDK 7 build. Improved exception handling is joining other Project Coin features available in the repository after successful experiences with a multi-catch implementation developed by Maurizio Cimadamore.

Maurizio's work also revealed and corrected a flaw in the originally proposed static analysis for the set of exceptions that can be rethrown; from the original proposal form for this feature:

[a] final catch parameter is treated as throwing precisely those exception types that

  • the try block can throw,
  • no previous catch clause handles, and
  • is a subtype of one of the types in the declaration of the catch parameter

Consider a final rethrow statement as below where the dynamic class of a thrown exception differs from the static type (due to a cast in this case):

class Neg04 {
    static class A extends Exception {}
    static class B extends Exception {}

    void test(boolean b1, boolean b2) throws B {
        try {
            if (b1) {
                throw new A();
            } else if (b2) {
                throw new B();
            } else {
                throw (Throwable) new Exception();
            }
        } catch (A e) {}
        catch (final Exception e) {
            throw e;
        } catch (Throwable t) {}
    }
}

The set of exceptions thrown by the try block is computed as {A, B, Throwable}; therefore, the set of exceptions that can be rethrown is the set of exceptions from the try block:

  1. minus A, handled by a previous catch clause, giving {B, Throwable}
  2. minus Throwable since Throwable is not a subtype of one of the types declared for the catch parameter (just Exception in this case), leaving only {B}

However, if an Exception is thrown from the try block it should be caught in the "catch(final Exception e)" clause even if the exception is cast to Throwable since catch clauses work based on the runtime class of the exceptions being thrown.

To address this, the third clause is changed to

  • is a subtype/supertype of one of the types in the declaration of the catch parameter

More formally, this clause covers computing a join over the set of thrown exceptions, eliminating subtypes. In the example above {Throwable} is computed as the set of exceptions being throwable from the try block. This is then intersected with the exceptions that can be caught by the catch block, resulting in {Exception}, a properly sound result.

Very general exception types being thrown by a try block would reduce the utility of multi-catch since only imprecise information would be available. Fortunately, from analyzing the JDK sources, throwing a statically imprecise exception seems rare, indicating multi-catch with the amended specification should still be very helpful in practice.
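The amended behavior can be exercised with a well-formed variant of the analysis: a multi-catch clause alongside a precise rethrow (a sketch; the class and method names are illustrative):

```java
public class MultiCatchDemo {
    static class A extends Exception {}
    static class B extends Exception {}

    // Precise rethrow: although e is declared as Exception, the compiler
    // tracks that only A or B can reach this catch, so the throws clause
    // may list A and B rather than Exception.
    static void test(boolean b) throws A, B {
        try {
            if (b) { throw new A(); } else { throw new B(); }
        } catch (final Exception e) {
            throw e;
        }
    }

    static void handle(boolean b) {
        try {
            test(b);
        } catch (A | B e) {               // multi-catch: one handler, two types
            System.out.println("caught " + e.getClass().getSimpleName());
        }
    }

    public static void main(String[] args) {
        handle(true);
        handle(false);
    }
}
```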

Today the adventurous can apply a changeset to a copy of the JDK 7 langtools repository and do a build to get a compiler supporting this feature. Otherwise, following the integration process, improved exception handling will appear in the promoted JDK 7 builds in due course.

Friday Mar 12, 2010

Last Round Compiling

As of build 85 of JDK 7, bug 6634138 "Source generated in last round not compiled" has been fixed in javac. Previously, source code generated in a round of annotation processing where RoundEnvironment.processingOver() was true was not compiled. With the fix, source generated in the last round is compiled; as intended, such source still does not undergo annotation processing since processing is over. The fix has also been applied to OpenJDK 6 build 19.

Annotation Processor SourceVersion

In annotation processing there are three distinct roles: the author of the annotation types, the author of the annotation processor, and the client of the annotations. The third role includes the responsibility to configure the compiler correctly, such as setting the source, target, and encoding options and setting the source and class file destination for annotation processing. The author of the annotation processor shares a related responsibility: properly returning the source version supported by the processor.

Most processors can be written against a particular source version and always return that source version, such as by including a @SupportedSourceVersion annotation on the processor class. In principle, the annotation processing infrastructure could tailor the view of newer-than-supported language constructs to be more compatible with existing processors. Conversely, processors have the flexibility to implement their own policies when encountering objects representing newer-than-supported structures. In brief, by extending version-specific abstract visitor classes, such as AbstractElementVisitor6 and AbstractTypeVisitor6, the visitUnknown method will be called on entities newer than the version in question.

Just as regression tests inside the JDK itself should by default follow a dual policy of accepting the default source and target settings rather than setting them explicitly like other programs, annotation processors used for testing with the JDK should generally support the latest source version and not be constrained to a particular version. This allows any issues or unexpected interactions of new features to be found more quickly and keeps the regression tests exercising the most recent code paths in the compiler.
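A processor following this policy can override getSupportedSourceVersion to return SourceVersion.latest() rather than fixing a version with @SupportedSourceVersion; a minimal sketch:

```java
import java.util.Set;
import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.RoundEnvironment;
import javax.annotation.processing.SupportedAnnotationTypes;
import javax.lang.model.SourceVersion;
import javax.lang.model.element.TypeElement;

// A processor that supports the latest source version of the running
// platform, per the recommended policy for JDK regression tests.
@SupportedAnnotationTypes("*")
public class LatestVersionProcessor extends AbstractProcessor {
    @Override
    public SourceVersion getSupportedSourceVersion() {
        return SourceVersion.latest();   // tracks the platform, not a fixed release
    }

    @Override
    public boolean process(Set<? extends TypeElement> annotations,
                           RoundEnvironment roundEnv) {
        return false;   // claim nothing; real processing would go here
    }

    public static void main(String[] args) {
        System.out.println(new LatestVersionProcessor().getSupportedSourceVersion()
                           == SourceVersion.latest());
    }
}
```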

This dual policy is now consistently implemented in the langtools regression tests as of build 85 of JDK 7 (6926699).

Thursday Mar 11, 2010

An Assertive Quality in Langtools

With a duo of fixes in JDK 7 build 85, one by Jon (6927797) and another by me (6926703), the langtools repository has reached another milestone in testing robustness: all the tests pass with assertions (-ea) and system assertions (-esa) enabled. This adds to other useful langtools testing properties, such as being able to successfully run in the speedy same vm testing mode.

Jon's fix was just updating a test so that some code would always be run with assertions disabled while my fix corrected an actual buggy assert I included in apt. Addressing such problems helps simplify analyzing test results; if there is a failure, there is a problem!

These fixes have also been applied in the forthcoming OpenJDK 6 build 19 so it too will have the same assertive testing quality.

Thursday Feb 25, 2010

Notions of Floating-Point Equality

Moving on from identity and equality of objects, different notions of equality are also surprisingly subtle in some numerical realms.

As comes up from time to time and is often surprising, the "==" operator defined by IEEE 754 and used by Java for comparing floating-point values (JLSv3 §15.21.1) is not an equivalence relation. Equivalence relations satisfy three properties, reflexivity (something is equivalent to itself), symmetry (if a is equivalent to b, b is equivalent to a), and transitivity (if a is equivalent to b and b is equivalent to c, then a is equivalent to c).

The IEEE 754 standard defines four possible mutually exclusive ordering relations between floating-point values:

  • equal

  • greater than

  • less than

  • unordered

A NaN (Not a Number) is unordered with respect to every floating-point value, including itself. This was done so that NaNs would not quietly slip by without due notice. Since (NaN == NaN) is false, the IEEE 754 "==" relation is not an equivalence relation since it is not reflexive.
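The failure of reflexivity is easy to see in a few lines of Java (class and method names are illustrative):

```java
// NaN is unordered with respect to everything, including itself,
// so the IEEE 754 == relation fails reflexivity.
public class NanReflexivity {
    static boolean reflexive(double d) {
        return d == d; // false only for NaN
    }

    public static void main(String... args) {
        System.out.println(reflexive(1.0));        // true
        System.out.println(reflexive(Double.NaN)); // false
        // Every ordered comparison against NaN is false too:
        System.out.println(Double.NaN < 0.0 || Double.NaN == 0.0 || Double.NaN > 0.0); // false
    }
}
```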

An equivalence relation partitions a set into equivalence classes; each member of an equivalence class is "the same" as the other members of the class for the purposes of that equivalence relation. In terms of numerics, one would expect equivalent values to result in equivalent numerical results in all cases. Therefore, the size of the equivalence classes over floating-point values would be expected to be one; a number would only be equivalent to itself. However, in IEEE 754 there are two zeros, -0.0 and +0.0, and they compare as equal under ==. For IEEE 754 addition and subtraction, the sign of a zero argument can at most affect the sign of a zero result. That is, if the sum or difference is not zero, a zero of either sign doesn't change the result. If the sum or difference is zero and one of the arguments is zero, the other argument must be zero too:

  • -0.0 + -0.0 ⇒ -0.0

  • -0.0 + +0.0 ⇒ +0.0

  • +0.0 + -0.0 ⇒ +0.0

  • +0.0 + +0.0 ⇒ +0.0

Therefore, under addition and subtraction, both signed zeros are equivalent. However, they are not equivalent under division since 1.0/-0.0 ⇒ -∞ but 1.0/+0.0 ⇒ +∞ and -∞ and +∞ are not equivalent.1
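The two behaviors of the signed zeros, equal under == and addition but distinguished by division, can be checked directly (class name is illustrative):

```java
// The two zeros compare equal under ==, but division distinguishes them.
public class SignedZeros {
    public static void main(String... args) {
        System.out.println(-0.0 == +0.0);  // true
        System.out.println(-0.0 + +0.0);   // 0.0 (positive zero)
        System.out.println(1.0 / -0.0);    // -Infinity
        System.out.println(1.0 / +0.0);    // Infinity
    }
}
```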

Despite the rationales for the IEEE 754 specification to not define == as an equivalence relation, there are legitimate cases where one needs a true equivalence relation over floating-point values, such as when writing test programs, and cases where one needs a total ordering, such as when sorting. In my numerical tests I use a method that returns true for two floating-point values x and y if:
((x == y) &&
(if x and y are both zero they have the same sign)) ||
(x and y are both NaN)
Conveniently, this is just computed by using (Double.compare(x, y) == 0). For sorting or a total order, the semantics of Double.compare are fine; NaN is treated as being the largest floating-point value, greater than positive infinity, and -0.0 < +0.0. That ordering is the total order used by java.util.Arrays.sort(double[]). In terms of semantics, it doesn't really matter where the NaNs are ordered with respect to other values as long as they are consistently ordered that way.2
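The difference between the == relation and the Double.compare-based equivalence is visible in a short program (class and method names are illustrative):

```java
// Double.compare(x, y) == 0 yields a true equivalence relation:
// NaN is equivalent to itself, and -0.0 is distinguished from +0.0.
public class FpEquivalence {
    static boolean equivalent(double x, double y) {
        return Double.compare(x, y) == 0;
    }

    public static void main(String... args) {
        System.out.println(equivalent(Double.NaN, Double.NaN)); // true, unlike ==
        System.out.println(equivalent(-0.0, +0.0));             // false, unlike ==
        System.out.println(equivalent(1.0, 1.0));               // true
    }
}
```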

These subtleties of floating-point comparison were also germane on the Project Coin mailing list last year; the definition of floating-point equality was discussed in relation to adding support for relational operations based on a type implementing the Comparable interface. That thread also broached the complexities involved in comparing BigDecimal values.

The BigDecimal class has a natural ordering that is inconsistent with equals;3 that is, for at least some inputs bd1 and bd2, bd1.compareTo(bd2) == 0 has a different boolean value than bd1.equals(bd2). In BigDecimal, the same numerical value can have multiple representations, such as (100 × 10⁰) versus (10 × 10¹) versus (1 × 10²). These are all "the same" numerically (compareTo == 0) but are not equals with each other. Such values are not equivalent under the operations supported by BigDecimal; for example (100 × 10⁰) has a scale of 0 while (1 × 10²) has a scale of -2.4
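The compareTo/equals mismatch is easy to demonstrate (class and variable names are illustrative):

```java
import java.math.BigDecimal;

// Two representations of the same numerical value: equal under compareTo,
// unequal under equals because their scales differ.
public class BigDecimalNotions {
    public static void main(String... args) {
        BigDecimal hundred = new BigDecimal("100");  // 100 × 10^0, scale 0
        BigDecimal oneE2   = new BigDecimal("1E+2"); // 1 × 10^2, scale -2
        System.out.println(hundred.compareTo(oneE2) == 0); // true: same value
        System.out.println(hundred.equals(oneE2));         // false: different scales
        System.out.println(hundred.scale());               // 0
        System.out.println(oneE2.scale());                 // -2
    }
}
```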

While subtle, the different notions of numerical equality each serve a useful purpose and knowing which notion is appropriate for a given task is an important factor in writing correct programs.

1 There are two zeros in IEEE 754 because there are two infinities. Another way to extend the real numbers to include infinity is to have a single (unsigned) projective infinity. In such a system, there is only one conceptual zero. Early x87 chips before IEEE 754 was standardized had support for both signed (affine) and projective infinities. Each style of infinity is more convenient for some kinds of computations.

2 Besides the equivalence relation offered by Double.compare(x, y), another equivalence relation can be induced by either of the bitwise conversion routines, Double.doubleToLongBits or Double.doubleToRawLongBits. The former collapses all bit patterns that encode a NaN value into a single canonical NaN bit pattern, while the latter can let through a platform-specific NaN value. Implementation freedoms allowed by the original IEEE 754 standard have allowed different processor families to define different conventions for NaN bit patterns.

3 I've at times considered whether it would be worthwhile to include an "@NaturalOrderingInconsistentWithEquals" annotation in the platform to flag the classes that have this quirk. Such an annotation could be used by various checkers to find potentially problematic uses of such classes in sets and maps.

4 Building on wording developed for the BigDecimal specification under JSR 13, when I was editor of the IEEE 754 revision, I introduced several pieces of decimal-related terminology into the draft. Those terms include preferred exponent, analogous to the preferred scale from BigDecimal, and cohort, "The set of all floating-point representations that represent a given floating-point number in a given floating-point format." Put in terms of BigDecimal, the members of a cohort would be all the BigDecimal numbers with the same numerical value, but distinct pairs of scale (negation of the exponent) and unscaled value.

Thursday Feb 11, 2010

Project Coin: Taking a Break for Strings in Switch

The initial way a string switch statement was implemented in JDK 7 was to desugar a string switch into a series of two switch statements, the first switch mapping from the argument string's hash code to the ordinal position of a matching string label, followed by a second switch mapping from the computed ordinal position to the code to be executed. Before this approach was settled on, Jon, Maurizio, and I had extensive discussions about alternative implementation techniques. One approach from Maurizio that we seriously considered employed labeled break statements (in lieu of unavailable goto statements) to allow a string switch to be desugared into a single integer switch statement. In this approach as well, the basis for the integer switch was built around the strings' hash codes.

One kind of complication in desugaring string switch statements stems from irregular control flow, such as when control transfers to one label, code is executed, and then control falls through to the code under the next label rather than exiting the switch statement after the initial code execution. When using hash codes to identify the string being switched on, another class of complications stems from dealing with the possibility of hash collisions, the situation where two distinct strings have the same hash code. A string can be constructed to have any integer hash code, so collisions are always a possibility. Since many strings have the same hash code, it is not sufficient to verify that the string being switched on merely has the same hash value as a string case label; the string being switched on must be checked for equality with the case label string. Furthermore, when two string case labels have the same hash value, a string being switched on with a matching hash code must be checked for equality potentially against both case labels.
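The two-switch approach and its collision handling can be sketched by hand (this is an illustration, not javac's actual generated code; the method name and ordinals are mine):

```java
// Hand-written sketch of the two-switch desugaring. The first switch maps
// the string's hash code to the ordinal of the matching label, checking
// equality to guard against collisions ("azvl" and "bmjrabc" collide);
// the second switch dispatches on that ordinal.
public class TwoSwitchSketch {
    static String describe(String s) {
        int index = -1;
        switch (s.hashCode()) {
        case 3010735: // shared by "azvl" and "bmjrabc"
            if (s.equals("azvl")) index = 0;
            else if (s.equals("bmjrabc")) index = 1;
            break;
        case 101574: // "foo".hashCode()
            if (s.equals("foo")) index = 2;
            break;
        }
        switch (index) {
        case 0:  return "got azvl";
        case 1:  return "got bmjrabc";
        case 2:  return "got foo";
        default: return "no match";
        }
    }

    public static void main(String... args) {
        System.out.println(describe("foo"));     // got foo
        System.out.println(describe("bmjrabc")); // got bmjrabc
        System.out.println(describe("nope"));    // no match
    }
}
```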

While relying on hash codes to implement string switch is contentious with some, the hashing algorithm of java.lang.String is an extremely stable part of the platform and there would be too much behavioral compatibility risk in changing it. Therefore, the stability of the algorithm can be relied on as a resource in possible string switch implementations. Switching on the hash code in the desugaring confers a number of benefits. First, most immediately the hash code maps the string to an integer value, matching the type required for the existing switch statement. Second, switching on the hash code of a string bounds the worst case behavior. The simplest way to see if a chosen string is in a set of other strings, such as the set of string case labels, would be to compare the chosen string to each of the strings in the set. This could be expensive since the chosen string would need to be traversed many times, potentially once for each case label. The hash code of a string is typically cached after it is first computed. Therefore, when switching on the hash code, the chosen string is not expected to be traversed more than twice (once to compute the hash code if not cached, again to compare against strings from the set of strings with the same hash value — a set usually with only one element).

If instead of a series of two desugared switch statements, only a single switch statement were desired in the desugaring, extra synthetic state variables could be used to contend with hash collisions, fall-through control flows, and default cases, as described in the Project Coin strings in switch proposal. A goto construct could be used to eliminate state variables, but goto is neither available in the source language nor in javac's intermediate representation. However, by a novel use of nested labeled breaks, a single switch statement can be used in the desugaring without introducing additional synthetic control variables.

Consider the strings switch statement in the method f below

static void f(String s) { // Original sugared code
  switch (s) {
    case "azvl":
      System.out.println("azvl: "+s); // fallthrough
    case "quux":
      System.out.println("Quux: "+s); // fallthrough
    case "foo":
      int i = 5; //fallthrough
    case "bar":
      System.out.println("FooOrBar " + (i = 6) + ": "+s);
      break;
    case "bmjrabc": // same hash as "azvl"
      System.out.println("bmjrabc: "+s);
      break;
    case "baz":
      System.out.println("Baz " + (i = 7) + ": "+s); // fallthrough
    default:
      System.out.println("default: "+s);
  }
}

and the following desugaring procedure. Create a labeled block to enclose the entire switch statement. Within that enclosing block, create a series of nested blocks, one for each case label, including a default option, if any. In the innermost block, have a switch statement based on the hash code of the strings in the original case labels. For each hash value present in the set of case labels, have an if-then-else chain comparing the string being switched on to the cases having that hash value, breaking to the corresponding label if there is a match. If a match does not occur, if the original switch has a default option, a break should transfer control to the label for the default case; if the original case does not have a default option, a break should occur to the switch exit label.

If a hash value only corresponds to a single case label, the sense of the equality/inequality comparison in the desugared code can be tuned for branch prediction purposes. After the block for a case label is closed, the code for that alternative appears. In the original switch code, there are two normal completion paths of interest: the code for an alternative is run and execution falls through to the next alternative or there is an unlabeled break to exit the switch. In the desugaring, these paths are represented by execution falling through to code for the next alternative and by a labeled break to the label synthesized for the switch statement exit. The preservation of fall through semantics is possible because the code interspersed in the nested labeled statements appears in the same textual order as in the original "sugared" string switch. Local variables can be declared in the middle of a switch block. In desugared code, such variable declarations are hoisted out to reside in the block for the entire switch statement; the declaration of the variable and its uses are then renamed to a synthetic name to avoid changing the meaning of names in other scopes. Sample results of this procedure are shown below.

static void f(String s) { // Desugared code
  $exit: {
    int i$foo = 0;
    $default_label: {
      $baz: {
        $bmjrabc: {
          $bar: {
            $foo: {
              $quux: {
                $azvl: {
                  switch(s.hashCode()) { // cause NPE if s is null
                    case 3010735: // "azvl" and "bmjrabc".hashCode()
                      if (s.equals("azvl"))
                        break $azvl;
                      else if (s.equals("bmjrabc"))
                        break $bmjrabc;
                      else
                        break $default_label;
                    case 3482567: // "quux".hashCode()
                      if (!s.equals("quux")) // inequality compare
                        break $default_label;
                      break $quux;
                    case 101574: // "foo".hashCode()
                      if (s.equals("foo")) // equality compare
                        break $foo;
                      break $default_label;
                    case 97299:  // "bar".hashCode()
                      if (!s.equals("bar"))
                        break $default_label;
                      break $bar;
                    case 97307: // "baz".hashCode()
                      if (!s.equals("baz"))
                        break $default_label;
                      break $baz;
                    default:
                      break $default_label;
                  }
                } //azvl
                System.out.println("azvl: "+s); // fallthrough
              } //quux
              System.out.println("Quux: "+s); // fallthrough
            } //foo
            i$foo = 5; //fallthrough
          } //bar
          System.out.println("FooOrBar " + (i$foo = 6) + ": "+s);
          break $exit;
        } //bmjrabc
        System.out.println("bmjrabc: " + s);
        break $exit;
      } //baz
      System.out.println("Baz " + (i$foo = 7) + ": "+s); // fallthrough
    } //default_label
    System.out.println("default: "+s);
  } //exit
}

While the series of two switches and the labeled break-based desugaring were both viable alternatives, we chose the series of two switches since the transformation seemed more localized and straightforward. The two-switch solution also has simpler interactions with debuggers. If string switches become widely used, profiling information can be used to guide future engineering efforts to optimize their performance.

Wednesday Feb 03, 2010

java.util.Objects and friends

A small project I worked on during JDK 7 milestones 05 and 06 was the introduction of a java.util.Objects class to serve as a home for static utility methods operating on general objects (6797535, 6889858, 6891113). Those utilities include null-safe or null-tolerant methods for comparing two objects, computing the hash code of an object, and returning a string for an object, operations generally relating to the methods defined on java.lang.Object.

The code to implement each of these methods is very short, so short it is tempting to not write tests when adding such methods to a code base. But the methods aren't so simple that mistakes cannot be made; replacing such helper methods with a common, tested version from the JDK would be a fine refactoring.

The current set of public methods in java.util.Objects is:

  • static boolean equals(Object a, Object b)

  • static boolean deepEquals(Object a, Object b)

  • static <T> int compare(T a, T b, Comparator<? super T> c)

  • static int hashCode(Object o)

  • static int hash(Object... values)

  • static String toString(Object o)

  • static String toString(Object o, String nullDefault)

  • static <T> T nonNull(T obj)

  • static <T> T nonNull(T obj, String message)

The first two methods define two equivalence relations over object references. Unlike the equals method on Object, the equals(Object a, Object b) method handles null values. That is, true is returned if both arguments are null or if the first argument is non-null and a.equals(b) returns true. A method with this functionality is an especially common utility method to write; there are several versions of it in the JDK, so I expect the two-argument equals will be one of the most heavily used methods in the Objects class.
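A null-safe equals along these lines can be written in one expression (a sketch; the class name is illustrative):

```java
// Null-safe equality along the lines described above: true if both are
// null, or if the first is non-null and a.equals(b) holds.
public class NullSafeEquals {
    static boolean equals(Object a, Object b) {
        return (a == b) || (a != null && a.equals(b));
    }

    public static void main(String... args) {
        System.out.println(equals(null, null)); // true
        System.out.println(equals("x", null));  // false
        System.out.println(equals(null, "x"));  // false
        System.out.println(equals("x", "x"));   // true
    }
}
```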

The second equivalence relation is defined by the deepEquals method. The equals and deepEquals relations can differ for object arrays; see the javadoc for details. Equality implies deep equality, but the converse is not true. For example, in the program below arrays c and d are deep-equals but not equals.

public class Test {
   public static void main(String... args) {
       Object common = "A string in common.";
       Object[] a = {common};
       Object[] b = {common};
       Object[] c = {a};
       Object[] d = {b};
       // c and d are deepEquals, but not equals
       System.out.println(java.util.Objects.deepEquals(c, d)); // true
       System.out.println(java.util.Objects.equals(c, d));     // false
   }
}

A third equivalence relation is the object identity relation defined by the == operator on references, but since that is already built into the language, no library support is needed. Identity equality implies equals equality and deepEquals equality.

Next, Objects includes a null-tolerant Comparator-style method which first compares for object identity using == before calling the provided Comparator. While Comparable classes aren't as widely available as the methods inherited from java.lang.Object, Comparable is a very useful and frequently implemented interface.
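Because identical references are compared with == first, Objects.compare treats two nulls as equal without ever consulting the Comparator (the comparator here is illustrative):

```java
import java.util.Comparator;
import java.util.Objects;

// Objects.compare returns 0 for identical references (including both null)
// before delegating to the supplied Comparator.
public class CompareDemo {
    static final Comparator<String> BY_LENGTH = new Comparator<String>() {
        public int compare(String a, String b) {
            return a.length() - b.length();
        }
    };

    public static void main(String... args) {
        System.out.println(Objects.compare(null, null, BY_LENGTH)); // 0, comparator not called
        System.out.println(Objects.compare("ab", "c", BY_LENGTH));  // positive: "ab" is longer
    }
}
```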

Objects has two hash-related methods. The first is a null-handling hash method which assigns null a zero hash code and the second is a utility method for implementing a reasonable hash function for a class just by passing in the right list of values.
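Passing a class's significant fields to the varargs hash method gives a reasonable hashCode in one line; a sketch of the idiom (the Point class is illustrative):

```java
import java.util.Objects;

// Implementing hashCode by passing the significant fields to Objects.hash;
// Objects.equals keeps the equals method null-tolerant for the label field.
public class Point {
    private final int x, y;
    private final String label;

    Point(int x, int y, String label) {
        this.x = x; this.y = y; this.label = label;
    }

    @Override
    public int hashCode() {
        return Objects.hash(x, y, label); // label may be null
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y && Objects.equals(label, p.label);
    }
}
```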

The toString methods provide null handling support, in case of a null argument either returning "null" or the provided default string.

Finally, there are two methods to more conveniently handle null checks, intended to be useful when validating method and constructor parameters.
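A sketch of such null-check helpers and their use in a constructor follows; the class is illustrative (in the final JDK 7 API this method ended up as Objects.requireNonNull):

```java
// Sketch of null-check helpers for validating method and constructor
// parameters: return the argument if non-null, else throw NPE.
public class NullChecks {
    static <T> T nonNull(T obj) {
        if (obj == null) throw new NullPointerException();
        return obj;
    }

    static <T> T nonNull(T obj, String message) {
        if (obj == null) throw new NullPointerException(message);
        return obj;
    }

    private final String name;

    NullChecks(String name) { // typical use: validating a constructor parameter
        this.name = nonNull(name, "name must not be null");
    }

    public static void main(String... args) {
        System.out.println(new NullChecks("ok").name); // ok
        try {
            new NullChecks(null);
        } catch (NullPointerException e) {
            System.out.println("NPE: " + e.getMessage()); // NPE: name must not be null
        }
    }
}
```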

Taken together, the methods in Objects should lessen the pain and tedium of null handling until more systematic approaches are used.

The Objects API was shaped by discussion in various threads on core-libs-dev in September and October 2009. Several other bugs were also fixed as a result of those discussions, one adding a set of compare methods for primitive types (6582946) and another to consistently define the hash codes of the wrapper classes (4245470).

Tuesday Feb 02, 2010

JDK 7: New Component Delivery Model Delivered

Thanks to Kelly, the new component delivery model for jaxp and jax-ws is now available in both JDK 7, as of build 72 of milestone 5, and OpenJDK 6, coming in build 18 (6856630).

As described previously, the JDK build no longer tracks a copy of the jaxp and jax-ws sources under version control. Instead source bundles from the upstream teams are used. The file in the jaxp repository contains the default URL from which the source bundle is downloaded as well as the expected checksum for that file. The analogous setup is used for jax-ws in its repository. To avoid downloading another copy of a bundle or to try out an alternate bundle, several variables can be set in the ant build of one of the repositories. For jaxp,

jaxp-repo-directory$ ant -f build.xml \
    -Ddrops.master.copy.base=path-to-drop-directory \
    ...
If changes local to the JDK are needed, patches can be applied from the new patches directory in the two repositories. For example, patches are a mechanism that could be used to deploy security fixes until a new source bundle with those fixes was externally available.

With this new delivery model, I look forward to low-overhead and coordinated updates to jaxp and jax-ws in OpenJDK 6 and JDK 7.

A possible future consolidation would fold the build logic in the now vestigial jaxp and jax-ws repositories into the main jdk repository.

Monday Feb 01, 2010


Java developers are familiar with dynamic linking. Class files are a kind of intermediate format with symbolic references. At runtime, a class loader will load, link, and initialize new types as needed. Typically the full classpath a class loader uses for searching will have several logically distinct sub components, including the boot classpath, endorsed standards, extension directories, and the user-specified classpath. The manifest of a jar file can also contain Class-Path entries. Together, these paths delineate the boundaries of "jar hell."

For many years, modern Unix systems have also supported dynamic linking for C programs. Instead of a classpath, there is a runpath of locations to look to for resolving symbolic references. Like the classpath, the full runpath has multiple components, including a default component for system facilities (analogous to boot classpath), a component stored in a shared object (analogous to jar file Class-Path entries), as well as an end-user specified component (analogous to the -classpath command line option or CLASSPATH environment variable). The details of linking on Solaris are well explained in Sun's Linker and Libraries Guide. Other contemporary Unix platforms like Linux and MacOS have similar facilities, although the details of the various commands differ.

One of the tasks the JDK's launcher has handled is setting a suitable runpath for the JVM and platform libraries. Historically a runpath was needed to link in the desired JVM, such as client or server, and other system libraries. The client JVM and the server JVM are separate shared objects which support the same set of interfaces; by interpreting the command line flags the launcher selects which JVM to link in. Operationally, the linking is initiated by the Unix dlopen library call. So that the caller of the java command did not need to set LD_LIBRARY_PATH, after selecting the JVM to run the launcher would modify the LD_LIBRARY_PATH environment variable by prepending the path to the JVM shared object (and paths to other directories with JDK native system libraries). However, the runtime linker only reads the value of LD_LIBRARY_PATH when a process starts. Therefore, to have the new value take effect, the launcher would call an exec-family system call to start the process anew. Such re-execing to set LD_LIBRARY_PATH is not recommended practice on Unix systems.

The re-execing to set LD_LIBRARY_PATH had a number of unpleasant consequences in the launcher code. There is only a narrow path to pass information between the exec parent and the exec child, such as by modifying environment variables, which is generally discouraged. To decide whether or not an exec was needed, the launcher checked whether the prefix of LD_LIBRARY_PATH had the expected value; if it did, no exec was done for that purpose and infinite exec loops were avoided. Presetting LD_LIBRARY_PATH to the right value before calling java could thus be used to suppress the exec. There were also complications with correctly supporting multiple LD_LIBRARY_PATH variables on Solaris1 and handling suid java executions on Linux.2

The proper way to accommodate such dependencies is not to set LD_LIBRARY_PATH but rather to use the runtime linker facilities analogous to jar file Class-Path entries; the facility is the $ORIGIN dynamic string token for the runtime linker. As the name implies, $ORIGIN is expanded to the path directory of the object file in question; thus relative paths to other directories can be specified. Therefore, as long as the directory structure of the JDK and JRE are known, $ORIGIN can be used to record any necessary dependencies.

For some time, the JDK build has actually used $ORIGIN in creating its native libraries. Therefore, it may have been the case that LD_LIBRARY_PATH was not actually needed. However, verifying that LD_LIBRARY_PATH was not actually needed would require building an exec-free JDK on all supported Unix platforms and running tests that exercise all the libraries in the directories no longer added to LD_LIBRARY_PATH. The engineering for Kumar's purge of execing for LD_LIBRARY_PATH was generally straightforward: deleting the LD_LIBRARY_PATH-related code in the Unix java_md.c file and doing builds on all platforms. Most of the effort of getting this fix back involved running tests to verify everything still worked. The testing revealed an unneeded, troublesome symlink that was removed at the same time LD_LIBRARY_PATH usage was purged.

While the launcher no longer execs to set the LD_LIBRARY_PATH, there are still cases where an exec will occur for other reasons. If the java command is requested to change data models using the -d32 or -d64 flag, that is, a 32-bit java command is asked to run a 64-bit JVM or vice versa, an exec is needed to effect the change. Also, multiple JRE support, where a different version is requested via the -version:foo flag, will also cause an exec if a different Java version needs to be run. However, before Kumar's fix the common case was that the launcher would exec once; now the common case is that the launcher will exec zero times.3

I'm very happy this messy use of LD_LIBRARY_PATH has finally been removed in JDK 7. The removal makes the launcher code both simpler and more maintainable. Unless your use of java relies on the number of execs that occur, the change should be largely transparent, other than startup being marginally faster. One situation to be aware of is launching a LD_LIBRARY_PATH-free JDK 7 java command from a JDK 6 or earlier java process. If the LD_LIBRARY_PATH variable of the older JDK is not cleared, it can affect the linking of the JDK 7 process.

1 Since Solaris 7, that OS line has supported three LD_LIBRARY_PATH variables:

  • LD_LIBRARY_PATH_32: if set, overrides LD_LIBRARY_PATH for 32-bit processes.

  • LD_LIBRARY_PATH_64: if set, overrides LD_LIBRARY_PATH for 64-bit processes.

  • LD_LIBRARY_PATH: used by both 32-bit and 64-bit processes if not overridden by a data model specific variable.

On Solaris, back in JDK 1.4.2 I fixed the launcher to properly take into account all three variables (4731671); on re-exec the data model specific environment variable is unset and LD_LIBRARY_PATH contains the old data model specific value prepended with the JDK system paths. Tests to verify all this used to live in and around test/tools/launcher/, but they have thankfully been deleted as they are no longer relevant.

2 For suid or sgid binaries, LD_LIBRARY_PATH is handled differently to avoid security problems. While the Solaris runtime linker applies more scrutiny to LD_LIBRARY_PATH in this case, on Linux glibc sets LD_LIBRARY_PATH to the empty string. Since the empty string will not contain the expected JDK system directories, the prefix-checking logic detected this case to avoid an infinite exec loop (4745674). Running java suid or sgid isn't necessarily recommended, but it is possible. To actually resolve linking dependencies for such binaries, OS-specific configuration may be needed to add JDK directories to the set of trusted paths.

3 Before my batch of launcher fixes in JDK 1.4.2, the number of execs was even more varied. Specifying a different data model would exec twice, once to change the data model and again to set the LD_LIBRARY_PATH for that data model. From JDK 1.4.2 until the purge of LD_LIBRARY_PATH, the launcher used a single exec to set the LD_LIBRARY_PATH to the target data model (4492822).

Monday Jan 25, 2010

What is the launcher?

One surprisingly tricky piece of the Java platform is the launcher, the set of C code that uses the JNI invocation API to get the JVM started and begin running the main class. While conceptually simple, the launcher is complicated by straddling the boundary between the host system and the JVM, often wrestling with native platform issues like thread configuration that need to be managed before starting the JVM. The launcher's tasks include selecting which VM to run (client, server, etc.) and running in the requested data model, 32-bit or 64-bit.

In the jdk Mercurial repository, the source code of the launcher is primarily composed of:

  • src/share/bin/java.{c, h}

  • Other files in the src/share/bin directory, including the Java launcher infrastructure utilities, jli_util.{c, h}.

  • src/solaris/bin/java_md.{c, h} (covers both Solaris and Linux using #defines)

  • src/windows/bin/java_md.{c, h} (covers various and sundry versions of MS Windows)

The launcher code is used to build the executables in the JDK bin directory; every invocation of java and commands like javac first executes through the launcher. Consequently mistakes in the launcher can cause severe build problems and cross-platform testing is especially important. On doing numerical work, Prof. Kahan has advised "The best you can hope for is that if you do your job very well, no one will notice" and working on the launcher has a similar flavor; you don't want your work to get noticed!

From JDK 1.4.2 through JDK 6 I was the lead launcher maintainer, amongst other responsibilities. When I first took over launcher maintenance, I introduced regression tests and performed several rounds of refactoring. Over time, my launcher activities shifted to reviewing and advising others on their launcher-related projects including:

  • In JDK 5, multiple JRE support (java -version:foo to invoke Java version foo from some other version) and ergonomics to provide better out-of-the-box tuned performance.

  • In JDK 6, the splashscreen functionality and classpath wildcards.

Since JDK 6 first shipped, I've happily handed over the reins of primary launcher care and feeding to Kumar, while still being involved in code reviews and writing the occasional blog entry. Kumar moved common functionality of the launcher into a shared library to make the source more robust and speed up the builds.

In the near future I'll be writing about a long-standing launcher flaw concerning the Unix exec system call and LD_LIBRARY_PATH Kumar has fixed in JDK 7.

Monday Nov 30, 2009

Project Coin: Post-Devoxx Update, closures and exception handling

As has been announced recently at Devoxx and covered in various places, including threads on the coin-dev mailing list, Mark Reinhold made several announcements about JDK 7 at this year's Devoxx:

  1. JDK 7 will have a form of closures.

  2. The JDK 7 schedule is being extended to fall 2010.

On the first announcement, the coin-dev list is not the appropriate forum to discuss closures in Java. Closures are hereby decreed as off-topic for coin-dev.

Mark's blog entry "Closures for Java" invites those with an informed opinion to participate in the current discussion; watch Mark's blog for news about creation of a new list or project, etc., to host this closures effort.

On the second announcement, while the JDK 7 schedule has been extended, many of the current final five (or so) Project Coin features have not yet been fully implemented, specified, and tested. Therefore, there will not be a general reassessment of Project Coin feature selection or another call for proposals in JDK 7. The final five (or so) proposals remain selected for inclusion in JDK 7 and work will continue to complete those features. However, given its technical merit and the possibility of providing useful infrastructure for ARM, improved exception handling is now being reconsidered for inclusion in JDK 7. No other "for further consideration" proposal is under reconsideration.

Wednesday Nov 18, 2009

Project Coin at Devoxx 2009

Wednesday evening local time I gave a talk about Project Coin at Devoxx 2009. The video of the talk will be available on Parleys in due course; in the mean time, my slides are online. Tempting the wrath of the demo gods, I enjoyed showing the support Project Coin features have today in a developer build of NetBeans.

Project Coin: Milestone 5 Language Features in NetBeans

To go along with the language changes available in JDK 7 milestone 5, the NetBeans team has created a developer build of NetBeans supporting the same set of language changes, including improved integer literals, the diamond operator, and strings in switch.

In addition to just accepting the new syntax, the NetBeans build has some deeper support too. For example, when auto-completing on a constructor with type arguments, the diamond operator is offered as a completion. To see what bounds were computed for a diamond instance, you can just hit Ctrl and click on the constructor; the bounds will appear in a pop-up. To encourage use of strings in switch, NetBeans will recognize the pattern of "if (s.equals("foo")) {...} else if (s.equals("bar")) ..." and offer to transform that into a string switch; just click on the "Convert to switch" lightbulb on the left hand side.

This NetBeans build with Coin support is based on NetBeans 6.8, but when 6.8 ships later this year it will not include support for Project Coin features. Project Coin support will be included in subsequent NetBeans releases.



