Thursday Sep 24, 2009

JSR 199 meets JSR 203

The Compiler API, JSR 199, added in JDK 6, provided the ability to invoke a Java compiler via an API. Now, in JDK 7, there is a new feature, More New I/O APIs for the Java Platform, JSR 203, which provides a rich file system abstraction. This past week I've put together some code to connect the two.
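
The connecting code itself isn't shown in this excerpt, but the JSR 199 half can be sketched: a minimal, self-contained example that compiles a class held in memory. Hooking in JSR 203 would then be a matter of supplying a JavaFileManager backed by java.nio.file paths. Everything below is standard javax.tools API, not the actual code from the post.

```java
import java.net.URI;
import java.util.Arrays;
import javax.tools.*;

public class Jsr199Demo {
    // An in-memory source file, as permitted by the JSR 199 file abstraction.
    static class StringSource extends SimpleJavaFileObject {
        final String code;
        StringSource(String name, String code) {
            super(URI.create("string:///" + name.replace('.', '/') + ".java"), Kind.SOURCE);
            this.code = code;
        }
        @Override public CharSequence getCharContent(boolean ignoreEncodingErrors) {
            return code;
        }
    }

    public static void main(String[] args) {
        // The system compiler is only available when running on a JDK (not a bare JRE).
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        JavaFileObject src = new StringSource("Hello", "class Hello { }");
        JavaCompiler.CompilationTask task = compiler.getTask(
                null, null, null,
                Arrays.asList("-d", System.getProperty("java.io.tmpdir")),
                null, Arrays.asList(src));
        System.out.println(task.call());  // true if compilation succeeded
    }
}
```

A JSR 203 file manager would replace the `null` file-manager argument, redirecting where `-d` output lands.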

Wednesday Jul 01, 2009

Minor updates for jtreg

jtreg 4.0 b03 is now available and fixes a number of minor issues. You can find it on the standard jtreg download page.

It includes the following minor fixes and updates:

  1. Do not follow symlinks when cleaning a directory.
  2. Avoid potential NPE when cleaning a directory.
  3. Allow annotation processing of class files in jtreg directives.
    New option /process for the @compile tag disables the requirement that one or more *.java files must be given to @compile.
  4. Set test.src and test.classes properties for @compile tasks.
    These properties can now be used by annotation processors invoked by the @compile task.
  5. Recategorize -e option as a general option instead of a JDK one.
  6. Set MKS to use case-sensitive compare in launcher script.
  7. Remove obsolete reference to JAVA_HOME from launcher script.
  8. (jtdiff) Avoid NPE in standard jtdiff if no output file given.

Friday Jun 12, 2009

javac: plates, coins, and progress

As we reported at JavaOne, a lot has been going on for javac over the past year.

Under the auspices of Project Coin, various small language changes are being considered: strings in switch, ARM blocks, binary literals, large arrays, and the diamond and "elvis" operators. Project Jigsaw is investigating the use of modules; JSR 292 is providing support for the "invoke-dynamic" bytecode; and JSR 308 will provide support for annotations on types.
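
At the time of writing these were proposals, not final features. As a taste, here is a sketch using the syntax then under discussion for three of them (strings in switch, binary literals, and the diamond operator); the class name is made up for illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class CoinSketch {
    public static void main(String[] args) {
        int mask = 0b1010;                       // binary literal: bits written directly
        List<String> words = new ArrayList<>();  // "diamond": type arguments inferred
        words.add("coin");
        switch (words.get(0)) {                  // strings in switch
            case "coin": System.out.println(mask); break;
            default:     System.out.println("?");
        }
    }
}
```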

In addition, within javac, we've been finding and fixing bugs, including issues in the type system, and improving the diagnostics that may be generated. We've worked to produce an ANTLR-based grammar for the compiler, and we've worked with the OpenJDK release team to release javac as part of OpenJDK 6.

Monday Dec 08, 2008

Raising the (langtools) quality bar

Recently, we've been working to raise the quality bar for the code in the OpenJDK langtools repository.

Before OpenJDK, the basic quality bar was set by the JDK's product team and SQE team. They defined the test suites to be run, how to run them, and the target platforms on which they should be run. The test suites included the JDK regression tests, for which the standard was to run each test in its own JVM (simple and safe, but slow), and the platforms were the target platforms for the standard Sun JDK product.

Even so, the bar was somewhat higher in selected areas. The javac team has pushed the use of "same JVM" mode for the javac regression tests, because it is so much faster: starting up a whole JVM to compile a three-line program, just to verify that a particular error message is generated, is like using a bulldozer to crack an egg. Likewise, since the compiler and related tools are pure Java programs, it has been reasonable to develop them, and to run the regression tests, on non-mainstream supported platforms.

With the advent of OpenJDK, the world got a whole lot bigger, and expectations got somewhat higher, at least for the langtools component. If nothing else, there's a bigger family of developers these days, with a bigger variety of development environments, to be used for building and testing OpenJDK.

We've been steadily working to make it so that all the langtools regression tests can be run in "same JVM" mode. This has required fixes in a number of areas:

  • in the regression test harness (jtreg)
  • in tools like javadoc, which used to be neither reusable nor re-entrant, making it hard to run different tests against it in the same VM. javadoc is now reusable; re-entrancy is coming soon
  • in the tests themselves: some tests we changed to make them same-VM safe; others, like the apt tests, we simply marked as requiring "othervm" mode. Marking a test as requiring "othervm" allows these tests to succeed when the default mode for the rest of the test suite is "samevm".

We've also made it so that you can run the langtools tests without building a full JDK, by using the -Xbootclasspath option. For a while, that left one compiler test out in the cold, but that test was finally rewritten recently.

We've been working to use Hudson to build and test the langtools repository, in addition to the standard build and test done by the Release Engineering and QA teams. This allows us (developers) to perform additional tests more easily, such as running FindBugs, or testing "developer" configurations (i.e. the configurations an OpenJDK developer might use) as well as "product" configurations. This has also made us pay more attention to the documented way to run the langtools regression tests, using the standard Ant build file. In practice, Sun's "official" test runs are done using jtreg from the command line, and speaking for myself, I prefer to run the tests from the command line as well, to have more control over which tests to run or rerun, and how to run them.

The net result of all of this is that the langtools regression tests should all always pass, however they are run. This includes

  • as part of testing a fully built JDK
  • as part of testing a new version of langtools, using an earlier build of JDK as a baseline
  • from the jtreg command line in "other vm" mode
  • from the jtreg command line in "same vm" mode
  • from the <jtreg> Ant task, such as used in the standard build.xml file
  • on all Java platforms

Tuesday Oct 21, 2008

jtreg update

I've been working on some bug fixes and features for the OpenJDK Regression Test Harness, jtreg. This new version has just been released in the standard place on the OpenJDK website. Here is a quick summary of the changes.
  1. New option -noreport.

    An issue with the underlying JT Harness is that it always generates reports about all the tests in the test suite, instead of just the tests you selected to run. This means that there may be a noticeable pause when the tests have completed, while jtreg writes out the reports. For those folk who never read the reports anyway, the reports can now be disabled with this new option.

  2. Fix -exclude option in -samevm mode.

    This combination of options did not work correctly. Now, it should.

  3. Improve version detection in script.

    The wrapper shell script tries to verify it is running on JDK 1.5 or later. The version detection used to be confused if you were running JDK in headless mode, when a spurious message could appear on System.out. Such messages are now filtered out.

  4. Allow System.setIO calls in -samevm mode.

    jtreg tries to insulate tests from each other, so that side effects in one test cannot affect the functioning of another. Previously, tests were not allowed to call System.setIO when running in samevm mode. This prevented a number of tests from running in samevm mode. However, since jtreg only supports running one test at a time, there is no reason to prohibit this call.

  5. Fix the initialization of jtreg-specific system properties for samevm tests to use a space-separated list, not the default List.toString() format ("[a, b, c]").

    It was an oversight that a suitable toString() method was not provided on an internal class, resulting in the default toString() for a List being used, instead of the intended format of a space-separated list.

  6. Set the classpath for -samevm tests to include standard jtreg directories, the same as for -othervm tests.

    This is to reduce any unnecessary differences between othervm mode and samevm mode.

  7. Enable XML report via system property.

    While some folk don't want reports at all, other folk want more reports. The system property can be used to select which report types are generated by the underlying JT Harness report facilities. The property can be set to a space separated list of any of the values "text", "html", "xml". The default is "text html". The XML format will be useful for a possible upcoming new plugin for Hudson.

  8. Reduce false positives when setting keywords from test description.

    Some users reported problems with -ignore:quiet and -k:!ignore incorrectly ignoring tests with -XDignore.symbol.file=true. This fixes that issue.

  9. Recategorize the -e option in the "Main" group instead of the "JDK" group.

  10. Update FAQ with reference to -e as a way to override the PATH setting.

  11. (jtdiff) Add convenience 'jtdiff' script.

  12. (jtdiff) Refactor and add new "super-diff" mode.

    "Super-diff" mode is a way of analyzing a matrix of test runs, organized by platform and date. It runs a standard jtdiff for each row in the matrix, thus comparing test results across platforms for each date, and for each column in the matrix, thus comparing the history of the test results for each platform. See jtdiff -help -super for more details.
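
Item 4 above concerns the family of methods guarded by the "setIO" runtime permission (System.setIn, System.setOut, System.setErr). Here is a minimal sketch of the kind of redirection a samevm test might perform; the class name is made up, and the key point is that the test restores the original stream so later tests are unaffected:

```java
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

public class SetIODemo {
    public static void main(String[] args) throws Exception {
        PrintStream saved = System.out;
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        System.setOut(new PrintStream(buf, true, "UTF-8"));  // redirect stdout into a buffer
        System.out.println("captured");                      // goes into buf, not the console
        System.setOut(saved);                                // restore for subsequent tests
        System.out.print(buf.toString("UTF-8"));
    }
}
```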

Tuesday Oct 07, 2008

Speeding up tests: a langtools milestone

We've reached an interesting milestone for the regression tests for the OpenJDK langtools repository. You can now run all the tests using the jtreg -samevm option. In this mode, jtreg will run all the tests in the same JVM whenever possible. This means that the test run completes much faster than if every test creates one or more separate JVMs to run.

This has been a background activity ever since I joined the JDK Language Tools group. It has required work on a number of fronts.

  • We've had to fix a number of bugs in jtreg itself. There was a chicken-and-egg problem: there was no demand for the feature since it didn't work well enough, and with no demand, there were no engineering cycles to fix it. But I wanted to run the javac tests fast, and I took over jtreg, so the chicken and egg became one.
  • We converted a lot of javac tests so that they could be run in the same VM. These tests mostly started out as "shell tests" (i.e. written in ksh.) They executed javac and then compared the output against a golden file. By fixing jtreg, and by adding hidden switches to javac, we converted the tests to use jtreg built-in support for golden files.

This was enough to get the javac tests to run in samevm mode, which has been the state for a while now, but there was still the issue of the other related tools, such as apt, javadoc, javap, and javah. And the fact that not all the tests could be run in samevm mode meant that anyone wanting to run all the tests had to use the lowest common denominator: slow, othervm mode.

  • javap was the next to get fixed: it needed a rewrite anyway, and when that happened, it was natural to fix those tests to run fast as well.
  • javah was similar: there's a partial rewrite underway (to decouple javah from javadoc), and so those tests have been fixed.
  • javadoc has always been the tough one, and has the second largest group of tests after javac. A while back, I took the time out to figure out why the tests failed in samevm mode. It turned out to be due to a number of factors, mostly relating to the fact that javadoc is a fairly old program, and is somewhat showing its age. Internally it was using static instances a lot, and as a result, was neither reentrant nor reusable. In addition, there were classloader issues when creating the classloader for the doclet: javadoc was not following the recommended "parent classloader" pattern. Having identified those issues, we've been working to fix them, and it is the fixing of the last of those issues that gives us the milestone today.
  • In parallel with the work on javadoc, and with the milestone in sight, we checked out the final tool, apt. There were 24 test failures there when using samevm mode, and since the tool is scheduled to be decommissioned in Java 8, we "fixed" those tests simply by marking them as requiring othervm mode. That doesn't make them run any faster, but it does mean they don't fail if you run the tests with samevm mode as the default mode.
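
The "parent classloader" pattern mentioned for the doclet loader can be sketched as follows. The class name is made up, and javadoc's real loader would also supply the doclet path URLs; the point is that the new loader delegates to an explicit parent, so classes shared between tool and doclet resolve to the same Class objects:

```java
import java.net.URL;
import java.net.URLClassLoader;

public class DocletLoaderSketch {
    public static void main(String[] args) {
        // Pass the tool's own loader as the parent of the doclet loader.
        ClassLoader parent = DocletLoaderSketch.class.getClassLoader();
        URLClassLoader docletLoader = new URLClassLoader(new URL[0], parent);
        // Delegation is wired up: lookups fall back to the tool's classes.
        System.out.println(docletLoader.getParent() == parent);
    }
}
```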

What does all this mean? It means the tests run in less than a quarter of the time they took before. Using jtreg samevm mode on my laptop, 1421 tests run and pass in a little over 5 minutes, compared to 22 minutes in the standard othervm mode. As a developer waiting to move on to the next step, that's a big saving :-) Although all the supporting work has not yet made it back to the master workspace, that's all underway, and at some point we'll throw the switch so that samevm mode is the default for the langtools repository. So far, we've been targeting the JDK 7 repository, but Joe Darcy is keen to see as much of this work as possible in the OpenJDK 6 repository as well.

Can we do the same for the main jdk repository? In principle yes, but it will take someone to do it. The good news is that jtreg now supports the samevm feature well, and the payoff is clear. It does not have to be done all at once: as we did in the langtools workspace, all it takes is someone interested to work on it section by section. The payoff is worthwhile.

Monday Oct 06, 2008

javac, meet ANTLR; ANTLR, meet javac

A while back, we created a new OpenJDK project, Compiler Grammar, so that we could investigate integrating an ANTLR grammar for Java into javac. Thanks to some hard work by our intern Yang Jiang, with assistance from Terence Parr, the initial results of that work are now available.

The grammar currently supports Java version 1.5, although the goal is to fully support the -source option and support older (and newer) versions of the language as well. Right now, the performance is slower than that of standard javac, so this will not be the default lexer and parser for javac for a while, but even so, it should prove an interesting code base for anyone wishing to experiment with potential new language features. And, it does mean that the grammar files being used have been fully tested* in the context of a complete Java compiler.

We are also looking to align the grammar more closely with the grammar found in JLS.

This version of javac is in the langtools component of the compiler-grammar set of Mercurial OpenJDK repositories.

*There are currently a few test failures in the regression test suite. Some are to be expected, because the error messages generated by the parser do not match the errors given by the standard version of javac; the other failures are being investigated.

Monday Aug 11, 2008

Improving javac diagnostics: update

Back in May, I talked about some ideas for improving javac diagnostics. Maurizio has now started making those ideas a reality. See his own recent blog entry for an update.

Sunday Jul 06, 2008

Towards a multi-threaded javac

I just finished a vacation with my family, and in the in-between times, I made significant progress towards a multi-threaded javac.

Before you get too excited, let me qualify that by saying that there is some low-hanging fruit for this task, and beyond that there's a complete rewrite of the compiler. I'm only talking about the former; there are no plans to do the latter.

The difficulty with a multi-threaded javac is that the Java language is quite complicated these days, and as a result the compiler is also quite complicated internally, to cope with all the consequences of interdependent source files. The current compiler is not set up for concurrent operation, and adapting it would be error-prone and destabilizing. (For more details on the compiler's operation, see Compilation Overview on the OpenJDK Compiler Group web pages.)

The low hanging fruit comes by considering the compilation pipeline in three segments: input, process, and output. The source files can all be read and parsed in parallel, because there are no interdependencies there. (Well, almost none. More on that later.) Likewise, once a class declaration has the contents of its class file prepared internally, it can be written out in the background while the compiler begins to work on the next class.

We've known about this low hanging fruit for a while, and Tom Ball recently submitted a patch with code for parsing source files in parallel. So, faced with a family vacation, I loaded up my laptop with the bits needed to explore this further.

Parallel parsing

If you're parsing files in parallel, the primary conflict is access to the Log, the main class used by the rest of the compiler to generate diagnostics. It is reasonably obvious that even if you're parsing the source files in parallel, you don't want any diagnostics that are generated to appear interleaved: you want to see all the diagnostics for each file grouped together. Initially, I was thinking of creating custom code in the parser to group parser diagnostics together, but since the scanner (lexer) can also generate diagnostics, it seemed better and less intrusive to give each thread its own custom Log that could save up diagnostics until they can all be reported together.

Previous work on the Log class made it somewhat "hourglass shaped", with a bunch of methods used by the rest of the compiler to create and report diagnostics, and a back end that knows how to present the diagnostics that are generated. In between is a single "report" method, which was originally introduced to make it easy to subtype Log to vary the way that diagnostics are presented. Now, however, that method provided an excellent place to divide Log in two: an abstract BasicLog, which provides the front-end API used by the body of the compiler, and subtypes to handle the diagnostics that are reported. The main compiler uses Log as it always did -- one of the Big Rules for the compiler is to minimize change -- but the threads for the new parser front end can now use a new subtype of BasicLog that buffers up diagnostics and reports them together when the work of the thread is complete.
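
The shape of that design can be sketched apart from javac. Here each task buffers its own diagnostics (standing in for a per-thread BasicLog subtype), and the main thread reports them grouped, in file order, no matter which worker finished first. The parse method, file names, and class name are hypothetical stand-ins, not javac's real API:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelParseSketch {
    // Hypothetical stand-in for javac's parse step; each call buffers its
    // own diagnostics instead of reporting them as they occur.
    static List<String> parse(String file) {
        return Arrays.asList(file + ": ok");
    }

    public static void main(String[] args) throws Exception {
        List<String> files = Arrays.asList("", "");
        ExecutorService pool = Executors.newFixedThreadPool(2);
        List<Future<List<String>>> results = new ArrayList<>();
        for (String f : files)
            results.add(pool.submit(() -> parse(f)));
        // Report in submission order: diagnostics stay grouped per file,
        // never interleaved, regardless of thread scheduling.
        for (Future<List<String>> r : results)
            for (String diag : r.get())
                System.out.println(diag);
        pool.shutdown();
    }
}
```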

This refactoring forced one other cleanup in Log, which was an ugly hangover from the introduction of JSR 199, the Compiler API. The Diagnostic objects that get created had an ugly hidden reference to the Log that created them, which, if used incorrectly, could provoke NullPointerExceptions or other problems when you tried to access the source code containing the diagnostic. (For those that are interested, it's because of an interaction with the Log.useSource method, which sets temporary state in Log.) The bottom line is that one more refactoring later, the DiagnosticSource interface became a much better DiagnosticSource object, providing a much cleaner standalone abstraction for information about the source code containing the location of the diagnostic.

(Log used to be one big do-everything class; it has slowly been getting better over the years, and watch out for the upcoming exciting new work that Maurizio is doing to improve the quality and presentation of diagnostics. Luckily, these refactorings I'm describing here will not interfere with that work too much.)

There are some other shared resources used by the parser, most notably the compiler's name table, but these were easily dealt with by synchronizing a few strategic methods.

That, then was sufficient for the first goal — to parse source files in parallel. :-)   Writing the class files concurrently was somewhat more interesting.

Background class generation and writing

Apart from the general refactoring for Log, the work to parse source files in parallel turned out to be very localized, almost to a single method in JavaCompiler, which is responsible for parsing all the source files on the command line. That one method can choose whether to parse the source files sequentially, as before, or in parallel. There is no such easy method for writing out class files, because the internal representation of a generated class file may be quite large, and the compiler pipeline is normally set up to write out class files as soon as possible, and to reclaim the resources used.

Because of the memory issues and the primarily serial nature of the upstream pipeline, the general goal was not to write all the class files in parallel, but merely to be able to do the file I/O in the background. Thus the design goal was to write classes using a single background thread fed by a limited-capacity blocking queue, and so the task became one of improving the flexibility of the compiler pipeline so that class files could be written out in the background. In particular, it was also desirable to fix an outstanding bug such that either all the classes in a source file should be generated, or none should. The current behavior of generating classes for the contents of a source file until any errors are detected does not fit well with simple build systems like make and Ant, which use simple date stamps to determine whether the compiled contents of a class file are up to date with respect to the source file itself.
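
The single-background-thread design can be sketched like this. The class name and the println standing in for class-file I/O are illustrative only; the essential parts are the bounded queue, which makes the producer block instead of buffering unbounded generated-class data, and the sentinel that tells the writer to stop:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BackgroundWriterSketch {
    private static final String EOF = "<eof>";  // sentinel: no more classes to write

    public static void main(String[] args) throws Exception {
        // Limited capacity: the main pipeline blocks on put() when the
        // writer falls behind, bounding memory use.
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(2);
        Thread writer = new Thread(() -> {
            try {
                for (String item = queue.take(); !item.equals(EOF); item = queue.take())
                    System.out.println("wrote " + item);  // stands in for file I/O
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        writer.start();
        for (String cls : new String[] { "A.class", "B.class", EOF })
            queue.put(cls);  // blocks when the queue is full
        writer.join();
    }
}
```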

There were already some ideas for reorganizing the compiler pipeline within the main JavaCompiler class. Previously, a big "compile" method in JavaCompiler had been broken up into methods attribute, flow, desugar and generate, representing the different stages of processing for each class to be compiled. These methods could be composed in various ways depending on the compilation policy, which is an internal control within the compiler. The methods communicated via lists of work to be processed, and although the concept was good, it never paid off quite as well as anticipated because of the memory required to build all of the items on the lists before handing the list to the next stage. The latest idea that had been developing was to use iterators or queues to connect the compilation phases, rather than lists.

Another refactoring later, it turned out that queues were the way to go (as in java.util.Queue), because they fit the abstraction required and caused less change elsewhere in the compiler.

In a related improvement, the main "to do" list was also updated. Previously, it was just a simple list of items to be processed, using a simple javac ListBuffer. It was updated to implement Queue, and more importantly, to provide additional access to the contents grouped according to the original source file. This made it easier to process all the classes for a source file together, including any anonymous inner classes. Previously, anonymous inner classes were handled much later than their enclosing classes, because while top level and nested classes are discovered and put on the "to do" list very early, anonymous inner classes are not discovered until much later.

However, an earlier bug fix got in the way of being able to effectively complete processing the contents of a single source file all together.

Normally, the compiler uses a very lazy approach to the overall compilation strategy, advancing the processing of each class as needed, with a "to do" list to make sure that everything that needs to be done eventually really does get done. However, limitations in the pipeline precluded that approach in the desugaring phase. If the supertypes of a class are being compiled in the same compilation as the subtype, they need to be analyzed before the subtype gets desugared, because desugaring is somewhat destructive. The previous implementation could not do on demand processing of the supertypes, so instead the work on the subtypes was deferred by putting them back on the "to do" list to be processed later, after any supertypes had been processed, thus defeating any attempt to process these files together. The new, better implementation is simply to advance the processing of the supertypes as needed.

All this refactoring was somewhat easier to implement than to describe here, and again, per the Big Rules, the work was reasonably localized to the JavaCompiler and ToDo classes, with little or no change to the main body of the compiler. The net result is more flexibility in the compiler pipeline, with a better implementation of the code to generate code file by file, rather than class by class. And, to bring the story back to the original goal, it makes it easier to adapt the final method of the pipeline so that it can do its work serially, or with a background queue for writing class files in the background. :-)

And now ...

So where is this work now? Right now, it's here on my laptop with me in a plane somewhere between Iceland and Greenland, so let's hope for a safe journey the rest of the way back to California. The work needs some cleaning up, and more testing, on more varied machines. I've been running all the compiler regression tests and building OpenJDK with this new compiler, and it looks good so far. Finally, it will need to be code reviewed, and pushed into the OpenJDK repositories, probably as a series on smaller changesets, rather than one big one. So watch out for this work coming soon to a repository near you ...

Wednesday May 07, 2008

Improving javac diagnostics

In javac land, we're looking at improving the diagnostic messages generated by the compiler ...

Here's one of my anti-favorite messages that I got from the compiler recently:

/w/jjg/work/jsr294/7/langtools/test/tools/javac/superpackages/ getTask(,javax.tools.JavaFileManager,javax.tools.DiagnosticListener<? super javax.tools.JavaFileObject>, java.lang.Iterable<java.lang.String>, java.lang.Iterable<java.lang.String>, java.lang.Iterable<? extends javax.tools.JavaFileObject>) in javax.tools.JavaCompiler cannot be applied to (<nulltype>,javax.tools.StandardJavaFileManager, <nulltype>, java.util.List<java.util.List<java.lang.String>>, <nulltype>, java.lang.Iterable<capture#81 of ? extends javax.tools.JavaFileObject>)
    JavaCompiler.CompilationTask task1 = c.getTask(

It is my pet peeve message type ("can't apply method...") and also includes wildcards, captured type variables, and a <nulltype>. The text of the error message, excluding source file name and highlighted lines is a whopping 577 characters :-) Who says we don't need to improve this?

We have various ideas in mind. This first list is about the content and form of the messages generated by javac.

  • Omit package names from types when the package is clear from the context. For example, use Object instead of java.lang.Object, String instead of java.lang.String, etc, when those are the only classes named Object, String etc, in the context.

  • Don't embed long signatures in a message. Instead, restructure the message into a shorter summary plus supporting information.

    For example,

           Method name cannot be applied to given types
           required: types
           found: types
  • Don't embed captured and similar types in signatures, since they inject wordy non-Java constructions into the context of a Java signature. Instead, use short placeholders, and a key.

    For example, in the message above, replace
      java.lang.Iterable<capture#81 of ? extends javax.tools.JavaFileObject>
    with a note following the rest of the message:
       where #1 is a capture of ? extends JavaFileObject
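
That substitution is mechanical enough to sketch. The strings are taken from the example above; the map-based approach and class name are just an illustration, not javac's implementation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class WhereClauseSketch {
    public static void main(String[] args) {
        String sig = "java.lang.Iterable<capture#81 of ? extends javax.tools.JavaFileObject>";
        // Replace the wordy non-Java construction with a short placeholder,
        // and emit a "where" key after the main message.
        Map<String, String> where = new LinkedHashMap<>();
        where.put("#1", "a capture of ? extends JavaFileObject");
        String shortSig = sig.replace(
                "capture#81 of ? extends javax.tools.JavaFileObject", "#1");
        System.out.println(shortSig);
        for (Map.Entry<String, String> e : where.entrySet())
            System.out.println("  where " + e.getKey() + " is " + e.getValue());
    }
}
```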

Put these suggestions together, and the message above becomes:

/w/jjg/work/jsr294/7/langtools/test/tools/javac/superpackages/ method JavaCompiler.getTask cannot be applied to given types
  required: (Writer, JavaFileManager, DiagnosticListener<? super JavaFileObject>, Iterable<String>, Iterable<String>, Iterable<? extends JavaFileObject>)
  found: (#null, StandardJavaFileManager, #null, List<List<String>>, #null, Iterable<#1>)
    #1 is the capture of ? extends JavaFileObject

It is a lot shorter (less than half the length of the original message, if you're counting), and more importantly, it breaks the message down into segments that are easier to read and understand, one at a time. It still has a long file name in it, and I'll address that below.

The following ideas are more about the presentation of messages. javac is typically used in two different ways: batch mode (output to a console), and within an IDE, where the messages might be presented as "popup" messages near the point of failure, and in a log window within the IDE.

When used in batch mode, either directly from the command line or in a build system, the compiler could allow the user to control the verbosity of the diagnostics. If you're compiling someone else's library, you might not be worried about the details in any warnings that might be generated. If you're compiling your own code, you might be comfortable with a quick summary of each diagnostic, or you might want as much detail as possible.

When used in an IDE, it would be good to provide the IDE with more access to the component elements of the diagnostic, so that the IDE could improve the presentation of the message. For example,

  • display the base name of the file containing the error, and link it to the compilation unit, instead of displaying the full file name as above
  • use different fonts for the message text and the Java code or signatures contained within it
  • hyperlink types used in the diagnostic to their declaration in the source code
  • given the resource key for the message, an IDE could use the key as an index into additional documentation specific to the type of the error message, explaining the possible causes for the error, and more importantly, what might be done to fix the problem.

To support these suggestions, the compiler could be instructed to generate diagnostics in XML, so that they could be "pretty-printed" in the IDE log window.

Here's how these ideas could be used to improve the presentation of the example message.
 method JavaCompiler.getTask cannot be applied to given types    [More info...]
  required: (Writer, FileManager, DiagnosticListener<? super JavaFileObject>, Iterable<String>, Iterable<? extends JavaFileObject>)
  found: (#null, StandardFileManager, #null, List<List<String>>, #null, Iterable<#1>)
    #1 is a capture of ? extends JavaFileObject    [More info...]

OK, I'll leave the real presentation design to the UI experts, but I hope you get the idea of the sort of improvements that might be possible.

Finally, we'll be looking at improving the focus of error messages. For example, this means that if the compiler can determine which of the arguments is at fault in a particular invocation, it should give a message about that particular argument, instead of about the invocation as a whole. However, care must also be taken not to narrow the focus of an error message incorrectly, so that the message becomes misleading. A typical example of that is when the compiler is parsing source code, and having determined that the next token is not one of A or B, it then checks for C. If that is not found, the compiler may then report "C expected", when a better message would have been "A, B or C expected." This means that such optimizations have to be studied carefully on a case-by-case basis, whereas all of the preceding suggestions can be applied more generally to all diagnostics.

So, do you have any "pet peeve" messages you get from the compiler? Do you have suggestions on how the messages could be improved, or how they get presented? Add a comment here, or mail your suggestions to the OpenJDK compiler group mailing list, compiler-dev at

Thanks to Maurizio and others for contributing some of the suggestions here.


Tuesday May 06, 2008

Evolving KSL

As some of you may know, we've made changes recently to the KSL project that was started last year.

We thought it would be fun to share some of the ideas we had along the way.

For inspiration, we decided to throw a bunch of ideas into a kitchen sink, for real, to see what that might inspire. See how many of your favorite language ideas are represented here.

Hint: There are no wrong answers, but there have been some creative ones. :-)

Of course, as soon as we did that, this little guy wanted to get in on the act. He may not quite understand the gist of KSL, but you can't fault his enthusiasm: there's a keyboard, a couple of mice, a couple of monitors and even a KVM cable in the sink, if you look carefully!

For information about what we finally came up with, see the Compiler Group KSL page.

(Thanks to Alex for helping with the first photo, and to Greg Quayle for creating the sculpture in the second.)

Monday May 05, 2008

jtharness vs jtreg

Now that the source code for the OpenJDK Regression Test Harness (jtreg) is available, this post provides an overview of how jtreg relates to JT Harness.

In the beginning, there was a need to run compatibility tests on all Java platforms, and thus was born the "Java Compatibility Kit" (JCK), and the test harness to run those tests (JavaTest). Not long after, there was a need to run a different type of test for the JDK: regression tests. These are a fundamentally different style of test (think white box versus black box), but rather than write another, separate harness, we adapted JavaTest, and thus was born "jtreg". Eventually, we needed to be able to run tests on different types of platforms, such as Personal Java, a precursor to the J2ME platform. JavaTest evolved to fill the need, and gave rise to the basic architecture you see today.

As you can see, JavaTest, now open sourced under the name JT Harness, is composed of a number of distinct components:

  • The core harness. This provides the general ability to represent tests, test results, and to sequence through the execution of a series of tests.
  • Utilities. These provide support to the other JT Harness components.
  • The user interface. JT Harness provides a rich graphical user interface, as well as a command line interface.

And, deliberately, there's a hole. By itself, JT Harness cannot directly run the tests in any test suite. It needs to be given a plugin for each test suite that defines how to locate and read tests, how to configure and execute them, and other additional information, often used by the GUI, to provide documentation specific to the test suite. The plugin code is not complicated, and is often a direct use or simple extension of the utility classes provided by JT Harness. Note: most test suites do not need to use all of the JT Harness utility classes, which explains why JT Harness may have more build-time dependencies than are required to build and run any individual test suite.
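To make the shape of that contract concrete, here is a minimal sketch. These interface and class names are hypothetical, not the real JT Harness API (the real plugin classes live in the com.sun.javatest package); the point is only to show the division of labor: the harness supplies sequencing, result management, and the UI, while the plugin supplies the suite-specific pieces.

```java
import java.nio.file.Path;
import java.util.List;

// Hypothetical sketch of the plugin contract; NOT the real JT Harness API.
interface TestFinder {
    // Locate the tests in a test suite, given its root directory.
    List<String> findTests(Path testSuiteRoot);
}

interface TestScript {
    // Configure and execute a single test, returning pass/fail.
    boolean run(String testName);
}

// A test suite bundles an implementation of these pieces; the harness
// discovers it when the suite is opened, then drives it for each test.
class ExamplePlugin implements TestFinder, TestScript {
    public List<String> findTests(Path testSuiteRoot) {
        return List.of("example/Test.java"); // placeholder: scan the suite here
    }

    public boolean run(String testName) {
        return true; // placeholder: compile and execute the test here
    }
}
```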

Thus, the complete picture is as follows:

This is the form used for most test suites, such as JCK and other TCKs. The code to plug in to the harness is bundled in the test suite, along with a copy of JavaTest or JT Harness itself. These test suites all leverage the user interfaces provided by the harness, both the CLI and the GUI. When the user runs JT Harness and tells it to open a test suite, the harness looks inside the test suite for details of the plugin to be used in conjunction with that test suite.

Because jtreg evolved from JavaTest early on, and specifically, before the JavaTest user interfaces evolved to what you see today, jtreg has a slightly different architecture.

jtreg still utilizes the harness' plugin architecture to configure how to read and execute the tests in the test suite, but it provides its own custom command line interface for setting up a test run. The jtreg CLI provides options tailored to running the JDK regression test suite. It can also invoke the JT Harness GUI, but this is just for monitoring a test run, and does not (yet) provide much support for reconfiguring a test run with different options from within the GUI, as you can do for other test suites.

jtreg is also different from other test suites in that jtreg is distributed separately from the tests that it executes. TCKs are generally built and distributed as self-contained products in their own right, whereas the JDK regression tests are not prebuilt: they exist and are executed in their source form in each JDK developer's repository. Per The JDK Test Framework: Tag Language Specification, the tests are compiled and executed as needed.
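A source-form test in the tag-language style looks roughly like this (the class name and summary text here are made up for illustration; the @test, @summary, and @run tags come from the tag specification). jtreg finds the tags in the leading comment, compiles the file on demand, and runs it; the test passes if main returns normally and fails if it throws.

```java
/*
 * @test
 * @summary Minimal illustrative example of a source-form regression test
 * @run main HelloTest
 */
public class HelloTest {
    public static void main(String[] args) throws Exception {
        // A test passes by returning normally, and fails by throwing.
        String s = "Hello".toLowerCase();
        if (!s.equals("hello"))
            throw new Exception("unexpected result: " + s);
        System.out.println("PASSED");
    }
}
```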

For more information, see ...

Thursday May 01, 2008

OpenJDK Regression Test Harness (jtreg) - open source

The OpenJDK Regression Test Harness, also known as "jtreg", is now available with an open source license.[Read More]

Tuesday Mar 04, 2008

M&Ms: Milestones and Members

M&M's are great, so here's a few with no calories whatever.[Read More]

Friday May 18, 2007

The King is dead. Long live the King!

Well, the phrase may not be entirely apt, but it does carry the right sentiment, especially to the monarchists amongst us.

As you may have read, Peter is moving on to new adventures, but you can be sure that there are still plenty of adventures to be had with javac, and the rest of us on the javac team are looking forward to carrying on with everything that is coming up.

With the compiler being open sourced last year, the recent announcements regarding OpenJDK, and JDK 7 ahead, these are exciting, if somewhat turbulent, times. So, stay tuned for more details, and in the meantime, thanks go to Peter for all his contributions.

