Raising the (langtools) quality bar

Recently, we've been working to raise the quality bar for the code in the OpenJDK langtools repository.

Before OpenJDK, the basic quality bar was set by the JDK's product team and SQE team. They defined the test suites to be run, how to run them, and the target platforms on which they should be run. The test suites included the JDK regression tests, for which the standard was to run each test in its own JVM (simple and safe, but slow), and the platforms were the target platforms for the standard Sun JDK product.

Even so, the bar was somewhat higher in selected areas. The javac team has long pushed to run the javac regression tests in "same JVM" mode, because it is so much faster: starting up a whole JVM to compile a three-line program, just to verify that a particular error message is generated, is like using a bulldozer to crack an egg. Likewise, because the compiler and related tools are pure Java programs, it has been reasonable to develop them, and to run the regression tests, on non-mainstream supported platforms.
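To make the bulldozer image concrete, here is a minimal sketch of such a test. The class and file names are hypothetical, but the tags are the real langtools conventions: "@compile/fail" asserts that the compilation must fail, "ref=" names a golden file of expected diagnostics, and -XDrawDiagnostics makes javac emit them in a stable, machine-comparable form:

    /*
     * @test
     * @summary Verify that a missing return statement is diagnosed.
     *          (BadReturn.out is a hypothetical golden file holding
     *          the expected raw diagnostics.)
     * @compile/fail/ref=BadReturn.out -XDrawDiagnostics BadReturn.java
     */
    class BadReturn {
        static int m() { }      // error: missing return statement
    }

In same-VM mode, jtreg can run the @compile action through the compiler's API without launching a new process, which is where the big speedup comes from.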

With the advent of OpenJDK, the world got a whole lot bigger, and expectations got somewhat higher, at least for the langtools component. If nothing else, there's a bigger family of developers these days, with a bigger variety of development environments, to be used for building and testing OpenJDK.

We've been steadily working to make it so that all the langtools regression tests can be run in "same JVM" mode. This has required fixes in a number of areas:

  • in the regression test harness (jtreg)
  • in tools like javadoc, which used to be neither reusable nor re-entrant, making it hard to run different tests against it in the same VM; javadoc is now reusable, and re-entrancy is coming soon
  • in the tests themselves: some tests we changed to make them same-VM safe; others, like the apt tests, we simply marked as requiring "othervm" mode, which allows them to succeed even when the default mode for the rest of the test suite is "samevm" (a minimal sketch follows this list)
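For illustration, here is a sketch of what such a marking looks like. The test itself is hypothetical, but "main/othervm" is the real jtreg directive for forcing a test to run in its own JVM:

    /*
     * @test
     * @summary Hypothetical test that mutates global state, and so
     *          cannot safely share a JVM with other tests.
     * @run main/othervm OtherVMExample
     */
    public class OtherVMExample {
        public static void main(String... args) {
            // Changing a system property could perturb later tests if they
            // ran in the same VM; main/othervm keeps the change contained.
            System.setProperty("user.language", "xx");
            if (!"xx".equals(System.getProperty("user.language")))
                throw new AssertionError("property not set as expected");
        }
    }

Without the "main/othervm" tag, jtreg runs the test according to the mode given on the command line; with it, the test always gets a fresh JVM.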

We've also made it so that you can run the langtools tests without building a full JDK, by putting the newly built langtools classes on the bootclasspath with the -Xbootclasspath option. For a while, that left one compiler test out in the cold (versionOpt.sh), but that test was recently rewritten.
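As a concrete sketch, the command line involved looks something like this; the paths here are hypothetical, and the exact option spellings depend on the jtreg version, so treat it as illustrative:

    jtreg -jdk:/path/to/baseline/jdk -samevm \
        -Xbootclasspath/p:/path/to/langtools/build/classes \
        test/tools/javac

The idea is that jtreg passes the -X option through to the JVM running the tests, so the freshly built langtools classes take precedence over those in the baseline JDK.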

We've been working to use Hudson to build and test the langtools repository, in addition to the standard build and test done by the Release Engineering and QA teams. This allows us (developers) to perform additional tests more easily, such as running FindBugs, or testing "developer" configurations (i.e. the configurations an OpenJDK developer might use) as well as "product" configurations. It has also made us pay more attention to the documented way to run the langtools regression tests, using the standard Ant build file. In practice, Sun's "official" test runs are done using jtreg from the command line, and speaking for myself, I prefer to run the tests from the command line as well, to have more control over which tests to run or rerun, and how to run them.
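For reference, the Ant route boils down to a fragment along these lines. This is a paraphrase rather than a copy of the langtools build.xml, and attribute names may differ slightly between jtreg versions, so take it as a sketch:

    <!-- Define the jtreg task; jtreg.home points at a jtreg installation
         (an assumed property for this sketch). -->
    <taskdef name="jtreg"
             classname="com.sun.javatest.regtest.Main$$Ant"
             classpath="${jtreg.home}/lib/jtreg.jar"/>

    <!-- Run the langtools tests in samevm mode against a target JDK. -->
    <jtreg dir="test" samevm="true"
           jdk="${target.java.home}"
           workDir="${build.dir}/jtreg/work"
           reportDir="${build.dir}/jtreg/report"/>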

The net result of all of this is that the langtools regression tests should all always pass, however they are run. This includes:

  • as part of testing a fully built JDK
  • as part of testing a new version of langtools, using an earlier build of JDK as a baseline
  • from the jtreg command line in "other vm" mode
  • from the jtreg command line in "same vm" mode
  • from the <jtreg> Ant task, as used in the standard build.xml file
  • on all Java platforms
Comments:

In doing that, it seems you/your team have gained a lot of knowledge about best practices for writing better test cases.

Is it possible for you or someone on your team to share that knowledge with the rest of us?

Posted by Mayur Patel on December 08, 2008 at 06:39 AM PST #

There's no magic here.
-- Write test cases. (Otherwise, you'll have no tests.)
-- Write test cases such that developers want to run them. (Otherwise, they won't.)
-- Automate the running of tests. (Otherwise, there will be times when the tests should have been run, but were not.)
-- Automate the reporting of test failures. (Otherwise, there will be times when the failures go unnoticed until it's too late.)

The only magic is finding the time to balance the short term cost against the long term benefits. :-)

Posted by Jonathan Gibbons on December 09, 2008 at 04:00 AM PST #
