By sandoz on Oct 30, 2007
Last week I spent some time giving some much-needed TLC to Jersey's unit test infrastructure.
There was some really dreadful, crufty code hacked together (the kind you write in five minutes just to try something out, telling yourself you will change it later; somehow you never do, it gets used more and more, and you end up spending silly amounts of time dealing with it) that was used to enable unit testing of resource classes in-memory, without depending on an HTTP container. In addition, there were unit tests using the light weight HTTP server that relied on equally crufty HttpURLConnection code.
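To give a flavour of the boilerplate involved, here is a sketch of the kind of per-test HttpURLConnection code the post describes. The resource path and response are made up for illustration, and the JDK's built-in com.sun.net.httpserver stands in for the light weight HTTP server:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

public class CruftyClientSketch {
    public static void main(String[] args) throws Exception {
        // Stand-in server with a hypothetical resource at /widgets.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/widgets", exchange -> {
            byte[] body = "hello".getBytes("UTF-8");
            exchange.getResponseHeaders().set("Content-Type", "text/plain");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        int port = server.getAddress().getPort();

        // The crufty part: every test opens, configures, reads and
        // cleans up a connection by hand.
        URL url = new URL("http://localhost:" + port + "/widgets");
        HttpURLConnection c = (HttpURLConnection) url.openConnection();
        c.setRequestMethod("GET");
        c.setRequestProperty("Accept", "text/plain");
        int status = c.getResponseCode();
        StringBuilder sb = new StringBuilder();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(c.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = r.readLine()) != null) sb.append(line);
        }
        c.disconnect();
        server.stop(0);

        System.out.println(status + " " + sb);
    }
}
```

Multiply that block by every request an assertion needs and it is easy to see why a common client-side API is attractive.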
So there were two different sets of crufty test infrastructure essentially doing the same thing, namely making HTTP-based client requests. Finally I got so fed up with this state of affairs that I decided to do something about it.
The result is a simple RESTful client-side API for making HTTP requests and processing HTTP responses that reuses classes and concepts from the JAX-RS API and Jersey. See here for a unit test that tests Accept header processing in memory, and here for a unit test that tests matrix parameters using the light weight HTTP server. Notice that once a ResourceProxy has been obtained, the code uses the same API in both cases.
Now that there is a common client-side API for making HTTP requests, the next step is to further abstract the configuration/deployment mechanism, so that we can define a unit test for one or more resource classes once and deploy them in any container (in-memory, LW HTTP server, or servlet).
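The deployment abstraction might look roughly like the following. This is purely a hypothetical sketch, not the actual Jersey API: the TestContainer interface and both deployments are invented here, a "resource" is reduced to a simple function, and the JDK's built-in HTTP server again stands in for the light weight one:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.function.Function;

public class ContainerSketch {
    // Hypothetical abstraction: a test talks only to this interface,
    // never to a particular container.
    interface TestContainer extends AutoCloseable {
        String get(String path) throws Exception;
        void close() throws Exception;
    }

    // In-memory deployment: invoke the resource directly, no HTTP stack.
    static TestContainer inMemory(Function<String, String> resource) {
        return new TestContainer() {
            public String get(String path) { return resource.apply(path); }
            public void close() { }
        };
    }

    // Light-weight HTTP server deployment of the very same resource.
    static TestContainer lightWeightHttp(Function<String, String> resource)
            throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/", ex -> {
            byte[] b = resource.apply(ex.getRequestURI().getPath())
                    .getBytes(StandardCharsets.UTF_8);
            ex.sendResponseHeaders(200, b.length);
            try (OutputStream os = ex.getResponseBody()) { os.write(b); }
        });
        server.start();
        int port = server.getAddress().getPort();
        return new TestContainer() {
            public String get(String path) throws Exception {
                HttpURLConnection c = (HttpURLConnection)
                        new URL("http://localhost:" + port + path).openConnection();
                try (BufferedReader r = new BufferedReader(
                        new InputStreamReader(c.getInputStream(),
                                StandardCharsets.UTF_8))) {
                    return r.readLine();
                }
            }
            public void close() { server.stop(0); }
        };
    }

    public static void main(String[] args) throws Exception {
        Function<String, String> resource = path -> "echo:" + path;
        // The same "test" runs unchanged against either deployment.
        try (TestContainer mem = inMemory(resource);
             TestContainer http = lightWeightHttp(resource)) {
            System.out.println(mem.get("/a"));
            System.out.println(http.get("/a"));
        }
    }
}
```

The point of the sketch is only the shape: the test body depends on the container interface, and swapping in-memory for LW HTTP (or a servlet container) is a one-line change in setup.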
Currently this client-side API is part of the Jersey implementation at this location (and not the Jersey API). It was very instructive (and also very tedious!) to convert all the relevant unit tests over to this API, as doing so resulted in many small improvements and ideas for further ones (so it can be considered somewhat battle-tested in the context of a testing infrastructure API). Once those further improvements have been applied I plan to document it and move it over to the Jersey API for general use. But if you are feeling brave you can still use it in the latest release. If you do, let me know what you think.
While on the subject of testing TLC, I watched a great interview on the Scoble show with ZFS inventors Jeff Bonwick and Bill Moore. At some point during the interview Jeff mentions that the test code coverage of ZFS is over 90% (I cannot recall the exact number; it could be close to 99%), and that it allowed them to make major changes to the ZFS code base without fear of unknowingly breaking something. That point really resonated with me, as I have found the Jersey unit tests give me the confidence to make major internal changes (soon we will need to do some major refactoring of the URI dispatching).

However, the code coverage of the Jersey unit tests is not at 99%. A Jersey release is built with Hudson every time the source code changes, and that release is tested by running the unit tests. Emma code coverage is integrated, and below is the trend graph generated by Hudson:
As you can see, some more TLC is required to increase the code coverage. I have been told that the coverage is not bad for a newish project, but it still makes me nervous that at least 40% of methods are untested. Even though Jersey is early access, we should strive for the highest quality possible in stable releases.