Improving Test Coverage in OpenDS

For the last few weeks (and still for the next couple of weeks), we've been in a push to improve test coverage for the OpenDS project. This means that we aren't adding new features at the rate we were before this push (although some are still sneaking through), but it will help ensure that when we get back to developing new features, we've got a good test framework in place to catch any problems that might come up.

Two of the main tools that we're using to help with our testing are TestNG and EMMA. TestNG is similar to JUnit, but offers additional features like the ability to define test groups and to use annotations to mark your classes and methods accordingly (I know that JUnit has recently released a new version with some of these features, but I haven't evaluated it to see how the two stack up now). I really like the @DataProvider annotation, which makes it possible to separate your test data from the test logic, although since it treats each run with a different data element as a separate test case, it does inflate the test case count (for example, TestNG reports that we're running over 10,700 test cases, but we only have about 950 test methods). Probably my biggest complaint with TestNG at the moment is that it doesn't have an assertNotEquals method, which is easy to work around but annoying nonetheless; if that's the worst I can say, I suppose it says something good about the framework. I also don't like the use of JavaScript in the HTML-formatted reports it generates and on the TestNG website, but that's another matter altogether.
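To give a concrete feel for how these pieces fit together, here's a minimal sketch of a TestNG test class that uses @DataProvider, test groups, and the assertNotEquals workaround. The class name, the DN strings, and the normalize method are made-up placeholders for illustration, not actual OpenDS code:

    import org.testng.annotations.DataProvider;
    import org.testng.annotations.Test;
    import static org.testng.Assert.assertEquals;
    import static org.testng.Assert.assertFalse;

    public class DNTest
    {
      // Each row becomes a separate invocation of the test method, which is
      // why TestNG counts far more "test cases" than there are test methods.
      @DataProvider(name = "dnStrings")
      public Object[][] createDNStrings()
      {
        return new Object[][]
        {
          { "cn=Directory Manager", "cn=directory manager" },
          { "dc=example, dc=com",   "dc=example,dc=com" },
        };
      }

      @Test(dataProvider = "dnStrings", groups = { "unit" })
      public void testNormalization(String rawDN, String expectedDN)
      {
        assertEquals(normalize(rawDN), expectedDN);
      }

      // TestNG has no assertNotEquals, so invert assertEquals by hand.
      @Test(groups = { "unit" })
      public void testDifferentDNsNotEqual()
      {
        assertFalse(normalize("cn=a").equals(normalize("cn=b")));
      }

      // Hypothetical stand-in for the real normalization logic.
      private static String normalize(String dn)
      {
        return dn.toLowerCase().replace(", ", ",");
      }
    }

In TestNG's report this would show up as three test cases (two data-driven runs plus one plain test) even though there are only two test methods, which is exactly the inflation described above.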

EMMA is a code coverage analysis tool for Java applications. It shows you which code actually gets executed, which is especially useful in testing because it highlights what still needs to be tested. It lets you browse all of your source code with lines shown in green, yellow, and red (covered, partially covered, and not covered, respectively). It's annoying that the source code display uses a variable-width font rather than a fixed-width one, but that's a pretty minor grievance. I also don't care for the way it generates the report layout using filenames that change between runs (e.g., clicking refresh after re-running the coverage tests might show me a different class), but that's also easy to work around.
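For the curious, EMMA's command-line usage looks roughly like the following. This is a sketch based on EMMA's documented on-the-fly (emmarun) and offline modes; the paths and the org.example.AllTests runner class are placeholders I've invented, so check the EMMA documentation for the authoritative option list:

    # On-the-fly mode: instrument, run, and produce an HTML report in one step.
    java -cp emma.jar emmarun -r html -sp src -cp build/classes org.example.AllTests

    # Offline mode: instrument the classes in place, run the tests with
    # emma.jar on the classpath, then merge the instrumentation metadata
    # and runtime coverage data into a report.
    java -cp emma.jar emma instr -m overwrite -cp build/classes
    java -cp emma.jar:build/classes org.example.AllTests
    java -cp emma.jar emma report -r html -in coverage.em,coverage.ec -sp src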

We're currently up to about 41% code coverage, which I'll grant isn't as high as it should be (writing test cases was a step we skipped early in the project to get the basic framework up and running as quickly as possible), but it's rising quickly: we were at only 12% a month ago. There are still big chunks of code (especially corner cases) that aren't covered, but in general things are looking pretty good. I am, however, very much looking forward to getting back to adding features once our coverage gets to where it needs to be, and in the future we'll make sure to provide adequate test coverage as we write the code so we don't fall behind again.
