Friday Apr 01, 2011

Test Driven Development in a Large Team

I had a problem to work out.  I know of plenty of wrong-headed things that Monitoring does, and I want to clean up this behavior for GlassFish 3.2.  But I want to use the tenets of Test Driven Development.  Namely -- tests FIRST, implement SECOND.  Normally that would mean: add some tests now.  The tests will fail every time until we get around to fixing the code.  Eventually the tests will all start passing as we add in the fixes.  Over time we see lots of failures dropping to zero failures.

But the problem is that we can't do this in a multiple-developer project with continuous automated runs of enormous suites of tests.  We can't add tests that will fail (at least not for more than one cycle).  And I don't want to maintain yet another Hudson job that runs a subset of tests just for monitoring.  Plus it just doesn't feel right.  The tests ought to pass ALL THE TIME.

An excellent solution was provided by Carla Mott.  Simple and elegant:

1) Create a test for what you want the new behavior to be.
2) Invert the result of that test -- i.e., if it passes, report a failure, and vice versa.

Result:
The test will pass every time until you fix the issue.  Then it will fail every time until you remove the inversion and assert the correct behavior directly (see the sketch below).
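Here's a rough sketch of what one of these inverted tests might look like, written with JUnit 4.  The monitoring names (getProbeCount, the magic number 5) are made-up stand-ins for illustration -- the only real trick is the inverted assertion at the bottom:

    import static org.junit.Assert.assertFalse;
    import org.junit.Test;

    /**
     * A sketch of an inverted dev test (JUnit 4).  The monitoring
     * calls below are hypothetical -- substitute the real ones.
     */
    public class MonitoringBehaviorTest {

        @Test
        public void probeCountIsAccurate() {
            boolean behavesCorrectly;
            try {
                // The behavior we WANT after the fix: the (hypothetical)
                // monitor reports exactly 5 registered probes.
                behavesCorrectly = (getProbeCount() == 5);
            } catch (Exception e) {
                behavesCorrectly = false;
            }

            // The inverted assertion.  The test PASSES while the bug is
            // still there, and starts FAILING the moment the fix goes in.
            // That failure is the signal to delete the inversion and
            // assert the correct behavior directly.
            assertFalse("The fix is in! Un-invert this test.", behavesCorrectly);
        }

        // Stand-in for the real, still-broken monitoring call.
        private int getProbeCount() {
            return 0; // wrong today; the fix will make it return 5
        }
    }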

We can add lots of these tests for new features now.  When we implement the fixes under the steaming pressure head of deadlines and milestones, we won't have to 'remember' to add dev tests.  They are already done and will break like popcorn popping as we complete tasks.  It's impossible to not have working dev tests upon completion.

