Friday Apr 01, 2011

Test Driven Development in a Large Team

I had a problem to work out.  I know of plenty of wrong-headed things that Monitoring does.  I want to clean up this behavior for GlassFish 3.2, but I want to use the tenets of Test Driven Development.  Namely -- tests FIRST, implement SECOND.  Normally that would mean: add some tests now.  The tests will fail every time until we get around to fixing the code.  Eventually the tests will all start passing as we add in the fixes.  Over time we see lots of failures dropping to zero failures.

But the problem is that we can't do this in a multiple-developer project with continuous automated runs of the enormous suites of tests.  We can't add tests that will fail (at least not for more than one cycle).  And I don't want to maintain yet another Hudson job that runs a subset of tests just for monitoring.  Plus it just doesn't feel right.  The tests ought to pass ALL THE TIME.

An excellent solution was provided by Carla Mott.  Simple and elegant:

1) Create a test for what you want the new behavior to be.
2) Now set the result of that test to the converse of the actual result -- i.e. if it passes, report a failure, and vice versa.

Result:
The test will pass every time until you fix the issue.  Then it will fail every time until you fix the test.
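Here is a minimal sketch of what such an inverted test could look like, written JUnit-style.  The class and helper names are made up for illustration; the real dev tests drive the server through asadmin rather than a direct API call.

    import static org.junit.Assert.assertFalse;
    import org.junit.Test;

    public class MonitoringBehaviorTest {

        // Hypothetical check for the NEW, desired behavior.  In the real suite
        // this would run the relevant asadmin commands and inspect the output.
        private boolean monitoringBehavesCorrectly() {
            // ... exercise the server here ...
            return false; // placeholder until wired up to the server
        }

        @Test
        public void monitoringCleanupIsDone() {
            // Deliberately inverted: this PASSES as long as the old, wrong
            // behavior is still in place.  The moment the fix goes in, the
            // test starts failing -- the signal to flip the assertion and
            // keep it as a normal, always-passing dev test.
            assertFalse("The fix is in -- flip this assertion!",
                    monitoringBehavesCorrectly());
        }
    }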

We can add lots of these tests for new features now.  When we implement the fixes under the steaming pressure head of deadlines and milestones, we won't have to 'remember' to add Dev Tests.  They are already done and will break like popcorn popping as we complete tasks.  It's impossible to not have working devtests upon completion.


Wednesday Mar 23, 2011

CLI Logfile and DevTests

Recently a colleague added a test to the Admin Development Tests.  This is a huge suite of tests that we developers use as a sanity check; it exercises the administration of a running GlassFish server.

Here is the symptom of the problem:

The test failed on her machine.
The test failed on my machine.
The test succeeded on the Hudson build that runs it automatically -- it passed 100% of the time.

I finally tracked this problem down today with the help of an invaluable tool for asadmin.  I always set this variable in my environment:

AS_LOGFILE=D:\glassfish3\glassfish\cli.log

When this is set, every asadmin command is written into the logfile.  It includes the full command with all arguments, the time-stamp, and the return value.
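The variable is just an ordinary environment variable, so a test harness that spawns asadmin from Java can also set it per process.  A rough sketch, assuming a log path of your choosing:

    import java.io.IOException;

    public class CliLogDemo {
        public static void main(String[] args) throws IOException {
            ProcessBuilder pb = new ProcessBuilder("asadmin", "version");
            // Every asadmin command launched with this environment gets logged
            // with its full argument list, time-stamp, and exit value.
            pb.environment().put("AS_LOGFILE", "D:\\glassfish3\\glassfish\\cli.log");
            pb.inheritIO();
            pb.start();
        }
    }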

I quickly figured out the problem by looking in this file on my platform, Windows, and on the Hudson platform, Linux.  Here it is:

Command inside the test itself:  asadmin list -m *

Command inside cli.log:

03/22/2011 21:55:27 EXIT: 1 asadmin --user admin --passwordfile D:\gf\v2\appserv-tests/config/adminpassword.txt --host localhost --port 4848 --echo=true --terse=true list -m .svn apps build build.xml byron derby.log dome.bat foo foo2 foo2.bat hudson.sh Jira.java mon.bat nbproject pom.xml q qqq resources seconddb src

OOPS!!  The star is special on Windows when calling the asadmin script -- even from Runtime.exec().  The star was replaced with the names of every file that happened to be sitting in the directory the tests were running in.

Solution:  In the code use "\"*\"" for Windows and '*' otherwise.
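A hedged sketch of what that fix can look like in the code that builds the asadmin command line -- the helper name and OS check are mine for illustration, not the actual dev-test code:

    import java.io.IOException;

    public class AsadminStarFix {

        // On Windows, wrap the star in literal double quotes so the asadmin
        // script does not expand it against whatever files happen to be in
        // the working directory.  Elsewhere the bare star is passed through
        // untouched -- Runtime.exec() itself does no globbing.
        static String monitoringPattern() {
            String os = System.getProperty("os.name").toLowerCase();
            return os.contains("windows") ? "\"*\"" : "*";
        }

        public static void main(String[] args) throws IOException {
            String[] cmd = { "asadmin", "list", "-m", monitoringPattern() };
            Runtime.getRuntime().exec(cmd);
        }
    }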

