By Byronnevins-Oracle on Apr 01, 2011
I had a problem to work out. I know of plenty of wrong-headed things
that Monitoring does, and I want to clean up this behavior for GlassFish 3.2. But I
want to use the tenets of Test Driven Development. Namely -- tests
FIRST, implement SECOND. Normally that would mean adding the tests
now. The tests would fail every time until we got around to fixing the
code. Eventually the tests would all start passing as we added in the
fixes. Over time we would see lots of failures dropping to zero failures.
But the problem is that we can't do this in a multiple-developer project with continuous automated runs of enormous test suites. We can't add tests that will fail (at least not for more than one cycle). And I don't want to maintain yet another Hudson job that runs a subset of tests just for Monitoring. Plus, it just doesn't feel right. The tests ought to pass ALL THE TIME.
An excellent solution was provided by Carla Mott. Simple and elegant:
1) Create a test for what you want the new behavior to be.
2) Now report the converse of the actual result -- i.e., if the test passes, report it as a failure, and vice versa.
Each such test will pass every time until you fix the issue. Then it will fail every time until you fix the test by removing the inversion.
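To make the trick concrete, here is a minimal sketch in JUnit 4. The class name, the getRequestCount() method, and the "request count should never be negative" behavior are all hypothetical stand-ins for illustration, not the actual GlassFish monitoring dev tests:

    import static org.junit.Assert.assertFalse;
    import org.junit.Test;

    public class MonitoringInvertedTest {

        // Hypothetical stand-in for the real monitoring call.
        // Today it returns a bogus value because of the known bug.
        private int getRequestCount() {
            return -1;
        }

        @Test
        public void requestCountIsNeverNegative() {
            // What we WANT to be true once the bug is fixed.
            boolean desiredBehavior = getRequestCount() >= 0;

            // Inverted report: PASSES while the bug exists, and FAILS the
            // moment the fix lands -- your cue to remove the inversion.
            assertFalse("The bug is fixed! Flip this test back to assertTrue.",
                    desiredBehavior);
        }
    }

When the fix goes in, desiredBehavior becomes true and the assertFalse trips, which is your signal to change it back to a normal assertion.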
We can add lots of these tests for new features now. Later, when we implement the fixes under the steaming pressure head of deadlines and milestones, we won't have to 'remember' to add dev tests. They are already done, and they will break like popcorn popping as we complete tasks. It's impossible to finish a task without working dev tests.