Test where the failures are likely to be
By Darcy-Oracle on Apr 25, 2008
There is an old joke about walking along one night and coming across someone searching under a streetlight for lost keys. You stop to help look, and after a minute or two of searching you remark, "Your keys don't seem to be here. Where did you drop them?" "Well, I dropped them over in that alley, but it's way too dark to look there!"
While at Berkeley, one of the lessons I learned from Professor Kahan was to "test where the failures are likely to be," which he stated much more mathematically as "seek the singular points of analytic functions." Especially for numerical applications, the tricky inputs to a piece of code can differ markedly from algorithm to algorithm. For example, this was the underlying reason the Pentium fdiv bug was not caught sooner. A new SRT divider algorithm was being used, and while billions and billions of existing tests were run and looked fine, new tests targeting the new algorithm were apparently not written. After learning of the general problem, Professor Kahan was able to write a short test program that probed at likely failure points, boundaries in a lookup table, and found incorrect quotients after executing for under a minute.
I keep Professor Kahan's advice in mind when writing regression tests for my JDK work, especially on numerics. On occasion, this methodology has even flagged a bug unrelated to the code at hand. The tests I wrote for an initially internal "getExponent" method on floating-point numbers included checking the adjacent floating-point values around each transition to the next exponent; the lucky by-catch of this was a HotSpot bug, which was then corrected. From a code coverage perspective, testing at every exponent value is not needed since the code executed is the same, but such thoroughness helps provide robustness against other kinds of failures and didn't take much more time or code in this case.
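To give a flavor of this style of test, here is a minimal sketch of checking exponent transitions, using the public Math.getExponent, Math.scalb, Math.nextAfter, and Math.nextUp methods; the actual JDK regression tests are more thorough than this.

```java
// Sketch: probe Math.getExponent at the boundary between each exponent
// value and the next.  The smallest double with exponent e+1 is 2^(e+1);
// its immediate predecessor is the largest double with exponent e.
public class GetExponentBoundaryTest {
    public static void main(String[] args) {
        for (int e = Double.MIN_EXPONENT; e < Double.MAX_EXPONENT; e++) {
            double boundary = Math.scalb(1.0, e + 1); // smallest value with exponent e+1
            double below = Math.nextAfter(boundary, Double.NEGATIVE_INFINITY);

            if (Math.getExponent(below) != e)
                throw new AssertionError("below boundary: exponent " + e);
            if (Math.getExponent(boundary) != e + 1)
                throw new AssertionError("at boundary: exponent " + (e + 1));
            if (Math.getExponent(Math.nextUp(boundary)) != e + 1)
                throw new AssertionError("above boundary: exponent " + (e + 1));
        }
        System.out.println("All exponent-boundary checks passed");
    }
}
```

Checking three values per boundary rather than one is cheap and is exactly the kind of "adjacent value" probing described above.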
While the mathematics behind certain math library tests can be quite sophisticated, in some ways the structure of their input is relatively simple compared to, say, the set of legal strings to a Java compiler. In the worst case, for a single-argument floating-point method an exhaustive test "just" has to make sure each of the 2^32 or 2^64 possible inputs has the proper value. The set of possible Java programs is much, much larger, and categorizing the set of notable transition points can be challenging, but looking for likely failures is still applicable and worthwhile as one aspect of testing.
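For float methods, the 2^32 case is actually practical to run: Float.intBitsToFloat lets a loop visit every bit pattern. As a sketch, the following compares Math.getExponent against a simple bit-extraction reference over all float inputs (the reference here follows the documented getExponent behavior for zeros, subnormals, infinities, and NaNs); the loop takes on the order of a minute.

```java
// Sketch: exhaustively test a single-argument float method by walking
// all 2^32 bit patterns via Float.intBitsToFloat.
public class ExhaustiveFloatTest {
    // Reference implementation: read the exponent field directly.
    static int referenceExponent(float f) {
        int biased = (Float.floatToRawIntBits(f) >>> 23) & 0xFF;
        if (biased == 0)    return Float.MIN_EXPONENT - 1; // zero or subnormal
        if (biased == 0xFF) return Float.MAX_EXPONENT + 1; // infinity or NaN
        return biased - 127;                               // normal value
    }

    public static void main(String[] args) {
        long mismatches = 0;
        for (long bits = 0; bits <= 0xFFFF_FFFFL; bits++) {
            float f = Float.intBitsToFloat((int) bits);
            if (Math.getExponent(f) != referenceExponent(f))
                mismatches++;
        }
        System.out.println("mismatches: " + mismatches);
    }
}
```

For double methods the same trick is out of reach, which is where choosing likely failure points, rather than brute force, earns its keep.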