Reading the occasional off-hand remark about helmets in Chris's blog has led me to take a look at risk homeostasis. It is one of the arguments advanced by the "anti-helmet" web sites that Chris has provided links to. The basic idea is that people tend to maintain a constant level of absolute risk in the face of wide variations in the safety of a particular activity. For instance, if we improve roads (better surfacing, cambered bends, and so on), people will tend to drive faster, so that their chance of an adverse consequence per unit time stays the same, even though the chance of a crash per unit distance goes down. In other words, drive 20% faster on a road that is 20% safer per mile, and your risk per hour behind the wheel is back where it started. I personally find this rather disturbing, and wish it were not true. Unfortunately, there is plenty of evidence that it is the case. The classic studies were done on road safety, but the idea is now being applied to other areas too. For instance, some people have the perception that antiretroviral drugs make HIV much less of a problem in the "developed world" these days, and sure enough, infection rates here seem to be rising.
Since I find it rather depressing to think about this in the context of people's lives being at risk, I started to wonder whether it applies to software development. If we insist that people build the full train before they put back their changes, will they put back stuff that they know compiles but are less sure actually works correctly? If we insist that people write unit tests for their code, will they then spend less time thinking about whether their algorithm really works?
I don't have any answers for these questions, but I'm going to be on the lookout for examples.