Last week, I attended a talk by Matt Doar about Common Problems with Bug Trackers or, phrased more directly, Bug Trackers: Do They Really All Suck? The talk was hosted by the Silicon Valley ACCU; I'm scheduled to give their October talk about common floating-point issues and misperceptions.
Matt had a number of interesting observations about the often mundane process of working with a bug tracker. First, a poll of the audience revealed that while nearly everyone uses a bug tracker regularly, no one loves their bug tracker. In contrast, many people will passionately defend tools for other infrastructural tasks like source code management (CVS, Subversion, etc.). A likely cause for this lack of affection is that a bug tracker serves multiple parties (engineers, the quality organization, program management), and those groups use the tool in disparate ways, forcing compromises in the tool's design. Matt identified four broad problem areas with bug trackers:
- Designing Workflows (What is the state transition diagram?)
- Meanings Of Fields (What does "priority" mean? How do priority and severity differ?)
- One Bug, Many Releases (How should the same problem being fixed in multiple releases be tracked?)
- Trusting History (Can time-based queries be performed?)
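To make the workflow-design problem concrete, a bug workflow is essentially a state transition diagram, and it can be written down as data. Here is a minimal sketch in Python; the states and transitions are hypothetical, not those of any particular tracker:

```python
# A hypothetical bug workflow expressed as an explicit state
# transition table. The states and the allowed moves between them
# are illustrative only; real trackers differ.
TRANSITIONS = {
    "New":         {"Open", "Closed"},        # triage may reject outright
    "Open":        {"In Progress", "Closed"},
    "In Progress": {"Resolved", "Open"},      # work may be handed back
    "Resolved":    {"Verified", "Open"},      # QA can reopen
    "Verified":    {"Closed"},
    "Closed":      {"Open"},                  # reopening a shipped bug
}

def can_transition(current, target):
    """Return True if the workflow allows moving a bug
    from state `current` to state `target`."""
    return target in TRANSITIONS.get(current, set())

def transition(current, target):
    """Move a bug to `target`, raising if the workflow forbids it."""
    if not can_transition(current, target):
        raise ValueError(f"illegal transition: {current} -> {target}")
    return target
```

Encoding the diagram as a table like this makes the workflow easy to audit and to change, which is exactly why designing it becomes contentious: every party wants different states and different allowed moves.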
These are consistent with discussions and concerns I've had about bug trackers over the years. One observation was that a priority field should, implicitly or explicitly, include a notion of the party to whom the priority applies: the customer, engineering, testing, etc.
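One way to act on that observation is to make the interested party an explicit dimension of priority rather than a single ambiguous scalar. A minimal sketch, with hypothetical field names and parties:

```python
from dataclasses import dataclass, field

# Hypothetical parties whose priorities for the same bug may disagree.
PARTIES = ("customer", "engineering", "testing")

@dataclass
class Bug:
    title: str
    # One priority per interested party (1 = highest), instead of a
    # single "priority" field whose owner is never stated.
    priorities: dict = field(default_factory=dict)

    def priority_for(self, party):
        """Priority as seen by `party`, defaulting to lowest (5)."""
        return self.priorities.get(party, 5)

bug = Bug("crash on save", priorities={"customer": 1, "engineering": 3})
```

A query like "show the customer's top-priority bugs" then has an unambiguous meaning, and disagreement between parties is visible in the data instead of being fought over in a single field.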