By user12610627 on Jan 08, 2007
What is Quality?
I've always recalled the following quote from E. W. Dijkstra (in fact, it's posted on the door to my office in his handwriting).
"Being a better programmer means being able to design more effective and trustworthy programs and knowing how to do that efficiently"
My interpretation: quality means efficiency. Quality software is efficient for the user of that software. Quality programming is efficiency in developing the software. I believe both can be objectively measured.
Note that reliability (trustworthiness) is normally a prerequisite for efficiency, since in most cases programs that break end up consuming more time than those that don't. In theory, however, the cost of recovering from faulty software plus its normal running time may still be less than that of a competing product. For example, if crash recovery adds 10 minutes to a task that otherwise takes 20, the 30-minute total still beats a reliable competitor that takes 40. In that case, by my measurement, the former would indeed provide higher "quality".
Note that a "showstopper" bug means zero quality, since the task cannot be completed at all.
I often try to get the overall point across by saying something like this:
We could have a butt-ugly GUI and no documentation but if the user is able to get the task done faster with our tool than with the competitor's tool, then our software is higher quality, bottom line.
For the non-interactive parts of programs we can measure efficiency in terms of resource utilization: CPU time, network bandwidth, memory use. To optimize these aspects we leverage less resource-intensive algorithms, data structures, and communication protocols.
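As a minimal illustration of measuring rather than guessing (Python here purely for brevity; the point is language-neutral), timing the same membership query against two data structures shows how a better data structure changes resource use:

```python
import time

def timed(fn):
    """Return (result, elapsed seconds) for one call to fn."""
    start = time.perf_counter()
    result = fn()
    return result, time.perf_counter() - start

data_list = list(range(100_000))
data_set = set(data_list)
target = 99_999  # worst case for the linear scan

_, t_list = timed(lambda: target in data_list)  # O(n) scan
_, t_set = timed(lambda: target in data_set)    # O(1) hash lookup

print(f"list scan:  {t_list:.6f}s")
print(f"set lookup: {t_set:.6f}s")
```

The absolute numbers vary by machine; what matters is that the measurement, not a visual inspection of the code, tells you which structure is cheaper.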
There is a limit on runtime performance improving quality. For example, I don't need a car that can drive 200 mph if I always obey the 65 mph speed limit.
For interactive programs we have to also measure the amount of time consumed by the human user.
To optimize this aspect we provide "tools". Tools are software programs that partially automate tasks on behalf of the user. The combination of user + tool should complete the task faster than the user alone (or the user + your competitor's tool). There's also a learnability aspect of the tool, in addition to usability, which must be factored in, but let's ignore that for the moment.
Experienced programmers know that to improve the efficiency (in terms of CPU use) of the non-interactive parts of programs you should not visually inspect your code, but rather use a profiler as follows:
1. Run the application under the CPU profiler. The profiler outputs a list of the methods called by your application, sorted by time consumed.
2. Ignore everything after the first item in the list (important!).
3. Open the source file containing that method and edit the code.
4. Go to step 1 (if you've done your job, a new method will be first in the list).
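For instance, Python's built-in cProfile produces exactly this kind of list, sorted by time consumed (the two functions here are invented hotspots for illustration):

```python
import cProfile
import io
import pstats

def slow_join(n):
    # Quadratic-style string building -- the kind of hotspot a profiler surfaces.
    s = ""
    for i in range(n):
        s += str(i)
    return s

def fast_join(n):
    return "".join(str(i) for i in range(n))

def application():
    slow_join(20_000)
    fast_join(20_000)

profiler = cProfile.Profile()
profiler.enable()
application()
profiler.disable()

# Sort by cumulative time consumed -- the list the loop above works from.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(10)
report = stream.getvalue()
print(report)
```

The top entries of the report point you at the code worth editing; everything below them you ignore until the next iteration.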
My argument is that the same approach should be applied to interactive programs. We don't necessarily have a tool like the cpu profiler but that doesn't matter. An approach might be like this:
1. Sit a test user in front of your user interface.
2. Use a tool like Macromedia Captivate to record the session. Start recording now.
3. If the user hits a showstopper bug, that is the #1 problem. Stop and fix it now, then go to step 1.
4. Stop recording when the user completes the task.
5. Assemble your team and review the session; break the session down into discrete functional steps and note the amount of time spent in each one.
6. Order the steps by the amount of time consumed.
7. Assign as many resources as necessary to the first step in the list.
8. Have them alter the design or implementation, then go to step 1.
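The bookkeeping at the end of that loop is trivial to script. A sketch (the step names and timings below are invented for illustration):

```python
# Hypothetical timings (seconds) from one recorded session, broken into steps.
session_steps = [
    ("create new project", 45),
    ("configure deployment target", 310),
    ("write first JSP page", 120),
    ("run and verify output", 80),
]

# Order the steps by time consumed, most expensive first -- this is the
# "profiler output" for the interactive case.
ranked = sorted(session_steps, key=lambda step: step[1], reverse=True)

worst_step, worst_time = ranked[0]
print(f"Fix first: {worst_step!r} ({worst_time}s)")
for name, seconds in ranked:
    print(f"{seconds:5d}s  {name}")
```

Just as with the CPU profiler, you only act on the first item; the rest of the list exists to tell you what to ignore.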
In the case of software development tools like those provided by Sun, the human user is another programmer and we need to measure his performance to determine the quality of our tools.
The most efficient way of doing this is to have the tool developer himself act as the user ("eat your own dogfood"), and in cases where this is easy to do you can definitely see a difference in quality. For example, the NetBeans and Eclipse Java editor plugins are noticeably superior to any other plugins, largely due (in my opinion) to the fact that the developers themselves use them on their own code in their daily work.
Unfortunately, in cases where this isn't so easy (in particular Enterprise Tools) typical managers and developers don't even bother to test their tools on real use cases. For example, Java plugin developers who never wrote an applet, JSP-editor developers who never tried to build a real web site, etc. etc.
In such cases the result is typically tools that are highly unusable in obvious ways, which aren't identified until after a release when real users try to use them.
I used to work in Telecom, where we built C programs that ran on embedded systems (switches, routers, and other devices). When it came time to test our software nobody said "here you go, why don't you test it out on our digital switch". Instead we had to rely on simulation. We did this by means of scripts. So if we had a digital subscriber line talking to a switch, we had one script that simulated the DSL and another that simulated the switch. Basically we tested our software against itself. Because standard wire protocols were used this worked well. Note that this strategy is commonplace in life, for example in sports: during practice your starting team plays against a "scout" team which simulates your real opponent.
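A minimal sketch of that test-against-itself idea, in Python with an invented one-line protocol (our real systems spoke standard telecom wire protocols, but the shape is the same): one function scripts the switch, another plays the subscriber line, and the two are wired directly to each other.

```python
import socket
import threading

def simulated_switch(conn):
    """Scripted stand-in for the real switch: answers a call-setup request."""
    request = conn.recv(1024).decode()
    if request == "SETUP\n":
        conn.sendall(b"CONNECT\n")
    else:
        conn.sendall(b"RELEASE\n")
    conn.close()

def subscriber_line(conn):
    """The code under test: requests a call and reads the switch's reply."""
    conn.sendall(b"SETUP\n")
    reply = conn.recv(1024).decode()
    conn.close()
    return reply

# Wire the two scripts to each other: the software tests itself.
line_end, switch_end = socket.socketpair()
switch = threading.Thread(target=simulated_switch, args=(switch_end,))
switch.start()
result = subscriber_line(line_end)
switch.join()
print(result)
```

Because both ends honor the same protocol, either one can stand in for the hardware the other expects.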
In my opinion, the same approach can be used for enterprise software tools integrated with Web Services as in Sun Java CAPS. Note that as above nobody is going to say "Hey, here you go, why don't you test your enterprise tools on my enterprise". Instead in each case we need to simulate the enterprise software problem that our tool is supposed to solve.
WSDL provides a standard communication protocol that can effectively be used to script a simulated enterprise application environment, which can then be used to test the quality of our development tools in the manner described above.
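A sketch of what such a simulated environment could look like, again in Python for brevity: a throwaway HTTP server plays the enterprise web service, returning a canned SOAP-style response, and the development tool under test is simply pointed at its URL. The operation name and payload here are invented for illustration.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Canned response standing in for a real enterprise service's answer.
CANNED_RESPONSE = b"""<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <getOrderStatusResponse><status>SHIPPED</status></getOrderStatusResponse>
  </soap:Body>
</soap:Envelope>"""

class SimulatedEnterpriseService(BaseHTTPRequestHandler):
    def do_POST(self):
        # Consume the request body, then always reply with the script's answer.
        self.rfile.read(int(self.headers.get("Content-Length", 0)))
        self.send_response(200)
        self.send_header("Content-Type", "text/xml")
        self.end_headers()
        self.wfile.write(CANNED_RESPONSE)

    def log_message(self, *args):
        pass  # keep test output quiet

server = HTTPServer(("127.0.0.1", 0), SimulatedEnterpriseService)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Anything that speaks the wire protocol -- including the tool under test --
# can now be pointed at this URL instead of a real enterprise system.
url = f"http://127.0.0.1:{server.server_port}/"
request = urllib.request.Request(url, data=b"<soap:Envelope/>",
                                 headers={"Content-Type": "text/xml"})
with urllib.request.urlopen(request) as response:
    body = response.read().decode()
server.shutdown()
print(body)
```

The scripted service takes the role of the "scout team": cheap to stand up, faithful to the wire protocol, and available long before any real enterprise will let you test against it.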