Wednesday Jan 10, 2007

More on Quality

Sun's CEO, Jonathan Schwartz, also has a definition of quality. And as he mentions in that post, he's also a big believer in simplicity and efficiency. So am I.

He provides a dead-simple measure of quality for Sun, which consists of asking a customer one question: "Would you recommend Sun?"

As a programmer I can apply the same measurement to determine the quality of my program: ask a user of my program whether he would recommend it to others.

What I tried to describe in my previous post was a dead-simple procedure for raising a program to the level where it will get a "Yes".

I can name many programs (not written by me) for which I would answer "Yes".

Unfortunately, for the vast majority of programs I've tried my answer would probably be "No", or "Not really", or "Only because there's no alternative", or even "Hell no!".

Here are a few examples of both:

  • Netbeans and Eclipse Java Plugins - Yes
  • All other Netbeans and Eclipse Plugins I've tried - No
  • Microsoft Excel - Yes
  • Microsoft Outlook - No
  • Google Search - Yes
  • Froogle - No

In my experience, in each case where I would answer "Yes" I was able to produce a satisfactory result in a short or reasonable amount of time, without getting stuck, and without having to alter the result significantly due to the limitations or defects of the program.

Maybe I'm dumb, but I honestly can't write software I can't test. So I presume my program is testable. Given that, a starting point is to test the program myself on a (possibly simulated) real-world problem and then at least be able to honestly say I myself would want to use the program.

Over the years at various companies I've actually asked my colleagues if they would want to use their own programs in the real world and the answer was often "No".

From my experience, "low-quality" programs tend to manifest lack of quality in one of four ways:

  1. Lack of functionality (I didn't implement the full Design Spec)
  2. Outright bugs (crashing, hanging, misbehavior, visual bugs like inconsistent colors or margins, ugly layouts, etc.)
  3. Slow performance
  4. Excessive memory use

In addition, from my experience, management tends to be very ineffective at prioritizing such problems. Generally, those in category (2) are placed at the top of the list, followed by (1). Due to insufficient testing of real use cases, (3) and (4) usually don't even make it onto the list until after a release, when real users try the program. Worse, (3) and (4) typically aren't addressed until several releases later (if ever), because during the next release cycle you get a new pile of (1)'s and (2)'s. Meanwhile your customers struggle along feeling dissatisfied, unless they can find an alternative to using your program at all.

The (as I mentioned, dead-simple) procedure described in my previous post crosscuts all of the above manifestations of lack of quality by measuring their importance based on the performance of the user. Lack of functionality is only important if the user actually needs the missing features. The importance of outright bugs is relative to the impact they have on the user's performance (obviously very high if they are of the "showstopper" variety, but possibly low if there are reasonable workarounds). Likewise, execution performance, memory use, or other resource issues only matter if they affect the user (note, however, that from my experience this is extremely common; I'd say it is the main problem with Netbeans, JavaEE, AJAX toolkits, and many others, rather than lack of functionality or outright bugs).

I've avoided the issue of "learnability" so far, but that is a prerequisite to all of the above. Obviously, if the barrier to entry is too high, you won't have users and any quality problems are therefore irrelevant.

Monday Jan 08, 2007

Quality Software

What is Quality?

I've always recalled the following quote from E. W. Dijkstra (in fact, it's posted on the door to my office in his handwriting).

"Being a better programmer means being able to design more effective and trustworthy programs and knowing how to do that efficiently"

My interpretation: Quality means efficiency. Quality software is efficient for the user of that software. Quality programming is efficiency in developing the software. I believe both can be objectively measured.

Note that reliability (trustworthiness) is normally a prerequisite to efficiency, since in most cases programs that break end up consuming more time than those that don't. In theory, however, it's possible that the cost of recovering from faulty software plus the normal running time of such software may still be less than that of a competing software product. In that case, by my measurement, the faulty software would indeed provide higher "quality".
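
For example (the numbers here are made up purely for illustration): if my tool normally gets the job done in 30 minutes but a crash costs 10 minutes of recovery, the 40-minute total still beats a rock-solid competitor that needs an hour, so by this measure the flakier tool comes out ahead.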

Note that "showstopper" bugs = zero quality, since the task cannot be completed at all.

I often try to get the overall point across by saying something like this:

We could have a butt-ugly GUI and no documentation, but if the user is able to get the task done faster with our tool than with the competitor's tool, then our software is higher quality, bottom line.

For the non-interactive parts of programs we can measure efficiency in terms of resource utilization: CPU time, network bandwidth, memory use. To optimize these aspects we leverage less resource-intensive algorithms, data structures, and communication protocols.
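
As a trivial, hypothetical illustration of the data-structure point (throwaway code of my own, nothing to do with the tools discussed here), here is the same membership test done first with a linear scan and then with a hash lookup:

    import java.util.*;

    /** The same membership test against a list (linear scan) and a set (hash lookup). */
    public class LookupComparison {
        public static void main(String[] args) {
            int n = 50000;
            List<Integer> list = new ArrayList<Integer>();
            Set<Integer> set = new HashSet<Integer>();
            for (int i = 0; i < n; i++) {
                list.add(i);
                set.add(i);
            }

            long t0 = System.nanoTime();
            for (int i = 0; i < n; i++) list.contains(i);   // O(n) per lookup
            long listMillis = (System.nanoTime() - t0) / 1000000;

            t0 = System.nanoTime();
            for (int i = 0; i < n; i++) set.contains(i);    // O(1) per lookup, on average
            long setMillis = (System.nanoTime() - t0) / 1000000;

            System.out.println("list.contains: " + listMillis + " ms");
            System.out.println("set.contains:  " + setMillis + " ms");
        }
    }

The second loop does the same work with vastly fewer comparisons, which is the whole point of picking the less resource-intensive structure.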

There is a limit on runtime performance improving quality. For example, I don't need a car that can drive 200 mph if I always obey the 65 mph speed limit.

For interactive programs we have to also measure the amount of time consumed by the human user.

To optimize this aspect we provide "tools": software programs that partially automate tasks on behalf of the user. The combination of user + tool should complete the task faster than the user alone (or the user + your competitor's tool). There's also a learnability aspect of the tool, in addition to usability, which must be factored in, but let's ignore that for the moment.

Experienced programmers know that to improve the efficiency (in terms of CPU use) of the non-interactive parts of programs you should not visually inspect your code, but rather use a profiler, as follows (a rough sketch in code appears after the list):

  1. Run the application under the CPU profiler. The profiler outputs a list of the methods called by your application, sorted by time consumed.
  2. Ignore everything after the first item in the list (important!).
  3. Open the source file containing that method and edit the code.
  4. Go to 1 (if you've done your job, a new method will be first in the list).
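
In practice you'd reach for a real profiler (NetBeans ships with one), but the idea behind step 1 is simple enough to sketch. Here is a minimal, hypothetical sampler of my own (not any product's API) that periodically records the top stack frame of every thread and tallies how often each method is seen; printed in descending order, the first line of the report is the method you go edit:

    import java.util.*;

    /** Toy sampling profiler: tallies the top stack frame of every thread at a
     *  fixed interval, then reports methods sorted by sample count. */
    public class TinySampler extends Thread {

        private final Map<String, Integer> samples = new HashMap<String, Integer>();

        public void run() {
            try {
                while (!isInterrupted()) {
                    for (Map.Entry<Thread, StackTraceElement[]> e
                            : Thread.getAllStackTraces().entrySet()) {
                        StackTraceElement[] stack = e.getValue();
                        if (e.getKey() == this || stack.length == 0) continue;
                        String top = stack[0].getClassName() + "." + stack[0].getMethodName();
                        Integer n = samples.get(top);
                        samples.put(top, n == null ? 1 : n + 1);
                    }
                    Thread.sleep(10); // sampling interval
                }
            } catch (InterruptedException done) {
                // interrupted: stop sampling
            }
        }

        /** Print the hottest methods first; per the procedure above, only the
         *  first line matters until it changes. */
        public void report() {
            List<Map.Entry<String, Integer>> sorted =
                    new ArrayList<Map.Entry<String, Integer>>(samples.entrySet());
            Collections.sort(sorted, new Comparator<Map.Entry<String, Integer>>() {
                public int compare(Map.Entry<String, Integer> a, Map.Entry<String, Integer> b) {
                    return b.getValue() - a.getValue();
                }
            });
            for (Map.Entry<String, Integer> e : sorted) {
                System.out.println(e.getValue() + "  " + e.getKey());
            }
        }
    }

Start it before the workload, interrupt() and join() it when the workload finishes, then call report(); the join() makes the tallies safely visible to the reporting thread.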

My argument is that the same approach should be applied to interactive programs. We don't necessarily have a tool like the CPU profiler, but that doesn't matter. An approach might look like this (a small sketch of the bookkeeping follows the list):

  1. Sit a test user in front of your user interface.
  2. Use a tool like Macromedia Captivate to record the session. Start recording now.
  3. If the user hits a showstopper bug, that is the #1 problem: stop and fix it now, then go to 1.
  4. Stop recording when the user completes the task.
  5. Assemble your team and review the session; break down the session into discrete functional steps and note the amount of time spent in each one.
  6. Order the steps by the amount of time consumed.
  7. Assign as many resources as necessary to the first step in the list.
  8. Have them alter the design or implementation, then go to 1.
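
For the bookkeeping in steps 5 through 7, something this dumb is enough (a hypothetical sketch; the step names and times are invented for illustration):

    import java.util.*;

    /** Order the functional steps of a recorded session by time consumed. */
    public class SessionBreakdown {
        public static void main(String[] args) {
            // Step name -> seconds spent, as noted while reviewing the recording.
            // The names and numbers are invented for illustration.
            Map<String, Integer> steps = new LinkedHashMap<String, Integer>();
            steps.put("Create new project", 40);
            steps.put("Configure deployment target", 310);
            steps.put("Edit source", 95);
            steps.put("Deploy and run", 180);

            List<Map.Entry<String, Integer>> sorted =
                    new ArrayList<Map.Entry<String, Integer>>(steps.entrySet());
            Collections.sort(sorted, new Comparator<Map.Entry<String, Integer>>() {
                public int compare(Map.Entry<String, Integer> a, Map.Entry<String, Integer> b) {
                    return b.getValue() - a.getValue();
                }
            });

            // Per step 7, only the first line matters this iteration.
            for (Map.Entry<String, Integer> e : sorted) {
                System.out.println(e.getValue() + "s  " + e.getKey());
            }
        }
    }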

In the case of software development tools like those provided by Sun, the human user is another programmer and we need to measure his performance to determine the quality of our tools.

The most efficient way of doing this is to have the tool developer himself act as the user (eat your own dogfood), and in cases where this is easy to do you can definitely see a difference in quality. For example, the Netbeans and Eclipse Java editor plugins are far superior to any other plugins, largely because (in my opinion) the developers themselves use them on their own code in their daily work.

Unfortunately, in cases where this isn't so easy (in particular Enterprise Tools), typical managers and developers don't even bother to test their tools on real use cases: Java plugin developers who never wrote an applet, JSP-editor developers who never tried to build a real web site, and so on.

In such cases the result is typically tools that are highly unusable in obvious ways, which aren't identified until after a release when real users try to use them.

I used to work in Telecom, where we built C programs that ran on embedded systems (switches, routers, and other devices). When it came time to test our software, nobody said "here you go, why don't you test it out on our digital switch". Instead we had to rely on simulation. We did this by means of scripts. So if we had a digital subscriber line talking to a switch, we had one script that simulated the DSL and another that simulated the switch. Basically we tested our software against itself. Because standard wire protocols were used, this worked well. Note that this strategy is commonplace in life, for example in sports: during practice your starting team plays against a "scout" team which simulates your real opponent.

In my opinion, the same approach can be used for enterprise software tools integrated with Web Services as in Sun Java CAPS. Note that as above nobody is going to say "Hey, here you go, why don't you test your enterprise tools on my enterprise". Instead in each case we need to simulate the enterprise software problem that our tool is supposed to solve.

WSDL provides a standard, machine-readable description of a service's interface that can effectively be used to script a simulated enterprise application environment, which can then be used to test the quality of our development tools in the manner described above.
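
As a minimal, hypothetical sketch (the service name, operation, and URL below are invented, and a real simulation would be scripted from the WSDL of the system being replaced), JAX-WS makes it cheap to stand up such a simulated service and point the tool under test at its generated WSDL:

    import javax.jws.WebService;
    import javax.xml.ws.Endpoint;

    /** A stand-in "enterprise" service that simulates the environment the
     *  tool under test is supposed to integrate with. */
    @WebService
    public class SimulatedInventoryService {

        /** Scripted, canned behavior in place of a real back end. */
        public int checkStock(String sku) {
            return "OUT-OF-STOCK-SKU".equals(sku) ? 0 : 42;
        }

        public static void main(String[] args) {
            // Publish the endpoint; JAX-WS serves the generated WSDL at ...?wsdl,
            // and the development tool under test is pointed at that URL.
            Endpoint.publish("http://localhost:8080/sim/inventory",
                             new SimulatedInventoryService());
            System.out.println("Simulated service up at http://localhost:8080/sim/inventory?wsdl");
        }
    }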
