Don't Keep Your Users Hostage!

In recent blogs, I've gone on at length about misusing performance numbers to make expensive things look competitive. Just imagine the thought process behind saying "we don't use distributed benchmarks on z" - as if there were anything "distributed" about database or OLTP applications! - while trying to convince customers to move their "distributed" applications to z. Or consider comparing systems deliberately run at high utilization (due to capital costs) with others deliberately not run at high utilization, and chirping "Our systems can run at high utilization!" Sigh. In this post, however, I want to discuss a formative lesson I got on this subject many years ago. It bears relevance to this, too.

An ancient conflict

About 25 years ago, I was involved in a battle in the long-running war between MVS/TSO and VM/CMS. For people outside the mainframe world this may be incomprehensible (just as mainframers will shake their heads over Unixers waging "BSD vs. SysV" or "vi vs. emacs" wars - for the latter, they'll just say "they both stink!"). I definitely was in the VM camp (which is where I spent much of my career), despite having done internals work in MVS. My boss asked me to call a famous IBMer, Walter Doherty, and ask him how many more CMS users we could put on a given machine, compared to TSO (both were timesharing environments; TSO being the "time sharing option" of MVS - or, as we VMers called it, "The Slow One"). The idea was to collect expert testimony backing our preferred system.

The Economic Value of Rapid Response Time

I have to interject that Walt Doherty is notable for being among the first to recognise and quantify the importance of good response time in interactive systems. His paper "The Economic Value of Rapid Response Time" (1982, IBM document GE20-0752; you can see it here on Jim Elliott's site) broke new ground by showing that response time with low latency - ideally, close to the limit of human perception at 0.1 seconds - and with little variability promoted higher productivity for system users, and that this productivity gain could be measured to be worth more in financial terms than the cost of the computing assets needed to deliver that response time. This was a revelation in those long-lost times when the relatively small number of people using computers (in the early PC days, and often on oversubscribed minicomputers or mainframes) tolerated long or inconsistent response times.

Don't keep your users hostage

Somehow I managed to get Mr. Doherty's telephone number and posed the question, confident that he would give me a range of ratios between CMS and TSO - something like "between 2.5 and 5 times as many users". Imagine my surprise when he responded by bluntly saying "Your question is wrong." He went on to say "If all you do is count users without managing the response times and service you're providing for them, you'll just give them bad response and keep them hostage - you've got their data, and you won't let them get at it." This was a revelation to me.

The proper measure of systems

The important lesson Doherty taught me (eventually he took pity on me, and told me that indeed, CMS could support 2.5 to 5 times as many users as TSO could at comparable service levels) was that you must manage to service levels, not merely to how many users you manage to get to log on to your system and then condemn to watching the hourglass...

Thanks to a pioneer

This post is in part a public thanks to a pioneer. He made a lasting contribution to the field, and he taught me a valuable lesson. Provide rapid (subsecond) response time to your users when feasible, and with low variability - uneven response time (sometimes fast, sometimes slow) is very disruptive to users. And don't measure your systems by how many users you can cram onto them. This lesson really influenced me in a large way, and I'm really grateful. I'm also adamant about the distinction, as you may have seen in my recent posts!

On a sad note

An additional note of respect, for a great pioneer who has just passed. Bob Tomasulo was the 1997 Eckert-Mauchly Award recipient for the ingenious algorithm which enabled out-of-order execution processors to be implemented. Tomasulo's algorithm was first used in the floating point processor of the IBM 360/91. His work was a great contribution to high performance computing that influenced many later computer architects.




