Response to Joe Temple's blog on my blog...
By jsavit on Sep 19, 2008
I would prefer to ignore this and let sleeping dogs snooze. We've made our points and neither of us is going to convince the other. We probably won't convince anybody else who already has a position staked out. However, a lot of what Mr. Temple said about Sun products, and about IBM System z, is wrong. I dislike misrepresentation of facts, and especially misrepresentation of what I said, so I'm going to pick up the sword at least this one more time.
So, I'm going to respond to his blog. I won't recap my dismembering of the phony comparisons that accompanied the z10 announcement, as you can find that on my blog at Ten Percent Solution and No, there isn't a Santa Claus. Especially read the latter, because that is where Mr. Temple and I previously conversed, and I extended him the courtesy of saying "Joe Temple is an IBM Distinguished Engineer, and in my opinion a person who has earned respect". All the more reason for my disappointment at seeing his blog. Consequently, I feel compelled to respond to some of Mr. Temple's distortions.
First: He says of me, "he compares the LSPR scaling ratios to Industry benchmark results on UNIX SMPs." I'll be blunt: Joe knows I did the exact opposite. See Ten Percent Solution, where on March 18, 2008, I said of LSPR: "What I object to is it being used as a marketing tool in an official IBM announcement to extrapolate performance for comparison to a completely different platform based on a workload that isn't even the same as the LSPR benchmark workload." I went on to describe the \*legitimate\* purpose IBM has for LSPR: same-platform-family capacity planning. Anything else should carry disclaimers admitting "this is only an estimate". For cross-platform comparisons, there are the standard open benchmarks, which IBM refuses to publish for System z. Joe read the material I quote here long before his post saying I said the opposite.
Let's go to the beginning of this. My blog responded to IBM's February announcement of the z10 and a related IBM blog that had the text "384.5 RPEs is approximately equivalent to the number of z10 RPEs at 90% when you use 20 RPEs equal to 1 MIP where MIPS are based on the LSPR curve for the z10." It was IBM using LSPR to compare z to Unix benchmarks, and it was me explaining why that was baseless. Joe accuses me of doing exactly what I was refuting, in a post he read months before writing his own. I do not like being misrepresented in this manner.
The IBM announcement originally had a footnote 3 that offered "justification" for their claims, and the blog elaborated on it. When it turned out that this fuzzy math was based on inappropriate use of a third party's tools, both the announcement page and the blog were airbrushed to remove those claims. I wish I had had the foresight to print or save those pages, but the text I quoted has been removed. Dear Reader - feel free to speculate why.
Fortunately, the Internet has cached copies of the IBM press release before it was censored. See the original text or use http://tinyurl.com/5u3emk. That page includes the removed text: "\*3 Source: On Line Transaction Processing Relative Processing Estimates (OLTP-RPEs): Derivation of 760 Sun X2100 2.8 Opteron processor cores with average OLTP-RPEs per Ideas International of 3,845 RPEs and available utilization of 10% and 20 RPEs equating to 1 MIPS compared to 26 z10 EC IFLs and an average utilization of 90%." So, there is IBM using RPEs inappropriately. The IBM blog used a mythical "LSPR ratio" (there is no such thing - there are lots of ratios, one for each combination of benchmark and platform) to extrapolate from z9 to z10, though, unfortunately, nobody kept a copy of the above blog page. To use Joe's wording, it is IBM that "compares the LSPR scaling ratios to Industry benchmark results on UNIX SMPs." Not me.
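To make the sleight of hand concrete, here is a sketch of the arithmetic that footnote implies, using only the figures quoted above. The per-server RPE value, the 10%/90% utilization assumptions, and the 20-RPEs-per-MIPS conversion are all IBM's numbers; reproducing them here is not an endorsement of the method:

```python
# Reconstructing the arithmetic of IBM's (since-removed) footnote 3.
# Every constant below comes from the quoted footnote text.

sun_servers = 760          # Sun X2100 2.8 GHz Opteron cores in IBM's example
rpe_per_server = 3845      # Ideas International OLTP-RPE figure IBM cites
sun_utilization = 0.10     # IBM assumes the Sun boxes sit 90% idle...
z_utilization = 0.90       # ...while the z10 runs at 90% busy
rpe_per_mips = 20          # IBM's conversion factor: 20 RPEs == 1 MIPS

# "Useful" RPEs once the 10%-utilization assumption is applied:
effective_rpes = sun_servers * rpe_per_server * sun_utilization

# MIPS the z10 supposedly needs to match that, at 90% utilization:
implied_mips = effective_rpes / rpe_per_mips / z_utilization

print(f"effective RPEs under IBM's assumption: {effective_rpes:,.0f}")
print(f"implied z10 MIPS: {implied_mips:,.0f}")

# The whole result is driven by the utilization assumptions: the 10%-busy
# Sun vs. 90%-busy z pairing inflates the consolidation ratio ninefold
# before any actual performance comparison happens.
print(f"inflation from utilization assumptions alone: "
      f"{z_utilization / sun_utilization:.0f}x")
```

Run it and the point of my earlier posts is visible in three lines: change the utilization assumption and the "consolidation" claim changes by an order of magnitude, with no change to either machine.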
By the way, the basis of IBM's claim of replacing 1,500 servers was that 1,375 of them were essentially idle. With that trick, why not claim you can replace 15,000 or 150,000, or 1,500,000 if they're powered off! See the IBM blog entry or, for convenience, http://tinyurl.com/5lxzrn, for IBM blogger Tony Pearson saying "125 Backup machines running idle ready for active failover in case a production machine fails. 1250 machines for test, development and quality assurance, running at 5 percent average utilization" (bolding for emphasis). I think we can agree that any contemporary machine can replace a large number of machines doing essentially nothing. It just costs much more if you do that on z.
Joe says that Sun disparages TPC-C because we don't do it well. Not so, IBM says the same. Read the IBM document at ftp://ftp.software.ibm.com/eserver/benchmarks/wp_TPC-E_Benchmark_022307.pdf which describes TPC-C as "an aging benchmark losing relevance", and lists its deficiencies for current computers and workloads. So, IBM also acknowledges that TPC-C is broken (except for the cases where they still use it - not on IBM z, of course - to convince the credulous).
Frankly, I've argued that we should blow away the numbers on TPC-C (I believe our current servers could handily do this), and then hold up the trophy for best TPC-C and say "we won the record", and then say outright it's bogus without people making spurious claims that we only say so because we can't do it. The counter argument, which has won so far, is that we've long made a statement of principle that TPC-C is broken, and it would be misleading and a distraction to then go out and publish new TPC-C results. Unlike some other vendors who both disparage it (see above) and also run it, depending on which audience they have handy.
What is comical about this is that Joe implies that Sun doesn't run one particular benchmark because we're unable to "keep up", while defending IBM not publishing ANY standard benchmark on System z. This is called "chutzpah". I can't make up my mind whether z-advocates truly believe they deserve a free pass and are exempt from proving performance and price/performance, or if this is a cynical ploy to avoid publishing z's price/performance.
In fact, Sun has many world record performance results on its servers. See for example a Java app server benchmark (IBM is there, but not for z, of course). Same with web serving. Or the world's largest data warehouse on Sun and Sybase. Or world record with SAP using Oracle and Sun. Click here for many more. There are lots of them.
In sharp contrast, IBM refuses to publish performance results on System z in a way that would permit direct platform comparison. Despite the handwaving and chaff Joe spreads about cache coherency (much of which is fanciful and wrong), many of the standard benchmarks are indeed very good predictors for application performance: such as web serving, file serving, Java application servers and databases. There is nothing mystical about it. These are exactly the workloads IBM wants you to run on z without offering any public evidence that z performs them adequately. Need I add that the industry standard benchmarks vary widely in terms of cache coherence and threadedness? IBM publishes none of them for z.
IBM does run workloads on z similar to some industry benchmarks, but only shows performance relative to other z models. See the LSPR page, where WASDB (on z/OS) and WASDB/L (on z/Linux) are Java application server benchmarks "written to open Web and Java Enterprise APIs, making the WASDB application portable across J2EE-compliant application servers." That's very much like the Java app server benchmarks Joe says are invalid and so different from what you run on a mainframe. Except it does run on a mainframe, and IBM says it is a valid predictor of performance, contrary to what Joe says. What they won't do is make it easy for you to do a direct comparison with any other platform. Think about it. Also take the time to look at the actual benchmark descriptions: just as with the open benchmarks, they have different levels of parallelism and cache interference. To describe the mainframe workloads as being on one side of parallel nirvana, purgatory, or hell (see Temple's remarks on his blog, or in the copy below) is nonsensical, as the LSPR studies include all three. IBM also doesn't run the z/OS LSPR studies under z/VM, so his references to virtualization overhead are irrelevant: IBM reports the non-virtualized results, and they still scale poorly.
I do not understand Joe's obsession with simplistic characterizations of the parallelism and scalability of our processors and industry-standard benchmarks, which have a wide range of properties. Labelling them as a group with a single characteristic for scalability, NUMA sensitivity, parallelism, and so on is simply wrong. An M-series enterprise server is very different in performance characteristics and other features from a CoolThreads chip-multithreading server. To lump them together as identical is silly. A T5220, for example, has uniform memory latency - it's not NUMA in the least. Not so with an E25K or an M8000, which do have NUMA properties. Sun recognizes that different workloads have different processor requirements, and makes SPARC and Solaris binary-compatible systems that can handle all of them, so you can trivially move your application to the compatible system that runs it best. Unlike with IBM, where by Joe's own words (see below), you have to switch platform architectures and (consequently) all the platform software if an application turns out not to be a good performance match for z.
Joe is mistaken in how he characterizes the different processor families. One difference is that Sun's enterprise servers increase cache and I/O bandwidth as processors are added, while IBM's don't. As you enable CPs in an IBM z9 or z10 "book", you decrease the cache available to each processor, since a "book" contains a fixed amount of cache regardless of the number of CPUs. That's not too bad for z/OS, because it uses shared address spaces (e.g., LPA and CSA) for multiple jobs, but it's a bigger problem for Linux under z/VM, which has to discard cache and TLB contents on every dispatch. (Joe must have missed Sun's E25K product, which, like z, also had a large L3 cache. That's hardly unique to IBM.) Anyway, the cache limitations may explain IBM z's sublinear scalability shown in all the LSPR tests, and the fact that IBM doesn't publish LSPR for more than 32 CPUs in a single OS instance. See the IBM LSPR report and notice how doubling the number of CPUs doesn't double the measured throughput. It's sometimes stated that IBM doesn't do full-system tests of a single OS instance on z for reasons of cost, but that's obviously not the case: IBM runs LSPR tests on fully-configured z systems, but it needs multiple OS instances to drive the boxes and still can't get near linear scale.
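Checking this for yourself is simple: compute scaling efficiency, the ratio of achieved speedup to ideal linear speedup, from any two rows of an LSPR table. The throughput numbers below are hypothetical placeholders for illustration, not actual LSPR data; plug in real ITR values from an IBM LSPR report to repeat the exercise:

```python
# Scaling efficiency: how close does adding CPUs come to linear speedup?

def scaling_efficiency(cpus_a, itr_a, cpus_b, itr_b):
    """Achieved speedup from config A to config B, divided by the ideal
    (linear) speedup. 1.0 means perfect linear scaling; less is sublinear."""
    achieved = itr_b / itr_a
    ideal = cpus_b / cpus_a
    return achieved / ideal

# Hypothetical example: doubling 16 CPUs to 32 grows throughput only 1.6x.
eff = scaling_efficiency(16, 100.0, 32, 160.0)
print(f"scaling efficiency: {eff:.0%}")  # prints "scaling efficiency: 80%"
```

Apply this to published LSPR rows and the sublinear curve I'm describing falls straight out of IBM's own numbers.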
Joe also neglected to respond to my blog's mention of negative scaling on IBM mainframes (which he has already seen): a 16-way z990 doing only 1,322 web transactions per second, and a 24-way doing under 1,000. That's right: adding CPUs resulted in lower performance! So much for the claims of scalability for the IBM mainframe. Crikey! Any contemporary 1RU x86 or SPARC server will dramatically outperform that at a tiny fraction of the cost, floor space, software licenses, staff, and environmentals. Talk about "inconvenient facts". Instead of misapplying Gunther's book, I suggest Joe read "IBM Mainframes: Architecture and Design, 2nd ed." (Prasad and Savit). Or my VM performance and internals books, for that matter.
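Run the arithmetic on those two data points (the only inputs here are the figures quoted above; the 24-way number is an upper bound, since the measured result was under 1,000):

```python
# Per-CPU throughput from the z990 web-serving figures quoted above:
# 1,322 tx/sec on 16 CPUs; under 1,000 tx/sec on 24 CPUs.

tps_16way = 1322
tps_24way = 1000   # upper bound; the measured figure was below this

print(f"16-way: {tps_16way / 16:.1f} tx/sec per CPU")
print(f"24-way: at most {tps_24way / 24:.1f} tx/sec per CPU")

# Adding 8 CPUs reduced TOTAL throughput, so the marginal contribution
# of each added CPU is negative:
marginal = (tps_24way - tps_16way) / (24 - 16)
print(f"marginal throughput per added CPU: at most {marginal:.2f} tx/sec")
```

Per-CPU throughput drops from about 82.6 to at most 41.7 tx/sec, and each added CPU costs at least 40 tx/sec of total throughput. Negative marginal throughput is the opposite of scalability.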
So, I urge you to simply ignore all this hand-waving about cache and NUMA properties and the supposed magic capabilities on IBM's z line. It's completely inaccurate, in the cases where it isn't irrelevant. It's just chaff to distract people from looking at z's disastrously bad price/performance.
Joe also disingenuously claims IBM doesn't enjoy monopoly pricing for z. Of course it does. If you want to run z/OS applications, you have nowhere to run them except on IBM processors using IBM's z/OS, CICS, VTAM, etcetera (it's not just the hardware price). IBM has the luxury of charging high margins because there are no Amdahl or Hitachi systems to compete against, nor alternative sources for the software I just listed, and because they know rehosting an application takes a substantial amount of effort. I have hands-on experience with the excellent Unikix application suite, now owned by Clerity Inc. It absolutely can rehost a z/OS batch + CICS application on open systems without rewriting. I've seen successful migrations with a replacement TCO a mere 10% to 15% of the mainframe's TCO (that's TCO, folks, not acquisition cost). But it requires a good deal of effort. The barriers to exit from the mainframe are much, much higher than between Unix dialects. That's the whole point of Open Systems, and why you shouldn't check into a z-motel that's hard to check out of.
So, yes. Mainframes are proprietary and have monopoly pricing. Just try to buy a z/OS equivalent or a system to run it on from anyone else. See for example IBM shuts up competitor PSI by buying it - The INQUIRER. In contrast, you can run Solaris on SPARC machines from us and from a partner+competitor, and you can run Solaris on x86 servers from dozens of vendors - including IBM.
Which reminds me: Joe distorts the agreement between IBM and Sun with his statement "Finally, Sun itself recognized System z and zVM as "the premier virtualization platform" when Sun and IBM jointly announced support of Open Solaris on IBM hardware." We did no such thing: the statement was support for Solaris on IBM's x86 (Intel) servers, not on "System z and zVM". Surely Joe knows this. See IBM Expands Support for the Solaris OS on x86 Systems. By the way, I've been involved with the OpenSolaris on z project since its inception - I obtained the workstation the non-IBM developers used for the port, personally installed it on z/VM, and it's not finished yet, either. It does give me the chance to do like-to-like performance comparisons of SPARC, Intel, and IBM System z (same app, same OS), and it substantiates everything I've been saying here.
The last paragraph illustrates one of the things that most disturbs me about this sorry episode. I've witnessed a shameless willingness from certain quarters to "just make things up". I've seen claims that you could use z/OS features when running z/Linux under z/VM, and completely imaginary ratios between different computers and their own, and claims that people (like me) said the exact opposite of what they actually said. It's really a sad thing, and makes me doubt the many years I spent as a loyal IBM customer and (alas) fan.
Just to be on the safe side, since some things I've referred to have disappeared from the sites that originally hosted them, here is a copy of the blog article I am responding to. Can't be too careful these days. As he notes, my material (following the "Posted by" line) is mingled with his. His text is in blue italics, just as on his blog.
Response to Jeff Savit Blog
As part of the announcement of the z10, IBM made some marketing claims about the large number of distributed Intel servers that could be consolidated with zVM on a z10. The example cited used Sun rack-optimized servers with Intel Architecture CPUs. Sun blogger Jeff Savit objected strenuously to the claims, mainly because of the low utilization assumed on the Sun machines that the claims compared to. You can read it here:
http://blogs.sun.com/jsavit/entry/no_there_isn_t_a

I responded, he responded. I was out of pocket for a while and did not respond soon enough, and his blog cut off replies on that thread. I am putting my latest response here. Thanks to the Mainframe blog for providing the venue to do so. My latest responses to Jeff are in blue italics.
Posted by Joe Temple on June 24, 2008 at 11:28 AM EDT #