Once again, "Mainframe Linux vs. Unix"

I just saw the article Mainframe Linux vs. Unix by Ken Milberg, touting mainframe Linux over Unix. I hope responding to his articles doesn't become a habit I have to maintain, but as with my previous post, this article has enough tilt and mistakes to provoke a response.

First, Mr. Milberg says "Today's new breed of smaller, cheaper mainframes, paired with the Linux operating system, look like an attractive alternative to Unix on RISC or SPARC servers." One wonders why he feels obligated to single out SPARC when this applies equally to IBM's POWER - unless that's illustrating a pro-IBM slant (Mr. Milberg edits IBM web periodicals and is an IBM partner). Not to mention the obvious: "RISC or SPARC" is redundant, since SPARC is a RISC chip. Oh well - and we haven't even gotten past the first sentence.

He goes on to say "This article compares the features and performance of Linux on the mainframe -- in this case, the IBM System z Server -- and compares it with Unix, in terms of its availability, features and performance." It does nothing of the sort; I wish it did. The article doesn't compare the features or performance of mainframe Linux with anything; it just repeats stereotypes. One of the problems with mainframe Linux (and the mainframe in general) is that IBM does not publish benchmarks describing its performance, even though it does so for all its other platforms. None of the public benchmarks familiar to the open systems world have been published for it. You can only speculate why! Open systems require open benchmarking.

Unfortunately, the article just rehashes stale stereotypes about mainframe performance. For example, it refers to "Maximum I/O bandwidth" without quantifying it. Many years ago mainframes certainly had superior I/O bandwidth compared to other platforms, but today's high-end open servers use the same high-end storage arrays from vendors like Hitachi, Sun, and EMC, and drive them flat out. Sun E15Ks were clocked at over 1 million disk I/Os per second, and over 18GB/second of data transfer. I don't think mainframe Linux can get close to that - possibly not z/OS either - but it could be proven if IBM were willing to run benchmarks in its labs and publish them. Those old assumptions are obsolete: it makes no sense to describe a high-end Sun server as "mid-range" in comparison to a mainframe when it has several times as many CPUs and several times as much RAM. Trying to be fair and not just a vendor zealot, I'm sure the same kind of comparison can be made using IBM's System p RISC machines. Mainframes are no longer the biggest iron in the data center, and the other benefits attributed to mainframes are now shared by other vertically scaled systems that apparently surpass mainframes in power at far less cost.

The article goes on to say "Linux on the mainframe becomes a natural evolutionary step for their business' mission-critical applications. Virtually any application that runs Linux on Wintel computers will run on System z, with only a simple recompile." To the first part, one must ask: why would that be a natural step? The second part is simply false: while many open systems applications do in fact port easily to the mainframe, it's far from "virtually any". Little things like big-endian vs. little-endian crop up, for example. And much of the software you expect simply won't be there, especially vendor products - the ISV portfolio on mainframe Linux is a fraction of the size of what's available on the Intel/AMD platform.

The article goes on to say "if a company has a server farm of 200 distributed servers, it can be easily be consolidated into either one or two System boxes, hosting 60-70 Linux servers in a high-availability environment that can scale." Your definition of "easy" will vary, and this is only true if those systems are very under-loaded, since by and large those Linux servers are running on Intel or AMD chips that are faster than System z (rule of thumb: 4 MHz of Intel == about 1 MIPS of mainframe, for whatever such crude rules are worth). Just do the exact same thing on a 4- or 8-CPU AMD or Intel server using VMware or Xen, at a tiny fraction of the cost, plus the ability to do things like VMotion that don't exist on the mainframe. We at Sun will be glad to sell you a nice server that will do that very well, and so will our competitors at HP, Dell, and IBM too. That's a good thing: vendor competition leads to better price/performance and innovation. By contrast, there is only one vendor you can go to for a mainframe - leading to monopoly price economics.
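
To make that crude rule of thumb concrete, here's a minimal back-of-the-envelope sketch in Python. The clock speed, core count, and 10% average utilization are illustrative assumptions of mine (not measurements), and the 580 MIPS IFL figure is the one quoted later in the comments:

    # Back-of-the-envelope sizing with the crude "4 MHz of x86 ~= 1 MIPS" rule of thumb.
    # Every number here is an illustrative assumption, not a benchmark result.

    MHZ_PER_MIPS = 4.0  # the rule of thumb quoted above

    def mips_equivalent(clock_ghz, cores, avg_utilization):
        """Rough mainframe-MIPS equivalent of one x86 box at a given average load."""
        return (clock_ghz * 1000.0 * cores / MHZ_PER_MIPS) * avg_utilization

    servers = 200                                   # the server farm from the article
    per_server = mips_equivalent(2.4, 2, 0.10)      # assumed 2.4GHz dual-core box, 10% busy
    ifl_mips = 580                                  # one z9 IFL, per the comments below

    print(f"~{per_server:.0f} MIPS-equivalent per lightly loaded server")
    print(f"~{servers * per_server:.0f} MIPS-equivalent across the farm")
    print(f"~{servers * per_server / ifl_mips:.1f} IFLs needed just to break even")

Even at an optimistic 10% average utilization, the farm works out to dozens of IFLs' worth of capacity - which is why the consolidation only looks "easy" when the servers are nearly idle.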

What else: "VM mode allows for thousands of Linux guests per IFL and will probably be the best fit for most installations." VM is definitely the right way to run mainframe Linux if you're going to do that, but scale back your expectations by a factor of 10 or 100. Thousands of penguins per IFL? Ain't gonna happen. (If you're wondering what an IFL is: that's an IBM CPU neutered so it can't run z/OS.)

Discussing the UNIX-ish environment on z/OS: "The Unix implementation is not native, runs in EBCDIC mode and is just not the most popular of systems in IBM land." With that I cannot disagree. It's certified as UNIX, but is alien to the rest of the z/OS environment, and barely used.

Under "Support" the article says If you are already using Linux, why would you want to migrate to Unix? It is more expensive to staff Unix engineers and administrators than their Linux or mainframe counterparts. Uh, well, it could be due to the superior support and capabilities in the Unixes, not a (possibly imaginary) price difference for the admins. Oh, and mainframe Linux engineers/admins are mainframe guys (by definition), and have to have both mainframe and Linux skills. Treat those guys really well if you find them: there aren't that many, and you won't be successful without their skills.

Under "Flexibility" he says Linux, being open source, lends itself to faster innovation, as well as more timely releases of bug fixes. The open source community delivers faster because it does not have to go through the endless development cycles of commercial-based operating systems. Let's point out the obvious here: Solaris is open sourced, so maybe you can apply this problem to AIX. And what about those pesky design and QA cycles. Do you really want to get rid of them? The conclusion is also wrong: look at the innovations in Solaris 10, such as DTrace, ZFS, Solaris Containers, and Fault Management Architecture. There's plenty of innovation in Solaris that Linux people have expressed appreciation of, and sometimes envy.

Regarding "Open", the article says: Anyone that has worked on AIX, Solaris and HP-UX, will tell you that Unix is certainly not Unix. On the other hand, the Linux distributions may have some differences, but the underlying kernel is the same. I'll say it again: Solaris is Open Source. And, the second part is quite wrong. The kernel levels in different distros are different. More important, the administrative tools and conventions are extremely different on the different distros. Even the selection of packages is different. Just as a skilled individual can go between different Unixes, a skilled individual can go between different Linuxes - but it doesn't "just happen" as this article misleadingly states. (For the record: I've worked with Red Hat, SuSE, Debian, Gentoo, Slackware, and Ubuntu - believe me, it's not transparent.)

More misleading stuff about zLinux software being free. Does the article really mean to imply that one can modify the source code of WebSphere or Oracle if it's running on Linux? This comes under a bullet section titled "Price", which unfortunately omits the main cost factors: the price of the expensive hardware, the price of the hypervisor, and the price of the service contracts for the "free" Linux. Prepare for sticker shock. This is combined with misleading stuff about mainframe security, when that's a property of the operating system (z/OS or z/VM), not the chipset it runs on.

Near the end of the article, the author says "Most of the cons of moving to Linux are eliminated by running Linux on System z." Really? How has that been shown? "For example, you cannot say that Linux will not scale as much as Unix on System z." Sure I can. It would be interesting to compare this to IBM's AIX/ESA for the mainframe if it were still alive, or to the OpenSolaris port if that completes. "You cannot make the argument that there is lack of hardware integration and support, as IBM provides that on the mainframe, unlike running Linux on Wintel." For a price, of course. And it's not called "Wintel" when the OS isn't Windows.

To put this in perspective: let's go back to the smaller, cheaper mainframes mentioned at the beginning. That is $95,000 for a single CPU rated at 28 MIPS, with no disk and no operating system. Outstanding by the standards of traditional mainframe pricing years ago, but for perspective: I can emulate 30 MIPS of mainframe on my laptop using a program like Hercules, for a cost of maybe $1K or $2K. Let me say that again: I can use emulation software to pretend to be a mainframe - taking a 50x or 100x performance hit because software is pretending to be silicon - and still outperform that "cheap" $95K mainframe while spending only 1/50th the price.
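
For anyone who wants the arithmetic spelled out, here's a quick Python sketch using the same round numbers as above (these are the rough figures already quoted, not new data):

    # Dollars per MIPS for the two scenarios described above.
    # The prices and MIPS ratings are the rough figures from the paragraph, nothing finer.

    mainframe_price, mainframe_mips = 95_000, 28   # entry z CPU: no disk, no OS
    laptop_price, emulated_mips = 2_000, 30        # Hercules emulation on a laptop

    mf_dollars_per_mips = mainframe_price / mainframe_mips    # ~ $3,400 per MIPS
    emu_dollars_per_mips = laptop_price / emulated_mips       # ~ $67 per MIPS

    print(f"mainframe: ${mf_dollars_per_mips:,.0f} per MIPS")
    print(f"emulated:  ${emu_dollars_per_mips:,.0f} per MIPS")
    print(f"ratio:     ~{mf_dollars_per_mips / emu_dollars_per_mips:.0f}x")  # ~50x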

There may be articles showing that mainframe Linux is a good thing, but this is not one of them.

I'm convinced that at best there is only very marginal applicability for mainframe Linux, as almost anything you can do with it can be done at a fraction of the cost on other platforms. I say this despite my respect for individuals I know who are working on it, and my nostalgia and respect for the platform, especially VM. I suggest to anyone considering this platform: Don't believe me or anyone else. Demand price/performance guarantees, public benchmarks, and real suitability for the applications you intend it for, and disregard the hype.

Comments:

" ... Linux distributions may have some differences, but the underlying kernel is the same."

Horsehockey! Red Hat and SUSE use different kernel revisions (RHEL4=2.6.9; SLES9=2.6.5; SLES10=2.6.16; RHEL5=2.6.18). Don't think for a minute these dot-revs don't make a difference. They do!

I work for a company that provides Linux drivers for its hardware. Guess what: every supported Linux distro has a different kernel revision, and therefore has a different driver, with different source required! This is not just a case of regression testing; it is serious custom development for each kernel revision!

As for all of the supposed benefits of the mainframe (reliability, scalability, I/O, etc.), most of these are not requirements for low-utilization Linux servers! These qualities are valuable for big, important applications, like DB2 on z/OS for your brokerage account - not an Apache Tomcat development server.

For larger, more important Linux apps (databases, appservers), there is clustering available (i.e., Oracle RAC) to provide reliability and scalability.

So, the only thing zLinux offers the Linux customer is consolidation. But you can do that on VMware with a medium sized x86 server, and you don't even need to recompile!

Mainframes are not obsolete, as there is still a need for highly reliable business systems with deterministic I/O. But Linux on the Mainframe IS obsolete!

Posted by Mark on March 01, 2007 at 01:02 PM MST #

[Trackback] Mr. Milberg is developing into a major pain in the a...., well ... a major annoyance: this time he tries to tout Linux on the mainframe. But Jeff Savit comes to help: Once again, "Mainframe Linux vs. Unix". He summarizes the problems and shortfalls of Mainfram...

Posted by c0t0d0s0.org on March 01, 2007 at 06:39 PM MST #

quote:

To put this in perspective: let's go back to the smaller, cheaper mainframes mentioned at the beginning. That is $95,000 for a single CPU rated at 28 MIPS, with no disk and no operating system. Outstanding by the standards of traditional mainframe pricing years ago, but for perspective: I can emulate 30 MIPS of mainframe on my laptop using a program like Hercules, for a cost of maybe $1K or $2K. Let me say that again: I can use emulation software to pretend to be a mainframe - taking a 50x or 100x performance hit because software is pretending to be silicon - and still outperform that "cheap" $95K mainframe while spending only 1/50th the price.

I'm convinced that at best there is only very marginal applicability for mainframe Linux, as almost anything you can do with it can be done at a fraction of the cost on other platforms.

---

With all due respect, I think you're completely missing the point.

First, CPU usage:
Try doing anything, anything at all, on your laptop with CPU usage > 80%.
Then try again on a Z with CPU usage at 98-99%, and notice how it still works.

Second, high availability:
z Systems are renowned for their very high availability, high-speed I/O (laptop disks at 7200rpm?) and tolerance of faults (almost every element has a spare waiting to replace the first one should it break). There is a reason why banks don't run on laptops, don't you think?

Third and last, volume:
You're just not using a Z if you have 4 servers sitting in a corner.
95K for an IFL? Yeah, right - but what is that in comparison with 100 servers and the power, cooling and staff to plug/reboot/hammer them?


You make some valid points, you fail terribly at some others though.

Posted by Dam on August 01, 2007 at 07:48 PM MST #

Hi Dam,

Hang on a bit! The point wasn't that you should replace a z800 with a laptop, but that the price/performance Milberg claimed for the new z boxes is still far off the pace set by every other platform. Naturally, if I were to replace a z "for real" I would do it with a server product, but even there we have the same problem for z: a low-end server with 10K RPM disks is going to massively outperform that IFL at a tiny fraction of the price.

Remember that I'm jokingly talking about outperforming a z even while emulating it in software. For real workloads, you never would do such a thing, and would simply run apps with the native CPU architecture.

To your other points: sorry to disappoint you, but Solaris can indeed cope with completely saturated CPUs and still give fine performance, using the resource manager built into it. That z boxes can do that is a testament to the resource managers in z/VM and z/OS, but it doesn't apply when Linux is the OS that's running. Don't even get me started on the challenges of trying to run z Linux on heavily loaded machines. Did I mention that I wrote a book on VM performance, and a few years ago co-taught a class on zLinux performance?

The same story applies to reliability: I am not suggesting that your bank run their production transaction processing on laptops! That would be silly. Of course they would continue to use server-quality machines: but they could be servers with far better price/performance than System z. Since there's competition in open systems space, that provides real value to businesses you can't get with monopoly hardware.

Finally, consolidation is a great idea, and if this were 10 years ago, VM would be the only game in town for virtual machines (did I mention that I have deep knowledge of VM, including teaching internals classes and running one of the largest VM datacenters in the world?). But things are different now, and you can consolidate multiple servers onto a single platform using any of several powerful new forms of virtualization available today: Xen, VMware, Solaris Containers, Logical Domains. You can do the very same thing you ask for, but the price and performance are orders of magnitude better.

Thanks for posting, though. I always like having a conversation.

Posted by Jeff Savit on August 02, 2007 at 09:35 AM MST #

The point is, I don't think any of the other vendor machines have as much *reliability* as a Z.

Sure, Solaris and AIX boxes run fine, but they're just not fault tolerant in my opinion.


Regarding virtualization, IBM's been running "virtualized" OSes for 30 years, and only now do other vendors catch up and go "Oi, look, new hype, *virtualization*".
LPARs are nothing new here.
I fear that XEN or VM wouldn't be able to handle the same workload as a z9 with multiple LPARs each running z/VM.

It's just about different business needs in my opinion, get "middle" range boxes for medium sized businesses and volumes, get a zFridge for your post-human needs.


Cheers for the reply, glad to see there's discussion to be had.

Posted by Dam on August 03, 2007 at 01:29 AM MST #

Hey, we have a dialogue - fun! :-) How cool is that!

To your points: just to add non-Sun examples (to show I'm not being a Sun zealot), Tandem and Stratus for years had high availability features well beyond those of mainframes, well beyond Z. For Sun, I think you should look again: current Sun servers have things like instruction retry that used to be mainframe-only. Solaris has a predictive Fault Management Architecture (FMA) that takes components offline before they fail. Having worked extensively in both worlds, I have to say that it's hard to make point-for-point comparisons because the two worlds do things differently, but you can no longer claim without proof that z owns the reliability title. Things have changed (and btw, I recall plenty of OS/390 and VM crashes. Everybody crashes sometimes!)

By the way, neither z/VM nor zLinux has anything like FMA. Linux on Z doesn't even know how to handle common problems like "I have an error on this part of the disk and need to assign an alternate track". When you look at reliability you can't just look at the box: it's the total of the box, the OS, and the app. On Z, that really means z/OS, not z/VM with Linux. (Alas - I really like VM much more, even though I did internals work in both the VM and MVS flavors.)

You're right that IBM has been doing virtualization for 30 years, and credit to them for their innovation. I first worked as a VM systems programmer in 1976, so I know they were there (VM/370 Release 3). The point is that things have changed, and they are no longer the only game in town. In many ways, mainframe VM (now called z/VM) has been leapfrogged, and lacks features available in, for example, VMware. IBM, for example, is trying to catch up to VMware on features like VMotion and memory ballooning.

Basically, anything you can do with zLinux you can do for a fraction of the cost with PCs and VMware or Xen. Really. And you don't have to port code or go looking for RPMs that don't exist on Z. Forget about consolidating Windows apps on zLinux - that's a port, not a conversion. And guess what - over 80% of VMware consolidations are Windows. In that market, zLinux is at best marginal - file and print is what people do, and even that's hard on Z. That Z with multiple LPARs costs a few million bucks; the x4600s that would run rings around it would cost a tiny fraction of that...

Thanks for responding - it's good to have a dialogue. Cheers, Jeff

Posted by Jeff Savit on August 03, 2007 at 07:09 AM MST #

I wouldn't argue with any of the technical stuff above as I'm not qualified, but I can make some real world observations. In my organisation, the MIS system was moved off the mainframe and onto SQL Server machines a couple of years ago, and the performance and availability have been just terrible ever since. There often seem to be problems with accessing server based apps, which can be unavailable or incredibly slow, but in the last 5 years I can only remember a single mainframe outage - which was due to a power supply problem. Assuming that the non-mainframe world has now caught up and indeed passed by, I can only surmise that getting it all to work properly in the real world must be tricky indeed.

Posted by brindleoak on August 04, 2007 at 05:07 PM MST #

Hi brindleoak,

My goodness- we have a little party going on now!

I certainly agree that it can be "tricky" to get things to work in the real world - on both mainframes and other systems.

I _must_ point out, though, that I'm at Sun Microsystems, not Microsoft. Even though we're on less contentious terms now, it would not be the recommendation of people at Sun that you use SQL Server or Windows for an enterprise application. We would recommend Solaris, which is definitely an enterprise OS, and one of the databases that run on it, such as Oracle, MySQL, PostgreSQL, and indeed DB2 itself, if you wanted the database you were familiar with on the mainframe.

regards, Jeff

Posted by Jeff Savit on August 07, 2007 at 07:30 AM MST #

I find all of the comments interesting. By the way, a single IFL on a z9 is 580 MIPS, not 28 MIPS. The 28 MIPS is the lowest you can configure for a general purpose processor. IFLs run at the full capacity of the processor.

Regards...

Posted by Don on January 19, 2008 at 09:44 AM MST #

Don, you are right that a z9 EC engine is rated around 580 MIPS, but it also costs a lot more than the 28 MIPS, $95,000 z800 model I was citing as the lowest entry cost for going with an IBM mainframe. And a z9 BC has a lower MIPS count, so it all depends on which machine you want to compare and price.

Note that when not running an IFL, a z9 EC single CPU system will set you back about $900,000 for those 580 MIPS, plus $2K or $3K per month for maintenance.

Then add the costs for DASD and software, which is going to be much more expensive than the CPUs. Monopoly prices apply just as much to the mainframe software market as they do for the mainframe hardware market.

Just for the fun aspect: on a 2 year old desktop PC, I've been able to run Hercules with instruction kernels exceeding 70 MIPS! 70 MIPS of emulated mainframe on my PC (about $1K on today's market) compares pretty favorably to 580 MIPS on the "real thing" for over 100 times as much money.

I could easily run one simulated mainframe CPU cruncher instance per installed CPU, and get a current 4-way Intel or AMD server to exceed 250 MIPS.

Let's not forget, the point of showing how much mainframe I can +emulate+ purely in software on a modern CPU is to illustrate how far off the pace the mainframe is compared to other CPUs. In real life, we would run Linux or Solaris applications natively, not emulated at a cost of roughly 10^2 native instructions per emulated instruction.

Posted by Jeff Savit on January 19, 2008 at 11:42 AM MST #

I believe you're still misunderstanding the capacity of an IFL engine.

As a previous post pointed out, the IFL engines running Linux aren't capped - they run at the full native speed of the processor.

There's no such thing as a 28 MIPS IFL processor - it would in fact be the 580 MIPS cited, and that capacity indeed costs $95,000. The 28 MIPS engines you refer to are only available as so-called "general purpose" engines (the engines that run z/OS).

Another dimension you don't mention is that IFLs move up when you do your processor upgrades.

Let's say you spend your $95K for an IFL on your z9 processor. A few weeks ago, IBM announced the next processor family, the z10, which features 4.4GHz processors with over 900 MIPS per engine. The $95K you spent on that prior generation IFL entitles you to run an IFL on your z10 at no additional charge, in this case giving you substantially more capacity essentially for free.

Posted by Vince Re on March 19, 2008 at 01:47 AM MST #

Hi Vince,

I guess this blog touched a nerve. The original post is over a year ago, and it still draws comments.

I get it, really. An IFL is not performance knee-capped and includes upgrade entitlement. IBM has such a high margin on Z that it is willing to give away processors crippled to not run z/OS in order to attract business.

But, did you see the estimate reported at SHARE that a single IFL engine costs $500K once you factor in disks, software licenses, maintenance, and (I guess) RAM? That was reported on the mainframe Linux mailing list. Look, just licensing z/VM is going to be about $20K (higher if that's the only CPU licensed, IIRC), and licensing Red Hat or SuSE is going to be $11K, $15K or more for that CPU depending on support level. So, the $95K (which I think is $125K when not z8xx) starts growing.

And let's come back to the point I made back in January: the reason I brought up the emulated MIPS is to show the orders-of-magnitude difference in price/performance. If I can +emulate+ one or two hundred MIPS on a 2- or 4-way Intel or AMD server that costs a few thousand dollars, then the native performance (running applications natively on Intel or AMD) is HUNDREDS of times higher, even with the discounted price of an IFL engine. When an IFL provides several THOUSAND MIPS for a kilobuck, then we're talking something competitive.

Posted by Jeff Savit on March 19, 2008 at 09:10 AM MST #

Sun colleague Patrick McGehearty sent me the following cogent discussion and said I could add it to the comments:

The emulation example may not be fully convincing since there are so many ways to do emulation. And laptops are also a distraction. Let's go back to the crude rule of thumb: 1 Mainframe MIPS = 4MHz on Intel. That's probably the right order of magnitude, provided you have comparable memory, IO, etc.

So when we look at a 900 MIPS mainframe processor, that should be in the range of a 3.6GHz Intel processor, plus or minus. While there's a lot of fudge in that estimate, the bottom line is that a mainframe processor is NOT dramatically better than the comparable generation RISC or Intel or AMD processor. And it may even be slower. If it were dramatically faster, IBM would be bragging about it with open benchmarks, as they do with their POWER6 processor.

On the cost side of things, let's skip the laptops. Those systems have trivial IO and memory and aren't designed with a primary focus on reliability. Instead, look at the serious top-end servers offered by Sun, HP, and IBM/POWER6. All of these systems have Unix-derived high reliability options, with track records of work on reliability going back at least to the early 90's. Yes, they weren't up to mainframe standards 15-20 years ago, just as Windows might not be today, but Solaris/SPARC, at least, is improving and can match up well on both the standard measures and in real world production behavior.

Without a current price book I can't be exact in comparing the best cost/performance complete mainframe from IBM with one from Sun. However, I'd be surprised if Sun did not have at least a 5 to 1 cost/performance advantage if you look at the total cost (HW, SW, power, cooling, support). In some cases, I'd expect 20 to 1. The idea that a single-processor system could be worth near $1 million is just ridiculous, when you can buy a well-configured 4-processor RISC or CISC system (and have vendors compete for your business on price) that can easily handle several times the load for well under $300K.

For legacy applications where the cost of porting is high, mainframes have a place. For anything else, I can't see it.

Advice to mainframe admins: if you are smart enough to learn mainframe management, Linux/Solaris/Unix management is well within your ability to learn. Unless you plan to retire soon, start expanding your skills because the growth potential of the mainframe business looks really poor to me.

Posted by Jeff Savit on March 20, 2008 at 03:00 AM MST #

"What else: VM mode allows for thousands of Linux guests per IFL and will probably be the best fit for most installations. VM is definitely the right way to run mainframe Linux if you're going to do that, but scale back your expectations by a factor of 10 or 100. Thousands of penguins per IFL? Aint gonna happen. (If you're wondering what an IFL is: that's an IBM CPU neutered so it can't run z/OS.)"

Actually you need to scale those expectations back by a factor of "thousands" compared to today's x86 hardware. The IFL we have (all resources allocated to a single VM) can't keep up with an x86 server from 2 years ago in our testing. Tasks that take the IFL an hour to finish can be done in 2 minutes on one of our Xen guests (sitting on the same host as a DB server, no less). Also, as far as I/O performance is concerned, we're getting close to 3 times the throughput even on the internal RAID arrays on our x86 servers as we get on the IFL with internal MF disks (my guess is the CPU is the biggest limiter here). Don't buy the hype without making IBM prove the thing will do what they say it will - with YOUR WORKLOADS.

Posted by Josh on June 04, 2008 at 08:34 AM MST #

Josh,

Thanks for your comments, which I definitely agree with. That sounds like the results from Clarkson U, where low-cost Xeon and even Pentium III outperformed their System z, using Xen for their hypervisor.

If you can share your results, please send them to me. I always like seeing actual data.

regards, Jeff

Posted by Jeff Savit on June 05, 2008 at 12:08 AM MST #

Just to make it more fun, what's your opinion about Solaris now being available for z?

Posted by Tore on October 27, 2008 at 07:57 PM MST #

Comment about the one million cpu.
Yes, that is too expensive, but the fact is that if you already have a mainframe, then it makes sense and is cost effective to start one or more IFLs. And as someone said above, thousands of Linux guests per IFL is not reality; it's more like 20-50 depending on load and purpose.
I know a big company with two mainframes that runs zVM and zLinux at about the same price as, or lower than, standard racks. But it all depends on the mainframe already being there, and what prices you can get from IBM.

Posted by Tore on October 27, 2008 at 08:07 PM MST #

One IFL almost = 3.6 GHz ??
Yes, if counting that one only and ignoring the rest.
In a z9 or z10 there are 8 I/O CPUs dedicated to taking care of I/O only, and they are the same kind as the IFL (speed and so on). So when running a database server you actually have access to 9 CPUs at this speed - and even more if you count the two or three POWER CPUs in each OSA (network adapter).
So you can't just compare the speed of x number of IFLs to x number of x86 CPUs. It all depends on the type of load.
And there are loads that, because of this, are cost effective to run in a z box.

Posted by Tore on October 27, 2008 at 08:15 PM MST #

Oh yeah, having OpenSolaris for z is an interesting proposition. Here's how I take it: we at Sun love Solaris and would rather people run it than any other OS, Linux included. So, if some people deploy OpenSolaris on z instead of z/Linux, then we think that's pretty cool.

At the same time, we know there are far better platforms to run Solaris than on z: On SPARC, Intel, and AMD you have the full, tested, and supported implementation on far better price and performance (with vendor choice and competition), and with a massive portfolio of products and services. Those platforms are by far the superior business solution, including for consolidation of virtualized instances.

But, if you have spare CPU cycles on a z you already have, don't plan to reduce it, and want to get experience with OpenSolaris there - and bring a new form of competition to a place that has little of it (since AIX on the mainframe is long gone, and UTS barely exists) - then that can be a good thing.

Posted by Jeffrey Savit on October 27, 2008 at 11:49 PM MST #

(this is for Tore's post starting with "Comment about the one million cpu.")

I partially agree with you: if you already own a box, and it has excess capacity, and you have staff with the required specialized skills, you may as well populate it with more work. But don't let it lead you to committing more money to a more expensive platform.

Keep in mind the "sunk cost fallacy" (Google that term!) and realize that you shouldn't be pouring more money into an expensive item just because you already expended a lot there.

Posted by Jeffrey Savit on October 27, 2008 at 11:58 PM MST #

(This is for Tore's post beginning "One IFL almost = 3.6 GHz ??")

First, a reminder about the origin of the 4:1 rule of thumb from which that 3.6 is derived: it comes from a well-known mainframe performance expert, not from me. As with all rules of thumb, it should be taken in context.

While it's true there are other co-processors on a z, it's easy to mythologize them. All modern servers have auxiliary hardware that offloads I/O processing from the "regular" CPUs. As shown in the June 4 comment from Josh, current x86 servers have been seen to completely smoke mainframes even doing I/O. So, this business about "extra processors" doesn't translate into actual performance that makes up for the shortfall in CPU performance and the extremely high prices on z.

BTW: My personal testing with OSA leaves me unimpressed compared with the GbE and higher available on any modern computer.

Posted by Jeff Savit on October 28, 2008 at 12:38 AM MST #

We have two IFLs currently pushing 16 guests on zVM. Anytime one of them goes CPU crazy and starts reporting that it is using 100% CPU (via top), the other guests start to suffer and we get support calls saying applications are running slow. Magically the calls go away when we quit the CPU intensive processing.

Posted by Anonymous on December 18, 2008 at 07:10 AM MST #

There are several things that have to be watched out for. One is that "top" within the Linux guest is totally unreliable for determining CPU utilization when running in a z/VM guest. There was a recent argument about this among z/Linux users, but even at recent Linux kernel levels the number reported by "top" may not be anywhere near reality. That's something to think about when considering z/Linux...

The other thing to know is that there are several ways that a z/Linux guest under z/VM can go into a hang or unresponsive state. The most common is caused by going into a VM scheduler queue called an "eligible list", designed to prevent excessive storage (RAM) overcommitment. A guest in that queue is not serviced at all - even for long periods of time. There are things you can do to alleviate this, but you need to know how to tune VM.

On the other hand, the problem may be as simple as setting priorities ("SHARE" values) for the guests. Again, having the right data and knowing how to interpret it is key. This is not a simple environment to do performance work on.

You need a REAL performance monitor to do this correctly. I absolutely do NOT recommend using z/Linux, but if you do, you should get the products from Velocity Software. Tell them I told you :-)

Posted by Jeffrey Savit on December 18, 2008 at 09:53 AM MST #

Hi Jeff:

I used to work in Finance at the same large brokerage firm as you - the one that no longer exists.

My question is more specific. If I were to rehost existing mainframe applications on a beefed-up Unix Solaris platform, what is the ballpark figure that I can expect to pay for everything to be up and running?

Posted by Adrian Miranda on January 26, 2009 at 06:35 AM MST #

Hi Adrian,

I'm going to give a very serious answer, and I'm not being at all flippant: the answer is "it depends".

Several parts of the answer:

1. How much machine resource is needed? That's the easy part. You can crudely estimate for a first cut, and then refine as you get a working prototype of the migrated application. It will be a lot less money, as I know from experience.

2. The big part: how close to identical do you need to be, and how big is the application in terms of software complexity - in other words, how hard will it be to migrate? If it's a small FOCUS or SAS application, it's going to be a walk in the park (move the data, minor tweaks to the FOCUS or SAS code). If it's a simple batch application producing a very few reports in COBOL based on flat files, that will also be easy (move the data, minor tweaks to COBOL source). If it's a sizeable CICS application with hundreds of transactions, thousands of COBOL source files, some BAL modules, many DB2 tables and VSAM files, thousands of tapes out in a silo, etcetera, then it's a different story entirely. It's doable and can save a LOT of money, and I would recommend one of Sun's partners that have a thriving business doing exactly that (for example with Unikix), with tremendous cost savings to customers, but there's no way to answer "how much does it cost" via a blog entry!

E-mail me, and I'll be glad to talk with you and see if I can help. We can also trade war stories about the company that no longer is there... :-(

Posted by Jeffrey Savit on January 26, 2009 at 06:52 AM MST #

Just finished benchmark testing of zLinux on z9, z10, VMware (blades), and Sun. The z9 IFL J2EE app server ratio is no more than 15 J2EE application servers per IFL; z10 performance is 60% better than the z9; VMware with higher amounts of CPUs is next best; and the Sun platform screams. Good luck - just say "no way" when it comes to performance of high-transaction apps on the mainframe. You just won't get your bang for your buck, and virtualization is NOT for everything.

Identity is concealed.

Posted by firstandlastnames Withheld on February 23, 2010 at 12:11 PM MST #

Thanks for the comment - I really appreciate it! BTW: 60% is almost exactly the figure IBM cites for the maximum performance improvement of a z10 CPU over a z9 CPU. So, whatever you're doing, it did a good job of exercising the z CPUs.

Every once in a while I get a comment on my blog, or an e-mail sent directly to me where somebody reveals their terrible performance with z/Linux, or other problems with it. I've gotten some really hair-raising stories and revealing raw data of poor performance. Data - I love looking at data - that's where the truth comes out!

It shouldn't be a surprise to anyone by now. The z/Linux platform is a very expensive and slow platform, and modern virtualization technologies on higher performance, lower-cost processors are much better solutions. Java in particular has been a really bad performer. And databases. And web. And file serving. And others. There just doesn't seem to be a workload where it is competitive.

If you want vertical scale, there are much better platforms. If you want virtualization, there are much better, more capable platforms there as well: Solaris Containers, Logical Domains, Oracle VM, VMware, Hyper-V, Citrix XenServer, just to name a few.

Posted by Jeffrey Savit on February 23, 2010 at 01:25 PM MST #

Hi, can anyone supply z10 with z/Linux benchmark comparisons to the latest Intel i7 processors with Linux? Executables are natively compiled from C++. Thanks!

Posted by Max J. Pucher on June 15, 2010 at 08:44 PM MST #

IFLs don't do DOS or MVS for CICS (usually COBOL) transaction engines. You pay the big $ for this.

OS or GNU/Linux is the www facing Unix front end which may be controlled by RDBMS. The money you do not trust to this. M class are pretty cool if all you want is an HA Unix.

It is a good thing that IFLs do not burn the rest of the system.

One problem with any Unix that I know is that it uses memory management hardware to protect memory, but does not use timers to protect CPU. This needs to be fixed in GNU/Linux, OpenSolaris and AIX too.

Evil programmers calling the same function with the same args over and over again because they do not like the result (errno==EAGAIN). For example poll() on non-blocking I/O!

Posted by Andrew Buckeridge on June 29, 2010 at 11:02 PM MST #

3 years and this blog entry still gets comments!

Max: I don't have access to z10 or Intel i7, so can't produce such benchmarks. It would be helpful if a customer, or better yet IBM, published such benchmarks. I've done informal benchmarks on M-series and z9 for some C codes. An M-series at 2.15GHz outperformed the z9 by 2x to 6x. That shouldn't be considered comprehensive or complete, and those are not the fastest SPARC or z processors, but I consider them illustrative :-)

Andrew: You are quite right about IFL's inability to run DOS or MVS CICS, and the much higher price tag if you want a "regular" z CPU to do that. I'm not sure what you mean in your other comments. Feel free to explain if you want. I think "HA Unix" is a pretty broad category (and yes, M-series is very good at it). I don't know what you mean about IFL's not "burning the rest of the system". Solaris certainly does use timers to control access to CPU - is that what you mean by "protect"? I don't know what you're trying to say there.

Alas, programmers on every platform sometimes misunderstand the APIs they use, and may just reissue a failed request because they hope it might work the next time!

Posted by Jeffrey Savit on June 30, 2010 at 04:15 AM MST #

Since the advent of Memory Management Units Unix has used Memory
Management to enforce its file access model as all such calls must
go through to the protected kernel. The kernel trap does the basic
sanity checking and kernel then facilitates the message passing if
all is ready and okay.

The problem with Unix is the use of preemptive multitasking. Some other
OSes (they do exist) are based on Communicating Sequential Processes.
Any useful CSP program must yield to transfer a message some time.

In cooperative multitasking every process must yield. When the time is
up rather than doing a preemptive context switch the OS throws a
watchdog event. This protects CPU resources in the same way that a
segfault protects memory resources.

Modern hardware has both timers and memory management. Currently only
memory management is used to protect systems. A timer could be used to
protect CPU too rather than do a preemptive leaving the dorked process
to do more harm later.

The problem you have comes from evil programmers calling the same
function again with the same arguments if they do not like the result.

One that is quite common is spinning on unexpected EAGAIN after poll()
with non blocking I/O. EAGAIN is not necessarily the same as
EWOULDBLOCK and non blocking I/O is used when you are not using poll()
or select(). With non blocking I/O you must block on some thing else.

In GNU/Linux (which is still preemptive multitasking) I change the
priority of any process that fails to yield from user space. Such
processes are run least and no longer get in the way of power
management so power dissipation is reduced. These leaves more thermal
reserve with in CPU's TPD so that useful software can run when ready.
This is important on a SunRay server running crappage such as Adope
PeDoFile reader or Crash player.

If you were a kernel hacker you would do this in the scheduler.
This would be a half arsed, but Unix compatible solution.

In Solaris (or GNU/Linux) for each thread issue
> while :; do :; done &
and see what happens to system performance.

> Insanity: doing the same thing over and over again
> and expecting different results.
Albert Einstein

Posted by Andrew Buckeridge on June 30, 2010 at 01:42 PM MST #

Andrew, I may be misunderstanding you, but I'll respond based on what I think you're discussing.

I'm a big fan of preemptive multitasking. Rather than being a problem, on the contrary it's the fundamental and successful method used by most operating systems to control CPU access. The limitation of CSP is that processes must cooperate, as you said, and must occasionally yield. Arbitrary programs cannot be relied upon to do this, and (trivially) any compute-bound program never will, making CSP unsuitable. Certainly an incorrectly coded process can't be relied upon to yield.

This is hardly a Unix-only issue, as preemptive multi-tasking is the norm for operating systems as diverse as z/OS and Unix. IMO this is the only reasonable way to manage CPU resources in a multiprogramming environment with arbitrary applications - whether they are written correctly or not. FWIW, operating systems often provide a way to kill a runaway user process when it has exceeded a time limit. You can do this in z/OS with the TIME= parameter in JCL, or use 'ulimit -t' in Unix.
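
To illustrate the Unix side of that (the same effect 'ulimit -t' gives you from the shell), here's a minimal Python sketch using the standard resource module; the 5-second limit is an arbitrary value chosen purely for the example:

    import resource
    import signal

    # Cap this process at 5 CPU-seconds (soft) and 10 (hard), much as 'ulimit -t' would.
    # When the soft limit is exceeded the kernel sends SIGXCPU; at the hard limit, SIGKILL.
    resource.setrlimit(resource.RLIMIT_CPU, (5, 10))

    def on_cpu_limit(signum, frame):
        raise SystemExit("CPU time limit exceeded - runaway loop stopped")

    signal.signal(signal.SIGXCPU, on_cpu_limit)

    # A deliberately CPU-bound loop: without the limit it would spin forever.
    while True:
        pass

The runaway process gets stopped by the operating system rather than by its own good manners - which is exactly the point of preemptive resource control.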

FWIW, I tried your experiment "while :; do :; done" on Solaris and it did no harm to my response time or performance (at the same time I was doing file transmissions via scp). For fun, I ran it in a zone capped at 15% CPU, and it was only able to get 15% of the CPU - as specified. It was completely harmless. Solaris does a nice job scheduling and controlling CPU resources.

I'm also not sure what you mean in your points about incorrectly written programs. That's also not a Unix-specific issue. I supported mainframe systems and remember applications going into a tight loop because programmers (for example) repeatedly tried to open a non-existent file! All operating systems have interfaces that can be mis-used.

Since this is off-topic for this blog entry, I'm going to ask that we drop this line of discussion. If you want, feel free to e-mail me - or open a blog of your own and we can continue there.

--Jeff

Posted by Jeffrey Savit on July 01, 2010 at 05:35 AM MST #

Jeff,

Wonderful blog and great thread indeed. Very relevant to me right now, since we are in the process of migrating our Intel-server-based solution to something more powerful. I'm considering the z10 but also SPARC (or POWER) solutions. My problem is not horsepower but scalability and reliability, so right now I think the z10 is my best option, not only in terms of delivery times and TCO but also in TCA. I can see you are not a fan of the concepts involved in zLinux, so - the many-boxes model being out of the question for me - do you think Oracle T-Series with maybe DB2 would offer significant benefits compared with the z?

--hector

Posted by Hector Isais on September 23, 2010 at 01:34 AM MST #

Hi Hector,

Thanks for the comment. It amazes me how long this blog entry has received comments. There must be some controversy here! :-)

I think you would be very disappointed in the z10 for TCA, TCO, scalability and reliability. As I've noted on the blog here and in other entries, the z has incredibly high acquisition and ongoing costs, and poor price/performance. In fact, with the z196 announcements (the z10 follow-on), some IBM-fans said that the new mainframe finally has "competitive" single-CPU performance, which you can take as a belated confession that the z10 and earlier systems did NOT have competitive performance. Since they cost so much, they won't have competitive price/performance, even with the z196. You'll also find that z still has scalability problems (install 4x the number of CPUs, get much less than 4x more capacity), and z/Linux is lacking the reliability features associated with z.

Instead, I would give a look at SPARC, x86 and even POWER as better choices. You mention scale as an issue, but current x86 designs go to 8 cores times 8 sockets - 64 CPU cores in a server - and 1TB of RAM. That's pretty high scale, and you may be mistaken in thinking that a z10 is a more powerful platform. In fact, I doubt that a z10 outperforms a high-end Nehalem server, despite the z10's higher cost. SPARC of course goes higher yet, as does POWER, and all of them have virtualization features.

You ask specifically about T-series, and there the question is really whether or not the DB2 application (I assume you mean UDB) is capable of using the many CPU threads of that processor. There used to be an IBM "Ranger" team that promoted the use of DB2 on SPARC (I don't know if they're still around), but they said that DB2/UDB worked very well on SPARC servers. Of course, I would prefer you run Oracle DB, but I understand you've set the conditions for which software is under consideration :-) If it turns out that either DB2 itself or the applications are single-threaded in nature, you may not get the value of T-series servers, which are designed for throughput computing of many parallel threads. You may be able to work around this by using multiple application instances, since that would give you parallelism that exploits the processor. Or, consider using the M-series servers, which provide very good single-thread performance and allow extremely high scale, within the same frame and without disruption.

I hope that helps.

regards, Jeff

Posted by Jeffrey Savit on September 24, 2010 at 06:44 AM MST #

Regarding the performance of Mainframes. A z10 with 64 cpus, give 28.000 MIPS (see link below). An 8-socket Nehalem-EX gives 3.200 MIPS - under software emulation. Software emulation is 5-10 times faster than native code. If Nehalem-EX could run Mainframe software natively, eight x86 cpus would give 16.000 - 32.0000 MIPS. Ergo, you need eight Intel x86 cpus to match 64 Mainframe cpus.
http://en.wikipedia.org/wiki/Hercules_emulator#Performance

Another source, an Linux expert who ported Linux to mainframes, claimed year 2003, that 1 MIPS == 4 x86 MHz. Hence, 28.000 MIPS corresponds to 112GHz. Pick a 8-core Nehalem-EX, which runs at 2.3GHz = 8 x 2.3 GHz = 18.4GHz. But, Nehalem-EX is much faster clock for clock, than Pentium 4. Whereas 1 MIPS then, is 1 MIPS today. Nehalem-EX may be 2 times faster than one Pentium 4, at same clock speed. Then those 18.4GHz of Pentium 4 MHz, corresponds to todays 36.8GHz. Ergo, again you need just a few of Nehalem-EX to match 28.000 MIPS.
http://www.mail-archive.com/linux-390@vm.marist.edu/msg18587.html

You have two independent sources, one who ported Linux to Mainframes, another who wrote Mainframe emulators - that states that Mainframe CPU are dog slow.

The new z196 cpu is 50% faster than the z10 Mainframe CPU. This means you dont need 8 Nehalem-EX to match the biggest z196 mainframe, but you need 50% more Nehalem-EX: you need 12 socket Nehalem-EX server to match the z196. That is horrendously bad performance from the Mainframe.

Posted by Kebabbert on September 27, 2010 at 07:40 PM MST #

Great comment, Kebabbert, pointing out the disparity between mainframe and x86 performance. I think you accidentally said "Software emulation is 5-10 times faster than native" but meant to say emulation is "slower than native" - emphasizing how a Nehalem-class processor demonstrates its power relative to a z processor, since it can so effectively emulate z in software. I've used that comparison too :-)

Also, Barton did not port Linux to mainframe - instead, he's a mainframe performance expert who heads Velocity Software and wrote the best software for measuring z/VM and z/Linux performance. I do not recommend using z/Linux, but for those who make that choice I strongly recommend getting the Velocity products or you will have even worse problems. In any case, as you observe, current x86 processors do a lot more per clock than the preceding ones. In fairness, current z systems also do more per clock than their predecessors. This just shows that you have to be careful with rules of thumb, which may be helpful for rough guidance but need to be substantiated with measurement of actual systems.

Posted by Jeffrey Savit on September 28, 2010 at 12:40 AM MST #

Hi - while I am in agreement about poor mainframe performance compared to Intel, it is not possible to compare on a MHz basis. The only truly interesting number for a system is throughput, not processor performance. Throughput takes processor speed, branch prediction, thread prefetching, memory speed, memory caching, and I/O bandwidth to memory, disk and network into account. Finally, more important than most people realize is compiler technology. For C++ mainframes are quite bad, which is why IBM uses co-processors for floating point and linear optimization and Java code. For COBOL Intel has less advantage than for other languages.

IBM has, for example, improved C++ performance by 100% through the purchase of Rational. Their new compiler for the z196 supposedly makes clever use of the co-processors - which are quite expensive to purchase. The interesting part of the z196 is that z196, xPower and Intel processors can be installed in the same rack, and the system management software allows for fault-tolerant handling of memory, disk and network resources.

It will be interesting to run our Linux versions in parallel on all three and compare the processors in the same environment.

Posted by Max J. Pucher on September 28, 2010 at 04:32 PM MST #

Max,

Thanks for the comments and interesting points! I agree with your main points about the importance of throughput as the primary measure.

However - floating point operations on z are NOT delegated to a co-processor, nor are C/C++ applications. Perhaps you are thinking of the zAAP processors, which are used for Java (and IIRC, a few other things like XML processing). The point of zAAP, zIIP, and IFL processors isn't that they are faster than a regular z CPU (they aren't) but that they are sharply discounted and excluded from IBM's normal software licensing costs for z/OS. If you buy a standard CPU you pay massive additional software license fees for z/OS and related products.

There is also a lot of confusion about IBM's new z196 and blades: the z196 does NOT run in the blade chassis; it is connected to one via Ethernet. It does provision systems to blades, but frankly several competing products from Sun, HP and others have for years provided virtual and physical provisioning, resource assignment, and resiliency to handle failures. Despite the hype, the capabilities claimed for the z196 do less than products that have been available on the market for a long time.

Posted by Jeffrey Savit on September 29, 2010 at 12:54 AM MST #

Remember one IFL/zIIP/zAAP/ICF/CP is one zXXX core, not a whole processor. This doesn't affect the calculations but gives much more credit to z-processor speed. No doubt z IS damn expensive, but in our zLinux Oracle tests it has been a very attractive platform to run Oracle database servers when keeping TCO in mind. There are cases when Linux on z is the most valuable option. In our house we also use VMware, Power, Citrix, Solaris, SPARC, HP SD, etc., so yes, we have a heterogeneous environment to make comparisons with. At this point there are also Power machines competing with z and, to be honest, I think even IBM doesn't know how to segment these two Linux-capable virtualization environments. The most important thing to note is that OracleVM, VMware etc. are software virtualization technologies, and currently from a _license point of view_ only p and z also include hardware virtualization.

And to be honest, "one guy said that the performance difference is about 1 MIPS = 4 x86 MHz" or "software emulation is approximately 5-10 times slower" or "something is approximately something in somewhere" - this sounds ridiculous. Even if you are running something on your superduper Solaris/SPARC (no wonder, when it reads *.sun.com in the upper left corner), you need to test it with real world load and draw your own conclusions, which in our case were surprisingly good (we were expecting total crap). Usually the performance and hardware costs are not the real issues today, but the software costs are. All that matters in this cloud-hype world is how cheaply you can provide the whole service.

Posted by Mikael on October 04, 2010 at 06:38 PM MST #

Mikael,

Thanks for the comments. I'm beginning to feel that I should just let this be a general bulletin board on the topic.

Certainly, an IFL (or other engine) is a core, not a processor. That's the same as all competing server products today: all are multi-core products, so it's hardly an advantage of z. Quad-core, 8-core, even 16-core with our new T3 SPARC server. I also must correct you: it is incorrect to say that only IBM's p and z offer hardware partitioning for license purposes; SPARC dynamic domains do exactly the same thing. For that matter, Solaris Containers are also respected for licensing purposes.

The rule of thumb about MIPS and MHz is just that - a rule of thumb - and should not be expected to be accurate, even though it was proposed by one of the foremost mainframe performance experts. That said, I agree with you that one should use real workloads, not magic conversion figures (that's why I criticized Joe Temple for doing so - and his numbers were unrealistic, too). I also agree that real workloads should be measured and that TCO should be evaluated, not just the cost of the hardware or the speed of the machine. The business about software emulation performance is merely there to illustrate the sharp disparity in price and performance.

But that brings me back to a question I keep raising: why go to a more expensive hardware platform that requires difficult and time-consuming conversion (adding costs, project risk and opportunity risk), when other platforms like Oracle VM Server, Solaris Containers, Hyper-V, Xen, etc., provide virtualization that can be used to manage software costs, support current application versions rather than old ones, and can be consolidated onto using "physical to virtual" (P2V) tools rather than staff- or consultant-intensive manual conversion? Or use an integrated platform like Oracle Exadata and Exalogic, which provide outstanding features for consolidation and performance.

Posted by Jeffrey Savit on October 05, 2010 at 09:42 AM MST #

In our experience zLinux delivered the lowest TCA and TCO when hosting Oracle databases, and by a huge factor (more than a 40% reduction in costs, not taking into account power, cooling, and other items). So I'm not sure any of the 'speed' comparisons matter, as decisions are based on TCO analysis these days.

Is there a benchmark available where exactly the same workload - one that simulates real-life conditions - was run on each of the technologies, showing the difference?

Posted by guest on July 28, 2011 at 10:31 PM MST #

5 years and this blog entry still gets comments!

Look - if you take a bunch of old low-utilization servers (from any vendor) and consolidate their workloads onto a recent product, you have a good chance they will "fit". When working with software products that are licensed by the CPU - as you are - the entire savings comes from reducing the number of CPUs, even if the replacement server is disproportionately expensive, as it is with a mainframe. But it's rather a cheat: comparing a consolidated replacement against unconsolidated old RISC or x86 systems, when you should have compared the mainframe to virtualizing on a new RISC or x86 system.

Then you could have enjoyed the same license cost savings or more, with a simpler conversion process (using a P2V method) and a lower-cost server. This would eliminate the conversion and opportunity costs you had to incur. Companies do this all the time for much better financial return and lower risk than a replatform to mainframe, and it's not even newsworthy anymore.
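
A back-of-the-envelope sketch of that point, with entirely hypothetical prices and core counts: under per-core licensing the license savings from consolidation are identical whatever the target platform, so the hardware price difference is what is actually left to compare.

    # Hypothetical figures for illustration only.
    license_per_core = 20_000                    # assumed per-core license fee
    old_cores = 20 * 4                           # 20 lightly used 4-core servers

    old_license_cost = old_cores * license_per_core

    targets = {                                  # both assumed to need 8 cores
        "mainframe IFLs": {"cores": 8, "hw_cost": 500_000},
        "new x86 server": {"cores": 8, "hw_cost": 50_000},
    }

    for name, t in targets.items():
        saving = old_license_cost - t["cores"] * license_per_core
        print(f"{name}: license savings ${saving:,}, hardware cost ${t['hw_cost']:,}")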

I was going to let this pass on my blog, but since the mainframe topic has been raised again: there's been a lot of hype about a new entry-level z114 mainframe at $75,000. I priced low-end SPARC and (to be inclusive) Power systems that would compete against it, and found that those RISC alternatives would provide about 4x more capacity than the entry z114 at 1/4 to 1/3 the price (that is, in the $25K to $30K range for more powerful systems). And there's been a bit of chatter about whether that $75,000 is realistic: unlike the open systems, that price doesn't seem to include OS licenses, disk, or network connectivity, so the lowest *true* price will be much, much higher.
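
Working strictly from the figures just quoted (a $75,000 z114 versus RISC boxes at $25K-$30K with about 4x the capacity), the capacity-per-dollar gap works out to roughly an order of magnitude; the sketch below simply does that arithmetic.

    # Arithmetic on the figures quoted in the text; no new data introduced.
    z114_price = 75_000
    risc_capacity_multiple = 4                   # "about 4x more capacity"

    for risc_price in (25_000, 30_000):
        price_ratio = z114_price / risc_price
        per_dollar = risc_capacity_multiple * price_ratio
        print(f"RISC at ${risc_price:,}: {price_ratio:.1f}x cheaper, "
              f"~{per_dollar:.0f}x the capacity per dollar vs. the z114")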

To answer your question: yes, there are many benchmarks that could be run to provide a view of real-life workload performance, but none have been run on mainframes, or to be specific: no results have been published. You may wish to wonder why.

Posted by guest on July 29, 2011 at 02:04 AM MST #

In the case where TCO savings of 50%+ were obtained, it was a brand-new purchase, meaning the alternative was sized on UNIX and the equivalent on the mainframe, and the TCO savings were on the order of 50%. It was not a replacement of old servers running at low utilization, which I agree is not the right comparison, as you cannot take age-old servers and claim TCO savings by consolidating them onto a very powerful latest-generation server. Regarding the sizing for UNIX etc., it was done after performing a PoC. So I'm not sure whether this is a one-off or whether it can be obtained consistently.

Posted by Sree on August 01, 2011 at 07:37 AM MST #

Sree, thanks for the followup. That certainly filters out the "not the right comparison" case so often advertised. I must say that it runs counter to my experience, and I would be interested in seeing the details of the performance and cost data and analysis, if you are able to publish them. Regards, Jeff

Posted by Jeff Savit on August 01, 2011 at 09:26 AM MST #

Guys,

I am looking for the cost per MIPS for zLinux - a cost inclusive of everything (datacenter cost per square foot, labor, storage, etc.), not just hardware. I need it for zLinux (not z/OS).

That would allow me to compute how much savings can be made if I free up 500-1000 MIPS by moving workload off zLinux to mid-range servers.

If there is any published MIPS-to-GHz conversion ratio you can share, that would be good as well.

Thanks.

Posted by guest on September 15, 2011 at 04:57 AM MST #

Hi (and sorry for the delay in posting your comment - I've been on the road).

I'm sorry, but there's really no MIPS-to-MHz ratio that truly works - the preceding comments discuss that at length. I really need to do a separate blog entry on that! Even within the mainframe world it's of little value: some instructions take a hundred times more clock cycles than others, so the MIPS of a given system varies dramatically depending on what you're doing. You can look above for rules of thumb, but be warned that they really are not good science.

The real answer is to do a proper study of the system, which takes time and effort. Add up the costs for the existing system if you can - and include software licenses as well as maintenance, hardware, and environment. I suggest contacting your friendly vendors, who will have the motivation to add up the costs - and of course you should check their numbers to see if they are valid. If you want me to contact you at your e-mail address, just let me know.
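
For what it's worth, here is the shape of that study as a minimal sketch. Every number is a placeholder to be replaced with your own audited figures - there is no published $/MIPS constant to plug in - and only the costs that actually scale down with MIPS will turn into savings.

    # Minimal cost-per-MIPS model; all values are placeholders.
    annual_costs = {
        "hardware_maintenance": 400_000,
        "software_licenses":    900_000,
        "floor_space_power":    150_000,
        "staff":                600_000,
    }
    installed_mips = 4_000                       # hypothetical installed capacity

    cost_per_mips = sum(annual_costs.values()) / installed_mips
    print(f"Fully loaded cost: ${cost_per_mips:,.0f} per MIPS per year")

    freed_mips = 750                             # midpoint of the 500-1000 MIPS asked about
    # Treat this as an upper bound: staff and floor space rarely shrink in proportion.
    print(f"Upper-bound savings: ${freed_mips * cost_per_mips:,.0f} per year")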

Posted by Jeff Savit on September 22, 2011 at 03:56 PM MST #

I have had the pleasure of testing one of our DBs, used by some interactive applications, against a z10. Paramount here is that users require fast response. The DB is quite small and application logic is typically handled in the DB. The DB is very normalized and queries can span several views deep (incorporating calculations). Load typically comes in bursts with long periods of low activity.
Currently it runs on a 2-way quad-core (X5460). The tests I did showed that in none of the queries was the z10 even close to the old Intel box; response times were sometimes 3x slower.
Even the z11 is barely able to keep up in response times. Makes me wonder how it would fare against the latest Intel architecture.

Posted by guest on October 23, 2011 at 06:39 AM MST #

Hi Jeff! Thanks for your insights. Just wanted to say hi, and get a 2012 timestamp attached to this blog. Hope all is well.

Posted by Dave Brillhart on June 13, 2012 at 10:58 AM MST #

Nice to hear from you, Dave! :-)

Posted by guest on June 13, 2012 at 12:40 PM MST #

This debate has been going on for too long. All platforms have a purpose; hence these vendors can make money. The bottom line is not who has the best hardware or capability or all the technical stuff Jeff alluded to - it is whose selling skills are the best.
In my 26-year IT career I have worked on mainframes, PC desktops, and yes, even Sun Unix, and yes, there is a great distinction between these platforms. Yes, mainframes are very stable, but then again they are VERY expensive and the skills have all died out. I have worked on a very small form factor Sun V240. This box never stopped, we never had downtime, and we never had performance or hardware problems. It basically ran like, uuuhh, a mainframe. The reliability argument no longer holds for mainframes as technology got cheaper. What does hold is how expensive mainframes still are, and how expensive they are to maintain and operate. Sure, the mainframe vendors will try to prove this wrong with nice graphs and analysis from their point of view. All just smoke and mirrors.
The problem is that vendors selling mainframes have inefficient processes; they forgot who the customer is. They no longer care. It is all about making money. The truth is that most of IBM's revenue still comes from selling mainframes and the rest comes from selling shelfware.
Unfortunately, Microsoft is not much better; they still suck after so many years of evolution, and this operating system still fails to become an enterprise contender. Sure, you will argue that so many enterprises are running Microsoft - yes, and they are trying hard to get out of it.

If you have lots of cash to waste, buy a mainframe. If you don't and want reliability, use something else like Sun, but please never think Microsoft is the answer. That will be the most expensive decision you make.

Posted by guest on September 10, 2012 at 10:06 PM MST #
