Response to: IBM, Sun and HP: Comparing UNIX Virtualization Offerings

In his article IBM, Sun and HP: Comparing UNIX Virtualization Offerings, Ken Milberg of IBM Systems Magazine makes a number of observations about the different vendors' virtualization products, many of which are wrong.

Like Jim Laurent, who has already posted observations on this article, I'll confine myself to the comments on Sun technology. Well, actually, I'll also make some points about his comments on IBM technology. Somebody better qualified than I am on HP products will have to address any possible inaccuracies there.

The article starts with a brief compliment about Solaris, but then goes into a digression about volume mirroring. As Jim points out, the author could have used ZFS (certainly for the data components of the storage), which he had already mentioned, and "hundreds of commands" sounds like an exaggeration to me as well. Perhaps he could have done something as prosaic as writing a script? In any case, ZFS sounds like it would have been ideal for easy management of non-root filesystem data, and there are ways to make managing root filesystems a low-pain activity too.
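
To give a sense of how little typing mirroring takes with ZFS, here's a minimal sketch; the pool, filesystem, and disk names are invented for the example:

    # Create a mirrored pool from two disks (disk names are examples)
    zpool create datapool mirror c1t0d0 c1t1d0

    # Create a filesystem in the pool; ZFS mounts it automatically
    zfs create datapool/appdata

    # Verify the health of the mirror
    zpool status datapool

Three commands, not hundreds, and no separate volume manager to configure.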

The author then goes on to talk about Solaris Containers (otherwise called zones), saying "this method requires all partitions have the same OS and patch levels. Their virtualization essentially virtualizes an OS environment more-so than hardware. In fact, they don't emulate any of the underlying hardware." First of all, they're not partitions - a revealing remark that shows an IBM-product frame of mind. Yes, they do virtualize OS abstractions instead of hardware - that's the great thing about them. The preceding posts on this blog have gone into (perhaps tedious) descriptions of how complex and expensive it is to virtualize hardware details. Solaris Containers bypass all that to provide lightweight private computing environments with essentially no overhead at all. It's not the solution for all problems - what is? - but it provides an excellent, efficient way of addressing server sprawl. System p virtualization doesn't. Here's a pop quiz: how many 8GB instances of AIX can you have on a 32GB server? (There's a trick here, obviously, but even the "obvious" answer of 4 isn't a big solution for server sprawl - I'll mention the actual answer some other time.)
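
To show how lightweight this is in practice, here's a minimal sketch of creating a container on Solaris 10; the zone name and path are invented for the example:

    # Configure a new zone (sparse-root by default)
    zonecfg -z webzone
    zonecfg:webzone> create
    zonecfg:webzone> set zonepath=/zones/webzone
    zonecfg:webzone> exit

    # Install and boot it
    zoneadm -z webzone install
    zoneadm -z webzone boot

    # Attach to its console
    zlogin -C webzone

No hypervisor, no device emulation - just another isolated application environment sharing the one kernel.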

He makes several other observations, only some of which are on target: "OS level must be exactly the same across all containers." Well, that's not completely true: containers share the same kernel, but you can have different software levels for everything else. More important: that's actually a best practice you want to encourage. If this were the halcyon days of mainframe timesharing with VM/CMS - each user in his or her own virtual machine - you would (or should!) see dozens or hundreds of virtual machines with the same OS level. Otherwise you get a chaos of different patch levels to manage. The small footprint of the Containers model lets you have dozens, hundreds, or even thousands of private environments on the same Solaris server.

"One kernel fault will bring down every container...There's also limited security isolation as a result of a single kernel across containers. What that means is one breach will impact every container in the OS image." Well, that's only partially true, and somewhat misleading. A kernel breach would affect all environments - just as it would with IBM's z/VM or VMware's products. However, a bad guy getting root privileges in one container gets no access at all to other containers. Nada, zip, zilch, zero.

He goes on to say "From a licensing perspective, one must also be aware ISVs will charge on a per CPU basis across all containers in the single image, even though they may need only a part of the OS image capacity." Well, that varies from ISV to ISV, as Jim Laurent pointed out. His blanket claim that ISVs price on the basis of the whole system is basically wrong.

He goes further astray with "Sun containers also can't share I/O, which is not a good thing." He's absolutely wrong here - this is a howler. By default, Solaris Containers are set up in 'sparse root' mode, in which most of their file systems are shared. You can mix and match any combination of shared and dedicated storage assets for containers at a volume, partition, slice, or directory level. Or (and this is a trick I like) set up a loopback filesystem within a file. Solaris Containers have far more flexible I/O sharing than he gives them credit for. Ditto for non-disk assets of all types. Just RTFM.
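
For instance, here's a sketch of both techniques; the zone name, paths, and lofi device number are examples and will vary:

    # Share a global-zone directory into a container via a loopback mount
    zonecfg -z webzone
    zonecfg:webzone> add fs
    zonecfg:webzone:fs> set dir=/opt/shared
    zonecfg:webzone:fs> set special=/export/shared
    zonecfg:webzone:fs> set type=lofs
    zonecfg:webzone:fs> end
    zonecfg:webzone> exit

    # The "filesystem within a file" trick: put a UFS filesystem
    # inside an ordinary file using the loopback file driver
    mkfile 1g /export/zonefs.img
    lofiadm -a /export/zonefs.img      # prints the device, e.g. /dev/lofi/1
    newfs /dev/rlofi/1
    mount /dev/lofi/1 /mnt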

He complains about the management interface: "one must type endlessly from the command line and, when you type so many commands, you can make mistakes. You could use the Solaris Container Manager, though I suspect you may have similar problems that I had with the Solaris Management Console in configuring storage resources." That's wrong on at least two levels. First, you can put most information in configuration files as input to zonecfg and related commands, so you do not type endlessly from the command line. In fact, for automation and scripting you want (in my opinion) a CLI and config files, to free you from the tyranny of having to point and click over and over. Second, he simply dismisses the Sun GUI for containers without ever trying it. It's very odd journalistic behavior to complain about a GUI deficit in the same paragraph where he mentions the GUI and admits he never tried it.
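
Here's a minimal sketch of the config-file approach; the zone name, network interface, and address are invented for the example:

    # myzone.cfg - a command file for zonecfg
    create
    set zonepath=/zones/myzone
    set autoboot=true
    add net
    set address=192.168.1.50
    set physical=bge0
    end

Then feed the file to zonecfg non-interactively:

    zonecfg -z myzone -f myzone.cfg
    zoneadm -z myzone install

One file, two commands, trivially repeatable from a script - the opposite of endless typing.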

He goes on to mischaracterize Sun's efforts with Xen. There's no big secret here - the development is happening in the open on OpenSolaris.org. That's one of the consequences of being open! He could even download the bits that are available for testing.

He then gets totally confused with the idea that Windows could run on the SPARC architecture. He works with IBM, so he should be familiar with the idea that there are multiple product lines. Our SPARC systems run Solaris and some Linux distros; our x64/86 systems run Solaris, Linux, and Windows. Not a difficult concept to grasp, and certainly easier than the mix of systems IBM has. VMware, of course, currently runs on x64/86. Xen is emerging on the same platform. Logical Domains will be on the T1000/T2000. That's a pretty rich portfolio providing customer choice on popular platforms.

He goes on to say "they continue to want to be all things to everyone. With up to 5 separate ways to virtualize, they'll continue to confuse most people. IBM has one consistent strategy and method of virtualization. They also have only one hardware platform for their POWER5 technology." The last sentence is a marvel of circular reasoning: there's only one hardware platform for one of their hardware platforms? Well, duh. IBM certainly does not have a consistent virtualization strategy. LPAR on System p is unlike LPAR on System z, which is unlike z/VM on System z (though the latter two at least share some DNA). Unfortunately for IBM, they have no operating system of their own on the most popular computing platform on earth (Intel/AMD's x64/86), so they have no virtualization story there either. They used to have a dialect of AIX that ran there, but they killed it long ago.

Which brings me to Mr. Milberg's closing comments:

  • A 39-year history of virtualization, offering a very mature technology. That's totally wrong. The form of virtualization in System p is totally unlike the virtual machine technology from IBM's history. All that trap-and-emulate stuff I described? System p doesn't do it, nor most of the other technical aspects of VM. In fact, the System p style of CPU partitioning was first done by Amdahl (Multiple Domain Facility) before IBM was forced to respond and come out with LPAR.
  • Capped and uncapped partitions. Allowing users to take advantage of unused clock cycles via a shared processor pool is an innovation that no one else has. HP requires a workload manager system similar to PLM, while Sun has nothing. That's absurd. Sun has dynamic resource pools and the Solaris Resource Manager, which both (in different ways) let customers distribute CPU resources with a fine degree of control - see the sketch after this list. More RTFM is needed.
  • SMT - Only IBM has it. Not only does the relevance of this escape me, but it paints a deficiency as a benefit. Sun has its Chip Multi Threading (CMT), a far more advanced processor design.
  • Dedicated or Shared I/O on a virtual partition - Only IBM has it. Very wrong. Solaris Containers provide both.
  • IBM has only one virtualization strategy (APV) - maybe for THIS hardware platform (which seems to have multiple OS strategies - is it to be AIX or Linux now?), but IBM certainly has more than one virtualization strategy. There will be a lot of shocked IBM employees in the Endicott labs who are working on the descendant of the actual 39-year-old technology.
  • One hardware platform for AIX and Linux partitions (POWER5) - this actually documents IBM reneging on a promise. Linux, obviously, runs on many platforms (which has little to do with IBM), but some of us remember when AIX was also supposed to run on mainframe (AIX/370, AIX/ESA) and on Intel, but IBM reneged on those promises and canceled those projects. So, sure there's only one hardware platform LEFT for AIX, and a vanishingly small percentage of Linux runs there.
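
As promised above, here's a minimal sketch of distributing CPU with the Solaris Resource Manager's Fair Share Scheduler; the project names and share counts are invented for the example:

    # Make the Fair Share Scheduler the default scheduling class
    dispadmin -d FSS

    # Give one workload's project 20 CPU shares and another 10;
    # shares matter only when the CPUs are contended
    projmod -K "project.cpu-shares=(privileged,20,none)" webproj
    projmod -K "project.cpu-shares=(privileged,10,none)" batchproj

    # Or enable dynamic resource pools to carve out whole processor sets
    pooladm -e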

By contrast, Solaris runs both on SPARC - scaling from single-CPU systems to massive supercomputers - and on the Intel/AMD architecture, the world's volume leader, and has outstanding virtualization capabilities on both. Give it a try! Back at Jim Laurent's blog entry on this article, he closes with reasons for trying out Solaris. Don't believe any of us - you can see for yourself.

Comments:

[Trackback] Ken Milberg compares virtualization offerings from IBM, HP and Sun in his article for IBM Systems Magazine . I was beginning to get into details in this post, when I realized Jim Laurent has already done so here . I restrict myself therefore to the S...

Posted by Waiting for I/O on February 26, 2007 at 03:58 PM MST #

[Trackback] In IBM Systems Magazine you will find a comparison about

Posted by c0t0d0s0.org on February 26, 2007 at 08:32 PM MST #

BTW, interesting Sun Blueprint:

"Beginners Guide to LDoms: Understanding and Deploying Logical Domains

It would be great if you could blog about it!!

Rayson

Posted by Grid on March 01, 2007 at 03:52 PM MST #

The link to the IBM article seems to be borked? Mark

Posted by Mark on August 01, 2007 at 11:56 PM MST #

Hi Mark, you're right - the old link no longer works. Fortunately Google came to the rescue and I was able to find the article I linked at this URL: http://www.ibmsystemsmag.com/opensystems/enewsletterexclusive/13462p1.aspx Thanks for pointing the broken link out! regards, Jeff

Posted by Jeff Savit on August 02, 2007 at 09:15 AM MST #
