Thursday Dec 31, 2009

Art drives language

After seeing the movie Avatar, a friend of a friend asked if there is a word for "an attraction to blue aliens." I came up with "xenopavoniphilia." That may not be the best color match, but it rolls off the tongue nicely - for a 16-letter word!

Wednesday Dec 09, 2009

Virtual Overhead?

So you're wondering about operating system efficiency or the overhead of virtualization. How about a few data points?

SAP created benchmarks that measure transaction performance. One of them, the SAP SD, 2-Tier benchmark, behaves more like real-world workloads than most other benchmarks, because it exercises all of the parts of a system: CPUs, memory access, I/O and the operating system. The other factor that makes this benchmark very useful is the large number of results submitted by vendors. This large data set enables you to make educated performance comparisons between computers, or operating systems, or application software.

A couple of interesting comparisons can be made from this year's results. Many submissions use the same hardware configuration: two Nehalem (Xeon X5570) CPUs (8 cores total) running at 2.93 GHz, and 48GB RAM (or more). Submitters used several different operating systems: Windows Server 2008 EE, Solaris 10, and SuSE Linux Enterprise Server (SLES) 10. Also, two results were submitted using some form of virtualization: Solaris 10 Containers and SLES 10 on VMware ESX Server 4.0.

Operating System Comparison

The first interesting comparison is of different operating systems and database software, on the same hardware, with no virtualization. Using the hardware configuration listed above, the following results were submitted. The Solaris 10 and Windows results are the best results on each of those operating systems, on this hardware. The SLES 10 result is the best of any Linux distro, with any DB software, on the same hardware configuration.

Operating System         DB                 Result (SAPS)
Solaris 10               Oracle 10g         21,000
Windows Server 2008 EE   SQL Server 2008    18,670
SLES 10                  MaxDB 7.8          17,380

(Note that results submitted in 2009 cannot be compared against results from previous years, because SAP changed the benchmark workload.)

With those data points, it's very easy to conclude that for transactional workloads, the combination of Solaris 10 and Oracle 10g is roughly 21% more powerful than Linux and MaxDB.

Virtualization Comparison

The virtualization comparison is also interesting. The same benchmark was run using Solaris 10 Containers and 8 vCPUs. It was also run using SLES 10 on VMware ESX, also using 8 vCPUs.

Operating System   Virtualization       DB           Result (SAPS)
Solaris 10         Solaris Containers   Oracle 10g   15,320
SLES 10            VMware ESX           MaxDB 7.8    11,230

Interpretation

Some of the 36% advantage of the Solaris Containers result is due to the operating systems and DB software, as we saw above. But the rest is due to the virtualization tools. The virtualized and non-virtualized results for each OS had only one difference: virtualization was used. For example, the two Solaris 10 results shown above used the same hardware, the same OS, the same DB software and the same workload. The only difference was the use of Containers and the limitation of 8 vCPUs.

If we assume that Solaris 10/Oracle 10g is consistently 21% more powerful than SLES 10/MaxDB on this benchmark, then it's easy to conclude that VMware ESX has roughly 13% more overhead than Solaris Containers when running this workload: without that extra hypervisor overhead, the virtualized SLES 10/MaxDB configuration would be expected to achieve 15,320 ÷ 1.21 ≈ 12,660 SAPS, and 12,660 ÷ 11,230 ≈ 1.13.

However, the non-virtualized performance advantage of the Solaris 10 configuration over that of SLES 10 may be different with 8 vCPUs than with 8 cores. If Solaris' advantage is smaller with 8 vCPUs, then the overhead of VMware is even worse. If the advantage of Solaris 10 Containers/Oracle over VMware/SLES 10/MaxDB with 8 vCPUs is larger than in the non-virtualized results, then the real overhead of VMware is not quite that bad. Without more data, it's impossible to know.

But one of those three cases (same, less, more) is true. And the claims by some people that VMware ESX has "zero" or "almost no" overhead are clearly untrue, at least for transactional workloads. For compute-intensive workloads, like HPC, the overhead of software hypervisors like VMware ESX is typically much smaller.

What Does All That Mean?

What does that overhead mean for real applications? Extra overhead means longer response times for transactions or fewer users per workload, or both. It also means that fewer workloads (guests) can be configured per system.

In other words, response time should be better (or maximum number of users should be greater) if your transactional workload is running in a Solaris Container rather than in a VMware ESX guest. And when you want to add more workloads, Solaris Containers should support more of those workloads than VMware ESX, on the same hardware.

Qualification

Of course, the comparison shown above only applies to certain types of workloads. You should test your workload on different configurations before committing yourself to one.

Disclosure

For more detail, see the results for yourself.

SAP wants me to include the results:
Best result for Solaris 10 on 2-way X5570, 2.93GHz, 48GB:
Sun Fire X4270 (2 processors, 8 cores, 16 threads) 3,800 SAP SD Users, 21,000 SAPS, 2x 2.93 GHz Intel Xeon x5570, 48 GB memory, Oracle 10g, Solaris 10, Cert# 2009033.
Best result for any Linux distro on 2-way X5570, 2.93GHz, 48GB:
HP ProLiant DL380 G6 (2 processors, 8 cores, 16 threads) 3,171 SAP SD Users, 17,380 SAPS, 2x 2.93 GHz Intel Xeon x5570, 48 GB memory, MaxDB 7.8, SuSE Linux Enterprise Server 10, Cert# 2009006.
Result on Solaris 10 using Solaris Containers and 8 vCPUs:
Sun Fire X4270 (2 processors, 8 cores, 16 threads) run in 8 virtual cpu container, 2,800 SAP SD Users, 2x 2.93 GHz Intel Xeon X5570, 48 GB memory, Oracle 10g, Solaris 10, Cert# 2009034.
Result on SuSE Enterprise Linux as a VMware guest, using 8 vCPUs:
Fujitsu PRIMERGY Model RX300 S5 (2 processors, 8 cores, 16 threads) 2,056 SAP SD Users, 2x 2.93 GHz Intel Xeon X5570, 96 GB memory, MaxDB 7.8, SUSE Linux Enterprise Server 10 on VMware ESX Server 4.0, Cert# 2009029.
SAP and R/3 are registered trademarks of SAP AG in Germany and in several other countries.

Addendum, added December 10, 2009:

Today an associate reminded me that previous SAP SD 2-tier results demonstrated the overhead of Solaris Containers. Sun ran four copies of the benchmark on one system, simultaneously, one copy in each of four Solaris Containers. The system was a Sun Fire T2000, with a single 1.2GHz SPARC processor, running Solaris 10 and MaxDB 7.5:

  1. 2006029
  2. 2006030
  3. 2006031
  4. 2006032

The same hardware and software configuration - but without Containers - already had a submission:
2005047

The sum of the results for the four Containers can be compared to the single result for the configuration without Containers. The single system outpaced the four Containers by less than 1.7%.

Second Addendum, also added December 10, 2009:

Although this blog entry focused on a comparison of performance overhead, there are other good reasons to use Solaris Containers in SAP deployments. At least 10, in fact, as shown in this slide deck. One interesting reason is that Solaris Containers is the only server virtualization technology supported by both SAP and Oracle on x86 systems.


Tuesday Aug 18, 2009

Drive Your Car for Free!?

We need MPK - a new way to measure the energy efficiency of cars.

For many years, we've used MPG (miles per gallon; apologies to the parts of the world that use a sane measurement system...) to measure the fuel efficiency of cars. But as the automotive industry converts from fossil fuels to electric cars, we will have confusion - until the government establishes a new method.

In the news last week, GM claimed that the Chevy Volt will get 230 MPG. That leads me to picture a conversation at a car dealer:

Car Salesperson: "...and the Volt gets 230 MPG!"
Customer: "Wow! It doesn't use much gas, does it?"
Sales: "That's right, it doesn't. And for the first 40 miles, it only uses the battery. Do you drive more than 40 miles most days?"
Customer: "No, just 20 miles to work and back. So usually I won't use any gas?"
Sales: "That's right."
Customer: "So... basically I drive it for free?"
(The salesperson knows that's not entirely true, and doesn't want to lie to the customer, so he just smiles.)
Customer: "And if I drive it more than 40 miles, I'll be getting 230 MPG... that's ten times better than the car I drive now! It will cost less than one-tenth what I pay now!"

Of course the problem is that electricity isn't free, and the rules allow a car company to publish a number which has little bearing on real life. As the article mentions, the Volt's battery must be recharged regularly. The GM CEO said that should cost "about 40 cents at off-peak electricity rates in Detroit." GM is not lying to the public, nor are they pretending that the only expense of operation is the cost of gasoline. Evidently the number 230 was derived by following the rules.

Unfortunately, most of us don't live in Detroit. For the USA, the average residential rate is 11.6 cents per kilowatt-hour (kWh), according to the Energy Information Administration.

What does it really cost to drive the Volt? According to the article, GM estimates that it takes 10 kWh to charge the Volt. That means the first 40 miles per day cost $1.16. The article also says the Volt "might get as much as 50 mpg" when running on the gasoline engine. That's much better than almost every other car on the road, but it's not 230 MPG. If you drive 80 miles in a day, you will use a full battery charge plus at least 0.8 gallons of gasoline. Together, that will cost:

  • $1.16 for the first 40 miles
  • $2.00 for the next 40 miles (average gas price of $2.50 from www.eia.doe.gov)

Total: at least $3.16.

My car gets about 26 MPG. Driving 80 miles costs me $7.69 - more than double. (Because "as much as 50 MPG" is an upper bound, the Volt's real cost could be higher and my cost might be less than double it; we don't really know how much more I pay than I would with a Volt.) Most of us would save money by driving the Volt.
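
If you want to check the arithmetic, here is a quick sketch using only the numbers cited above - 10 kWh per charge, 11.6 cents per kWh, $2.50 per gallon - which are national averages, not your local rates:

    echo "10 * 0.116" | bc -l          # Volt, first 40 miles on a full charge: ~$1.16
    echo "(40 / 50) * 2.50" | bc -l    # Volt, next 40 miles on gasoline at 50 MPG: ~$2.00
    echo "(80 / 26) * 2.50" | bc -l    # the same 80 miles in a 26 MPG car: ~$7.69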

The most important benefit of the Volt is reducing fossil fuel emissions. (I won't get into the fossil fuel emissions at the electric generation plant, 50% of which comes from burning coal...) But the misperception that the Volt - at 230 MPG - is ten times less costly than your car may become widespread, unless GM is required to report energy efficiency in some other way.

Don't get me wrong: I am happy that GM has developed the Volt, and I know that they didn't just make up the number 230. We do need to pay more attention to gas mileage and fossil fuel emissions. I hope that the Volt is successful.

And I hope that soon there will be other electric cars. But how will we compare their energy efficiency? A competing car company may state that their electric car is more efficient than the Volt. How will we know?

For electric cars, we need a new metric: miles per kilowatt-hour (MPK). That will allow us to compare the cost of driving two different electric cars.

And for mixed-energy cars (like the Volt) we will need to compare the efficiency of the gas engine (MPG), and the electric engine (MPK), and we need to know how far the car will go on its electric engine. The last one could be MPC: miles per charge. Without all of those numbers, we can't compare the costs of driving two different cars.

Sound complicated? You bet, especially when you include the need for highway vs. city driving. But until we have more information, as consumers, we'll be confused, perhaps misled, by claims like "230 MPG."

Monday May 04, 2009

Layered Virtualization

It's time for another guest blogger.

Solving a Corner Case

One of my former colleagues, Joe Yanushpolsky (josephy100 -AT- gmail.com) was recently involved in the movement of a latency-sensitive Linux application to Solaris as part of platform consolidation. The code was old and it required access to kernel routines not available under BrandZ. Using VirtualBox as a virtual x86 system, the task was easier than expected.

Background

VirtualBox enables you to run multiple x86-based operating system "guests" on an x86 computer - desktop or server. Unlike other virtualization tools, like VMware ESX, VirtualBox allows you to keep your favorite operating system as the 'base' operating system. This is called a Type 2 hypervisor. For existing systems - especially desktops and laptops - this means you can keep your current setup and applications and maintain their current performance. Only the guests will have reduced performance - more on that later.

Here is Joe's report of his tests.

The goals included allowing many people to independently run this application while sharing a server. It would be important to isolate each user from other users. But the resource controls included with VirtualBox were not sufficiently granular for the overall purpose. Solaris Containers (zones) have a richer set of resource controls. Would it be possible to combine Containers and VirtualBox?

The answer was 'yes' - I tried two slightly different methods. Each method starts by installing VirtualBox in the global zone to set up a device entry and some of the software. Details are provided later. After that is complete, the two methods differ.

  1. Create a Container and install VirtualBox in it. This is the Master WinXP VirtualBox (MWVB) Container. If any configuration steps specific to a WinXP environment are needed, they can be done now. When a Windows XP environment is needed, clone the MWVB Container and install WinXP in the clone. Management of the Container can be delegated to the user of the WinXP environment if you want.
  2. Create a Container and install VirtualBox in it. This is the Master CentOS VirtualBox (MCVB) Container. Install CentOS in the Container. When a CentOS environment is needed, clone the MCVB - including the copy of CentOS that's already in the Container - to create a new Container. Management of the Container can be delegated to the user of the CentOS environment if you want.
In each case, resource controls can be applied to the Container to ensure that everyone gets a fair share of the system's resources like CPU, RAM, virtual memory, etc.
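
As an illustrative sketch of such controls - the zone name 'winxp1' and the specific limits below are made up for illustration - a cloned Container could be capped like this:

    # zonecfg -z winxp1
    zonecfg:winxp1> add capped-cpu
    zonecfg:winxp1:capped-cpu> set ncpus=2
    zonecfg:winxp1:capped-cpu> end
    zonecfg:winxp1> add capped-memory
    zonecfg:winxp1:capped-memory> set physical=2g
    zonecfg:winxp1:capped-memory> set swap=4g
    zonecfg:winxp1:capped-memory> end
    zonecfg:winxp1> exit

Similar effects can be achieved with dedicated-cpu, resource pools, or the fair share scheduler, depending on how strictly each user must be isolated.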

When the process is complete, you have a guest OS, shown here via X Windows.

CentOS picture

Not only did the code run well, but it did so in a sparse-root non-global zone.

Well that was easy! How about Windows?
Windows XP picture

Now, this is interesting. As long as the client VM is supported by VirtualBox, it can be installed and run in a Solaris/OpenSolaris Container. I immediately thought of several useful applications of this combination of virtualization technologies:

  • migrate existing applications that are deemed "unmovable" to latest eco-friendly x64 (64-bit x86) platforms
  • reduce network latency of distributed applications by collapsing the network onto a large memory system with zones, regardless of which OS the application components were originally written in
  • on-demand provisioning, as a service, an entire development environment for Linux or Windows developers. When using ZFS, this could be accomplished in seconds - is this a "poor man's" cloud or what?!
  • eliminate ISV support issues that are currently associated with BrandZ's lack of support for recent Linux kernels or Solaris 8 or 9 kernel
  • what else can you create?
Best of all, Solaris, OpenSolaris and VirtualBox can be downloaded and used free of charge. Simple to build, easy to deploy, low overhead, free - I love it!

Performance

The advantage of having access to application code through Containers more than compensated for a 5% overhead (on a laptop) due to having a second kernel. The overall environment seems to be disk-sensitive (SSDs to the rescue!). Given that typical server load in a large IT shop is 15-20%, a number of such "foreign" zones could be added without impacting overall server performance.

Future Investigations

It would be interesting to evaluate scalability of the overall environment by testing different resource controls in Solaris Containers and in VirtualBox. I'd need a machine bigger than the laptop for that :-).

Installation Details

Here are the highlights of "How to install." For more details, follow instructions in the VirtualBox User manual.

  • Install VirtualBox on a Solaris x64 machine in the global zone so that the vboxdrv driver is available in the Solaris kernel.
  • Create a target zone with access to the vboxdrv device ("add device; set match=/dev/vboxdrv; end").
  • In the zone, clean up the artifacts of the VirtualBox installation that was performed in the global zone. All you need to do is uninstall the SUNWvbox package and remove references to the /opt/VirtualBox directory.
  • Install VirtualBox package in the zone.
  • Copy the OS distro into a file system in the global zone (e.g. /export/distros/centos.iso), and configure a loopback mount into the zone ("add fs; set dir=/mnt/images; set special=/export/distros; set type=lofs; end").
  • Start VirtualBox in the zone and install the client OS distro.
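
Pulled together, those steps might look something like the following sketch; the zone name 'vbox1' and the paths are made up for illustration, and details vary with the Solaris release:

    # zonecfg -z vbox1
    zonecfg:vbox1> create
    zonecfg:vbox1> set zonepath=/zones/vbox1
    zonecfg:vbox1> add device
    zonecfg:vbox1:device> set match=/dev/vboxdrv
    zonecfg:vbox1:device> end
    zonecfg:vbox1> add fs
    zonecfg:vbox1:fs> set dir=/mnt/images
    zonecfg:vbox1:fs> set special=/export/distros
    zonecfg:vbox1:fs> set type=lofs
    zonecfg:vbox1:fs> end
    zonecfg:vbox1> exit
    # zoneadm -z vbox1 install
    # zoneadm -z vbox1 boot
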
What advantages does this model have over other virtualization solutions?
  • The Solaris kernel is the software layer closest to the hardware. With Solaris, you benefit from the industry-leading scalability of Solaris and all of its innovations, like:
    • ZFS for data protection - currently, neither Windows nor Linux distros have ZFS. You can greatly improve storage robustness of your Windows or Linux system by running it as a VirtualBox guest.
    • SMF/FMA, which allows the whole system to tolerate hardware problems
    • DTrace, which allows you to analyze system performance issues while the apps are running. Although you can use DTrace in the 'base' Solaris OS environment to determine which guest is causing the performance issue, and whether the problem is network I/O, disk I/O, or something else, DTrace will not be able to "see" into VirtualBox guests to help figure out which particular application is the culprit - unless the guest is running Solaris, in which case you run DTrace in the guest!
  • Cost: You can download and use Solaris and OpenSolaris without cost. You can download and use VirtualBox without cost. Some Linux distros are also free. What costs less than 'free?'
What can you do with this concept? Here are some more ideas:
  • Run almost any Linux apps on a Solaris system by running that Linux distro in VirtualBox - or a combination of different Linux distros.
  • Run multiple Windows apps - even on different versions of Windows - on Solaris.
Additional notes are available from the principal investigator, Joseph Yanushpolsky: josephy100 -AT- gmail.com .

Friday May 01, 2009

Zonestat 1.4 Bug Fixes

Attention Zonestat Fans: three bugs have been found in Zonestat 1.4. Two of them only happen if a zone boots or halts while using Zonestat. The third only happens if a zoneID is larger than 999.

I fixed all three bugs and posted v1.4.1 on the web site: http://opensolaris.org/os/project/zonestat/

Specifically:

  • Bug: if a zone with dedicated CPUs is booted between poolcfg and output, zonestat can get confused or halt.
  • Bug: if a zone is halted between "zoneadm list" and kstats, zonestat can get confused or halt.
  • Bug: zones with a zoneID number > 999 are ignored.

Thursday Apr 23, 2009

What's Pluto?

Recently a young person asked me why Pluto isn't a planet. This seemed like a good educational opportunity. The explanation I used is much simpler than the official, scientific explanation - with its "planetary discriminants" and "aggregate masses" - and turned out better than I anticipated, so I thought I would share it with you. It seems appropriate for people with at least a fourth or fifth grade education.

The study of science results in "an organized body of knowledge gained through ... research." Observational science gathers data about the universe and classifies objects, life forms, etc. according to characteristics of those things.

Applying those concepts to our solar system, we can measure those objects and group those with similar characteristics together.

Let's try that.

One of the most obvious characteristics of solar system objects is their composition. All of these objects fall neatly into one of three categories as shown by the accompanying graph:

  1. major rocky objects, e.g. Earth, Mars - these all have densities greater than 3.8 g/cm^3
  2. gaseous objects, e.g. Jupiter, Saturn, some or all of which have cores of rock and/or metallic hydrogen - these all have densities between 0.69 and 1.6 g/cm^3
  3. rock-ice bodies - objects with significant amounts of rock and ice, e.g. comets, Pluto and its satellite Charon - their densities range from 1.0 to 3.0 g/cm^3, with almost all of them in the range 1.0 to 2.0 g/cm^3.
The clearest distinction is between the three densest inner objects (Mercury, Venus, Earth) and all but one of the outer, rocky/icy objects. Haumea, in case you haven't heard of it, is the newest object to be labeled a "dwarf planet" by the International Astronomical Union.

Although there is overlap in density between the gaseous objects and the rocky/icy objects, no overlap between their sizes (masses) exists, as this next graph shows (Earth is arbitrarily assigned a value of 1,000,000 and the rest are scaled to that value):

Visually, three groupings are discernible: the gaseous objects, the large rocky objects, and everything else. Mathematically, the groupings are separated by more than an order of magnitude. In other words, the smallest member of one group is at least ten times the mass of the largest member of the next group. Uranus is more than 14 times the mass of Earth, and Mercury is almost 20 times the mass of Eris, which is more massive than Pluto.

Besides physical characteristics, the most useful characteristics are the orbital elements.

All members of our solar system orbit the sun, or orbit another non-stellar object which in turn orbits the sun. These orbits are described, almost entirely, by Newtonian mechanics; the underlying laws of planetary motion were first described by Johannes Kepler in 1609. Although an orbit has several characteristics, the simplest of them is the semi-major axis, which is often called the "average" distance between the orbiting body and the sun. Although not quite accurate, it's close enough for this purpose.

Here is a graph of 17 of the most important bodies in our solar system. It shows the semi-major axis of each, relative to the semi-major axis of Earth, which is called an Astronomical Unit. It includes the four major rocky objects, the four gaseous objects, all five of the currently recognized dwarf planets, three asteroids, and five other relevant objects. Note that the orbital distances of Eris (68 AU) and Sedna (526 AU) are off the scale of this graph.

Again, three groups appear in the graph: the inner rocky objects, the gaseous objects, and the outer bodies. Separation between objects increases from the inner bodies to the outer ones, but suddenly, starting with Orcus, the separation between orbital distances shrinks considerably.

From those three characteristics: density, mass, and orbital distance, it seems clear that there are at least three groups of major bodies in the solar system:

  1. inner, rocky bodies, each having a mean density greater than 3.8 g/cm^3, a mass larger than one-tenth Earth's mass, and an orbital distance less than 2 AU
  2. giant gaseous objects, each with a mean density less than 1.5 g/cm^3, a mass larger than 14 times Earth's mass, and an orbital distance between 5 and 32 AU
  3. distant icy, rocky bodies, with a mean density less than 3 g/cm^3 (and almost all less than 2), and an orbital distance greater than 35 AU.
Object Type           Density (g/cm^3)   Mass (Earth=1)   Semi-Major Axis (AU, Earth=1)
Inner, rocky          >3.8               >0.1             <2
Giant, gaseous        <1.5               >14              5-32
Distant, icy, rocky   <2 (except one)    <1/200th         >35

All three of those groupings can be displayed in one graph, which shows the distinction between the three different groups at a glance:

In the graph above, the green bars show the range of values for the inner, rocky bodies, scaled so that the highest value is 100. The blue bars show the ranges of values for the gaseous bodies. The purplish bars show the ranges of values for the outer icy, rocky bodies.

Note that only two ranges overlap: density for the gaseous bodies and the icy, rocky bodies. For all of the other characteristics, there are clear gaps between the ranges of the groups.

Now that we have clear groupings, the question becomes "which of those groups should be planets?" I hope it's obvious that the first two categories should be included in the list of 'planets.' The third group can be included or not, depending on how you want to define the term 'planet.'

However, there are two other factors which help me to decide. Initially, 'planets' were the five wandering lights that weren't the Sun and Moon. These seven wandering lights were so important that early western cultures assigned a day of worship to each, leading eventually to the names of our days.

If 'planets' started with the five wandering stars, it makes sense to add other bodies which have similar characteristics - Uranus and Neptune - yielding a total of eight planets. But none of the others - Pluto, Haumea, Quaoar, etc. - are like the original wandering lights in the sky.

Further, if we were to include the outer, rocky, icy bodies in the list of planets, the list grows significantly. Today, the list would include 13 members, but another 40 known objects might be categorized with Eris, Pluto, et al., and another 150 or more are probably out there. If the category 'planet' can have 8 members or 200, I'll go with 8.

Finally, regarding the question "is it 'right' to 'demote' Pluto?": the list of planets has grown and shrunk several times throughout history. More than 25 bodies have been labeled 'planets' only to be 'demoted' later. Pluto is nothing special in this regard.

Wednesday Apr 08, 2009

Zonestat 1.4 Now Available

I have posted Zonestat v1.4 at the Zone Statistics project page (click on "Files" in the left navbar).

Zonestat is a 'dashboard' for Solaris Containers. It shows resource consumption of each Container (aka Zone) and a comparison of consumption against limits you have set.

Changes from v1.3:

  • BugFix: various failures if the pools service was not online. V1.4 checks for the existence of the pools packages, and behaves correctly whether they are installed and enabled, or not.
  • BugFix: various symptoms if the rcapd service was not online. V1.4 checks for the existence of the rcap packages, and behaves correctly whether they are installed and enabled, or not.
  • BugFix: mis-reported shared memory usage
  • BugFix: -lP produced human-readable, not machine-parseable output
  • Bug/RFE: detect and fail if zone != global or user != root
  • RFE: Prepare for S10 update numbers past U6
  • RFE: Add option to print entire name of zones with long names
  • RFE: Add timestamp to machine-consumable output
  • RFE: improve performance and correctness by collecting CPU% with DTrace instead of prstat

Note that the addition of a timestamp to -P output changes the output format for "machine-readable" output.

For most people, the most important change will be the use of DTrace to collect CPU% data. This has two effects. The first effect is improved correctness. The prstat command, used in V1.3 and earlier, can horribly underestimate CPU cycles consumed because it can miss many short-lived processes. The mpstat command has its own problems with mis-counting CPU usage. So I expanded on a solution Jim Fiori offered, which uses DTrace to answer the question "which zone is using a CPU right now?"
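
To illustrate the idea - this is only a sketch of the approach, not the actual code inside zonestat - a DTrace one-liner can sample every CPU 101 times per second, skip idle and other kernel threads (pid 0), and count the samples per zone for ten seconds:

    # count profile samples per zone; busier zones accumulate more samples
    dtrace -qn 'profile-101 /pid != 0/ { @cpu[zonename] = count(); }
                tick-10sec { printa(@cpu); exit(0); }'

Because the sampling happens in the kernel, short-lived processes are counted too, which is exactly what prstat misses.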

The other benefit to DTrace is the improvement in performance of Zonestat.

The less popular, but still interesting additions include:

  • -N expands the width of the zonename field to the length of the longest zone name. This preserves the entire zone name, for all zones, and also leaves the columns lined up. However, the length of the output lines will exceed 80 characters.
  • The new timestamp field in -P output makes it easier for tools like the "System Data Recorder" (SDR) to consume zonestat output. However, this was a change to the output format. If you have written a script which used -P and assumed a specific format for zonestat output, you must change your script to understand the new format.

Please send questions and requests to zones-discuss@opensolaris.org .

Wednesday Apr 01, 2009

Patching Zones Goes Zoom!

Accelerated Patching of Zoned Systems

Introduction

If you have patched a system with many zones, you have learned that it takes longer than patching a system without zones. The more zones there are, the longer it takes. In some cases, this can raise application downtime to an unacceptable duration.

Fortunately, there are a few methods which can be used to reduce application downtime. This document mentions many of them, and then describes the performance enhancements of two of them. But the bulk of this rather bulky entry is the description and results of my newest computer... "experiments."

Executive Summary, for the Attention-Span Challenged

It's important to distinguish between application downtime, service downtime, zone downtime, and platform downtime. 'Service' is the service being provided by an application or set of applications. To users, that's the most important measure. As long as they can access the service, they're happy. (Doesn't take much, does it?)

If a service depends on the proper operation of each of its component applications, planned or unplanned downtime of one application will result in downtime of the service. Some software, e.g. web server software, can be deployed in multiple, load-balanced systems so that the service will not experience downtime even if one of the software instances is down.

Applying an operating system patch may require service downtime, application downtime, zone downtime or platform downtime, depending on the patch and the entity being patched. Because in many cases patch application will require application downtime, the goal of the methods mentioned below, especially parallel patching of zones, is to minimize elapsed downtime to achieve a patched, running system.

Just Enough Choices to Confuse

Methods that people use - or will soon use - to improve the patching experience of zoned systems include:

  • Live Upgrade allows you to copy the existing Solaris instance into an "alternate boot environment," patch or upgrade the ABE, and then re-boot into it. Downtime of the service, application, or zone is limited to the amount of time it takes to re-boot the system. Further, if there's a problem, you can easily re-boot back into the original boot environment. Bob Netherton describes this in detail on his weblog. Maybe the software should have a secondary name: Live Patch. (A command-line sketch of this method appears after this list.)

  • You can detach all of the zones on the system, patch the system (which doesn't bother to patch the zones) and then re-attach the zones using the "update-on-attach" method which is also described here. This method can be used to reduce service downtime and application downtime, but not as much as the Live Upgrade / Live Patch method. Each zone (and its application(s)) will be down for the length of time to patch the system - and perhaps reboot it - plus the time to update/attach the zone.

  • You can apply the patch to another Solaris 10 system with contemporary or newer patches, and migrate the zones to that system. Optionally, you can patch the original system and migrate the zones back to it. Downtime of the zones is less than the previous solution because the new system is already patched and rebooted.

  • You can put the zones' zonepath (i.e. install the zones onto) very fast storage, e.g. an SSD or a storage array with battery-backed DRAM or NVRAM. The use of SSDs is described below. This method can be used in conjunction with any of the other solutions. It will speed up patching because patching is I/O intensive. However, this type of storage device is more expensive per MB, so this solution may not make fiscal sense in many situations.

  • Sun has developed an enhancement to the Solaris patching tools which is intended to significantly decrease the elapsed time of patching. It is currently being tested at a small number of sites. After it's released you can get the Zones Parallel Patching patch, described below. This solution decreases the elapsed time to patch a system. It can be combined with some of the solutions above, with varying benefits. For example, with Live Upgrade, parallel patching reduces the time to patch the ABE, but doesn't reduce service downtime. Also, ZPP offers little benefit for the detach/attach-on-upgrade method. However, as a stand-alone method, ZPP offers significant reduction in elapsed time without changing your current patching process. ZPP was mentioned by Gerry Haskins, Director of Software Patch Services.
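
Here is a rough sketch of the Live Upgrade method mentioned in the first bullet. It assumes a ZFS root pool, so lucreate can clone the running boot environment without additional options; the boot environment name and patch location are made up for illustration:

    # create an alternate boot environment (ABE) as a clone of the running one
    lucreate -n patched-be
    # apply one or more patches to the ABE while the applications keep running
    luupgrade -t -n patched-be -s /var/tmp/patches 120543-14
    # activate the ABE and reboot into it; downtime is just the reboot
    luactivate patched-be
    init 6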

Disclaimer 1: the "Zones Parallel Patching" patch ("ZPP") is still in testing and has not yet been released. It is expected to be released mid-CY2009. That may change. Further, the specific code changes may change, which may change the results described below.

Disclaimer 2: the experiment described below, and its results, are specific to one type of system (Sun Fire T2000) and one patch (120543-14 - "the Apache patch"). Other hardware and other patches will produce different results.

Yet More Background

I wanted to better understand two methods of accelerating the patching of zoned systems, especially when used in combination. Currently, a patch applied to the global zone will normally be applied to all non-global zones, one zone at a time. This is a conservative approach to the task of patching multiple zones, but doesn't take full advantage of the multi-tasking abilities of Solaris.

I learned that a proposed patch was created that enables the system administrator to apply a patch in the global zone which patches the global and then patches multiple zones at the same time. The parallelism (i.e. "the number of zones that are patched at one time") can be chosen before the patch is applied. If there are multiple "Solaris CPUs" in the system, multiple CPUs can be performing computational steps at the same time. Even if there aren't many CPUs, one zone's patching process can be using a CPU while another's is writing to a disk drive.

<tangent topic="Solaris vCPU"> I use the phrase "Solaris CPUs" to refer to the view that Solaris has of CPUs. In the old days, a CPU was a CPU - one chip, one computational entity, one ALU, one FPU, etc. Now there are many factors to consider - CPU sockets, CPU cores per socket, hardware threads per core, etc. Solaris now considers "vCPUs" - virtual processors - as entities on which to schedule threads. Solaris considers each of these a vCPU:

  • x86/x64 systems: a CPU core (today, can be one to six per socket, with a maximum of 24 vCPUs per system, ignoring some exotic, custom high-scale x86 systems)
  • UltraSPARC-II, -III[+], -IV[+]: a CPU core, max of 144 in an E25K
  • SPARC64-VI, -VII: a CPU core, max of 256 in an M9000
  • SPARC CMT (SPARC-T1, -T2+): a hardware thread, maximum of 256 in a T5440
</tangent>

Separately, I realized that one part of patching is disk-intensive. Many disk-intensive workloads benefit from writing to a solid-state disk (SSD) because of the performance benefit of those devices over spinning-rust disk drives (HDD).

So finally (hurrah!) the goal of this adventure: how much performance advantage would I achieve with the combination of parallel patching and an SSD, compared to sequential patching of zones on an HDD?

He Finally Begins to Get to the Point

I took advantage of an opportunity to test both of these methods to accelerate patching. The system was a Sun Fire T2000 with two HDDs and one SSD. The system had 32 vCPUs, was not using Logical Domains, and was running Solaris 10 10/08. Solaris was installed on the first HDD. Both HDDs were 72GB drives. The SSD was a 32GB device. (Thank you, Pat!)

For some of the tests I also applied the ZPP. (Thank you, Enda!) For some of the tests I used zones that had zonepaths on the SSD; the rest 'lived' on the second HDD.

As with all good journeys, this one had some surprises. And, as with all good research reports, this one has a table with real data. And graphs later on.

To get a general feel for the different performance of an HDD vs. an SSD, I created a zone on each - using the secondary HDD - and made clones of it. Some times I made just one clone at a time, other times I made ten clones simultaneously. The iostat(1) tool showed me the following performance numbers:

(columns: r/s, w/s, kr/s, kw/s, wait, actv, svc_t, %w, %b)
clone x1 on HDD:  56227833332306.423172
clone x1 on SSD:  35379115616500016
clone x10 on HDD: 35470182327431952462599
clone x10 on SSD: 354295824133026241541034

At light load - just one clone at a time - the SSD performs better than the HDD, but at heavy load the SSD performs much much better, e.g. nine times the write throughput and 13x the write IOPS of the HDD, and the device driver and SSD still have room for more (34% busy vs. 99% busy).

Cloning a zone consists almost entirely of copying files. Patching has a higher proportion of computation, but those results gave me high hopes for patching. I wasn't disappointed. (Evidently, every good research report also includes foreshadowing.)

In addition to measuring the performance boost of the ZPP, I wanted to know if that patch would help - or hurt - a system without using its parallelization feature. (I didn't have a particular reason to expect non-parallelized improvement, but occasionally I'm an optimist. Besides, if the performance with the ZPP was different without actually using parallelization, it would skew the parallelized numbers.) So before installing the patch, I measured the length of time to apply a patch. For all of my measurements, I used patch 120543-14 - the Apache patch. At 15MB, it's not a small patch, nor is it a very large patch. (The "Baby Bear" patch, perhaps? --Ed.) It's big enough to tax the system and allow reasonable measurements, but small enough that I could expect to gather enough data to draw useful conclusions, without investing a year of time...

#define TEST while(1) {patchadd; patchrm;}

So, before applying the ZPP, and without any zones on the system, I applied the Apache patch. I measured the elapsed time because our goal is to minimize elapsed time of patch application. Then I removed the Apache patch.

Then I added a zone to the system, on the secondary HDD, and, I re-applied the Apache patch to the system, which automatically applied it to the zone as well. I removed the patch, created two more zones, and applied the same patch yet again. Finally, I compared the elapsed time of all three measurements. Patching the global zone alone took about 120 seconds. Patching with one non-global zone took about 175 seconds: 120 for the global zone and 55 for the zone. Patching three zones took about 285 seconds: 120 seconds for the global zone and 55 seconds for each of the three zones.

Theoretically, the length of time to patch each zone should be consistent. Testing that theory, I created a total of 16 zones and then applied the Apache patch. No surprises: 55 seconds per zone.
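
Those numbers suggest a simple model for sequential (non-parallel) patching on this system with this patch - a back-of-the-envelope sketch, not a general formula:

    # elapsed time = global-zone time + per-zone time * number of zones
    # for this system and this patch: t(N) = 120 + 55 * N seconds
    echo "120 + 55 * 16" | bc    # 16 zones: 1000 seconds, almost 17 minutes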

To test non-parallel performance of the ZPP, I applied it, kept the default setting of "no parallelization," and then re-did those tests. Application of the Apache patch did not change in behavior nor in elapsed time per zone, from zero to 16 zones. (However, I had a faint feeling that Solaris was beginning to question my sanity. "Get inline," I told it...)

How about the SSD - would it improve patch performance with zero or more zones? I removed the HDD zones and installed a succession of zones - zero to 16 - on the SSD and applied the Apache patch each time. The SSD did not help at all - the patch still took 55 seconds per zone. Evidently this particular patch is not I/O bound, it is CPU bound.

But applying the ZPP does not, by default, parallelize anything. To tell the patch tools that you would like some level of parallelization, e.g. "patch four zones at the same time," you must edit a specific file in the /etc/patch directory and supply a number, e.g. '4'. After you have done that, if parallel patching is possible, it will happen automatically. Multiple zones (e.g. four) will be patched at the same time by a patchadd process running in each zone. Because that patchadd is running in a zone, it will use the CPUs that the zone is assigned to use - default or otherwise. This also means that the zone's patchadd process is subject to all of the other resource controls assigned to the zone, if any.
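
As a sketch of how that might look once the ZPP ships - the file name and keyword below (/etc/patch/pdo.conf and num_proc) are my assumptions about the delivered interface, so check the patch's README before relying on them:

    # assumed configuration file and keyword for the released ZPP - verify first
    echo "num_proc=4" >> /etc/patch/pdo.conf
    # subsequent patchadd runs in the global zone would then patch
    # up to four non-global zones at the same time
    patchadd /var/tmp/120543-14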

Changing the parallelization level to 8, I re-did all of those measurements, on the HDD zones and then the SSD zones. The performance impact was obvious right away. As the graph to the right shows, the elapsed time to patch the system with a specific number of zones was less with the ZPP. ('P' indicates the level of parallelization: 1, 8 or 16 zones patched simultaneously. The blue line shows no parallelization, the red line shows the patching of eight zones simultaneously.) Turning the numbers around, the patching "speed" improved by a factor of three.

How much more could I squeeze out of that configuration? I increased the level of parallelization to 16 and re-did everything. No additional performance, as the graph shows. Maybe it's a coincidence, but a T2000 has eight CPU cores - maybe that's a limiting factor.

At this point the SSD was feeling neglected, so I re-did all of the above with zones on the SSD. This graph shows the results: little benefit at low zone count, but significant improvement at higher zone counts - when the HDD was the bottleneck. Combining ZPP and SSD resulted in patching throughput improvement of 5x with 16 zones.

That seems like magic! What's the trick? A few paragraphs back, I mentioned the 'trick': using all of the scalability of Solaris and, in this case, CMT systems. Patching a system without ZPP - especially one without a running application - leaves plenty of throughput performance "on the table." Patching multiple zones simultaneously uses CPU cycles - presumably cycles that would have been idle. And it uses I/O channel and disk bandwidth - also, hopefully, available bandwidth. Essentially, ZPP is shortening the elapsed time by using more CPU cycles and I/O bandwidth now instead of using them later.

So the main caution is "make sure there is sufficient compute and I/O capacity to patch multiple zones at the same time."

But whenever multiple apps are running on the same system at the same time, the operating system must perform extra tasks to enable them to run safely. It doesn't matter if the "app" is a database server or 'patchadd.' So is ZPP using any "extra" CPU, i.e. is there any CPU overhead?

Along the way, I collected basic CPU statistics, including system and user time. The next graph shows that the amount of total CPU time (user+sys) increased slightly. The overhead was less than 10% for up to 8 zones. Another coincidence? I don't know, but at that point the overhead was roughly 1% per zone. The overhead increased faster beyond P=8, indicating that, perhaps, a good rule of thumb is P="number of unused cores." Of course, if the system is using Dynamic Resource Pools or dedicated-cpus, the rule might need to be changed accordingly. TANSTAAFL.

Conclusion

All good tales need a conclusion. The final graph shows the speedup - the increase in patching throughput - based on the number of zones, level of parallelization, and device type. Specific conclusions are:
  1. Parallel patching zones significantly reduces elapsed time if there are sufficient compute and I/O resources available.
  2. Solid-state disks significantly improve patching throughput for high-zone counts and similar levels of parallelization.
  3. The amount of work accomplished does not decrease - it's merely compressed into a shorter period of elapsed time.
  4. If patching while applications are running:
    • plan carefully in order to avoid impacting the responsiveness of your applications. Choose a level of parallelization commensurate with the amount of available compute capacity
    • use appropriate resource controls to maintain desired response times for your applications.

Getting the ZPP requires waiting until mid-year. Getting SSDs is easy - they're available for the Sun 7210 and 7410 Unified Storage Systems and for Sun systems.


Wednesday Mar 04, 2009

AMD Names Fab Unit

The newest major chip manufacturing corporation finally has a name: GlobalFoundries. The next step is to build its new fab.

AMD has been slowly spinning off its chip fab business for the past few years, and in the process is building a new US$4.2 billion fab in the northeast USA - near Albany, New York. Cash-strapped AMD was only able to do this via a joint venture with an investment fund headquartered in Abu Dhabi.

More details are available at eWeek, PCWorld, and the major newspaper in Albany, the Times Union.

Tuesday Feb 10, 2009

Zones to the Rescue

Recently, Thomson Reuters "demonstrated that RMDS [Reuters Market Data System software] performs better in a virtualized environment with Solaris Containers than it does with a number of individual Sun server machines."

This enabled Thomson Reuters to break the "million-messages-per-second barrier."

The performance improvement is probably due to the extremely high bandwidth, low latency characteristics of inter-Container network communications. Because all inter-Container network traffic is accomplished with memory transfers - using default settings - packets 'move' at computer memory speeds, which are much better than common 100Mbps or 1Gbps ethernet bandwidth. Further, that network performance is much more consistent without extra hardware - switches and routers - that can contribute to latency.

Articles can be found at: http://finance.yahoo.com/news/Sun-Microsystems-and-Thomson-bw-14306924.html

Thursday Jan 29, 2009

Group of Zones - Herd? Flock? Pod? Implausibility? Cluster!

Yesterday, Solaris Cluster (aka "Sun Cluster") 3.2 1/09 was released. This release has two new features which directly enhance support of Solaris Zones in Solaris Clusters.

The most significant new functionality is a feature called "Zone Clusters" which, at this point, 'merely' :-) provides support for Oracle RAC nodes in Zones. In other words, you can create an Oracle RAC cluster, using individual zones in a Solaris Cluster as RAC nodes.

Further, because a Solaris Cluster can contain multiple Zone Clusters, it can contain multiple Oracle RAC clusters. For details about configuring a zone cluster, see "Configuring a Zone Cluster" in the Sun Cluster Software Installation Guide and the clzonecluster(1CL) man page.

The second new feature is support for exclusive-IP zones. Note that this support only applies to failover data services, not to scalable data services or to zone clusters.

Friday Jan 09, 2009

Zones and Solaris Security


An under-appreciated aspect of the isolation inherent in Solaris Zones (aka Solaris Containers) is their ability to use standard Solaris security features to enhance security of consolidated workloads. These features can be used alone or in combination to create an arbitrarily strong level of security. This includes DoD-strength security using Solaris Trusted Extensions - which use Solaris Zones to provide labeled, multi-level data classification. Trusted Extensions achieved one of the highest possible Common Criteria independent security certifications.

To shine some light on the topic of Zones and security, Glenn Brunette and I recently co-authored a new Sun BluePrint with an overly long name :-) - "Understanding the Security Capabilities of Solaris Zones Software." You can find it at http://www.sun.com/blueprints.

Tuesday Jan 06, 2009

Equus: Sine of the Horse

A completely random thought: is there a name for the motion of a carousel horse? Either of these would be informative and impish:

  1. Donusoidal
  2. Torusoidal

[For you overly serious types :-) - Yes, I know that both of those answers are incorrect because 'donut' and 'torus' refer to 3-dimensional surfaces. My question and answers are not meant to be geometrically correct. However, if there really is a term describing the combination of sinusoidal and circular motions of a carousel horse, I would like to know what it is.]

Monday Dec 22, 2008

Ice Storm 2008

On December 12, the northeast USA had a severe ice storm, which left 0.5" - 1" (1-2.5 cm) of ice on everything. Trees, heavy with ice, bent and broke, snapping wires and cutting electricity to over 200,000 homes and businesses. My house was one of the unfortunate ones.

However, with every challenge there are opportunities - in this case, photographic ones. So I fired up the DSLR and started snapping - pictures, not wires.

One pine tree in my backyard was so laden with ice that its tip - normally 25 feet in the air - was dangling in the pond. It looks like the pine tree was thirsty and is taking a drink. (Click on the image to see a larger image.)

Later, the surface of the pond froze, trapping the tip in the pond. Fortunately - for the tree - the surface melted two days later, allowing it to shake itself free.

A birch tree in the front yard performed a similar feat, but it looked more like it was bowing. I doubt it was trying to lick the snow - it knows better.

I like the loss of background clutter that night - and flash! - brings to shots like that one.

Another birch was bent, and its upper half reached, like fingers, through the branches of a Shadblow tree, itself coated in ice.

In another night shot, a large pine seems to loom gloomily over a 6-foot blue spruce. Normally its arms jut out parallel to the ground, but the ice pinned them to its sides.

But by far the worst damage nearby was a 40-to-45-foot pine tree in the backyard. For years, it has been leaning out over the pond. No more - the weight of the ice snapped it in two, about eight feet up the trunk. In the first image, only the remaining trunk is obvious...

...but in the next picture, it's clear that the tree decided to "take a dip" in the pond. To give you some scale, the pond is 40 feet wide. The tree reached all the way across and stripped some branches off of a tree on the far side of the pond.

As someone mentioned to me - the ice storm was "just Mother Nature doing some pruning."

P.S. Nighttime brought another interesting view: moonlight refracting through the ice on tree branches.

Wednesday Dec 10, 2008

OpenSolaris 2008.11 Has Landed


OpenSolaris 2008.11 was officially launched today. Highlights of the announcement:
  • Toshiba will sell Toshiba laptops with OpenSolaris 2008.11 pre-installed
  • tailored for software development and student use
  • new features include "Time Slider" - literally a GUI slider allowing easy access to ZFS snapshots... Slide your way back to last week. :-)
  • world-class performance, with several world record benchmark results.
For more, including information on the live web chat at 12:00 EST, see http://www.opensolaris.com/get/index.jsp .
About

Jeff Victor writes this blog to help you understand Oracle's Solaris and virtualization technologies.

The views expressed on this blog are my own and do not necessarily reflect the views of Oracle.
