Patching Zones Goes Zoom!

Accelerated Patching of Zoned Systems

Introduction

If you have patched a system with many zones, you have learned that it takes longer than patching a system without zones. The more zones there are, the longer it takes. In some cases, this can stretch application downtime to an unacceptable duration.

Fortunately, there are a few methods which can be used to reduce application downtime. This document mentions several of them, and then describes the performance benefits of two of them. But the bulk of this rather bulky entry is the description and results of my newest computer... "experiments."

Executive Summary, for the Attention-Span Challenged

It's important to distinguish between application downtime, service downtime, zone downtime, and platform downtime. 'Service' is the service being provided by an application or set of applications. To users, that's the most important measure. As long as they can access the service, they're happy. (Doesn't take much, does it?)

If a service depends on the proper operation of each of its component applications, planned or unplanned downtime of one application will result in downtime of the service. Some software, e.g. web server software, can be deployed in multiple, load-balanced systems so that the service will not experience downtime even if one of the software instances is down.

Applying an operating system patch may require service downtime, application downtime, zone downtime or platform downtime, depending on the patch and the entity being patched. Because in many cases patch application will require application downtime, the goal of the methods mentioned below, especially parallel patching of zones, is to minimize elapsed downtime to achieve a patched, running system.

Just Enough Choices to Confuse

Methods that people use - or will soon use - to improve the patching experience of zoned systems include:

  • Live Upgrade allows you to copy the existing Solaris instance into an "alternate boot environment" (ABE), patch or upgrade the ABE, and then re-boot into it. Downtime of the service, application, or zone is limited to the amount of time it takes to re-boot the system. Further, if there's a problem, you can easily re-boot back into the original boot environment. Bob Netherton describes this in detail on his weblog, and a minimal sketch appears after this list. Maybe the software should have a secondary name: Live Patch.

  • You can detach all of the zones on the system, patch the system (which then skips the detached zones) and then re-attach the zones using the "update-on-attach" method, which is also described here and sketched after this list. This method can be used to reduce service downtime and application downtime, but not as much as the Live Upgrade / Live Patch method. Each zone (and its application(s)) will be down for the length of time it takes to patch the system - and perhaps reboot it - plus the time to update and attach the zone.

  • You can apply the patch to another Solaris 10 system with contemporary or newer patches, and migrate the zones to that system. Optionally, you can patch the original system and migrate the zones back to it. Downtime of the zones is less than the previous solution because the new system is already patched and rebooted.

  • You can put the zones' zonepaths on very fast storage (i.e. install the zones onto it), e.g. an SSD or a storage array with battery-backed DRAM or NVRAM. The use of SSDs is described below. This method can be used in conjunction with any of the other solutions. It will speed up patching because patching is I/O-intensive. However, this type of storage device is more expensive per MB, so this solution may not make fiscal sense in many situations.

  • Sun has developed an enhancement to the Solaris patching tools which is intended to significantly decrease the elapsed time of patching. It is currently being tested at a small number of sites. After it's released you can get the Zones Parallel Patching patch, described below. This solution decreases the elapsed time to patch a system. It can be combined with some of the solutions above, with varying benefits. For example, with Live Upgrade, parallel patching reduces the time to patch the ABE, but doesn't reduce service downtime. Also, ZPP offers little benefit for the detach/update-on-attach method. However, as a stand-alone method, ZPP offers a significant reduction in elapsed time without changing your current patching process. ZPP was mentioned by Gerry Haskins, Director of Software Patch Services.
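
For reference, here is a minimal Live Upgrade sketch; the boot environment name, spare disk slice, and patch location are illustrative, not taken from the setup described below:

# lucreate -n patchedBE -m /:/dev/dsk/c1t1d0s0:ufs     # copy the running boot environment to a spare slice
# luupgrade -t -n patchedBE -s /var/tmp 120543-14      # patch the ABE while the current BE keeps running
# luactivate patchedBE                                 # make the patched BE the one to boot next
# init 6                                               # re-boot into it (use init or shutdown, not reboot)

And a minimal sketch of the detach / update-on-attach method for one zone; the zone name is illustrative, and the same steps are repeated (or scripted) for each zone:

# zoneadm -z myzone halt
# zoneadm -z myzone detach
# patchadd /var/tmp/120543-14      # the detached zone is skipped
# zoneadm -z myzone attach -u      # update the zone to match the newly patched global zone
# zoneadm -z myzone boot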

Disclaimer 1: the "Zones Parallel Patching" patch ("ZPP") is still in testing and has not yet been released. It is expected to be released mid-CY2009. That may change. Further, the specific code changes may change, which may change the results described below.

Disclaimer 2: the experiment described below, and its results, are specific to one type of system (Sun Fire T2000) and one patch (120543-14 - "the Apache patch"). Other hardware and other patches will produce different results.

Yet More Background

I wanted to better understand two methods of accelerating the patching of zoned systems, especially when used in combination. Currently, a patch applied to the global zone will normally be applied to all non-global zones, one zone at a time. This is a conservative approach to the task of patching multiple zones, but doesn't take full advantage of the multi-tasking abilities of Solaris.

I learned that a proposed patch was created that enables the system administrator to apply a patch in the global zone which patches the global and then patches multiple zones at the same time. The parallelism (i.e. "the number of zones that are patched at one time") can be chosen before the patch is applied. If there are multiple "Solaris CPUs" in the system, multiple CPUs can be performing computational steps at the same time. Even if there aren't many CPUs, one zone's patching process can be using a CPU while another's is writing to a disk drive.

<tangent topic="Solaris vCPU"> I use the phrase "Solaris CPUs" to refer to the view that Solaris has of CPUs. In the old days, a CPU was a CPU - one chip, one computational entity, one ALU, one FPU, etc. Now there are many factors to consider - CPU sockets, CPU cores per socket, hardware threads per core, etc. Solaris now considers "vCPUs" - virtual processors - as entities on which to schedule processes. Solaris considers each of these a vCPU:

  • x86/x64 systems: a CPU core (today, can be one to six per socket, with a maximum of 24 vCPUs per system, ignoring some exotic, custom high-scale x86 systems)
  • UltraSPARC-II, -III[+], -IV[+]: a CPU core, max of 144 in an E25K
  • SPARC64-VI, -VII: a CPU core: max of 256 in an M9000
  • SPARC CMT (SPARC-T1, -T2+): a hardware thread, maximum of 256 in a T5440
</tangent>
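
You can see how many vCPUs Solaris thinks a machine has with psrinfo. On the 32-vCPU T2000 used below, for example:

# psrinfo | wc -l      # one line per vCPU that Solaris can schedule onto
      32
# psrinfo -pv          # shows the physical processors and the vCPUs within each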

Separately, I realized that one part of patching is disk-intensive. Many disk-intensive workloads benefit from writing to a solid-state disk (SSD) because of the performance benefit of those devices over spinning-rust disk drives (HDD).

So finally (hurrah!) the goal of this adventure: how much performance advantage would I achieve with the combination of parallel patching and an SSD, compared to sequential patching of zones on an HDD?

He Finally Begins to Get to the Point

I took advantage of an opportunity to test both of these methods to accelerate patching. The system was a Sun Fire T2000 with two HDDs and one SSD. The system had 32 vCPUs, was not using Logical Domains, and was running Solaris 10 10/08. Solaris was installed on the first HDD. Both HDDs were 72GB drives. The SSD was a 32GB device. (Thank you, Pat!)

For some of the tests I also applied the ZPP. (Thank you, Enda!) For some of the tests I used zones that had zonepaths on the SSD; the rest 'lived' on the second HDD.

As with all good journeys, this one had some surprises. And, as with all good research reports, this one has a table with real data. And graphs later on.

To get a general feel for the different performance of an HDD vs. an SSD, I created a zone on each - using the secondary HDD - and made clones of it. Sometimes I made just one clone at a time; other times I made ten clones simultaneously. The iostat(1) tool showed me how the two devices compared:
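
(Before the numbers, a rough sketch of the cloning itself; zone names, zonepaths, and the iostat interval are illustrative, not the exact commands used:)

# zonecfg -z hddzone export -f /tmp/z.cfg      # copy the source zone's configuration
  (edit the zonepath in /tmp/z.cfg for the new zone)
# zonecfg -z hddclone1 -f /tmp/z.cfg           # configure the new zone from it
# zoneadm -z hddclone1 clone hddzone           # the source zone must be halted
# iostat -x 10                                 # watch the disks from another window

For the ten-at-a-time case, ten clones - each configured beforehand with its own zonepath - can simply be started at once:

# for i in 1 2 3 4 5 6 7 8 9 10; do zoneadm -z hddclone$i clone hddzone & done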

[Table: iostat(1) extended device statistics - r/s, w/s, kr/s, kw/s, wait, actv, svc_t, %w, %b - for four runs: clone x1 on HDD, clone x1 on SSD, clone x10 on HDD, and clone x10 on SSD.]

At light load - just one clone at a time - the SSD performs better than the HDD, but at heavy load the SSD performs much, much better, e.g. nine times the write throughput and 13 times the write IOPS of the HDD - and the device driver and SSD still have room for more (34% busy vs. 99% busy).

Cloning a zone consists almost entirely of copying files. Patching has a higher proportion of computation, but those results gave me high hopes for patching. I wasn't disappointed. (Evidently, every good research report also includes foreshadowing.)

In addition to measuring the performance boost of the ZPP, I wanted to know if that patch would help - or hurt - a system without using its parallelization feature. (I didn't have a particular reason to expect non-parallelized improvement, but occasionally I'm an optimist. Besides, if the performance with the ZPP was different without actually using parallelization, it would skew the parallelized numbers.) So before installing the patch, I measured the length of time to apply a patch. For all of my measurements, I used patch 120543-14 - the Apache patch. At 15MB, it's not a small patch, nor is it a very large patch. (The "Baby Bear" patch, perhaps? --Ed.) It's big enough to tax the system and allow reasonable measurements, but small enough that I could expect to gather enough data to draw useful conclusions, without investing a year of time...

#define TEST while(1) {patchadd; patchrm;}

So, before applying the ZPP, and without any zones on the system, I applied the Apache patch. I measured the elapsed time because our goal is to minimize elapsed time of patch application. Then I removed the Apache patch.
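
Nothing fancy is needed for that measurement; something like the following (the patch location is illustrative) captures the elapsed time of both the add and the remove:

# time patchadd /var/tmp/120543-14
# time patchrm 120543-14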

Then I added a zone to the system, on the secondary HDD, and I re-applied the Apache patch to the system, which automatically applied it to the zone as well. I removed the patch, created two more zones, and applied the same patch yet again. Finally, I compared the elapsed time of all three measurements. Patching the global zone alone took about 120 seconds. Patching with one non-global zone took about 175 seconds: 120 for the global zone and 55 for the zone. Patching with three zones took about 285 seconds: 120 seconds for the global zone and 55 seconds for each of the three zones.

Theoretically, the length of time to patch each zone should be consistent. Testing that theory, I created a total of 16 zones and then applied the Apache patch. No surprises: 55 seconds per zone.

To test non-parallel performance of the ZPP, I applied it, kept the default setting of "no parallelization," and then re-did those tests. Applying the Apache patch changed neither the behavior nor the elapsed time per zone, from zero to 16 zones. (However, I had a faint feeling that Solaris was beginning to question my sanity. "Get inline," I told it...)

How about the SSD - would it improve patch performance with zero or more zones? I removed the HDD zones, installed a succession of zones - zero to 16 - on the SSD, and applied the Apache patch each time. The SSD did not help at all - the patch still took 55 seconds per zone. Evidently this particular patch is not I/O-bound; it is CPU-bound.

But applying the ZPP does not, by default, parallelize anything. To tell the patch tools that you would like some level of parallelization, e.g. "patch four zones at the same time," you must edit a specific file in the /etc/patch directory and supply a number, e.g. '4'. After you have done that, if parallel patching is possible, it will happen automatically. Multiple zones (e.g. four) will be patched at the same time by a patchadd process running in each zone. Because that patchadd is running in a zone, it will use the CPUs that the zone is assigned to use - default or otherwise. This also means that the zone's patchadd process is subject to all of the other resource controls assigned to the zone, if any.
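
For illustration only - assuming the configuration file and keyword that the patch utilities eventually shipped with (/etc/patch/pdo.conf and num_proc), which may differ from the pre-release version tested here - requesting "patch four zones at a time" would look something like this:

# vi /etc/patch/pdo.conf          # change the setting to: num_proc=4
# patchadd /var/tmp/120543-14     # the global zone is patched first, then up to four zones at a time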

Changing the parallelization level to 8, I re-did all of those measurements, on the HDD zones and then the SSD zones. The performance impact was obvious right away. As the graph to the right shows, the elapsed time to patch the system with a specific number of zones was less with the ZPP. ('P' indicates the level of parallelization: 1, 8 or 16 zones patched simultaneously. The blue line shows no parallelization, the red line shows the patching of eight zones simultaneously.) Turning the numbers around, the patching "speed" improved by a factor of three.

How much more could I squeeze out of that configuration? I increased the level of parallelization to 16 and re-did everything. No additional performance, as the graph shows. Maybe it's a coincidence, but a T2000 has eight CPU cores - maybe that's a limiting factor.

At this point the SSD was feeling neglected, so I re-did all of the above with zones on the SSD. This graph shows the results: little benefit at low zone count, but significant improvement at higher zone counts - when the HDD was the bottleneck. Combining ZPP and SSD resulted in patching throughput improvement of 5x with 16 zones.

That seems like magic! What's the trick? A few paragraphs back, I mentioned the 'trick': using all of the scalability of Solaris and, in this case, CMT systems. Patching a system without ZPP - especially one without a running application - leaves plenty of throughput performance "on the table." Patching multiple zones simultaneously uses CPU cycles - presumably cycles that would have been idle. And it uses I/O channel and disk bandwidth - also, hopefully, available bandwidth. Essentially, ZPP is shortening the elapsed time by using more CPU cycles and I/O bandwidth now instead of using them later.

So the main caution is "make sure there is sufficient compute and I/O capacity to patch multiple zones at the same time."

But whenever multiple apps are running on the same system at the same time, the operating system must perform extra tasks to enable them to run safely. It doesn't matter if the "app" is a database server or 'patchadd.' So is ZPP using any "extra" CPU, i.e. is there any CPU overhead?

Along the way, I collected basic CPU statistics, including system and user time. The next graph shows that the amount of total CPU time (user+sys) increased slightly. The overhead was less than 10% for up to 8 zones. Another coincidence? I don't know, but at that point the overhead was roughly 1% per zone. The overhead increased faster beyond P=8, indicating that, perhaps, a good rule of thumb is P="number of unused cores." Of course, if the system is using Dynamic Resource Pools or dedicated-cpus, the rule might need to be changed accordingly. TANSTAAFL.
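
One simple way to collect comparable numbers - not necessarily the exact method used here - is to let vmstat log CPU usage while the patch runs (the interval and file name are illustrative):

# vmstat 10 > /var/tmp/cpu-during-patch.txt &
# time patchadd /var/tmp/120543-14
# kill %1                         # stop vmstat; its 'us' and 'sy' columns are user and system CPU time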

Conclusion

All good tales need a conclusion. The final graph shows the speedup - the increase in patching throughput - based on the number of zones, level of parallelization, and device type. Specific conclusions are:
  1. Parallel patching zones significantly reduces elapsed time if there are sufficient compute and I/O resources available.
  2. Solid-state disks significantly improve patching throughput for high-zone counts and similar levels of parallelization.
  3. The amount of work accomplished does not decrease - it's merely compressed into a shorter period of elapsed time.
  4. If patching while applications are running:
    • plan carefully in order to avoid impacting the responsiveness of your applications; choose a level of parallelization commensurate with the amount of available compute capacity, and
    • use appropriate resource controls to maintain desired response times for your applications.

Getting the ZPP requires waiting until mid-year. Getting SSDs is easy - they're available for the Sun 7210 and 7410 Unified Storage Systems and for Sun systems.

Comments:

In option 2 right at the top, is it possible to (update) attach all your detached zones in parallel and fake a sort of poor man's ZPP?

Posted by Mike Ramchand on April 01, 2009 at 07:16 AM EDT #

Yes, you can attach all of the detached zones at once. The only drawback to that, of which I am aware, is some loss of control. For example, with ZPP you can apply just one patch, with parallelization, during a short window of opportunity, and apply another patch later.

However, I don't have any experience with the detach-patch-attach method. I have heard that it works. I don't know if parallel reattachment - as you suggest - is recommended.

Posted by Jeffrey Victor on April 01, 2009 at 07:42 AM EDT #

I'm curious about the zones that were being patched in the test. Were they whole root, or sparse root?

Wouldn't sparse root be faster than whole root, since only non-inherited files/file systems get updated?

Posted by Tim on April 02, 2009 at 12:06 AM EDT #

The zones in this test were all sparse-root. You are correct: patching a sparse-root zone should be faster than patching a whole-root zone.

Posted by JeffV on April 02, 2009 at 12:59 AM EDT #

It is important to understand update on attach - it "updates" only those packages that have the SUNW_PKG_ALLZONES variable set to true in /var/sadm/pkg/<pkg>/pkginfo, those that are inherited from the global zone, and maybe a few others. Consider the following, which shows that firefox does not get patched by update on attach.

# zonecfg -z template-1 info inherit-pkg-dir
<no output>
# zoneadm -z template-1 detach
<no output>
# patchadd 125539-05
Validating patches...
...
Done!
# zoneadm -z template-1 attach -u
<no output>
# zoneadm -z template-1 boot -s
<no output>
# showrev -p | grep SUNWfirefox
Patch: 125539-04 Obsoletes: Requires: Incompatibles: Packages: SUNWfirefox, SUNWfirefox-devel
Patch: 125539-05 Obsoletes: Requires: Incompatibles: Packages: SUNWfirefox, SUNWfirefox-devel
# zlogin template-1 showrev -p | grep SUNWfirefox
Patch: 125539-04 Obsoletes: Requires: Incompatibles: Packages: SUNWfirefox, SUNWfirefox-devel

After using update on attach, you will need to apply the patch set in each zone as well. This pass will apply a small minority of the patches (especially in a sparse zone) and can be done in parallel by executing patchadd (or install_cluster) from within each zone.

Posted by Mike Gerdts on April 02, 2009 at 01:26 AM EDT #

Mike is right: update-on-attach has to be evaluated carefully to be sure it satisfies your expectations. Also look closely at the available backout options for single patches out of whole patch clusters (difficult!) if you experience a problem in a zone after the upgrade.

The upcoming enhancement to patchadd - being able to install a patch into the global zone and then the same patch into a configured number of zones in parallel - is a huge gain in speed.
The old style was to install the patch in the global zone and then in every single non-global zone serially. With parallel patching, the limiting factor in many cases starts to be disk-I/O saturation (think 10 zone roots on one local disk versus zone roots on write-cached SAN storage).
Usually customers install whole patch clusters anyway, and patchadd with the parallel zones patch option would go through that cluster via the commonly used install_all_patches scripts. So there is no difference in handling here, except that the non-global zones will no longer be patched serially. Example: if your global zone with multiple patches takes 1h to patch and every non-global zone takes 1h (serially), then with e.g. 5 zones and parallel patching you may be finished in:
1h + 1.25h = 2.25h (parallel, zones are competing)
compared to
1h + 5h = 6h (old style)
That's really fast. (The numbers are examples.)

Posted by Thomas Wagner on May 19, 2009 at 12:26 AM EDT #

About

Jeff Victor writes this blog to help you understand Oracle's Solaris and virtualization technologies.

The views expressed on this blog are my own and do not necessarily reflect the views of Oracle.
