Cloning Isn't Just for Sheep Any More



While it may not have the social implications or the headline appeal of the now-famous Dolly the sheep, the zone cloning feature introduced with Solaris 10 11/06 is worth further investigation. Before we get to cloning, though, it is probably a good idea to review basic zone creation and installation.

Building Zones the Old Fashioned Way

The first step in the creation of a zone is establishing its configuration. This is done by conversing with our friend, zonecfg(1M), who handles all the details of writing the configuration XML file in /etc/zones and updating the zones index file /etc/zones/index.

Such a conversation might go something like....
# zonecfg -z zone1
zone1: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:zone1> create
zonecfg:zone1> set zonepath=/zones/zone1
zonecfg:zone1> add inherit-pkg-dir; set dir=/opt; end
zonecfg:zone1> add net; set physical=iprb0; set address=192.168.100.101/24; end
zonecfg:zone1> add fs; set dir=/export; set special=/export; set options=[rw,nosuid]; set type=lofs; end
zonecfg:zone1> verify
zonecfg:zone1> commit
zonecfg:zone1> exit
#


If you grok zones then you recognize this as a typical sparse root zone. If you have attended one of my zones best practices workshops then you will also notice that I'm following my own advice and making /opt an inherited package directory.

A quick check to make sure all is well.
# zoneadm list -cv
  ID NAME             STATUS         PATH                           BRAND     
   0 global           running        /                              native    
   - zone1            configured     /zones/zone1                   native    


All is as it should be (which is always the case for a how-to example).

The next step is a rather magical affair where the zoneroot is populated. This process is initiated by uttering the following sequence:
# zoneadm -z zone1 install
Once spoken, fantastic things start happening behind the scenes - all of them courtesy of our good friend Live Upgrade. The actual sequence of events is something like this:
  1. Create the new zoneroot if it doesn't already exist. If it does exist, make sure the permissions are set to 700 and it is owned by uid 0, gid 0.

  2. Mount all of the inherit-pkg-dir and file systems listed in the zone configuration file.

  3. Create a candidate list of files for the new zoneroot by looking at the global zone contents file /var/sadm/install/contents.

    On my daily-driver laptop, this totals approximately 2 million files.

  4. Pick from this list all files that should be delivered to the new zoneroot by removing all files from packages that are marked as global zone only (SUNW_PKG_THISZONE is set to true).

    We're still over 2 million files, folks!

  5. From the remaining list of files, remove all of those that will be delivered via inherit-pkg-dir directories.

    This is why I like inherit-pkg-dir. We are now down to about 2,300 files. If not for inherit-pkg-dir I would be hitting my boss up for a lot more storage.

  6. Copy all of these files from the global zone into the new zoneroot, replacing commonly edited configuration files with the versions originally delivered with the package (e.g., /etc/passwd).

  7. Once the files are in place there is one more step to perform. Some of the packages have preinstall and postinstall scripts that might do something important. These need to be run, even if all of the files are delivered via inherited directories. So in package dependency order, all of the packages identified as applicable to the new zone (SUNW_PKG_THISZONE=false) are installed sequentially.

  8. Update the zones index file /etc/zones/index marking the new zone as installed.

  9. Unmount all of the file systems mounted in step 2.
And we are done with the first part. The amount of time this takes can be estimated as O(sparseness, number of packages, disk speed). To speed up this process I would have to increase the degree of sparseness, which is pretty hard to do once /opt has been added. I could also decrease the number of packages in the global zone - this has some interesting possibilities. I could also get faster disks, but that isn't always practical, especially with a small server configuration or a home system. I may be talking myself into a minimal global zone installation with full root zones - but that sounds like a topic for another day.

Enough of the theory - how long did this really take?

On a relatively clean Nevada (aka OpenSolaris Community Edition) install it was almost 10 minutes. The output is below and I have annotated it with the installation steps outlined above.
# time zoneadm -z zone1 install
[1] [2] Preparing to install zone <zone1>.
[3] [4] [5] Creating list of files to copy from the global zone.
[6] Copying <1934> files to the zone.
[7] Initializing zone product registry.
Determining zone package initialization order.
Preparing to initialize <1290> packages on the zone.
Initialized <1290> packages on zone.                                 
[8] [9] Zone <zone1> is initialized.
Installation of <1> packages was skipped.
Installation of these packages generated warnings: 
The file  contains a log of the zone installation.

real    9m38.951s
user    1m26.582s
sys     2m51.252s

But we're still not done, are we? We still have first boot processing, which includes the initial population of the SMF repository (which is O(number of services, speed of disks)) and system identification (which is either constant if a sysidcfg file is supplied or O(Bob's increasingly bad typing rate) if we choose an interactive dialog).

For this example the first boot process took about 3 minutes to complete.
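When a hands-off first boot is the goal, a sysidcfg file dropped into the new zone before it boots (at <zonepath>/root/etc/sysidcfg) answers the identification questions. A minimal sketch - every value below is illustrative and should be replaced with your own:

# cat /zones/zone1/root/etc/sysidcfg
system_locale=C
terminal=xterm
network_interface=primary { hostname=zone1 }
name_service=NONE
security_policy=NONE
timezone=US/Central
root_password=<encrypted-hash>

With a file like that in place, the zone comes straight up to a console login prompt with no interactive dialog.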

We now have a pristine zone, ready for work. But there is more to do, isn't there? We have to install some software, or at least configure software that is already present. In fact, these customizations might be more complicated than the zone installation process itself. If I had invested in developing automation scripts or was using some form of advanced provisioning technology this might not be a big deal. If I'm doing this manually then it may be quite a bit of work - and work that I don't want to repeat with regularity. In other words: I'm not likely to use lots of zones, and I don't particularly look forward to OS updates.

Let's look at this a bit more and see if we can make this any easier.

This example comes from my (about to be posted) Zones workshop. In our new non-global zone we will replace the Solaris version of Webmin with the community release from webmin.com.

A quick pkgchk(1M) of SUNWwebminu shows that its contents are in /usr/sfw, and SUNWwebminr deposits its payload in /etc/webmin along with an SMF manifest in /var/svc/manifest/application/management. Performing the same check on the community edition of Webmin shows that it will install in /etc/webmin and /opt/webmin. The clash on /etc/webmin indicates that the two cannot easily co-exist, but complete replacement is possible (all inherit-pkg-dir destinations are contained in a single directory).
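A check like that can be reproduced with pkgchk(1M) in listing mode; the grep pattern (matching the Pathname: lines of the output) and the spool location of the community package are illustrative:

# pkgchk -l SUNWwebminu | grep Pathname
# pkgchk -l SUNWwebminr | grep Pathname
# pkgchk -l -d /var/tmp/webmin-1.320.pkg WSwebmin | grep Pathname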

So begin by removing the Solaris version of Webmin. This is all done in our new zone.
# zonename
zone1

# pkgrm SUNWwebminr SUNWwebminu



pkgrm: ERROR: unable to remove 
/usr/sfw/bin 
/usr/sfw 
/usr 
## Updating system information.

Removal of <SUNWwebminu> partially failed.

At this point the root package SUNWwebminr is completely gone and SUNWwebminu is marked as partially installed. One more pkgrm(1M) and it will be gone, at least as far as our package contents are concerned. The bits in /usr/sfw are still there, but without the configuration files in /etc/webmin they are just that: bits in a directory.
# pkgrm SUNWwebminu

The following package is currently installed:
   SUNWwebminu  Webmin - Web-Based System Administration (usr)
                (i386) 11.11.0,REV=2007.01.23.02.15

Do you want to remove this package? [y,n,?,q] y

## Removing installed package instance <SUNWwebminu>
(A previous attempt may have been unsuccessful.)
## Verifying package <SUNWwebminu> dependencies in global zone
## Processing package information.
## Removing pathnames in class 
## Updating system information.

Removal of <SUNWwebminu> was successful.

Now to install the new Webmin package. While you weren't looking I put the package in /var/tmp. But there are some things to do before we can proceed. Remember, the package wants to write into /opt/webmin, but /opt is read-only. We can do one of two things: mount a writable file system (lofs, local disk, or NFS) onto /opt/webmin in our new zoneroot, or create a symbolic link for /opt/webmin that points somewhere writable. The link is much less confusing, so let's go that route this time.

In the global zone do something like this:
# ln -s /local/webmin /opt/webmin
# mkdir -p /zones/zone1/root/local/webmin
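For comparison, the lofs alternative would live in the zone configuration instead. A sketch, assuming a writable global zone directory /local/zone1-webmin and an empty /opt/webmin mount point created in the global zone's /opt:

# zonecfg -z zone1
zonecfg:zone1> add fs; set dir=/opt/webmin; set special=/local/zone1-webmin; set type=lofs; set options=[rw,nosuid]; end
zonecfg:zone1> commit
zonecfg:zone1> exit

The zone would need a reboot (or a manual mount) before the new fs entry takes effect, which is another reason the symbolic link is less fuss here.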
Now we are ready to proceed. In zone zone1, do the following:
# pkgadd -d /var/tmp/webmin-1.320.pkg

The following packages are available:
  1  WSwebmin     Webmin - Web-based system administration
                  (all) 1.320

Select package(s) you wish to process (or 'all' to process
all packages). (default: all) [?,??,q]: 



Webmin has been installed and started successfully. Use your web
browser to go to

  http://zone1:10000/

and login with the name and password you entered previously.


Installation of <WSwebmin> was successful.
Now we have a nicely customized non-global zone with one application ready to go. It wasn't all that bad, but there were a few manual steps. Multiply this by 20 or so for all of the other applications and configuration steps your system standards require, and then by 20 or so for the number of zones you want to provision, and it suddenly looks like a tremendous amount of work.

Until Solaris 10 11/06.

Send in the Clones: Solaris Zone Cloning

Zone cloning is a new feature that bypasses all of the steps in the zone installation process, replacing them with a copy of the source zoneroot followed by a sys-unconfig(1M). Of course this makes perfect sense - if you duplicate the installation process you should get the exact same results (a wise science teacher taught me that a long time ago). So why not shortcut the process: copy the zoneroot, run sys-unconfig(1M), fix up the zones index file, and you are done.

But it gets better than that. If we are copying the zoneroot then any customization performed on that zoneroot will be preserved. This includes the SMF repository. Not only do we skip the initial import, we also preserve any customizations, such as service-related security hardening. Our new cloned zone will also have the community edition of Webmin instead of the one in Solaris. And it's configured, enabled, and will start automatically when the new zone boots - without requiring me to do anything else. Now that's cool.

Let's see how all this works.

Step 1 - create a new zone configuration using our clone source as a template. We need to change the zonepath and IP address. In more complex configurations other attributes might need to be changed, but for this simple example this is all that is required.
# zonecfg -z zone2
zone2: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:zone2> create -t zone1
zonecfg:zone2> set zonepath=/zones/zone2
zonecfg:zone2> select net address=192.168.100.101; set address=192.168.100.102/24; end
zonecfg:zone2> verify
zonecfg:zone2> commit
zonecfg:zone2> exit
Instead of installing a new zone, let's clone from zone1.
# time zoneadm -z zone2 clone zone1
WARNING: read-write lofs file system on '/export' is configured in both zones.
Copying /zones/zone1...

real    0m31.135s
user    0m0.431s
sys     0m3.818s

# zoneadm -z zone2 boot
# zlogin -C zone2  (or supply a sysidcfg file)
Now we're getting somewhere. Zone creation, including application configuration and setup, is reduced from about 15 minutes down to 31 seconds. This is getting really cool.

Clones to the left of me, zpools to the right

But wait, there's more! There's an opportunity to make this even more efficient by taking advantage of ZFS clones. Note that this is only available in OpenSolaris at present, but consider the implications of the following example.

Note the use of zone relocation (move) - also a nifty new feature in Solaris 10 11/06.
# zpool create zfs_zones c4t0d0s2
# zoneadm -z zone1 move /zfs_zones/zone1
A ZFS file system has been created for this zone.
Moving across file systems; copying zonepath /zones/zone1...
Cleaning up zonepath /zones/zone1...

# zonecfg -z zone3
zone3: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:zone3> create -t zone1
zonecfg:zone3> set zonepath=/zfs_zones/zone3
zonecfg:zone3> select net address=192.168.100.101; set address=192.168.100.103/24; end
zonecfg:zone3> verify
zonecfg:zone3> commit
zonecfg:zone3> exit

# time zoneadm -z zone3 clone zone1
WARNING: read-write lofs file system on '/export' is configured in both zones.
Cloning snapshot zfs_zones/zone1@SUNWzone3
Instead of copying, a ZFS clone has been created for this zone.

real    0m11.402s
user    0m0.380s
sys     0m0.412s

Wow! In under 12 seconds a completely configured and ready-to-run zone is built. Throw in a sysidcfg file and we're ready to run. And by using a ZFS clone, almost no additional disk space was required for this new zone.
# df -k | grep zfs_zones
zfs_zones            1007616      27  808469     1%    /zfs_zones
zfs_zones/zone1      1007616  198590  808469    20%    /zfs_zones/zone1
zfs_zones/zone3      1007616  198592  808469    20%    /zfs_zones/zone3

1GB minus 200MB minus 200MB should leave about 600MB free, but it doesn't - nearly 800MB remains. Since the zones are nearly identical at this point, only about 200MB is consumed from the zpool.

Practical applications of Zone Cloning

Development environments and testbeds seem like a very good fit. Build one standard zone configuration and clone it as necessary for each developer or test scenario. If things go wrong, which can happen while testing, just delete the zone and re-clone it. Thirty seconds later you are back in business.

Shhhh - don't tell anyone, but I like the privilege restrictions of zones. I'm very likely to give a developer the root password to the zone and let them do what they need to do. The worst they can do is destroy their environment. The impact to me is two zoneadm(1M) invocations and about 30 seconds of clock time.
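The reset would go something like this - the heart of it is the uninstall and the clone, and the zone name devzone is made up for illustration:

# zoneadm -z devzone halt
# zoneadm -z devzone uninstall -F
# zoneadm -z devzone clone zone1
# zoneadm -z devzone boot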

The better use case comes when you combine this with another new feature in Solaris 10 11/06: zone migration. Imagine the following scenario.
  1. Mount a file system containing a company standard non-global zoneroot
  2. Attach the zone to the system (zonecfg create -a and zoneadm attach)
  3. Clone this new zone as many times as needed
  4. Detach the original zone from the server (zoneadm detach)
  5. Unmount the detached zoneroot filesystem
This sounds a lot like JumpStart and flash archives, doesn't it? You bet it does, and it has many of the same benefits. The flashzone (I'm making up this phrase) can be delivered via USB stick, NFS file services, network file copy (scp), or embedded in a server flash archive. The possibilities are very intriguing.
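Sketched out in commands, with the server name, mount point, and zone names all made up for illustration, the five steps might look like:

# mount server:/export/flashzones/master /zones/master
# zonecfg -z master 'create -a /zones/master'
# zoneadm -z master attach
# zoneadm -z web1 clone master        (after a zonecfg -z web1 create -t master)
# zoneadm -z master detach
# umount /zones/master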

I hope that this has helped introduce you to a few new zones features in Solaris 10 11/06 (and one in OpenSolaris). As I ponder the combination of these new features I find myself beginning to think that a minimal global zone and cloned full root zones may in fact be a superior practice. We'll explore that in more detail soon.

About

Bob Netherton is a Principal Sales Consultant for the North American Commercial Hardware group, specializing in Solaris, Virtualization and Engineered Systems. Bob is also a contributing author of Solaris 10 Virtualization Essentials.

This blog will contain information about all three, but primarily focused on topics for Solaris system administrators.
