Testing Critical System Components Without Turning Your System Into A Brick
By Ali Bahrami-Oracle on Apr 19, 2006
I work on the Solaris runtime linker. One thing you quickly learn in this work is that a small mistake can bring down your system. The runtime linker is an extreme example, but the same thing is true of other core system components. Modifying core parts of a running machine can be a risky game.
There is a time-honored strategy for dealing with this:
Minimize your exposure
Deal with it
That's not much of a safety net. "Deal with it" can sometimes be a slow, painful process. There has been little improvement in this area for years. Now, however, the advent of Solaris Zones and ZFS gives us some powerful new options that can make recovery easy and instantaneous.
I'm going to talk about how to do that here. Much of this discussion is linker-specific detail and background motivation, followed by some general comments about zones and ZFS. This is followed by an actual example of how I built a test environment. Please feel free to skip right to the example.
The Linker Testing Problem
The runtime linker lies at the heart of nearly every process on a Solaris (or any modern) operating system. This makes modifying and testing it problematic: If you install a runtime linker that has an error, your entire running system will instantly break. Since everything on the system is dynamically linked, this isn't a casual breakage. Rather, your system is unable to execute anything. Recovery may require booting a memory-resident copy of the OS from the installation CD, restoring working files, and rebooting. One moment, you were focused on solving a problem; the next, your attention is yanked away to system recovery. Once you get your system back, you have to try to remember what you were thinking when it broke. Your productivity is shot.
If you work on something as central as the runtime linker, the odds of never breaking it are stacked against you. That it is going to happen is a simple mathematical fact. If you are careful and methodical, it will happen less. Unless you shy away from doing valuable work though, it is an ever present possibility. Since we can't eliminate the possibility, we have to accommodate it.
Our main strategy in this game is avoidance. To avoid this problem, most linker testing is done against a local copy of the linker components, without installing them in the standard locations (/bin, /usr, etc.). We do this by manipulating the command PATH and setting linker environment variables. We may install them later, if testing shows that they are OK, and if we believe that there may be system interactions we need to guard against. The good news is that this approach usually works, and can be managed with a reasonable amount of effort. It has some limits though:
It is complicated to get 100% right. Sometimes we end up using linker components from the standard places instead of the ones we think we're using.
It isn't a 100% accurate representation of how anyone else uses the linker. It is a close approximation, but not perfect.
It isn't efficient: Since it isn't a perfect test, we often have to do a real install and test again before we know for sure that things are OK.
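As a rough illustration of this avoidance approach, here is a minimal sketch. The proto-area path is a hypothetical stand-in for a real linker workspace; the exact layout of such a workspace varies:

```shell
# Sketch of the "avoidance" approach (paths are hypothetical).
# PROTO points at a local build area holding freshly built linker bits.
PROTO=$HOME/linker-ws/proto

# Put the locally built link-editor and related tools ahead of the
# installed ones, without touching /usr/bin:
PATH="$PROTO/usr/bin:$PATH"
export PATH

# A dynamic executable can also be run under a test runtime linker
# without installing it in /lib, by invoking ld.so.1 directly:
#   $PROTO/lib/ld.so.1 /usr/bin/ls

echo "PATH now begins with: ${PATH%%:*}"
```

The weakness described above is visible even in this sketch: nothing guarantees that every process spawned during a test actually picks up the proto-area components rather than the installed ones.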
An ideal approach would not require so much human judgement. It would reflect the user experience exactly.
How would the ideal testing environment for the runtime linker subsystem look? Here's my wish list:
Keeps your system in a completely stock and vanilla configuration, without altering system files.
Lets you modify any system file, from the point of view of the software you're testing, without violating the previous point. I want to use my test linker subsystem, installed in the standard places, without having it affect anything except my tests.
Quick and easy to set up.
Upgrades to the operating system should be quick and easy to do.
Lets you access and use your development environment from within the test environment exactly as you can outside, and with the same filesystem paths.
Testing mistakes can't take down the system.
Self healing: After I mess it up, I want to be able to reset to a working vanilla state with a simple command, and without having to remember what I changed.
Run on a standard system with only modest extra resources. More disk space is OK. Using another computer isn't.
In years past, you might have tried to build something like this by constructing an image of the system in a test area, and then applying the chroot(2) system call (probably in the form of the /usr/sbin/chroot command) to make it appear like the real system. This can work, but it has some big drawbacks:
Requires a lot of work to set up.
Requires a lot of ongoing work to track system changes from release to release (which in the Solaris group, come every 2 weeks).
Requires a lot of work to keep stable and correct.
If you've ever set up an anonymous FTP server, you know how much manual work is involved. Imagine doing it for an entire OS and then having to keep up with daily changes. People have tried this, but it ends up being too much ongoing effort to manage and maintain. No one minds doing work up front, but afterwards, we really want a system that can take care of itself. The goal is to save time and effort, not to simply redirect it.
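To make the scale of that manual work concrete, here is a rough sketch of the chroot approach. All paths are hypothetical, the copy commands are only representative, and actually entering the jail requires root:

```shell
# Hand-building a chroot image (paths hypothetical).
# Every directory, shared object, device node, and config file that
# the programs inside will touch must be replicated by hand, and then
# kept current as the system changes:
TESTROOT=$HOME/testroot
mkdir -p "$TESTROOT/lib" "$TESTROOT/usr/bin" "$TESTROOT/dev" "$TESTROOT/etc"

# e.g. copy in a shell, the runtime linker, and their dependencies:
#   cp /lib/ld.so.1 "$TESTROOT/lib/"
#   cp /usr/bin/sh  "$TESTROOT/usr/bin/"
#   ...and many, many more files...

# Then enter the image (root only):
#   /usr/sbin/chroot "$TESTROOT" /usr/bin/sh
echo "skeleton created under $TESTROOT"
```

Multiply the commented-out copy steps by every package on the system, and then by a release cycle every two weeks, and the maintenance burden becomes clear.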
What we really need is a sort of super chroot: One that sets itself up and doesn't demand so much from us. Something that creates a virtual instance of the machine we're using, built automatically by the system, so we don't have to construct a Solaris root filesystem by hand. Something easy to create, lightweight in operation, that is essentially identical to our installed system, and something that we can play with, wreck, and reset with little or no overhead.
Before Solaris 10, this would have been a tall order. As of Solaris 10, it is standard stuff: We can build it using Solaris Zones in conjunction with ZFS. Not only can we do it, but it's easy.
You can read more about Solaris Zones at the OpenSolaris website. Quoting from that page:
Zones are an operating system abstraction for partitioning systems, allowing multiple applications to run in isolation from each other on the same physical hardware. This isolation prevents processes running within a zone from monitoring or affecting processes running in other zones, seeing each other's data, or manipulating the underlying hardware. Zones also provide an abstraction layer that separates applications from physical attributes of the machine on which they are deployed, such as physical device paths and network interface names.
The main instance of Solaris running on your system is known as the global zone. A given system is allowed to have one or more non-global zones: These are virtualized copies of the main system that present the programs running within them with the illusion that they are running on separate and distinct systems. Zones come in two flavors: Sparse, and Whole Root. The difference is that a sparse zone uses loopback mounts to re-use key filesystems (/, /usr, /platform) from the main system in a readonly mode, whereas a whole root zone makes a complete copy of these filesystems. A whole root zone allows you to install different Solaris packages into its root filesystem, which is what we need for linker testing.
Zones are extremely easy to set up. They provide us with the ability to create an environment in which we can install and test the runtime linker without running the risk of taking down the machine. The worst that can happen is that we wreck the zone, but the damage will always be contained. A non-global zone cannot damage the global zone. If we do damage the non-global zone, it is easy to halt, destroy, and recreate it, all without any need to halt or reboot the main system.
This is a big leap forward, and by itself, would be worth using. However, setting up a whole root zone can take half an hour. To really make this approach win, we need to be able to reset a zone much faster than that. We can do this using ZFS.
ZFS is a powerful new filesystem that is making its debut with Solaris 10, Update 2. ZFS makes it cheap and easy to create an arbitrary number of filesystems on any Solaris system, from small desktop machines to large servers.
ZFS has a snapshot facility that allows you to capture a readonly copy of a filesystem (even really large ones) in a matter of seconds. A snapshot requires almost no disk space initially, as all the file data blocks are shared. As the main filesystem is modified, the snapshot continues to reference the old data blocks. Once a snapshot has been made, ZFS allows you to roll back the main filesystem to the state captured by the snapshot. This operation is trivial to do, and essentially instantaneous.
Each Solaris whole root zone has a copy of the main system filesystems, kept at a location you specify when you create the zone. ZFS therefore presents us with a solution to the problem of how to rapidly and easily reset a linker test environment:
Create a ZFS filesystem to hold the zone data.
Create a zone in the ZFS filesystem, and do the initial login that finishes the Solaris "install" process for the zone.
Halt the zone.
Capture a ZFS snapshot of the filesystem.
Restart the zone.
Once this is done, you can use the zone for testing, as if it were an especially convenient second system that can see the same files your real system can see. When you need to reset it:
Halt the zone.
Revert to the zone snapshot.
Restart the zone.
I created such a zone using my Ultra 20 desktop system. Here are the commands to do the above:
% zoneadm -z test halt
% zfs rollback -r tank/test@baseline
% zoneadm -z test boot
These commands take 7 seconds from start to finish! Speed is not going to be a problem.
Let's walk through the construction of the linker test zone I have on my desktop system. The first step is to get a ZFS filesystem set up. My system has an extra disk (/dev/rdsk/c2d0) that I will use for this purpose. It doesn't have any pre-existing data on it that I care about saving, so I will dedicate the entire thing for ZFS to use.
I need to create a ZFS pool, and then create a filesystem within it. Following the ZFS examples I've seen, I'm going to name the pool "tank". I will mount the filesystem on /zone/test.
root# zpool create -f tank c2d0
root# zfs create tank/test
root# zfs set mountpoint=/zone/test tank/test
root# df -k /zone/test
Filesystem            kbytes    used   avail capacity  Mounted on
tank/test            241369088      98 241368561     1%    /zone/test
That took 4 seconds.
The next step is to create the zone within the ZFS filesystem now mounted at /zone/test. In order to allow installing linker components into the root and usr filesystems, this needs to be a whole root zone. At Sun, all of our home directories are automounted via NFS, with NIS used to manage user authentication. So, I'll need to give my zone a network interface. This interface needs a unique IP address, different from the main system address. I do most of my development work in a local filesystem (/export/home), so I'll arrange for it to appear within my test zone as well. My host is named rtld, so I will name my test zone rtld-test. Summarizing these decisions:
Zone Hostname: rtld-test
Zone IP: 172.20.25.173
Zone Type: Whole Root
Zone Path: /zone/test
Loopback Mounts: /export/home
Let's create a test zone:
root# chmod 700 /zone/test
root# zonecfg -z test
test: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:test> create -b
zonecfg:test> set autoboot=true
zonecfg:test> set zonepath=/zone/test
zonecfg:test> add net
zonecfg:test:net> set address=172.20.25.173
zonecfg:test:net> set physical=nge0
zonecfg:test:net> end
zonecfg:test> add fs
zonecfg:test:fs> set dir=/export/home
zonecfg:test:fs> set special=/export/home
zonecfg:test:fs> set type=lofs
zonecfg:test:fs> end
zonecfg:test> info
zonename: test
zonepath: /zone/test
autoboot: true
pool:
fs:
        dir: /export/home
        special: /export/home
        raw not specified
        type: lofs
        options:
net:
        address: 172.20.25.173
        physical: nge0
zonecfg:test> verify
zonecfg:test> commit
zonecfg:test> exit
root# zoneadm -z test verify
root# zoneadm -z test install
Preparing to install zone.
Creating list of files to copy from the global zone.
Copying <120628> files to the zone.
Initializing zone product registry.
Determining zone package initialization order.
Preparing to initialize <974> packages on the zone.
Initialized <974> packages on zone.
Zone is initialized.
Installation of these packages generated errors:
Installation of these packages generated warnings:
The file contains a log of the zone installation.
root# zoneadm list -cv
  ID NAME             STATUS         PATH
   0 global           running        /
   - test             installed      /zone/test
This part of the process takes about 12 minutes on this system.
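The interactive zonecfg session above can also be captured non-interactively in a command file, which makes recreating the configuration from scratch repeatable. This is a sketch under the same assumptions as the session (IP address, nge0 interface, /zone/test path); feed it to zonecfg with the -f option, as in "zonecfg -z test -f test.cfg":

```
create -b
set autoboot=true
set zonepath=/zone/test
add net
set address=172.20.25.173
set physical=nge0
end
add fs
set dir=/export/home
set special=/export/home
set type=lofs
end
verify
commit
```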
The output from "zoneadm list" shows us that the zone is installed, but not running. To get it running for the first time, we must boot it, and then login to the console and finish the installation process. This is the same process a standard Solaris system goes through after the initial reboot: SMF initializes, you are asked some questions about hostname, root password, and name service, and then the system is ready for use. Before using it, though, we halt it and capture a snapshot, for later use.
root# zoneadm -z test boot
root# zlogin -C test
[Install Output omitted]
~.
[Connection to zone 'test' console closed]
root# zoneadm list -cv
  ID NAME             STATUS         PATH
   0 global           running        /
  12 test             running        /zone/test
root# zoneadm -z test halt
root# zfs snapshot tank/test@baseline
root# zoneadm -z test boot
This last part takes about 5 minutes. In total, we can go from no ZFS and no zone to having a usable linker test zone in well under half an hour. This story is going to get even better: "zone cloning" features are coming that will greatly lower the time it takes to create new zones.
Now that we have a test zone, let's experiment with it. In this section, I will be using two separate terminal windows, one logged into the global zone, and one logged into the test zone. The shell prompt shows which zone each command runs in: rtld is the global zone, and rtld-test is the test zone. In this example, I remove the runtime linker (/lib/ld.so.1) and demonstrate that (1) This does not take down the system, and (2) It is easily and quickly repaired.
The first step is to log into the test zone. The uname command is used as a trivial way to show that both zones are operating normally.
ali@rtld% uname
SunOS
ali@rtld% ssh rtld-test
Password: passwd
ali@rtld-test% uname
SunOS
Now, let's simulate the situation in which a bad runtime linker is installed, by simply removing it.
ali@rtld-test% su -
Password: passwd
root@rtld-test# rm /lib/ld.so.1
That is normally all it takes to wreck a working system. However, the global zone is unharmed, and my system continues to run.
ali@rtld% uname
SunOS
root@rtld-test# uname
uname: Cannot find /lib/ld.so.1
Killed
root@rtld-test# ls
ls: Cannot find /usr/lib/ld.so.1
Killed
Since my system is still running, I can quickly repair the broken test environment. In this simple case, I can repair the damage by copying /lib/ld.so.1 from my global zone into the test zone.
ali@rtld% su -
Password: passwd
root@rtld# cp /lib/ld.so.1 \
    /zone/test/root/lib
root@rtld-test# uname
SunOS
That's fine if the damage is simple, but what if the situation is more complex? The ld.so.1 from the global zone may be incompatible with other changes made to the linker components in the test zone, in which case, the above fix will not work. In that case, we will want to exercise the ability to quickly reset the test zone to a known good state. First, let's break it again:
root@rtld-test# rm /lib/ld.so.1
root@rtld-test# uname
uname: Cannot find /lib/ld.so.1
Killed
This time, we'll reset the test zone, from the global one:
root@rtld# zoneadm -z test halt
root@rtld# zfs rollback \
    -r tank/test@baseline
root@rtld# zoneadm -z test boot
# Connection to rtld-test closed by remote host.
Connection to rtld-test closed.
The test zone is back, good as new and ready for use:
ali@rtld% ssh rtld-test
Password: passwd
ali@rtld-test(501)% uname
SunOS
Conclusions: A Rising Tide Floats All Boats
I've started to regard the test zone the same way I view an Etch-A-Sketch®: I can play with it, mess it up, learn from the results, and then I give it a quick shake and it is ready to go again. This is cool stuff!
Before doing this experiment, I had never used zones or ZFS. I had heard about them, but nothing more. I sat down on Friday morning to see what I could do with them, and I had the solution described here working within 8 hours of effort. It's hard to beat that return on investment. The result is a real leap forward in terms of how easily and completely we can test our work.
Zones and ZFS provide new and powerful abilities not available elsewhere. They're included in the standard system for free, not as expensive add-ons. They're simple and easy to use. Once you play with them, I am confident that you'll start seeing uses for them in your daily work.