Wednesday Nov 16, 2005

ZFS and the all-singing, all-dancing test suite

"A product is only as good as its test suite"

This is a saying that I first heard while helping to get a storage company off the ground. The full import of this statement didn't quite hit me until a few months later.


Writing software is a very human thing, and as such, it's limited by our own cognitive capacity for correctness. I'm sure there are people out there who are smarter than me and make fewer mistakes. But no matter how good they are, the number of mistakes is still greater than zero. There will always be bugs, because there is a limit to how much even the smartest of us can fit into our heads at any given time.


So what do you do in order to drive the number of bugs in a given piece of software towards zero? You write tests. And the end quality of your product depends very directly on how good these tests are at exploring the boundary conditions of your code. If you only test the simple cases, only the simple things will actually work.


Naturally, when you're developing something that other people entrust their livelihood to, like a filesystem or a high-end storage array, you want to make darn sure that their trust is not misplaced. To that end, we wrote a very aggressive test suite for ZFS.


But first, a little background. As Eric has diagrammed on his Source Code Tour, ZFS is broken up into several layers: ZPL (ZFS Posix Layer), DMU (Data Management Unit), and SPA (Storage Pool Allocator). In terms of external dependencies, the DMU and the SPA (about 80% of the kernel code) need very little from the kernel. The ZPL, which plugs into the Solaris VFS and vnode layer, depends on many interfaces from throughout the kernel. Fortunately, it tends to be less snarly than the rest of the code.


What we've done then is to allow the DMU and SPA to be compiled in both kernel and user context (the ZPL is kernel-only). The user-level code can't be used to mount filesystems, but we can plug a test harness, called ztest, into the DMU and SPA (in place of the ZPL) and run the whole shebang in user-land. Running in user-land gives us fast test cycles (no reboot if we botched the code), and allows us to use a less extreme debugging environment.
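To give a flavor of how this works: the kernel build defines _KERNEL while the user build doesn't, so the same source can map its low-level services onto either kernel interfaces or their libc equivalents. Here's a toy sketch of the pattern (the zt_alloc/zt_free names are made up for illustration; the real ZFS headers are more elaborate):

    #ifdef _KERNEL

    #include <sys/kmem.h>

    /* kernel build: back the allocator with the kernel memory allocator */
    #define zt_alloc(size)        kmem_alloc((size), KM_SLEEP)
    #define zt_free(ptr, size)    kmem_free((ptr), (size))

    #else   /* !_KERNEL: user-land build, same names backed by libc */

    #include <stdlib.h>

    #define zt_alloc(size)        malloc(size)
    #define zt_free(ptr, size)    free(ptr)

    #endif  /* _KERNEL */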


Now that we have our code in user-land all ready for a test harness to drive it, what do we do? We do what would typically be called white-box testing, where we purposefully bang on sore spots in our code, along with the standard datapath testing. What kind of tests? Anything and everything, all in parallel, all as fast as we can:

Read, write, create, and delete files and directories
    Just what you would expect from your standard filesystem stress test.
Create and destroy entire filesystems and storage pools
    It's important to test not only the data path, but the administrative path as well, since that's where a lot of problems typically occur -- doing some administrative operation while the filesystem is getting the crap beat out of it.
Turn compression on/off while data is in flight
    Since all ZFS administrative operations are on-line, things can change at any time, and we have to cope.
Change checksum algorithms
    Same deal. If the checksum algorithm changes mid-flight, all old blocks are still fine (our checksums and compression functions are on a per-block basis), and any new blocks go out with the new checksum algorithm applied.
Add, replace, attach, and detach devices in the pool
    This is a big one. Since all ZFS operations are 100% on-line, we can't pause, wait a moment, grab a big fat lock, or do anything else that would slow us down. Any administrative operation must not disrupt the flight of data and (of course) must ensure that data integrity is in no way compromised. Since adding disks, replacing disks, or upgrading/downgrading mirrors are "big deals" administratively, we must make sure that we don't trip over any edge conditions in our data path.
Change I/O caching and scheduling policies on-the-fly
    The I/O subsystem in ZFS is nothing to sneeze at, and there are many tunables internal to the code. One day, we hope to have the code adaptively tune these on-the-fly in response to workload. In the meantime, we verify that fiddling with them doesn't lead to bad behavior.
Scribble random garbage on any redundant data
    Whenever our random device addition happens to have created a redundant configuration (either a mirror or RAID-Z), we take the opportunity to test our self-healing data (automatic repair of corrupted data). We destroy data in a random, checkerboard pattern, since destroying all copies of a given piece of data is not a particularly interesting data point. We're not PsychicFS, after all.
Scrub, verify, and resilver devices
    This ensures that as we're running, we can traverse the contents of the pool in a safe way, verifying all copies of our data along the way and repairing anything we notice to be out of place.
Force violent crashes to simulate power loss
    Since we're in user-land, a "kill -9" will simulate a violent crash due to power loss. No "sync your data to disk", no "here it comes", just "bang, you're dead". We even break up our low-level writes to disk so that they're non-atomic at the syscall level; this way, we simulate partial writes in the event of a power failure. Furthermore, when we restart the test suite and resume from the simulated power failure, we first verify the integrity of the entire storage pool to ensure everything is totally consistent. (There's a small sketch of the fork-and-kill idea right after this list.)
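As promised, here's a toy sketch of the fork-and-kill idea behind the power-loss test. This isn't the actual ztest code (the workload stub and file name are made up); it just shows the shape of the thing: run the workload in a child process, then SIGKILL it at an arbitrary moment, with no chance to flush anything.

    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Stand-in for the real test body: scribble away forever. */
    static void
    run_workload(void)
    {
        FILE *fp = fopen("/tmp/scratch", "w");

        if (fp == NULL)
            _exit(1);
        for (unsigned long i = 0; ; i++)
            fprintf(fp, "%lu\n", i);
    }

    int
    main(void)
    {
        pid_t pid = fork();

        if (pid == 0) {
            run_workload();             /* child: pound away */
            _exit(0);
        }

        sleep(1 + rand() % 30);         /* parent: let it run a while... */
        (void) kill(pid, SIGKILL);      /* ...then pull the plug */
        (void) waitpid(pid, NULL, 0);

        /* The next step is to reopen everything and verify consistency. */
        return (0);
    }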


Remember, we're doing all of the above operations as fast as we can. To make things even racier, we put the backing store of the pool under test in /tmp, which is an in-memory filesystem under Solaris. This way, we can expose many race conditions that simply would not occur at the speed of normal disks.
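Concretely, the pool's "disks" in this setup are nothing more than files, so setting one up is as cheap as creating a sparse file under /tmp. Something like this toy helper (the name and size here are made up) is all it takes:

    #include <fcntl.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Create a sparse file under /tmp (tmpfs on Solaris) to act as a "disk". */
    static int
    make_backing_file(const char *path, off_t size)
    {
        int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0644);

        if (fd == -1)
            return (-1);
        if (ftruncate(fd, size) == -1) {    /* sparse: no blocks written yet */
            (void) close(fd);
            return (-1);
        }
        return (close(fd));
    }

    int
    main(void)
    {
        /* e.g., one 256MB "disk" for the pool under test */
        return (make_backing_file("/tmp/ztest-vdev-0", 256 << 20));
    }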


All in all, this means that we put our code through more abuse in about 20 seconds than most users will see in a lifetime. We run this test suite as part of our nightly testing, as well as whenever we're developing code in our private workspaces. And it obeys Bill's first law of test suites: the effectiveness of a test suite is directly proportional to its ease of use. We simply type "ztest" and off it goes.


Of course, all this wonderful testing leaves out the poor ZPL. Fortunately, we have a dedicated test team that also does more traditional testing, running ZFS in the kernel, acting as a real filesystem. They run a whole slew of tests that try to get maximal code coverage, verify standards compliance, and generally abuse the system as much as possible. While we strive to have ztest cover as much of our code base as possible, nothing beats the real deal in terms of giving ourselves warm, fuzzy feelings.


As our friends at Veritas once said, "If it's not tested, it doesn't work." And I couldn't agree with them more.

ZFS vs. The Benchmark


It's been said that developing a new filesystem is an over-constrained problem. The interfaces are fixed (POSIX system calls at the top, block devices at the bottom), correctness in the face of bad hardware is not optional, and it must be fast, or nobody cares. Well, gather 'round, settle back, and listen to (or read) the tale of ZFS vs. "The Benchmark".


Once, not long ago, we had a customer write a benchmark for Solaris that was meant to stress the VM system. The idea was fairly straightforward: mmap(2) a large file, then randomly dirty it, forcing the system to do some paging in the process. Pretty simple, right?
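The core of such a benchmark is only a few lines. A toy version (not the customer's actual code; the details here are guesses) might look like:

    #include <fcntl.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int
    main(int argc, char **argv)
    {
        struct stat st;
        int fd;

        if (argc < 2)
            return (1);
        if ((fd = open(argv[1], O_RDWR)) == -1 || fstat(fd, &st) == -1)
            return (1);

        /* Map the whole (large) file... */
        char *base = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
            MAP_SHARED, fd, 0);
        if (base == MAP_FAILED)
            return (1);

        /* ...then dirty one byte per randomly chosen page, forever. */
        long pagesize = sysconf(_SC_PAGESIZE);
        for (;;) {
            off_t page = rand() % (st.st_size / pagesize);
            base[page * pagesize] ^= 1;
        }
    }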


Our beloved customer then went to run this on an appropriately large file, stored on a UFS filesystem. The benchmark proceeded to grind the system to a near standstill. Cursory inspection showed that the single disk containing the UFS filesystem had 24,000 outstanding I/Os to it. This was the same disk that contained the root filesystem. Not a good sign.


As you might well imagine, the system was pretty unresponsive. Not hung, but so slow it was beyond tolerable for even the most patient of us. The benchmark had caused enough memory pressure that most non-dirty pages (like our shell, ls, vmstat, etc.) were evicted from memory. Whenever we wanted to type a command, these pages had to be fetched off of disk. That's right, the same disk with 24,000 outstanding I/Os. The disk was a real champ, cranking out about 400 IOPS, which meant that any read request (for, say, the ls command) took about one minute (24,000 / 400 ≈ 60 seconds) to work its way to the front of the I/O queue. Like I said, a trying situation for someone attempting to figure out what the system was doing.


The score so far? The Benchmark: 1, UFS: 0

And in this corner...

At this point, ZFS was still in its early stages, but I had just finished writing some fancy I/O scheduling code based on some ideas that had been bouncing around in my head for a couple of years. After I explained what I had just done to Bryan Cantrill, he had the bright idea of running this same benchmark on ZFS. While I was still busy waving my hands and trying to back down from my bold claims, he had already started up the benchmark.


The difference in system response was breathtaking. While running the same workload, the system was perfectly responsive and behaved like a machine without a care in the world. Wow. I then promptly went back to making bold claims.


One of the features of our advanced I/O scheduler in ZFS is that each I/O has both a priority and a deadline associated with it. Higher-priority I/Os get deadlines that are sooner than lower-priority ones. Our issue policy is then: within the oldest deadline group, issue the I/O with the lowest LBA (logical block address). This means that for I/Os that share the same deadline, we do a linear pass across the disk, kind of like half of an elevator algorithm. We don't do a full elevator algorithm since it tends to lead to over-servicing the middle of the disk and starving the outer edges.
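In code terms, that ordering boils down to a two-level comparison. Assuming the pending I/Os live in some sorted structure and the issue path always takes the smallest element, the comparator would look roughly like this (a simplified sketch, not the actual ZFS scheduler):

    #include <stdint.h>

    typedef struct io {
        uint64_t    io_deadline;    /* coarse deadline bucket */
        uint64_t    io_offset;      /* LBA of the I/O */
    } io_t;

    static int
    io_compare(const io_t *a, const io_t *b)
    {
        /* Oldest deadline group first... */
        if (a->io_deadline < b->io_deadline)
            return (-1);
        if (a->io_deadline > b->io_deadline)
            return (1);

        /* ...then a linear pass across the disk by LBA. */
        if (a->io_offset < b->io_offset)
            return (-1);
        if (a->io_offset > b->io_offset)
            return (1);
        return (0);
    }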


How does this fancy I/O scheduling help, you ask? In general, a write(2) system call is asynchronous. The kernel buffers the data for the write in memory, returns from the system call, and later sends the data to disk. On the other hand, a read(2) is inherently synchronous. An application is blocked from making forward progress until the read(2) system call returns, which it can't do until the I/O has come back from disk. As a result, we give writes a lower priority than reads. So if we have, say, several thousand writes queued up to a disk, and then a read comes in, the read will effectively cut to the front of the line and get issued right away. This leads to a much more responsive system, even when it's under heavy load. Since we have deadlines associated with each I/O, a constant stream of reads won't permanently starve the outstanding writes.
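One way to picture how the priority folds into the deadline (illustrative shift and priority values, not the real ZFS tunables): time is bucketed coarsely, so a read and a write issued around the same moment land in the same bucket, and the read's smaller priority gives it the earlier deadline, letting it cut the line. A write that has been waiting long enough, though, falls into an older bucket than any fresh read, so it eventually gets issued no matter how many reads keep arriving.

    #include <stdint.h>

    #define TIME_SHIFT  6   /* coarseness of the deadline time buckets */
    #define PRI_READ    0   /* reads: most urgent */
    #define PRI_WRITE   4   /* writes: can wait a few buckets */

    /* Fold an I/O's priority into its deadline. */
    static uint64_t
    io_deadline(uint64_t now_ticks, int priority)
    {
        return ((now_ticks >> TIME_SHIFT) + priority);
    }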


At this point, the clouds cleared, the sun came out, birds started singing, and all was Goodness and Light. The Benchmark had gone from being a "thrash the disk" test to being an actual VM test, like it was originally intended.


Final score? The Benchmark: 0, ZFS: 1
