A new look at an old SA practice: separating /var from /
By jsavit on Mar 21, 2010
An old school SA practice...
This is probably the geekiest blog title I've used - but today's blog is a short look at two variations on the old sysadmin practice of keeping
/var in a file system separate from /, inspired by recent "how do I do this?" calls.
Why do it? How was it done before?
This was traditionally done to ensure that growing space consumption in
/var, perhaps caused by core, log or package files,
didn't exhaust critical parts of your file system. This could happen because some program kept dumping core or generating log entries.
Exhausting space would add further injury by causing other failures.
Several techniques can prevent such problems. One method is to use
coreadm to put core files somewhere else,
and to use
/etc/logadm.conf to rotate log files on a schedule that is consistent with your disk space and retention policy. But, the biggest hammer and most complete solution was to keep
/var in a separate file system by giving
it a dedicated UFS file system on its own disk slice. That way, even if something went amok and filled
/var, it had no effect on
other file systems.
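For reference, a typical setup along those lines might look like the following sketch (the core-file directory and log file name here are hypothetical examples, not from any particular system):

# Send all core dumps to a dedicated directory instead of each
# process's working directory (directory and pattern are examples)
coreadm -g /var/cores/core.%f.%p -e global
# Example /etc/logadm.conf entry: rotate this log when it reaches
# 10 MB, keeping 8 old copies, all of them compressed
/var/log/myapp.log -C 8 -s 10m -z 0

The coreadm change takes effect system-wide; the logadm entry is picked up by the logadm cron job on its next run.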
The disadvantage, of course, is the hassle of creating and sizing separate disk slices. You had to plan how many slices you needed and how big they were, and if you got them wrong it was really inconvenient to change them. You might have one slice and file system too big, wasting space you really needed in another slice, but reallocating space was a drag. Having storage allocated into little islands was a real time-waster, especially on the itty-bitty disk drive capacities we used to live with. ZFS, and in this case ZFS boot, pretty much eliminated this inconvenience - as I'll discuss in a moment.
But now, an old joke...
Before I go into the two examples that came up, a classic joke from mathematics or science class.
The professor is in the front of the classroom and writes an equation on the blackboard (I'm picturing the professor I had when studying Fourier transforms in EE class, but I won't try to do his accent.) Pointing to it, he tells the class, "As you can see, this theorem is clearly trivial."
Turning back to the blackboard he pauses for a moment, puts his hand on his chin and says "Hmmm.... just a moment." Now he starts working on the equation's derivation, covering blackboard after blackboard with equations - everything from α to ω. He fills all the blackboards in the classroom, mumbles "excuse me, I'll be right back," and then goes into an adjacent empty classroom to use its blackboards.
Twenty minutes pass. Finally, the professor returns to the classroom. He beams at the students with a big smile and says "I was right. It is trivial!"
I think this may be relevant to the rest of the post! :-)
The trivial case with ZFS root file system
I'll start with the straightforward case. I was contacted by a long-time friend
(who has exceptional knowledge of Solaris and other operating systems, but is new to ZFS)
who wanted to restrict the size of
/var for a fresh installation of Solaris 10 that he had just done. He used ZFS boot and selected the option that allocates a
separate ZFS dataset for
/var, and wanted to know if there was an easy way to control its size.
To reproduce this, I installed Solaris 10 with a separate dataset for
/var (it's an option you specify during install), and after installation completed I logged in and issued the following commands:
# zpool list
NAME    SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
rpool  15.9G  4.16G  11.7G  26%  ONLINE  -
# zfs list
NAME                            USED  AVAIL  REFER  MOUNTPOINT
rpool                          4.62G  11.0G    34K  /rpool
rpool/ROOT                     3.12G  11.0G    21K  legacy
rpool/ROOT/s10x_u8wos_08a      3.12G  11.0G  3.06G  /
rpool/ROOT/s10x_u8wos_08a/var  65.6M  11.0G  65.6M  /var
rpool/dump                     1.00G  11.0G  1.00G  -
rpool/export                    265K  11.0G    23K  /export
rpool/export/home               242K  11.0G   242K  /export/home
rpool/swap                      512M  11.5G  42.0M  -
Right - all I should need to do is set a quota on
rpool/ROOT/s10x_u8wos_08a/var, so let's do that.
I picked a quota slightly larger than the amount of space already consumed so I could easily test filling it up by
creating dummy files with random data. I did that once to make sure I didn't mess up the syntax, and once more in earnest
to exceed the quota:
# zfs set quota=80m rpool/ROOT/s10x_u8wos_08a/var
# zfs get quota rpool/ROOT/s10x_u8wos_08a/var
NAME                           PROPERTY  VALUE  SOURCE
rpool/ROOT/s10x_u8wos_08a/var  quota     80M    local
# dd if=/dev/urandom of=/var/XX1 bs=1024 count=10000
10000+0 records in
10000+0 records out
# zfs list rpool/ROOT/s10x_u8wos_08a/var
NAME                            USED  AVAIL  REFER  MOUNTPOINT
rpool/ROOT/s10x_u8wos_08a/var  75.3M  4.67M  75.3M  /var
# dd if=/dev/urandom of=/var/XX2 bs=1024 count=10000
write: Disc quota exceeded
4737+0 records in
4737+0 records out
# ls -l XX*
-rw-r--r--   1 root     root     10240000 Mar 17 14:15 XX1
-rw-r--r--   1 root     root      4849664 Mar 17 14:16 XX2
# zfs list rpool/ROOT/s10x_u8wos_08a/var
NAME                            USED  AVAIL  REFER  MOUNTPOINT
rpool/ROOT/s10x_u8wos_08a/var  80.1M      0  80.1M  /var
Mission accomplished: the second file reached the quota allocated to this ZFS dataset as required.
The only oddity (in my opinion) is the spelling "Disc" instead of "Disk" in the message
write: Disc quota exceeded.
So, if I'm building a Solaris system and want to keep
/var from exhausting disk space, all I need is one command
to set the quota. Sweet.
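And if the quota later turns out to be too tight, it can be raised or dropped just as easily (the 2g value below is just an example):

# Raise the quota...
zfs set quota=2g rpool/ROOT/s10x_u8wos_08a/var
# ...or remove it entirely
zfs set quota=none rpool/ROOT/s10x_u8wos_08a/var

Either change takes effect immediately, with no remounting or repartitioning.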
A less trivial case, with zones
Shortly after the preceding example, I was contacted by a customer who wanted to do something similar: constrain the size of
/var within Solaris Containers.
He tried to create the zone with
/var defined as a delegated ZFS file system
using legacy mounts.
There seems to be a chicken-and-egg situation:
/var is one of the parts of the zone's file system that must already be mounted
before the zone can boot, but a file system mounted from the global zone can't also be delegated to the zone.
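For comparison, a delegated dataset is configured in zonecfg roughly as follows (the dataset name here is hypothetical). This hands the entire dataset to the zone to mount and manage itself, which is exactly why it can't be pre-mounted as the zone's /var:

add dataset
set name=rpool/zones/varzone-var
end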
Instead, I created a ZFS dataset and loopback-mounted it on the zone's /var:
# zfs create rpool/zones/vartest
# zfs list rpool/zones/vartest
# cat varzone.cfg
create
set zonepath=/zones/varzone
set autoboot=false
add net
set physical=e1000g0
set address=192.168.56.164
end
add fs
set dir=/var
set special=/zones/vartest
set type=lofs
end
add inherit-pkg-dir
set dir=/opt
end
verify
commit
# zonecfg -z varzone -f varzone.cfg
# zoneadm -z varzone install
A ZFS file system has been created for this zone.
Preparing to install zone <varzone>.
Creating list of files to copy from the global zone.
Copying <2899> files to the zone.
Initializing zone product registry.
Determining zone package initialization order.
Preparing to initialize <1062> packages on the zone.
Initialized <1062> packages on zone.
Zone <varzone> is initialized.
Installation of <2> packages was skipped.
The file </zones/varzone/root/var/sadm/system/logs/install_log> contains a log of the zone installation.
So far so good. After booting the zone without incident, I set a quota and filled it up
(note: this is a much bigger
/var than before, because I'm building a zone in a Solaris instance with
a bunch of additional software in it):
# zfs list rpool/zones/vartest
NAME                 USED  AVAIL  REFER  MOUNTPOINT
rpool/zones/vartest  274M  9.31G   274M  /zones/vartest
# zfs set quota=300m rpool/zones/vartest
Within the zone, I exhaust allocated space using the same method as before:
# dd if=/dev/urandom of=/var/xx1 bs=1024 count=100000
write: Disc quota exceeded
26369+0 records in
26369+0 records out
So, I was able to create a separate
/var for the zone, and manage its space independently from the zone's root.
WARNING: I do not know if this is a supported or recommended procedure, even though it seems to work.
My recommendation is that it's more important to impose a quota on the zone's ZFS-based zone root, in order to control
its total accumulation of disk space. That protects other zones and other applications that may be using the same ZFS pool.
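Assuming the zone root landed on its own dataset when zoneadm created it (the dataset name below is inferred from the zonepath used earlier - verify yours with zfs list), that too is a one-liner; the 8g figure is just an example:

# Cap the zone's total space consumption at the zone root
zfs set quota=8g rpool/zones/varzone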
Separating /var was especially important with the small boot disk capacities we had to work with in Ye Olde Days, and perhaps became less
important with the large disks we have now.
However, it becomes important again with the availability of relatively low-capacity Solid State Disk (SSD)
boot drives, used for fast local booting with low power consumption, and with virtual environments in which a
single Oracle Solaris instance might host many containers, each with its own
/var and pattern of space consumption.
So, maybe this is a useful Old School idea that has new, slightly different relevance today.