
An initial encounter with ZFS

After ZFS became available in onnv_27, I immediately upgraded my desktop
system to the newly minted bits. After some initial setup, I've been
happily using ZFS for all of my non-root, non-NFSed data. I'm getting
about 1.7x my storage due to ZFS's compression, and have new-found
safety, since my data is now mirrored.

During the initial setup, my intent was to use only the slices I had already
set up to do the transfer. What I did not plan for was that ZFS does not
currently allow you to remove a non-redundant device from a storage pool
without destroying the pool. Here's what I did, as well as what I should
have done:

My setup

Before I began, my system layout was fairly simple:

c0t0d0:
Total disk cylinders available: 24620 + 2 (reserved cylinders)

Part       Tag   Flag     Cylinders        Size            Blocks
  0       root    wm    1452 -  7259       8.00GB    (5808/0/0)   16779312
  1 unassigned    wm       0               0         (0/0/0)             0
  2     backup    wm       0 - 24619      33.92GB    (24620/0/0)  71127180
  3       swap    wu       0 -  1451       2.00GB    (1452/0/0)    4194828
  4 unassigned    wm       0               0         (0/0/0)             0
  5 unassigned    wm       0               0         (0/0/0)             0
  6 unassigned    wm       0               0         (0/0/0)             0
  7       aux0    wm    7260 - 24619      23.91GB    (17360/0/0)  50153040

c0t1d0:
Total disk cylinders available: 24620 + 2 (reserved cylinders)

Part       Tag   Flag     Cylinders        Size            Blocks
  0    altroot    wm       0 -  5807       8.00GB    (5808/0/0)   16779312
  1 unassigned    wm       0               0         (0/0/0)             0
  2     backup    wm       0 - 24619      33.92GB    (24620/0/0)  71127180
  3       swap    wu    5808 -  7259       2.00GB    (1452/0/0)    4194828
  4 unassigned    wm       0               0         (0/0/0)             0
  5 unassigned    wm       0               0         (0/0/0)             0
  6 unassigned    wm       0               0         (0/0/0)             0
  7       aux1    wm    7260 - 24619      23.91GB    (17360/0/0)  50153040

That is, two 34GB hard disks, partitioned identically. There are four
slices of major interest:
c0t0d0s0    /           8G     root filesystem
c0t0d0s7    /aux0       24G    data needing preserving
c0t1d0s0    /altroot    8G     alternate root, currently empty
c0t1d0s7    /aux1       24G    some data (/opt) which needed preserving

My goal was to create a 24GB mirrored ZFS pool using the underlying
slices of /aux0 and /aux1, with /altroot as my initial stepping stone.
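
Had both slices been free to start with, the whole pool could have been
built in one step; here's a minimal sketch of that ideal case (using the
same pool and dataset names I end up with below):
# zpool create pool mirror c0t0d0s7 c0t1d0s7
# zfs set compression=on pool
# zfs create pool/opt
# zfs set mountpoint=/opt pool/opt
Since both slices still held data I needed to keep, though, I had to stage
things through /altroot instead.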

The process

Without any prior experience setting up a ZFS pool, I went through the
following steps:

... remove /altroot from /etc/vfstab ...
# zpool create mypool c0t1d0s0
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c0t1d0s0 contains a ufs filesystem
# zpool create -f mypool c0t1d0s0
# zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
mypool                 7.94G   32.5K   7.94G     0%  ONLINE     -
# zfs set compression=on mypool
# zfs create mypool/opt
# zfs set mountpoint=/new_opt mypool/opt
... copy data from /aux1/opt to /new_opt, clear out /aux1 ...
... remove /aux1 from vfstab, and remove the /opt symlink ...
# zfs set mountpoint=/opt mypool/opt
# df -h /opt
Filesystem             size   used  avail capacity  Mounted on
mypool/opt             7.9G   560M   7.4G     6%    /opt
#
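
Note that because compression was set on the top-level "mypool" dataset,
"mypool/opt" (and every dataset created under it later) inherits the setting
automatically; a quick way to confirm that is:
# zfs get -r compression mypool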

I now had all of the data I needed off of /aux1, and I wanted to add that
slice to the storage pool. This is where I made my mistake; ZFS, in
its initial release, cannot remove a non-redundant device from a pool
(this is being worked on). I did:
# zpool add -f mypool c0t1d0s7    (*MISTAKE*)
# zpool status mypool
  pool: mypool
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        mypool        ONLINE       0     0     0
          c0t1d0s0    ONLINE       0     0     0
          c0t1d0s7    ONLINE       0     0     0
# zfs create mypool/aux0
# zfs create mypool/aux1
# zfs set mountpoint=/new_aux0 mypool/aux0
# zfs set mountpoint=/aux1 mypool/aux1
... move data from /aux0 to /new_aux0 ...
... remove /aux0 from /etc/vfstab ...
# zfs set mountpoint=/aux0 mypool/aux0
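
With the copies done and the mountpoints moved over, a plain "zfs list" is a
handy sanity check; it shows every dataset in the pool along with where it is
mounted:
# zfs list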

And now I was stuck; I wanted to end up with a configuration of:
        mypool
          mirror
            c0t0d0s7
            c0t1d0s7

but there was no way to get there without removing c0t1d0s0 from the
pool, which ZFS doesn't allow you to do directly. I ended up creating a
new pool, "pool" with c0t0d0s7 in it, copying all of my data \*again\*,
destroying "mypool", then mirroring c0t0d0s7 by doing:
# zpool attach pool c0t0d0s7 c0t1d0s7
#
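
Spelled out, the recovery amounted to roughly this sequence (dataset
creation, property settings, and the second round of copying elided):
# zpool create -f pool c0t0d0s7
... recreate the datasets on "pool" and copy everything over again ...
# zpool destroy mypool
# zpool attach pool c0t0d0s7 c0t1d0s7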

The right way


If I'd planned this all better, the right way to build the pool I wanted
would have been:
# zpool create pool c0t1d0s0
... create /opt_new, move data to it ...
# zpool replace pool c0t1d0s0 c0t1d0s7
... create /aux0_new, move data to it ...
# zpool attach pool c0t1d0s7 c0t0d0s7
... clean up attributes, wait for sync to complete ...
# zpool iostat -v
                capacity     operations    bandwidth
pool          used  avail   read  write   read  write
------------  -----  -----  -----  -----  -----  -----
pool         15.0G  8.83G      0      1  1.47K  6.99K
  mirror     15.0G  8.83G      0      1  1.47K  6.99K
    c0t1d0s7     -      -      0      1  2.30K  7.10K
    c0t0d0s7     -      -      0      1  2.21K  7.10K
------------  -----  -----  -----  -----  -----  -----
#
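
The "wait for sync to complete" step is just a matter of letting the resilver
that "zpool attach" kicks off run to completion; "zpool status" reports its
progress on the scrub line:
# zpool status pool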

The real lesson here is to do a little planning if you have to juggle
slices around; missteps can take some work to undo. Read about "zpool replace"
and "zpool attach"; they are very useful for this kind of juggling.

Once I got everything set up, it all just works. Despite having
cut my available storage in /aux0 and /aux1 in half (48GB -> 24GB, due to
mirroring), compression is giving me back a substantial fraction of the
loss (roughly 70%, give or take, assuming the ratio holds steady):

# zfs get -r compressratio pool
NAME             PROPERTY       VALUE      SOURCE
pool             compressratio  1.70x      -
pool/aux0        compressratio  1.54x      -
pool/aux1        compressratio  2.01x      -
pool/opt         compressratio  1.68x      -
#
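
To put the 1.70x figure in perspective: 24GB of mirrored space at that ratio
holds roughly 24GB x 1.7 ~= 41GB of data, so about 17GB of the 24GB given up
to mirroring comes back, which is where the ~70% estimate above comes from.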

And ZFS itself is quite zippy. I hope this has been a useful lesson, and that
you enjoy ZFS!

Comments (2)
  • Arnaud Ziéba Sunday, January 27, 2008

    Very useful as an introduction to ZFS. Thanks.

    Arnaud Ziéba


  • Steinar Friday, June 6, 2008

    ZFS looks interesting, I'm toying around with the idea of using OpenSolaris for a new fileserver.

