Live Upgrade and ZFS Versioning

Thanks to John Kotches and Craig Bell for bringing this one up in the comments of an earlier article. I've included it in a new update to my Live Upgrade Survival Tips, but thought it worthy of a post all by itself.

ZFS pool and file system functionality may be added with a Solaris release. These new capabilities are tracked by the ZFS zpool and file system version numbers. To find out which versions your kernel supports, and what capabilities they provide, use the corresponding upgrade -v commands. Yes, it is a bit disconcerting at first to use an upgrade command not to upgrade anything, but to see which features exist.

Here is an example of each output, for your reference.

# zpool upgrade -v
This system is currently running ZFS pool version 31.

The following versions are supported:

VER  DESCRIPTION
---  --------------------------------------------------------
 1   Initial ZFS version
 2   Ditto blocks (replicated metadata)
 3   Hot spares and double parity RAID-Z
 4   zpool history
 5   Compression using the gzip algorithm
 6   bootfs pool property
 7   Separate intent log devices
 8   Delegated administration
 9   refquota and refreservation properties
 10  Cache devices
 11  Improved scrub performance
 12  Snapshot properties
 13  snapused property
 14  passthrough-x aclinherit
 15  user/group space accounting
 16  stmf property support
 17  Triple-parity RAID-Z
 18  Snapshot user holds
 19  Log device removal
 20  Compression using zle (zero-length encoding)
 21  Deduplication
 22  Received properties
 23  Slim ZIL
 24  System attributes
 25  Improved scrub stats
 26  Improved snapshot deletion performance
 27  Improved snapshot creation performance
 28  Multiple vdev replacements
 29  RAID-Z/mirror hybrid allocator
 30  Encryption
 31  Improved 'zfs list' performance

For more information on a particular version, including supported releases,
see the ZFS Administration Guide.


# zfs upgrade -v
The following filesystem versions are supported:

VER  DESCRIPTION
---  --------------------------------------------------------
 1   Initial ZFS filesystem version
 2   Enhanced directory entries
 3   Case insensitive and File system unique identifier (FUID)
 4   userquota, groupquota properties
 5   System attributes

For more information on a particular version, including supported releases,
see the ZFS Administration Guide.

In this particular example, the kernel supports up to zpool version 31 and ZFS version 5.

Where you can run into trouble is when you create a pool or file system and then fall back to an older boot environment that doesn't support those particular versions. The survival tip is to keep your zpool and zfs versions at a level that is compatible with the oldest boot environment you will ever fall back to. A corollary is that you can upgrade your pools and file systems once you have deleted the last boot environment that is limited to the older versions.
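
Before falling back, it only takes a moment to check where an existing pool and its file systems stand. A quick sketch, assuming your root pool is named rpool (substitute your own pool names):

# zpool get version rpool
# zfs get -r version rpool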

Your first question is probably, "what versions of ZFS go with which Solaris releases?" Here is a table of Solaris releases since Solaris 10 10/08 (u6) and their corresponding zpool and zfs version numbers.

Solaris Release          ZPOOL Version   ZFS Version
-----------------------  -------------   -----------
Solaris 10 10/08 (u6)         10              3
Solaris 10 5/09 (u7)          10              3
Solaris 10 10/09 (u8)         15              4
Solaris 10 9/10 (u9)          22              4
Solaris 10 8/11 (u10)         29              5
Solaris 11 11/11 (ga)         33              5
Solaris 11.1                  34              6

Note that these versions apply whether the system was installed at that release or has been patched up to the same level. In other words, a Solaris 10 10/08 system with the latest recommended patch cluster installed might be at the 8/11 (u10) level. You can always use zpool upgrade -v and zfs upgrade -v to make sure.
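
If you just want a quick summary of what the running kernel defaults to, and whether any existing pools or file systems are behind, the same commands without -v will report that as well (a minimal check; the exact wording of the output varies by release):

# zpool upgrade
# zfs upgrade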

Now you are wondering how to create a pool or file system at a version different from the default for your Solaris release. Fortunately, ZFS is flexible enough to let us do exactly that. Here is an example.

# zpool create testpool testdisk

# zpool get version testpool
NAME      PROPERTY  VALUE    SOURCE
testpool  version   31       default

# zfs get version testpool
NAME      PROPERTY  VALUE    SOURCE
testpool  version   5        -

This pool and its associated top level file system can only be accessed on a Solaris 11 system. Let's destroy it and start again, this time making it possible to access it on a Solaris 10 10/09 system (zpool version 15, zfs version 4). We can use the -o version= and -O version= options when the pool is created to accomplish this.

# zpool destroy testpool
# zpool create -o version=15 -O version=4 testpool testdisk
# zfs create testpool/data

# zpool get version testpool
NAME      PROPERTY  VALUE    SOURCE
testpool  version   15       local

# zfs get -r version testpool
NAME           PROPERTY  VALUE    SOURCE
testpool       version   4        -
testpool/data  version   4        -

In this example, we created the pool explicitly at version 15 and, by using -O to pass file system creation options to the top level dataset, set that to version 4. To make things easier, new file systems created in this pool will also be at version 4, inherited from the parent, unless overridden with -o version= when the file system is created.
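
As a quick check of that inheritance, we can create a scratch file system and look at its version (the testpool/example name is just for illustration, and we clean it up so the upgrade example below still involves only the two original file systems):

# zfs create testpool/example
# zfs get version testpool/example
# zfs destroy testpool/example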

The last remaining task is to look at how you might upgrade a pool and file system once you have removed an old boot environment. We will go back to our previous example, where we have a version 15 pool and version 4 datasets. We have removed the Solaris 10 10/09 boot environment, and now the oldest is Solaris 10 8/11 (u10). That release supports version 29 pools and version 5 file systems. We will use zpool upgrade -V and zfs upgrade -V to set the versions to 29 and 5, respectively.

# zpool upgrade -V 29 testpool
This system is currently running ZFS pool version 31.

Successfully upgraded 'testpool' from version 15 to version 29

# zpool get version testpool
NAME      PROPERTY  VALUE    SOURCE
testpool  version   29       local

# zfs upgrade -V 5 testpool
1 filesystems upgraded

# zfs get -r version testpool
NAME           PROPERTY  VALUE    SOURCE
testpool       version   5        -
testpool/data  version   4        -

That didn't go quite as expected, or did it? The pool was upgraded, as was the top level dataset, but testpool/data is still at version 4. It inherited version 4 from the parent when it was created, and zfs upgrade only upgrades the datasets listed on the command line. If we wanted every file system in the pool to be upgraded, we should have used -r for a recursive upgrade.

# zfs upgrade -V 5 -r testpool
1 filesystems upgraded
1 filesystems already at this version

# zfs get -r version testpool
NAME           PROPERTY  VALUE    SOURCE
testpool       version   5        -
testpool/data  version   5        -

Now, that's more like it.

For review, the tip is to keep your shared ZFS pools and datasets at the lowest versions supported by the oldest boot environment you plan to use. You can always use upgrade -v to see which versions are available, and by using -o version= and -O version=, you can create new pools and datasets that are accessible by older boot environments. This last bit can also come in handy if you are moving pools between systems that might be at different versions.
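
For example, if you are building a data pool on a Solaris 11 host that will later be imported on a Solaris 10 10/09 machine, you could create it at the lower versions up front. A sketch, with a made-up pool name and disk:

# zpool create -o version=15 -O version=4 portpool c1t2d0
# zpool export portpool

Then, on the older system:

# zpool import portpool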

Thanks again to Craig and John for this great tip.

Comments:

Is there any way to manually revert back to an older ZFS version? Before I found your great blog entry I made the mistake of running zpool upgrade on two zpools, rpool and dpool, then had the misfortune to try to revert back to the previous BE and now cannot boot / mount rpool / root so get:

NOTICE: zfs_parse_bootfs: error 48
Cannot mount root on rpool/51 fstype zfs
panic[cpu2]/thread=180e000: vfs_mountroot: cannot mount root.

Can live with just getting back to my new BE which is u10 to match zpool version 29 (forgot about zfs upgrade and did not do that). Hate to have to go back to tapes now. Tried to boot u10 dvd then zpool import rpool per Live Upgrade revert BE instructions and can mount rpool that way but cannot get luactivate to work for me to go back to new, patched, BE. Any help?

Thanks in advance.

Paul

Posted by guest on May 18, 2012 at 06:34 PM CDT #

Hey Paul,

I wish I had gotten to you before you did that :-) Unfortunately, there is no zpool downgrade.

There are a few things you can do (and I realize this doesn't help a lot some three weeks later):

1. You could create a new root pool and data pool, set at the specific zpool and zfs versions (zpool create -o version= -O version=) for your oldest BE. You can use LU to move your old boot environment (lucreate -s oldbe -n newbe -p newpool). You will have to copy your data manually, but since you haven't touched the ZFS file system versions, zfs send/recv should work (see the sketch after this list).

2. You could apply the latest recommended patch cluster to your oldest boot environment. I know that sort of defeats the purpose of having the old boot environment, but the ZFS/zpool supported version numbers will change with the kernel patch.

3. Restore from backup. I'm guessing that this is the direction you chose, and that is the least risky and easiest path.
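
For the data copy in option 1, something along these lines should work. This is only a rough sketch: dpool/data stands in for your existing data, while newdata, the disk name, the snapshot name, and the version numbers (chosen here as if the oldest BE were Solaris 10 9/10) are all made up for illustration.

# zpool create -o version=22 -O version=4 newdata c0t2d0
# zfs snapshot -r dpool/data@migrate
# zfs send -R dpool/data@migrate | zfs recv -d newdata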

Sorry that this happened to you. Perhaps others will learn from your misfortune.

Posted by Bob Netherton on June 04, 2012 at 10:33 AM CDT #
