Cold storage migration with Zones on Shared Storage
By mgerdts on Apr 30, 2013
A question on the Solaris Zones forum inspired this entry; I'm answering it here, where perhaps more people will see it. The goal in this exercise is to migrate a ZOSS (zones on shared storage) zone from a mirrored rootzpool to a raidz rootzpool.
WARNING: I've used lofi devices as the backing store for my zfs pools. This configuration will not survive a reboot and should not be used in the real world. Use real disks when you do this for any zones that matter.
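For completeness, here is roughly how such lofi devices can be set up. This is a sketch, not part of the original procedure; the backing file names and sizes are my own invention, and the /dev/lofi/N numbers assume lofiadm hands them out in order on an otherwise idle system.

# Create sparse backing files and attach each one as a lofi device.
# lofiadm -a assigns the next free /dev/lofi/N, so on a clean system
# this yields /dev/lofi/1 through /dev/lofi/7 as used below.
mkdir -p /export/lofi
for i in 1 2 3 4 5 6 7; do
    mkfile -n 1g /export/lofi/disk$i
    lofiadm -a /export/lofi/disk$i
done
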
My starting configuration is:
# zonecfg -z stuff info rootzpool
rootzpool:
	storage: dev:lofi/1
	storage: dev:lofi/2
My zone, stuff, is installed and not running.
# zoneadm -z stuff list -v
  ID NAME             STATUS      PATH                         BRAND      IP
   - stuff            installed   /zones/stuff                 solaris    excl
I need to prepare the new storage by creating a zpool with the desired layout.
# zpool create newpool raidz /dev/lofi/3 /dev/lofi/4 /dev/lofi/5 \
    /dev/lofi/6 /dev/lofi/7
# zpool status newpool
  pool: newpool
 state: ONLINE
  scan: none requested
config:

	NAME             STATE     READ WRITE CKSUM
	newpool          ONLINE       0     0     0
	  raidz1-0       ONLINE       0     0     0
	    /dev/lofi/3  ONLINE       0     0     0
	    /dev/lofi/4  ONLINE       0     0     0
	    /dev/lofi/5  ONLINE       0     0     0
	    /dev/lofi/6  ONLINE       0     0     0
	    /dev/lofi/7  ONLINE       0     0     0

errors: No known data errors
Next, migrate the data. Remember, the zone is not running at this point. We can use zfs list to figure out the name of the zpool mounted at the zonepath.
# zfs list -o name,mountpoint,mounted /zones/stuff
NAME         MOUNTPOINT    MOUNTED
stuff_rpool  /zones/stuff      yes
# zfs snapshot -r stuff_rpool@migrate
# zfs send -R stuff_rpool@migrate | zfs recv -u -F newpool
The -u option was used with zfs receive so that it didn't try to mount the zpool's root file system at the zonepath when it completed. The -F option was used to allow it to wipe out anything that happens to exist in the top-level dataset in the destination zpool.
Now, we are ready to switch which pool is in the zone configuration. To do that, we need to detach the zone, modify the configuration, and then attach it. Prior to attaching, we also need to ensure that newpool is exported.
# zoneadm -z stuff detach
Exported zone zpool: stuff_rpool
# zpool export newpool
# zonecfg -z stuff
zonecfg:stuff> info rootzpool
rootzpool:
	storage: dev:lofi/1
	storage: dev:lofi/2
zonecfg:stuff> remove rootzpool
zonecfg:stuff> add rootzpool
zonecfg:stuff:rootzpool> add storage dev:lofi/3
zonecfg:stuff:rootzpool> add storage dev:lofi/4
zonecfg:stuff:rootzpool> add storage dev:lofi/5
zonecfg:stuff:rootzpool> add storage dev:lofi/6
zonecfg:stuff:rootzpool> add storage dev:lofi/7
zonecfg:stuff:rootzpool> end
zonecfg:stuff> exit
In the commands above, I was quite happy that zonecfg allows the up arrow or ^P to select the previous command. Each instance of add storage was just four keystrokes (^P, backspace, number, enter).
# zoneadm -z stuff attach
Imported zone zpool: stuff_rpool
Progress being logged to /var/log/zones/zoneadm.20130430T144419Z.stuff.attach
    Installing: Using existing zone boot environment
      Zone BE root dataset: stuff_rpool/rpool/ROOT/solaris
                     Cache: Using /var/pkg/publisher.
  Updating non-global zone: Linking to image /.
Processing linked: 1/1 done.
  Updating non-global zone: Auditing packages.
No updates necessary for this image.
  Updating non-global zone: Zone updated.
                    Result: Attach Succeeded.
Log saved in non-global zone as /zones/stuff/root/var/log/zones/zoneadm.20130430T144419Z.stuff.attach
# zpool status stuff_rpool
  pool: stuff_rpool
 state: ONLINE
  scan: none requested
config:

	NAME             STATE     READ WRITE CKSUM
	stuff_rpool      ONLINE       0     0     0
	  raidz1-0       ONLINE       0     0     0
	    /dev/lofi/3  ONLINE       0     0     0
	    /dev/lofi/4  ONLINE       0     0     0
	    /dev/lofi/5  ONLINE       0     0     0
	    /dev/lofi/6  ONLINE       0     0     0
	    /dev/lofi/7  ONLINE       0     0     0

errors: No known data errors
At this point the storage has been migrated. You can boot the zone and move on to the next task.
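For example, booting and then checking the zone's status:

# Boot the zone on its new raidz rootzpool and confirm it is running.
zoneadm -z stuff boot
zoneadm -z stuff list -v
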
You probably want to use zfs destroy -r stuff_rpool@migrate once you are sure you don't need to revert to the old storage. Until you delete it (or the source zpool), you can use zfs send -I to send just the differences back to the old pool. That's left as an exercise for the reader.
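A sketch of that exercise, untested and with names of my own choosing: since both pools carry the name stuff_rpool on disk, the old mirrored pool would need to be imported under a different name (here "oldpool") before receiving the incremental stream.

# Import the old mirrored pool (lofi/1 and lofi/2) under a new name,
# since its on-disk name collides with the active stuff_rpool.
zpool import -d /dev/lofi stuff_rpool oldpool

# Snapshot the current state, then send only the changes since
# @migrate back to the old pool.
zfs snapshot -r stuff_rpool@migrate2
zfs send -R -I @migrate stuff_rpool@migrate2 | zfs recv -u -F oldpool
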