Solaris and OpenSolaris coexistence in the same root zpool
By user12611829 on Nov 04, 2008
Note: This is totally a science experiment. I fully expect to see the two guys from Myth Busters showing up at any moment. It also requires at least build 100 of OpenSolaris on both the host and guest operating system to work around the hostid difficulties.
With the caveats out of the way, let me set the back story to explain how I got here.
Until virtualization technologies become ubiquitous and nothing more than BIOS extensions, multi-boot configurations will continue to be an important capability. For those working with [Open]Solaris, several limitations complicate this unnecessarily. Rather than lamenting these, we can leverage ZFS root pools, now available in Solaris 10 10/08, for some interesting solutions.
What I want to do is simple - have a single Solaris fdisk partition containing multiple versions of Solaris, all bootable, with access to all of my data. This doesn't seem like much of a request, but so far it has been nearly impossible to accomplish in anything close to a supportable configuration. As it turns out, the essential limitation is in the installer - all of the other issues can be handled if we can figure out how to install OpenSolaris into an existing pool.
What we will do is use our friend VirtualBox to work around the installer issues. After installing OpenSolaris in a virtual machine, we take a ZFS snapshot, send it to the bare-metal Solaris host, and receive it into the root pool. Finally, we fix up a few configuration files, and we are left with a single root pool that can boot Solaris 10, Solaris Express Community Edition (nevada), and OpenSolaris.
How cool is that :-) Yeah, it is that cool. Let's proceed.
Prepare the host system

The host system is running a fresh install of Solaris 10 10/08 with a single large root zpool. In this example the root zpool is named panroot. There is also a separate zpool that contains data that needs to be preserved in case a re-installation of Solaris is required. That zpool is named pandora, but the name doesn't matter - it will be automatically imported in our new OpenSolaris installation if all goes well.
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
s10u6_baseline             yes      no     no        yes    -
s10u6                      yes      no     no        yes    -
nv95                       yes      yes    yes       no     -
nv101a                     yes      no     no        yes    -
# zpool list
NAME       SIZE   USED   AVAIL   CAP  HEALTH  ALTROOT
pandora   64.5G  56.9G   7.61G   88%  ONLINE  -
panroot     40G  26.7G   13.3G   66%  ONLINE  -

One challenge that came up was the less than stellar performance of ssh over the VirtualBox NAT interface. Rather than fight this, I set up a shared NFS file system in the root pool to stage the ZFS backup file. This made the process go much faster.
In the host Solaris system
# zfs create -o sharenfs=rw,anon=0 -o mountpoint=/share panroot/share
Prepare the OpenSolaris virtual machine

If you have not already done so, get a copy of VirtualBox, install it, and set up a virtual machine for OpenSolaris.
Important note: Do not install the VirtualBox guest additions. This will install some SMF services that will fail when booted on bare metal.
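If the guest additions were installed by accident, the offending SMF services can be hunted down and disabled before the snapshot is taken. A hedged sketch - the grep pattern and the example FMRI are illustrative, not from the original post; check what svcs actually reports on your system:

```
# Look for any VirtualBox-related services in the guest
# svcs -a | grep -i vbox
#
# Disable anything found so it cannot fail on bare metal, for example:
# pfexec svcadm disable <fmri-from-the-output-above>
```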
Send a ZFS snapshot to the host OS root zpool

Let's take a look around the freshly installed OpenSolaris system to see what we want to send.
Inside the OpenSolaris virtual machine
bash-3.2$ zfs list
NAME                       USED  AVAIL  REFER  MOUNTPOINT
rpool                     6.13G  9.50G    46K  /rpool
rpool/ROOT                2.56G  9.50G    18K  legacy
rpool/ROOT/opensolaris    2.56G  9.50G  2.49G  /
rpool/dump                 511M  9.50G   511M  -
rpool/export              2.57G  9.50G  2.57G  /export
rpool/export/home          604K  9.50G    19K  /export/home
rpool/export/home/bob      585K  9.50G   585K  /export/home/bob
rpool/swap                 512M  9.82G   176M  -

My host system root zpool (panroot) already has swap and dump datasets, so these won't be needed. It also has an /export hierarchy for home directories. I will recreate my OpenSolaris Primary System Administrator user once on bare metal, so it appears the only thing I need to bring over is the root dataset itself.
Inside the OpenSolaris virtual machine
bash-3.2$ pfexec zfs snapshot rpool/ROOT/opensolaris@scooby
bash-3.2$ pfexec zfs send rpool/ROOT/opensolaris@scooby > /net/10.0.2.2/share/scooby.zfs

We are now done with the virtual machine. It can be shut down and the storage reclaimed for other purposes.
Restore the ZFS dataset in the host system root pool

In addition to receiving the OpenSolaris root dataset, the canmount property should be set to noauto. I also destroy the NFS-shared file system since it will no longer be needed.
# zfs receive panroot/ROOT/scooby < /share/scooby.zfs
# zfs set canmount=noauto panroot/ROOT/scooby
# zfs destroy panroot/share

Now mount the new OpenSolaris root file system and fix up a few configuration files. Specifically:
- /etc/zfs/zpool.cache so that all boot environments have the same view of available ZFS pools
- /etc/hostid to keep all of the boot environments using the same hostid. This is extremely important: failure to do so will leave some of your boot environments unbootable - which isn't very useful. /etc/hostid is new in build 100 and later.
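As a sanity check, the hostid(1) command prints the identity the running kernel is using; after copying /etc/hostid into each boot environment, every one of them should report the same value, or its pools will look as if they belong to a foreign system at boot. A minimal sketch:

```shell
# Print the hostid of the running system as an 8-digit hex string.
# Run this in each boot environment; the values should all match.
hostid
```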
# zfs set canmount=noauto panroot/ROOT/scooby
# zfs set mountpoint=/mnt panroot/ROOT/scooby
# zfs mount panroot/ROOT/scooby
# cp /etc/zfs/zpool.cache /mnt/etc/zfs
# cp /etc/hostid /mnt/etc/hostid
# bootadm update-archive -f -R /mnt
Creating boot_archive for /mnt
updating /mnt/platform/i86pc/amd64/boot_archive
updating /mnt/platform/i86pc/boot_archive
# umount /mnt

Make a home directory for your OpenSolaris administrator user (in this example the user is named admin). Also add a GRUB stanza so that OpenSolaris can be booted.
# mkdir -p /export/home/admin
# chown admin:admin /export/home/admin
# cat >> /panroot/boot/grub/menu.lst <<DOO
title Scooby
root (hd0,3,a)
bootfs panroot/ROOT/scooby
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
module$ /platform/i86pc/$ISADIR/boot_archive
DOO

Note that the GRUB stanza is appended (>>) to menu.lst - overwriting the file would throw away the existing Solaris and Live Upgrade entries. At this point we are done. Reboot the system and you should see a new GRUB entry for our new OpenSolaris installation (scooby). Cue large audience applause track.
Live Upgrade and OpenSolaris Boot Environment Administration

One interesting side effect, on the positive side, is the healthy interaction of Live Upgrade and beadm(1M). For your Solaris and nevada based installations you can continue to use lucreate(1M), luupgrade(1M), and luactivate(1M). On the OpenSolaris side you can see all of your Live Upgrade boot environments as well as your OpenSolaris boot environments, and you can create and activate new boot environments as needed.
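For the Live Upgrade side, the familiar cycle still works against the shared root pool. A hedged sketch - the boot environment name and media path below are examples, not from the original transcript:

```
# lucreate -n nv102              (clone the active BE under a new name)
# luupgrade -u -n nv102 -s /cdrom/cdrom0   (upgrade the clone from media)
# luactivate nv102               (make it the BE booted next)
# init 6                         (reboot via init so luactivate's bookkeeping runs)
```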
When in OpenSolaris
# beadm list
BE                           Active Mountpoint Space   Policy Created
--                           ------ ---------- -----   ------ -------
nv101a                       -      -          18.17G  static 2008-11-04 00:03
nv95                         -      -          122.07M static 2008-11-03 12:47
opensolaris                  -      -          2.83G   static 2008-11-03 16:23
opensolaris-2008.11-baseline R      -          2.49G   static 2008-11-04 11:16
s10u6                        -      -          97.22M  static 2008-11-03 12:03
s10x_u6wos_07b               -      -          205.48M static 2008-11-01 20:51
scooby                       N      /          2.61G   static 2008-11-04 10:29
# beadm create doo
# beadm activate doo
# beadm list
BE                           Active Mountpoint Space   Policy Created
--                           ------ ---------- -----   ------ -------
doo                          R      -          5.37G   static 2008-11-04 16:23
nv101a                       -      -          18.17G  static 2008-11-04 00:03
nv95                         -      -          122.07M static 2008-11-03 12:47
opensolaris                  -      -          25.5K   static 2008-11-03 16:23
opensolaris-2008.11-baseline -      -          105.0K  static 2008-11-04 11:16
s10u6                        -      -          97.22M  static 2008-11-03 12:03
s10x_u6wos_07b               -      -          205.48M static 2008-11-01 20:51
scooby                       N      /          2.61G   static 2008-11-04 10:29

For the first time I have a single Solaris disk environment that can boot Solaris 10, Solaris Express Community Edition (nevada), or OpenSolaris with access to all of my data. I did have to add a mount for my shared FAT32 file system (I have an iPhone and several iPods, so Windows does occasionally get opened), but that is about it. Now off to the repository to start playing with all of the new OpenSolaris goodies like Songbird, Brasero, Bluefish, and the Xen bits.
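For reference, the FAT32 mount mentioned above can be made persistent with a pcfs entry in /etc/vfstab. A sketch, with a placeholder device name and mount point (not from the original post):

```
#device          device    mount    FS     fsck   mount     mount
#to mount        to fsck   point    type   pass   at boot   options
/dev/dsk/c2d0p1  -         /fat32   pcfs   -      yes       -
```

A one-off mount of the same partition would be "mount -F pcfs /dev/dsk/c2d0p1 /fat32".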
Technorati Tags: Sun Solaris Virtualization VirtualBox