Solaris and OpenSolaris coexistence in the same root zpool

Some time ago, my buddy Jeff Victor gave us FrankenZone, an idea that is disturbingly brilliant. It has taken me a while, but I offer for your consideration VirtualBox as a V2P platform for OpenSolaris. Nowhere near as brilliant, but at least as unusual. And you know that you have to try this out at home.

Note: This is totally a science experiment. I fully expect to see the two guys from MythBusters showing up at any moment. It also requires at least build 100 of OpenSolaris on both the host and guest operating systems to work around the hostid difficulties.

With the caveats out of the way, let me set the back story to explain how I got here.

Until virtualization technologies become ubiquitous and nothing more than BIOS extensions, multi-boot configurations will continue to be an important capability. And for those working with [Open]Solaris, there are several limitations that complicate this unnecessarily. Rather than lament these, let's see what ZFS root pools, now in Solaris 10 10/08, can offer in the way of interesting solutions.

What I want to do is simple: have a single Solaris fdisk partition that can hold multiple versions of Solaris, all bootable, with access to all of my data. This doesn't seem like much of a request, but so far it has been nearly impossible to accomplish in anything close to a supportable configuration. As it turns out, the essential limitation is in the installer; all other issues can be handled if we can figure out how to install OpenSolaris into an existing pool.

What we will do is use our friend VirtualBox to work around the installer issues. After installing OpenSolaris in a virtual machine, we take a ZFS snapshot, send it to the bare-metal Solaris host, and restore it in the root pool. Finally, we fix up a few configuration files, and we are left with a single root pool that can boot Solaris 10, Solaris Express Community Edition (nevada), and OpenSolaris.

How cool is that :-) Yeah, it is that cool. Let's proceed.

Prepare the host system

The host system is running a fresh install of Solaris 10 10/08 with a single large root zpool. In this example the root zpool is named panroot. There is also a separate zpool that contains data that needs to be preserved in case a re-installation of Solaris is required. That zpool is named pandora, but it doesn't matter - it will be automatically imported in our new OpenSolaris installation if all goes well.
# lustatus 
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
s10u6_baseline             yes      no     no        yes    -         
s10u6                      yes      no     no        yes    -         
nv95                       yes      yes    yes       no     -         
nv101a                     yes      no     no        yes    -    

# zpool list
NAME      SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
pandora  64.5G  56.9G  7.61G    88%  ONLINE  -
panroot    40G  26.7G  13.3G    66%  ONLINE  -

One challenge that came up was the less than stellar performance of ssh over the VirtualBox NAT interface. So rather than fight this, I set up a shared NFS file system in the root pool to stage the ZFS backup file. This made the process go much faster.

In the host Solaris system
# zfs create -o sharenfs=rw,anon=0 -o mountpoint=/share panroot/share
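To confirm the export took, share(1M) with no arguments should now list /share with the rw,anon=0 options (a quick sanity check I like to do before moving on):
# share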

Prepare the OpenSolaris virtual machine

If you have not already done so, get a copy of VirtualBox, install it and set up a virtual machine for OpenSolaris.
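If you prefer the command line to the GUI, VBoxManage can do the setup. A rough sketch only; the VM name, memory, and disk size are arbitrary, and this is VirtualBox 2.0 era syntax, so consult VBoxManage --help and VBoxManage list ostypes on your version:
$ VBoxManage createvm -name scooby -register
$ VBoxManage createvdi -filename scooby.vdi -size 16384 -register
$ VBoxManage modifyvm scooby -memory 1024 -hda scooby.vdi -ostype opensolaris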

Important note: Do not install the VirtualBox guest additions. They install some SMF services that will fail when booted on bare metal.
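If you have already installed them, back them out before taking the snapshot. A minimal sketch, assuming the standard guest additions package name; confirm it with the pkginfo output first:

Inside the OpenSolaris virtual machine
bash-3.2$ pkginfo | grep -i vbox
bash-3.2$ pfexec pkgrm SUNWvboxguest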

Send a ZFS snapshot to the host OS root zpool

Let's take a look around the freshly installed OpenSolaris system to see what we want to send.

Inside the OpenSolaris virtual machine
bash-3.2$ zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
rpool                   6.13G  9.50G    46K  /rpool
rpool/ROOT              2.56G  9.50G    18K  legacy
rpool/ROOT/opensolaris  2.56G  9.50G  2.49G  /
rpool/dump               511M  9.50G   511M  -
rpool/export            2.57G  9.50G  2.57G  /export
rpool/export/home        604K  9.50G    19K  /export/home
rpool/export/home/bob    585K  9.50G   585K  /export/home/bob
rpool/swap               512M  9.82G   176M  -

My host system root zpool (panroot) already has swap and dump datasets, so these won't be needed. And it also has an /export hierarchy for home directories. I will recreate my OpenSolaris Primary System Administrator user once on bare metal, so it appears the only thing I need to bring over is the root dataset itself.

Inside the OpenSolaris virtual machine
bash-3.2$ pfexec zfs snapshot rpool/ROOT/opensolaris@scooby
bash-3.2$ pfexec zfs send rpool/ROOT/opensolaris@scooby > /net/10.0.2.2/share/scooby.zfs

We are now done with the virtual machine. It can be shut down and the storage reclaimed for other purposes.
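A clean power-off from inside the guest is a single command:

bash-3.2$ pfexec init 5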

Restore the ZFS dataset in the host system root pool

In addition to restoring the OpenSolaris root dataset, the canmount property should be set to noauto. I also destroy the NFS staging file system since it will no longer be needed.
# zfs receive panroot/ROOT/scooby < /share/scooby.zfs
# zfs set canmount=noauto panroot/ROOT/scooby
# zfs destroy panroot/share
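A quick listing (not strictly required, but reassuring) confirms the new dataset arrived alongside the Live Upgrade boot environments:
# zfs list -r panroot/ROOT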

Now mount the new OpenSolaris root filesystem and fix up a few configuration files. Specifically:
  • /etc/zfs/zpool.cache so that all boot environments have the same view of available ZFS pools
  • /etc/hostid to keep all of the boot environments using the same hostid. This is extremely important and failure to do this will leave some of your boot environments unbootable - which isn't very useful. /etc/hostid is new to build 100 and later.
Rebuild the OpenSolaris boot archive and we will be done with that filesystem.
# zfs set mountpoint=/mnt panroot/ROOT/scooby
# zfs mount panroot/ROOT/scooby

# cp /etc/zfs/zpool.cache /mnt/etc/zfs
# cp /etc/hostid /mnt/etc/hostid

# bootadm update-archive -f -R /mnt
Creating boot_archive for /mnt
updating /mnt/platform/i86pc/amd64/boot_archive
updating /mnt/platform/i86pc/boot_archive

# umount /mnt

Make a home directory for your OpenSolaris administrator user (in this example the user is named admin; if the account does not yet exist on the host, see the sketch below). Also add a GRUB stanza so that OpenSolaris can be booted.
# mkdir -p /export/home/admin
# chown admin:admin /export/home/admin
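If the admin account needs to be created on the host, something like the following will do it. The UID, group, and shell are illustrative; matching the UID the OpenSolaris installer assigned in the guest keeps ownership of any copied files consistent:
# useradd -u 101 -g staff -d /export/home/admin -s /usr/bin/bash \
    -P "Primary Administrator" admin
# passwd admin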
# cat >> /panroot/boot/grub/menu.lst <<'DOO'
title Scooby
root (hd0,3,a)
bootfs panroot/ROOT/scooby
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
module$ /platform/i86pc/$ISADIR/boot_archive
DOO
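Before rebooting, it is worth checking that GRUB will actually see the new entry; bootadm(1M) can list the menu as it stands:
# bootadm list-menu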

At this point we are done. Reboot the system and you should see a new GRUB stanza for our new OpenSolaris installation (scooby). Cue large audience applause track.

Live Upgrade and OpenSolaris Boot Environment Administration

One interesting side effect, and a welcome one, is the healthy interaction of Live Upgrade and beadm(1M). For your Solaris and nevada-based installations you can continue to use lucreate(1M), luupgrade(1M), and luactivate(1M). On the OpenSolaris side you can see all of your Live Upgrade boot environments as well as your OpenSolaris boot environments. Note that we can create and activate new boot environments as needed.

When in OpenSolaris
# beadm list
BE                           Active Mountpoint Space   Policy Created          
--                           ------ ---------- -----   ------ -------          
nv101a                       -      -          18.17G  static 2008-11-04 00:03 
nv95                         -      -          122.07M static 2008-11-03 12:47 
opensolaris                  -      -          2.83G   static 2008-11-03 16:23 
opensolaris-2008.11-baseline R      -          2.49G   static 2008-11-04 11:16 
s10u6                        -      -          97.22M  static 2008-11-03 12:03 
s10x_u6wos_07b               -      -          205.48M static 2008-11-01 20:51 
scooby                       N      /          2.61G   static 2008-11-04 10:29 

# beadm create doo
# beadm activate doo
# beadm list
BE                           Active Mountpoint Space   Policy Created          
--                           ------ ---------- -----   ------ -------          
doo                          R      -          5.37G   static 2008-11-04 16:23 
nv101a                       -      -          18.17G  static 2008-11-04 00:03 
nv95                         -      -          122.07M static 2008-11-03 12:47 
opensolaris                  -      -          25.5K   static 2008-11-03 16:23 
opensolaris-2008.11-baseline -      -          105.0K  static 2008-11-04 11:16 
s10u6                        -      -          97.22M  static 2008-11-03 12:03 
s10x_u6wos_07b               -      -          205.48M static 2008-11-01 20:51 
scooby                       N      /          2.61G   static 2008-11-04 10:29 

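The reverse direction works as well. Back in Solaris 10 or nevada the familiar Live Upgrade commands still apply; for example (the boot environment name here is just an illustration):
# lucreate -n nv102 -p panroot
# luactivate nv102
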
For the first time I have a single Solaris disk environment that can boot Solaris 10, Solaris Express Community Edition (nevada) or OpenSolaris and have access to all of my data. I did have to add a mount for my shared FAT32 file system (I have an iPhone and several iPods - so Windows do occasionally get opened), but that is about it. Now off to the repository to start playing with all of the new OpenSolaris goodies like Songbird, Brasero, Bluefish and the Xen bits.
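For the curious, that FAT32 mount is a single vfstab(4) entry. A sketch, assuming the FAT32 file system is the first fdisk partition on a second disk (the device name is system-specific):
/dev/dsk/c1d0p1  -  /shared  pcfs  -  yes  -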

Comments:

Nicely done. Obviously we will make this easier at some point. But be aware that activating LU environments from an OpenSolaris environment with beadm isn't identical to activating them from another LU environment, because the synchronization that occurs via the LU init scripts won't occur.

Posted by Dave Miner on November 07, 2008 at 02:20 AM CST #

How do I do this when my old root has zones? I want to move them over as well.

Posted by Dan McDonald on November 10, 2008 at 06:13 AM CST #

By far one of your most interesting and useful postings. I can't wait to try this out. And 4 G of memory is going for $40. Swweeeeeet.

Posted by Ben Taylor on December 08, 2008 at 04:40 PM CST #

In the files to save section, you probably want to copy the ssh keys from /etc/ssh to your new instance so that you're not constantly having to clean out ~/.ssh/known_hosts for a system that boots multiple copies of "Solaris" from a single rpool.

Posted by Ben Taylor on December 20, 2008 at 12:04 AM CST #

A brilliant article, Bob. I was very happy when Iain Curtain pointed it out to me. I don't have S10 running on the same computer though, so my starting point was different. I wanted to have nevada and opensolaris run in the same root zpool. Your manual works very well, up to the point of the grub menu. That -IS- different in nevada/OS.
I solved it (wasn't that hard) and now I have nevada/opensolaris running in the same pool. Nice. Very nice indeed. I've read the warning about lu/beadm. I guess you have to use the utils of the OS you run.

Posted by dick hoogendijk on April 12, 2009 at 01:10 AM CDT #

This is a really cool trick. The only thing that is hanging me up is the copying of the /etc/hostid file from solaris 10 to opensolaris. I've tried S10u6 and u7, and there is no /etc/hostid file. The /etc/hostid file in opensolaris does not look to be in a format I am familiar with. Any suggestions?

Posted by Steve Dobbs on May 21, 2009 at 02:07 AM CDT #

