Tuesday Jan 19, 2010

Solaris 10 and OpenSolaris on the same zfs root pool
By mrj on Jan 19, 2010
I did this on a VirtualBox guest. First, I did a fresh install of s10u8 on a zfs root (you need to use the text installer or do a net-based install to install on a zfs root). I then booted an OpenSolaris iso and, after setting my hostname and hostid, ran the create-be script to create a new OpenSolaris BE on the S10 zfs root.
DISCLAIMER: this is totally unsupported by Sun, could mess up your system, etc. etc.
Write down your hostname, hostid, IP addr, netmask, gateway, and NIS domain.
bash-3.00# uname -a
SunOS unknown 5.10 Generic_141445-09 i86pc i386 i86pc
bash-3.00# hostid
10fa4034
bash-3.00# echo "hw_serial,0xa?B" | mdb -k
hw_serial:
hw_serial:      32 38 34 38 33 35 38 39 32 0
bash-3.00#

Boot your OpenSolaris iso. Set your hostname and hostid.
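Where do those hw_serial bytes come from? The kernel stores the hostid as a decimal ASCII string in hw_serial, so the byte list written with mdb -kw below is just the hex ASCII codes of the hostid's decimal digits, NUL-terminated. A small sketch of the conversion (hostid_to_hw_serial is my own helper name, not a Solaris command):

```shell
# Convert a hex hostid into the byte list expected by "hw_serial/v ...".
# hw_serial holds the hostid as a NUL-terminated decimal ASCII string.
hostid_to_hw_serial() {
    dec=$((0x$1))                       # 10fa4034 -> 284835892
    bytes=$(printf '%s' "$dec" | od -An -tx1 | tr -d '\n' |
            sed 's/^ *//; s/  */ /g')   # hex ASCII code of each digit
    printf '%s 0\n' "$bytes"            # append the terminating NUL
}

hostid_to_hw_serial 10fa4034
# -> 32 38 34 38 33 35 38 39 32 0
```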
jack@opensolaris:~$ pfexec su -
root@opensolaris:~# hostname unknown
root@unknown:~# hostid
00041f55
root@unknown:~# echo "hw_serial/v 32 38 34 38 33 35 38 39 32 0" | mdb -kw
root@unknown:~# hostid
10fa4034
root@unknown:~#

Get the create-be script...
root@opensolaris:~# wget http://blogs.sun.com/mrj/resource/create-be
root@opensolaris:~# chmod a+x create-be

Import your Solaris 10 rpool and create the OpenSolaris BE (we'll need to manually create the menu.lst entry later, since the script doesn't handle s10 menu.lst entries right now).
root@opensolaris:~# zpool import rpool
root@opensolaris:~# /root/create-be --build=129 --bename=osol129

Add your new BE to your menu.lst. For example, here is my entry:
title osol129
findroot (pool_rpool,0,a)
bootfs rpool/ROOT/snv129
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
module$ /platform/i86pc/$ISADIR/boot_archive

Install a newer version of grub:
root@opensolaris:~# /mnt/sbin/installgrub /mnt/boot/grub/stage1 /mnt/boot/grub/stage2 /dev/rdsk/c0d0s0

Now, reboot into your OpenSolaris BE, configure it, and look around. You can reboot back into s10 at any time...
root@unknown:~# uname -a
SunOS unknown 5.11 snv_129 i86pc i386 i86pc Solaris
root@unknown:~# zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT
rpool                      10.4G  28.8G    40K  /rpool
rpool/ROOT                 8.37G  28.8G    21K  legacy
rpool/ROOT/s10x_u8wos_08a  3.67G  28.8G  3.67G  /
rpool/ROOT/snv129          4.70G  28.8G  4.70G  /mnt
rpool/dump                 1.00G  28.8G  1.00G  -
rpool/export                 44K  28.8G    23K  /export
rpool/export/home            21K  28.8G    21K  /export/home
rpool/swap                    1G  29.8G    16K  -
root@unknown:~# beadm list
BE             Active Mountpoint Space Policy Created
--             ------ ---------- ----- ------ -------
s10x_u8wos_08a R      -          3.67G static 2010-01-13 12:02
snv129         N      /          4.70G static 2010-01-14 14:24
root@unknown:~# beadm activate s10x_u8wos_08a
root@unknown:~# reboot
...
bash-3.00# uname -a
SunOS unknown 5.10 Generic_141445-09 i86pc i386 i86pc
bash-3.00# zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT
rpool                      10.4G  28.8G    40K  /rpool
rpool/ROOT                 8.37G  28.8G    21K  legacy
rpool/ROOT/s10x_u8wos_08a  3.67G  28.8G  3.67G  /
rpool/ROOT/snv129          4.70G  28.8G  4.70G  /mnt
rpool/dump                 1.00G  28.8G  1.00G  -
rpool/export                 44K  28.8G    23K  /export
rpool/export/home            21K  28.8G    21K  /export/home
rpool/swap                    1G  29.8G    16K  -
bash-3.00#
Friday Jan 15, 2010
Solaris 10 & OpenSolaris p2p, p2v, v2p
By mrj on Jan 15, 2010
Now that we have zfs root, things are more difficult and, at the same time, much more powerful. zfs stores platform-specific boot information in its metadata, which isn't easily accessed; that's what makes it more difficult. But zfs supports live snapshots, which makes it much more powerful.
With zfs, we can easily move the machine from one "machine" to another. This applies to both S10 and OpenSolaris, as long as the system is running a zfs root. You can go from a physical machine, VirtualBox guest, xVM guest, etc. to a different physical machine, VirtualBox guest, xVM guest, etc.
As an example, I thought I would share some tricks on how you can transform an x86 box running OpenSolaris into a VirtualBox guest without ever shutting down or rebooting the x86 box.
The first thing you want to do is boot a new VirtualBox guest with an OpenSolaris live install iso. You want to make sure that the zfs version in the install iso matches the zfs version on your x86 box.
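One way to confirm the versions line up, assuming the zpool/zfs commands on both sides (exact output varies by build):

```
# On the x86 box: note the pool and filesystem versions in use.
zpool get version rpool
zfs get version rpool

# In the live-CD shell: check the highest versions this build supports.
zpool upgrade -v | head -3
zfs upgrade -v | head -3
```

The pool you send must not be a newer version than the build on the install iso can import.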
Once you have booted the install iso, open a shell. Enable ssh, then run format. Write down the disk you're going to use (e.g. c4t0d0s0), then run fdisk from within format. Usually you will create a single disk partition for Solaris here. Exit fdisk, saving your changes.
Now partition your Solaris disk. Select partition 0, set the tag to "root" (without the quotes), and set the range from cylinder 1 to the last cylinder. Make sure partition 8 is set to 0 - 0 cylinders, then label the disk and exit format.
On your x86 box, write down your hostname and hostid info.
: core2#; hostname
core2
: core2#; hostid
05bdb9c2
: core2#; echo "hw_serial,0xa?B" | mdb -k
hw_serial:
hw_serial:      39 36 33 31 39 39 33 38 0 0

On your VirtualBox guest, update hostname and hostid to match your x86 system.
root@opensolaris:~# hostname core2
root@core2:~# hostid
00041f55
root@core2:~# echo "hw_serial/v 39 36 33 31 39 39 33 38 0 0" | mdb -kw
root@core2:~# hostid
05bdb9c2
root@core2:~#

Now, create the zfs root on the VirtualBox guest using the disk you saw in format. Make sure that you create the zpool on slice 0 (s0). If you're moving multiple pools, you'll probably want to set up multiple pools on the guest now too.
zpool create -R /a -f rpool /dev/dsk/c4t0d0s0

Back on the x86 system, snapshot the root pool, then send it to the OpenSolaris guest (which is 192.168.0.117 in my example). Do the same for all pools you want to move.
zfs snapshot -r rpool@p2v
zfs send -R rpool@p2v | ssh jack@192.168.0.117 pfexec /usr/sbin/zfs receive -dF rpool

Once this completes, if the x86 system is actively being used, you'll want to shut down the apps you're using (e.g. databases, etc.), take a snapshot again, then do an incremental zfs send to do a final sync.
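That final incremental sync might look like this (the second snapshot name, p2v2, and the ssh target user are my assumptions; only blocks changed since @p2v are sent):

```
zfs snapshot -r rpool@p2v2
zfs send -R -i rpool@p2v rpool@p2v2 | \
    ssh jack@192.168.0.117 pfexec /usr/sbin/zfs receive -dF rpool
```

Because the incremental stream is small, the window between quiescing your apps and cutting over to the guest stays short.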
Back on the VirtualBox guest, let's finalize the disk. Set bootfs to the BE you want to boot.

zpool set bootfs='rpool/ROOT/--your-bootfs--' rpool

On the VirtualBox guest, install grub:
/a/sbin/installgrub /a/boot/grub/stage1 /a/boot/grub/stage2 /dev/rdsk/c4t0d0s0

If your NICs are different, you need to update them. If you have hostname.--nic-- and dhcp.--nic-- files, update them to point to your new NIC(s). You may have neither, or you may only have a hostname.--nic--. You may also have to update /a/etc/nwam/llp. For multiple BEs, don't forget to update the NICs in all the BEs you want to boot.
devfsadm -r /a -i e1000g
mv /a/etc/hostname.--oldnic-- /a/etc/hostname.e1000g0
mv /a/etc/dhcp.--oldnic-- /a/etc/dhcp.e1000g0

If you're using the same IP, you may want to take your x86 box off the net now... Eject the CDROM and reboot your VirtualBox guest... Hopefully it boots right up :-)