Live Upgrade UFS -> ZFS

It took a bit of work, but I managed to persuade my old laptop to live upgrade to Nevada build 90 with a ZFS root. First I upgraded to build 90 on UFS, and then created a BE on ZFS. The reason for the two-step approach was to reduce the risk a bit. Bear in mind this is all new in build 90 and I am not an expert on the inner workings of Live Upgrade, so there are no guarantees.
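
In outline, the two steps were roughly the following. This is a print-only sketch, not the exact commands I typed: the install image path and the slice name are hypothetical, and `run` just echoes each command instead of executing it.

```shell
# Print-only sketch of the two-step plan; swap the body of run() to execute.
run() { echo "+ $*"; }
run luupgrade -u -n ufs90 -s /cdrom/nv90   # step 1: upgrade the UFS BE in place (image path hypothetical)
run zpool create tank c0d0s0               # create the root pool (slice name hypothetical)
run lucreate -n zfs90 -p tank              # step 2: create the new BE in ZFS pool tank
```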

The upgrade failed at the last minute with this error:

ERROR: File </boot/grub/menu.lst> not found in top level dataset for BE <zfs90>
ERROR: Failed to copy file </boot/grub/menu.lst> from top level dataset to BE <zfs90>
ERROR: Unable to delete GRUB menu entry for boot environment <zfs90>.
ERROR: Cannot make file systems for boot environment <zfs90>.

This bug has already been filed (6707013 "LU fail to migrate root file system from UFS to ZFS").

However, lustatus said all was well, so I tried to activate it:

# lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
ufs90                      yes      yes    yes       no     -         
zfs90                      yes      no     no        yes    -         
# luactivate zfs90
System has findroot enabled GRUB
Generating boot-sign, partition and slice information for PBE <ufs90>
ERROR: No such file or directory: cannot stat </etc/lu/ICF.2>
ERROR: cannot use </etc/lu/ICF.2> as an icf file
ERROR: Unable to mount the boot environment <zfs90>.
#

No joy. Can I mount it?

# lumount -n zfs90
ERROR: No such file or directory: cannot open </etc/lu/ICF.2> mode <r>
ERROR: individual boot environment configuration file does not exist - the specified boot environment is not configured properly
ERROR: cannot access local configuration file for boot environment <zfs90>
ERROR: cannot determine file system configuration for boot environment <zfs90>
ERROR: No such file or directory: error unmounting <tank/ROOT/zfs90>
ERROR: cannot mount boot environment by name <zfs90>
# 

With nothing to lose, I copied the ICF file from the UFS BE and edited it to look like what I suspected an ICF file for a ZFS BE would look like. I got lucky: I was right!
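
For reference, the ICF line format appears to be `BEname:mountpoint:device_or_dataset:fstype:size` (my reading of the files in the transcript below, not documented behaviour). A minimal sketch of generating the ZFS entry:

```shell
# Build the ICF line for a ZFS BE from its name and root dataset.
# Format (inferred from /etc/lu/ICF.1): name:mountpoint:device:fstype:size
be=zfs90
dataset=tank/ROOT/zfs90
printf '%s:/:%s:zfs:0\n' "$be" "$dataset"
```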

# ls /etc/lu/ICF.1
/etc/lu/ICF.1
# cat  /etc/lu/ICF.1
ufs90:/:/dev/dsk/c0d0s7:ufs:19567170
# cp  /etc/lu/ICF.1  /etc/lu/ICF.2
# vi  /etc/lu/ICF.2
# cat /etc/lu/ICF.2
zfs90:/:tank/ROOT/zfs90:zfs:0
# lumount -n zfs90                
/.alt.zfs90
# df
/                  (/dev/dsk/c0d0s7   ): 1019832 blocks   740833 files
/devices           (/devices          ):       0 blocks        0 files
/dev               (/dev              ):       0 blocks        0 files
/system/contract   (ctfs              ):       0 blocks 2147483616 files
/proc              (proc              ):       0 blocks     9776 files
/etc/mnttab        (mnttab            ):       0 blocks        0 files
/etc/svc/volatile  (swap              ): 1099144 blocks   150523 files
/system/object     (objfs             ):       0 blocks 2147483395 files
/etc/dfs/sharetab  (sharefs           ):       0 blocks 2147483646 files
/dev/fd            (fd                ):       0 blocks        0 files
/tmp               (swap              ): 1099144 blocks   150523 files
/var/run           (swap              ): 1099144 blocks   150523 files
/tank              (tank              ):24284511 blocks 24284511 files
/tank/ROOT         (tank/ROOT         ):24284511 blocks 24284511 files
/lib/libc.so.1     (/usr/lib/libc/libc_hwcap1.so.1): 1019832 blocks   740833 files
/.alt.zfs90        (tank/ROOT/zfs90   ):24284511 blocks 24284511 files
/.alt.zfs90/var/run(swap              ): 1099144 blocks   150523 files
/.alt.zfs90/tmp    (swap              ): 1099144 blocks   150523 files
# luumount zfs90
# luactivate zfs90
System has findroot enabled GRUB
Generating boot-sign, partition and slice information for PBE <ufs90>
diff: /.alt.tmp.b-svc.mnt/etc/lu/synclist: No such file or directory

Generating boot-sign for ABE <zfs90>
ERROR: File </etc/bootsign> not found in top level dataset for BE <zfs90>
Generating partition and slice information for ABE <zfs90>
Boot menu exists.
Generating direct boot menu entries for PBE.
Generating xVM menu entries for PBE.
Generating direct boot menu entries for ABE.
Generating xVM menu entries for ABE.
No more bootadm entries. Deletion of bootadm entries is complete.
GRUB menu default setting is unaffected
Done eliding bootadm entries.

**********************************************************************

The target boot environment has been activated. It will be used when you 
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You 
MUST USE either the init or the shutdown command when you reboot. If you 
do not use either init or shutdown, the system will not boot using the 
target BE.

**********************************************************************

In case of a failure while booting to the target BE, the following process 
needs to be followed to fallback to the currently working boot environment:

1. Boot from Solaris failsafe or boot in single user mode from the Solaris 
Install CD or Network.

2. Mount the Parent boot environment root slice to some directory (like 
/mnt). You can use the following command to mount:

     mount -Fufs /dev/dsk/c0d0s7 /mnt

3. Run <luactivate> utility with out any arguments from the Parent boot 
environment root slice, as shown below:

     /mnt/sbin/luactivate

4. luactivate, activates the previous working boot environment and 
indicates the result.

5. Exit Single User mode and reboot the machine.

**********************************************************************

Modifying boot archive service
Activation of boot environment <zfs90> successful.
#

Fixing boot sign

# file /etc/bootsign
/etc/bootsign:  ascii text
# cat  /etc/bootsign
BE_ufs86
BE_ufs90
# vi  /etc/bootsign
# lumount -n zfs90 /a
/a
# cat /a/etc/bootsign
cat: cannot open /a/etc/bootsign: No such file or directory
# cp /etc/bootsign /a/etc
# vi  /a/etc/bootsign 
# cat /a/etc/bootsign
BE_zfs90
# 
# luumount /a
# luactivate ufs90
System has findroot enabled GRUB
Generating boot-sign, partition and slice information for PBE <ufs90>
Activating the current boot environment <ufs90> for next reboot.
The current boot environment <ufs90> has been activated for the next reboot.
# luactivate zfs90
System has findroot enabled GRUB
Generating boot-sign, partition and slice information for PBE <ufs90>
diff: /.alt.tmp.b-hNc.mnt/etc/lu/synclist: No such file or directory

Generating boot-sign for ABE <zfs90>
Generating partition and slice information for ABE <zfs90>
Boot menu exists.
Generating direct boot menu entries for PBE.
Generating xVM menu entries for PBE.
Generating direct boot menu entries for ABE.
Generating xVM menu entries for ABE.
No more bootadm entries. Deletion of bootadm entries is complete.
GRUB menu default setting is unaffected
Done eliding bootadm entries.

**********************************************************************

The target boot environment has been activated. It will be used when you 
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You 
MUST USE either the init or the shutdown command when you reboot. If you 
do not use either init or shutdown, the system will not boot using the 
target BE.

**********************************************************************

In case of a failure while booting to the target BE, the following process 
needs to be followed to fallback to the currently working boot environment:

1. Boot from Solaris failsafe or boot in single user mode from the Solaris 
Install CD or Network.

2. Mount the Parent boot environment root slice to some directory (like 
/mnt). You can use the following command to mount:

     mount -Fufs /dev/dsk/c0d0s7 /mnt

3. Run <luactivate> utility with out any arguments from the Parent boot 
environment root slice, as shown below:

     /mnt/sbin/luactivate

4. luactivate, activates the previous working boot environment and 
indicates the result.

5. Exit Single User mode and reboot the machine.

**********************************************************************

Modifying boot archive service
Activation of boot environment <zfs90> successful.
# lumount -n zfs90 /a
/a
# cat /a/etc/bootsign
BE_zfs90
# luumount /a
# init 6
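
The boot-sign fix above amounts to writing a single `BE_<name>` token into `/etc/bootsign` in the mounted alternate BE. A sketch against a scratch directory (a stand-in for the real lumount point `/a`):

```shell
# Recreate the missing bootsign in the alternate BE's /etc.
# Sketch only: uses a temp directory in place of the lumount point /a.
mnt=$(mktemp -d)
mkdir -p "$mnt/etc"
printf 'BE_%s\n' zfs90 > "$mnt/etc/bootsign"
cat "$mnt/etc/bootsign"
```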

The system now booted off the ZFS pool. Once it was up I just had to see if I could create a second ZFS BE as a clone of the first and, if so, how fast that was.

# df /
/                  (tank/ROOT/zfs90   ):23834562 blocks 23834562 files

# lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
ufs90                      yes      no     no        yes    -         
zfs90                      yes      yes    yes       no     -         
# time lucreate -p tank -n zfs90.2
Checking GRUB menu...
System has findroot enabled GRUB
Analyzing system configuration.
Comparing source boot environment <zfs90> file systems with the file 
system(s) you specified for the new boot environment. Determining which 
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <zfs90.2>.
Source boot environment is <zfs90>.
Creating boot environment <zfs90.2>.
Cloning file systems from boot environment <zfs90> to create boot environment <zfs90.2>.
Creating snapshot for <tank/ROOT/zfs90> on <tank/ROOT/zfs90@zfs90.2>.
Creating clone for <tank/ROOT/zfs90@zfs90.2> on <tank/ROOT/zfs90.2>.
Setting canmount=noauto for </> in zone <global> on <tank/ROOT/zfs90.2>.
No entry for BE <zfs90.2> in GRUB menu
Population of boot environment <zfs90.2> successful.
Creation of boot environment <zfs90.2> successful.

real    0m38.40s
user    0m6.89s
sys     0m11.59s
# 

38 seconds to create a BE, something that would take over an hour with UFS.
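
The speed comes from ZFS snapshot/clone: judging by the lucreate output above, the clone BE boils down to three ZFS operations. A print-only sketch (`run` echoes rather than executes, since the real commands need a live pool):

```shell
# Print-only sketch of the ZFS operations behind the clone BE,
# as suggested by the lucreate output; swap run() to execute for real.
run() { echo "+ $*"; }
run zfs snapshot tank/ROOT/zfs90@zfs90.2
run zfs clone tank/ROOT/zfs90@zfs90.2 tank/ROOT/zfs90.2
run zfs set canmount=noauto tank/ROOT/zfs90.2
```

Because the clone shares all unmodified blocks with the snapshot, the new BE costs almost no time or space up front.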

I'm not foolish/brave enough to do the home server yet, so that is on nv90 with UFS. When the bug is fixed I'll give it a go.

Comments:

Looks like things are moving in the right direction, shame it's taken so long though.

Will the process be "transparent" when complete? I.e. will I just reformat my 2nd BE (UFS slice) to ZFS, point LU at it and say "use that"?

Posted by Sean Clarke on May 27, 2008 at 03:35 AM BST #

Yes. You point Live Upgrade at an existing pool. So one way to do that is to delete your old 2nd BE, turn it into a pool, and then point Live Upgrade at it.

Posted by Chris Gerhard on May 27, 2008 at 06:44 AM BST #

Yes. Unfortunately, this CR was only filed by Forrest from the ZFS testing team yesterday; it slipped through the last test cycle and causes lucreate to break while creating the ICF file, which is annoying.

Chris, could you update the workaround field of the CR? We also tried fixing ICF.2 but may not have updated the bootsign. :( I think your way is right.

I'm pretty sure this CR will be fixed in snv_91. 99.9% :)

Posted by Robin Guo on May 27, 2008 at 10:47 AM BST #

With regard to Sean Clarke's question, I have to note that if you have multiple old UFS BEs (multiple slices on a single disk), boot into one of them, ludelete(1M) the remaining BEs, then create a ZFS pool by concatenating the slices on which the remaining BEs lived, and then try to lucreate(1M) the new BE on top of the freshly created pool, it won't work: concatenated ZFS pools *do not support booting*. If you are lucky, you can join the slices used for the concatenation (the old BE slices) into a single slice, build the ZFS pool on that single slice, and use it as the pool for lucreate(1M). With this in mind, pick the BE that will let you join the remaining slices.

lucreate(1M) error message: ERROR: ZFS pool <rpool> does not support boot environments

Posted by Jan Friedel on June 10, 2008 at 05:12 AM BST #

I just did a fresh install of build 91 and was able to convert over to a ZFS boot LU environment with no problems, so it appears that the bug has been fixed even though it does not appear so in the bug report.

Posted by Wyllys on June 10, 2008 at 12:13 PM BST #

About

This is the old blog of Chris Gerhard. It has mostly moved to http://chrisgerhard.wordpress.com
