Live Upgrade with Solaris Volume Manager (SVM) and Zones

As mentioned in my previous post from October 25th, 2007, titled The Live Upgrade Experience, the Live Upgrade feature of the Solaris operating environment enables you to maintain multiple operating system images on a single system. An image, called a boot environment or BE, represents a set of operating system and application software packages. The BEs might contain different operating system and/or application versions.


As part of this exercise I want to test Live Upgrade when Solaris Volume Manager (SVM) is used to mirror the root disk on a system where Solaris Containers/Zones are deployed.


System Type Used in Exercise



  • SunFire 220R

  • 2 x 450MHz UltraSPARC-II processors with 4MB of cache

  • 2048MB of memory

  • 2 x internal 18GB SCSI drives

  • Sun StorEdge D1000


Preparing for Live Upgrade



  • I'm starting with a Solaris 10 11/06 release system that was just freshly installed; its root file system will serve as the primary boot environment. I'll begin by logging in as root and patching the system with the latest Solaris 10 Recommended Patch Cluster, downloaded from SunSolve.

  • I will also install the required patches listed in Sun InfoDoc 206844 (formerly Sun InfoDoc 72099). This document provides information about the minimum patch requirements for a system on which the Solaris Live Upgrade software will be used.

  • As mentioned in my previous post, it is imperative that you ensure the target system meets these patch requirements before attempting to use the Solaris Live Upgrade software on your system; a quick patch check is sketched below.
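For example, a minimal check for a single patch. The patch ID 121430 is my assumption for the SPARC Live Upgrade patch; substitute the IDs actually listed in InfoDoc 206844:

root@sunrise1 # showrev -p | grep "Patch: 121430"     # prints matching lines if the patch is installed
root@sunrise1 # echo $?                               # 0 means the patch was found, 1 means it is missing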


Verifying booted OS release


root@sunrise1 # cat /etc/release

Solaris 10 11/06 s10s_u3wos_10 SPARC
Copyright 2006 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 14 November 2006


Display and list the containers/zones 


root@sunrise1 # zoneadm list -cv

  ID NAME         STATUS     PATH                 BRAND    IP
   0 global       running    /                    native   shared
   1 zoneA        running    /zones/zoneA         native   shared


Abbreviations Used:


PBE: Primary boot environment
ABE: Alternate boot environment


To install the latest Solaris Live Upgrade packages, you will use a script called liveupgrade20. The script runs silently and installs the latest Solaris Live Upgrade packages. If you run the command without the -noconsole and -nodisplay options, you will get the GUI install tool; as you will see, I ran it with those options.


root@sunrise1 # pkgrm SUNWlur SUNWluu
root@sunrise1 # mount -o ro -F hsfs `lofiadm -a /solaris-stuff/solaris-images/s10u4/SPARC/solarisdvd.iso` /mnt

root@sunrise1 # cd /mnt/Solaris_10/Tools/Installers
root@sunrise1 # ./liveupgrade20 -noconsole -nodisplay

Note: This will install the following packages: SUNWluu, SUNWlur, and SUNWlucfg.


root@sunrise1 # pkginfo | egrep "SUNWluu|SUNWlur|SUNWlucfg"

    application SUNWlucfg   Live Upgrade Configuration
    application SUNWlur     Live Upgrade (root)
    application SUNWluu     Live Upgrade (usr)


Introduction to Solaris Volume Manager (SVM)


Solaris Volume Manager (SVM) is included in Solaris and allows you to manage large numbers of disks and the data on those disks. Although there are many ways to use Solaris Volume Manager, most tasks include the following:





  • Increasing storage capacity




  • Increasing data availability




  • Easing administration of large storage devices





How does Solaris Volume Manager (SVM) manage storage?


Solaris Volume Manager uses virtual disks to manage physical disks and their associated data. In Solaris Volume Manager, a virtual disk is called a volume. For historical reasons, some command-line utilities also refer to a volume as a metadevice.


From the perspective of an application or a file system, a volume is functionally identical to a physical disk. Solaris Volume Manager converts I/O requests directed at a volume into I/O requests to the underlying member disk.


Solaris Volume Manager volumes are built from disk slices or from other Solaris Volume Manager volumes. An easy way to build volumes is to use the graphical user interface (GUI) that is built into the Solaris Management Console. The Enhanced Storage tool within the Solaris Management Console presents you with a view of all the existing volumes. By following the steps in wizards, you can easily build any kind of Solaris Volume Manager volume or component. You can also build and modify volumes by using Solaris Volume Manager command-line utilities.


For example, if you need more storage capacity as a single volume, you could use Solaris Volume Manager to make the system treat a collection of slices as one larger volume. After you create a volume from these slices, you can immediately begin using the volume just as you would use any “real” slice or device.
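As a small illustration of that idea, the sketch below concatenates two slices into one larger volume. The slice names and the metadevice name d50 are hypothetical; adjust them to your own layout:

root@sunrise1# metainit d50 2 1 c1t5d0s0 1 c1t6d0s0     # one concat built from two slices
root@sunrise1# newfs /dev/md/rdsk/d50                   # put a file system on the new volume
root@sunrise1# mkdir /export/bigfs
root@sunrise1# mount /dev/md/dsk/d50 /export/bigfs      # use it like any other slice or device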


On to Configuring Solaris Volume Manager (SVM)


In this exercise we will first create and set up some RAID-0 metadevices for both the root (/) file system and the swap partition. Then, if all goes well, we will use Live Upgrade to create the Alternate Boot Environment (ABE). For the purpose of this exercise I had enough disk capacity to create the ABE on another set of disks. However, if you do not have enough disk capacity, you will have to break the mirrors off from the PBE and use those disks for your ABE.


Note: Keep in mind that if you are going to break the mirrors off from the PBE and you plan to mirror swap with the lucreate command, there is a bug with the attach and detach keywords. See SunSolve for BugID 5042861, synopsis: lucreate cannot perform SVM attach/detach on swap devices. A workaround is sketched below.
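A minimal workaround sketch, assuming you hit that bug: leave swap out of the lucreate attach/detach handling entirely and build the swap mirror by hand afterwards, which is essentially what the lu_create.sh script further down does. The metadevices d101, d111, and d121 are the names used later in this exercise; adjust them to your layout:

root@sunrise1# metainit -f d111 1 1 c0t1d0s1     # swap submirror on the ABE boot disk
root@sunrise1# metainit -f d121 1 1 c1t1d0s1     # swap submirror on the ABE mirror disk
root@sunrise1# metainit d101 -m d111             # one-way swap mirror for the ABE
root@sunrise1# metattach d101 d121               # attach the second half manually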


SVM Commands:


metadb(1M) – create and delete replicas of the metadevice state database
metainit(1M) – configure metadevices
metaroot(1M) – setup system files for root (/) metadevice
metastat(1M) – display status for metadevice or hot spare pool
metattach(1M) – attach a metadevice
metadetach(1M) – detach a metadevice
metaclear(1M) – delete active metadevices and hot spare pools



  1. c0t0d0s2 represents the first system disk (boot) also the PBE

  2. c1t0d0s2 represents the second disk (mirror) also will be used for the PBE

  3. c1t9d0s4 represents the disk where the zones are created for the PBE

  4. c0t1d0s2 represents the first system disk (boot) also the ABE

  5. c1t1d0s2 represents the second disk (mirror) also will be used for the ABE

  6. c1t4d0s0 represents the disk where the zones are created for the ABE

Set up the RAID-0 metadevices (stripe or concatenation volumes) corresponding to the / file system and the swap space, and automatically configure the system files (/etc/vfstab and /etc/system) for the root metadevice.

Duplicate the label's content from the boot disk to the mirror disk for both the PBE and the ABE:


root@sunrise1# prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c1t0d0s2
root@sunrise1# prtvtoc /dev/rdsk/c0t0d0s2 | fmthard -s - /dev/rdsk/c0t1d0s2

Create replicas of the metadevice state database:


Note: Option -f is needed because this is the first invocation/creation of the metadb(1M) state database replicas.


root@sunrise1# metadb -a -f -c 3 c0t0d0s7 c1t0d0s7 c0t1d0s7 c1t1d0s7


Verify meta databases:


root@sunrise1# metadb

        flags           first blk       block count
     a m  p  luo        16              8192            /dev/dsk/c0t0d0s7
     a    p  luo        8208            8192            /dev/dsk/c0t0d0s7
     a    p  luo        16400           8192            /dev/dsk/c0t0d0s7
     a    p  luo        16              8192            /dev/dsk/c1t0d0s7
     a    p  luo        8208            8192            /dev/dsk/c1t0d0s7
     a    p  luo        16400           8192            /dev/dsk/c1t0d0s7
     a        u         16              8192            /dev/dsk/c0t1d0s7
     a        u         8208            8192            /dev/dsk/c0t1d0s7
     a        u         16400           8192            /dev/dsk/c0t1d0s7
     a        u         16              8192            /dev/dsk/c1t1d0s7
     a        u         8208            8192            /dev/dsk/c1t1d0s7
     a        u         16400           8192            /dev/dsk/c1t1d0s7


Creation of metadevices:


Note: Option -f is needed because the slices on which we want to initialize the new metadevices hold file systems that are already mounted.


root@sunrise1# metainit -f d10 1 1 c0t0d0s0
    d10: Concat/Stripe is setup
root@sunrise1# metainit -f d11 1 1 c0t0d0s1
    d11: Concat/Stripe is setup
root@sunrise1# metainit -f d20 1 1 c1t0d0s0
    d20: Concat/Stripe is setup
root@sunrise1# metainit -f d21 1 1 c1t0d0s1
    d21: Concat/Stripe is setup

Create the first part of the mirror:


root@sunrise1# metainit d0 -m d10
    d0: Mirror is setup
root@sunrise1# metainit d1 -m d11
    d1: Mirror is setup

Make a copy of /etc/system and /etc/vfstab before proceeding:


root@sunrise1# cp /etc/vfstab /etc/vfstab-beforeSVM
root@sunrise1# cp /etc/system /etc/system-beforeSVM


Change /etc/vfstab and /etc/system to reflect mirror device:


Note: The metaroot(1M) command is only necessary when mirroring the root file system.


root@sunrise1# metaroot d0
root@sunrise1# diff /etc/vfstab /etc/vfstab-beforeSVM
        6,7c6,7
        < /dev/md/dsk/d1        -       -       swap    -       no      -
        < /dev/md/dsk/d0        /dev/md/rdsk/d0 /       ufs     1       no      logging
        ---
        > /dev/dsk/c0t0d0s1     -       -       swap    -       no      -
        > /dev/dsk/c0t0d0s0     /dev/rdsk/c0t0d0s0      /       ufs     1       no      logging  

Note: Don't forget to edit /etc/vfstab to reflect the other metadevices. For example, in this exercise we mirrored the swap partition, so we will have to add the following line to /etc/vfstab manually:


/dev/md/dsk/d1        -       -       swap    -       no      - 

Install the boot block code on the alternate boot disk:


root@sunrise1# installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c1t0d0s0

Reboot on the new metadevices (the operating system will now boot from the mirrored metadevices):


root@sunrise1# shutdown -y -g 0 -i 6 

Attach the second part of the mirror:


root@sunrise1# metattach d0 d20
    d0: submirror d20 is attached 

root@sunrise1# metattach d1 d21
    d1: submirror d21 is attached

Verify all:


root@sunrise1# metastat -p
    d1 -m d11 d21 1
    d11 1 1 c0t0d0s1
    d21 1 1 c1t0d0s1
    d0 -m d10 d20 1
    d10 1 1 c0t0d0s0
    d20 1 1 c1t0d0s0

root@sunrise1# metastat |grep %
    Resync in progress: 41 % done
    Resync in progress: 46 % done 


Note: It is a best practice to wait for the above resync of the mirrors to finish before proceeding! A simple way to wait is sketched below.
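A minimal way to block until the resync completes; this is just a helper loop of my own, not an SVM command, and the 60-second polling interval is arbitrary:

root@sunrise1# while metastat | grep "Resync in progress" > /dev/null; do sleep 60; done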


Modify the system dump configuration:


root@sunrise1# mkdir /var/crash/`hostname`
root@sunrise1# chmod 700 /var/crash/`hostname`
root@sunrise1# dumpadm -s /var/crash/`hostname`
root@sunrise1# dumpadm -d /dev/md/dsk/d1  
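
To confirm the change took effect, running dumpadm(1M) with no arguments prints the current crash dump configuration; you should see /dev/md/dsk/d1 listed as the dump device:

root@sunrise1# dumpadm     # display the current crash dump configuration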

Copy of /etc/vfstab showing the newly created metadevices for / and swap:


root@sunrise1# cat /etc/vfstab


#device         device          mount           FS      fsck    mount   mount
#to mount       to fsck         point           type    pass    at boot options
#
fd      -       /dev/fd fd      -       no      -
/proc   -       /proc   proc    -       no      -
/dev/md/dsk/d0  /dev/md/rdsk/d0 /       ufs     1       no      logging
/dev/md/dsk/d1        -       -       swap    -       no      -
/dev/dsk/c1t9d0s4       /dev/rdsk/c1t9d0s4      /zones  ufs     2       yes     logging
/dev/dsk/c1t2d0s0      /dev/rdsk/c1t2d0s0      /solaris-stuff  ufs     2       yes     logging
/devices        -       /devices        devfs   -       no      -
ctfs    -       /system/contract        ctfs    -       no      -
objfs   -       /system/object  objfs   -       no      -
swap    -       /tmp    tmpfs   -       yes     -



Record the Path to the Alternate Boot Device


You'll need to determine the path to the alternate root device by using the ls(1) -l command on the slice that is being attached as the second submirror to the root (/) mirror.


root@sunrise1# ls -l /dev/dsk/c1t0d0s0
lrwxrwxrwx   1 root     root          41 Nov  2 20:32 /dev/dsk/c1t0d0s0 -> ../../devices/pci@1f,4000/scsi@5/sd@0,0:a


Here you would record the string that follows the /devices directory: /pci@1f,4000/scsi@5/sd@0,0:a
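If you want to pull that string out programmatically, a convenience one-liner of my own (not a standard tool) does the trick:

root@sunrise1# ls -l /dev/dsk/c1t0d0s0 | awk '{ print $NF }' | sed 's|.*/devices||'
/pci@1f,4000/scsi@5/sd@0,0:a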


Solaris Volume Manager users on a system with an OpenBoot PROM (OBP) can use the OBP nvalias command to define a “backup root” device alias for the secondary root (/) mirror. For example:


ok nvalias rootmirror /pci@1f,4000/scsi@5/disk@0,0:a

Note: I needed to change the sd in the device path to disk for the alias.


Then, redefine the boot-device variable to reference both the primary and secondary submirrors, in the order in which you want them to be used, and store the configuration.


ok printenv boot-device
boot-device =      rootdisk net 

ok setenv boot-device rootdisk rootmirror net
boot-device =      rootdisk rootmirror net


ok nvstore

In the event of a primary root disk failure, the system would automatically boot from the second submirror. Or, if you boot manually rather than using auto-boot, you would simply enter:


ok boot rootmirror

Note: You'll want to test this to make sure you can actually boot from the submirror!


Now on to creating your ABE. Please note the following:


Now that we have successfully set up, configured, and booted our system with SVM, there are several ways to create the Alternate Boot Environment (ABE). You can use SVM commands such as metadetach(1M) and metaclear(1M) to break the mirrors (see the sketch below), but for this exercise I had enough disks to create my ABE.
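A minimal sketch of the break-the-mirror approach, using the metadevices from this exercise; run it only if you actually intend to give up the second half of the root and swap mirrors to the ABE:

root@sunrise1# metadetach d0 d20     # detach the root submirror on c1t0d0s0
root@sunrise1# metadetach d1 d21     # detach the swap submirror on c1t0d0s1
root@sunrise1# metaclear d20 d21     # delete the concats, freeing the slices for the ABE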


As noted above, if you are going to have swap mirrored, there is a bug with lucreate(1M): lucreate(1M) cannot perform SVM attach/detach on swap devices. The BugID is 5042861.


If you do not have enough disk space, and you are not going to mirror swap, the Live Upgrade lucreate command can detach the mirrors, preserve the data, and create the ABE for you, as sketched below.
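A hedged sketch of what that lucreate invocation might look like, using the mirror and detach,attach,preserve keywords that lucreate(1M) documents for SVM volumes. The metadevice d100 and the slice names follow the layout of this exercise; verify the exact keyword syntax on your release before running it:

root@sunrise1# lucreate -n s10u4 -C /dev/dsk/c0t0d0s2 \
    -m /:/dev/md/dsk/d100:ufs,mirror \
    -m /:/dev/dsk/c1t0d0s0:detach,attach,preserve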


Note: Before proceeding you'll want to boot back to the rootdisk, after testing booting from the mirror!


Live Upgrade commands to be used:


lucreate(1M) – create a new boot environment
lustatus(1M) – display status of boot environments
luupgrade(1M) – installs, upgrades, and performs other functions on software on a boot environment
luactivate(1M) – activate a boot environment
lufslist(1M) – list configuration of a boot environment


Since I will not be breaking the mirrors to create my ABE in this exercise, I've created a script called lu_create.sh to prepare the other disks for my ABE and create the ABE using the lucreate(1M) command.


The lucreate(1M) command has several flags; I will use the -C flag. The -C boot_device flag is provided for occasions when lucreate(1M) cannot figure out which physical storage device is your boot device. This might occur, for example, when you have a mirrored root device on the source BE on an x86 machine.


The -C option specifies the physical boot device from which the source BE is booted. Without this option, lucreate(1M) attempts to determine the physical device from which a BE boots. If the device on which the root file system is located is not a physical disk (for example, if root is on a Solaris Volume Manager volume) and lucreate(1M) is able to make a reasonable guess as to the physical device, you receive the following query:

Is the physical device devname the boot device for the logical device devname?

If you respond y, the command proceeds.


If you specify -C boot_device, lucreate(1M) skips the search for a physical device and uses the device you specify. Specifying a hyphen (-) as the boot_device argument tells lucreate(1M) to proceed with whatever it determines is the boot device. If the command cannot find the device, you are prompted to enter it.


If you omit -C or specify -C boot_device and lucreate(1M) cannot find a boot device, you receive an error message.


Use of the -C - form is a safe choice, because lucreate(1M) either finds the correct boot device or gives you the opportunity to specify that device in response to a subsequent query. A sketch of that form follows.
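For example, a minimal invocation using the hyphen form; the BE name and file-system mapping here are only placeholders, and the actual command I used is in the lu_create.sh script below:

root@sunrise1# lucreate -C - -m /:/dev/md/dsk/d100:ufs -n s10u4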


Copy of the lu_create.sh script


root@sunrise1# cat lu_create.sh
#!/bin/sh

# Created by Mark Huff, Sun Microsystems Inc., on 4/08/08
# ScriptName: lu_create.sh

# This script uses Solaris 10 Live Upgrade commands to create an Alternate Boot Environment (ABE),
# and also creates the Solaris Volume Manager (SVM) metadevices.

lustatus

metainit -f d110 1 1 c0t1d0s0
metainit -f d120 1 1 c1t1d0s0
metainit d100 -m d110

# The following line will create the ABE with the zones from the PBE
lucreate -C /dev/dsk/c0t1d0s2 -m /:/dev/md/dsk/d100:ufs \
-m -:/dev/dsk/c0t1d0s1:swap \
-m /zones:/dev/dsk/c1t4d0s0:ufs -n s10u4

sleep 10
# The following lines will setup the metadevices for swap and attach the mirrors for / and swap
metainit -f d111 1 1 c0t1d0s1
metainit -f d121 1 1 c1t1d0s1
metainit d101 -m d111
metattach d100 d120
metattach d101 d121

lustatus

sleep 2

echo "The lufslist command will be run to list the configuration of a boot environment (BE). The output contains the disk slice, file system, file system type, and file system size for each BE mount point. The output also notes any separate file systems that belong to a non-global zone inside the BE being displayed."

lufslist s10u4

luactivate s10u4

lustatus

sleep 10

echo Please connect to console to see reboot!!

sleep 2

#init 6


After the lu_create.sh script has completed, reboot. The system will then come up on the new ABE.


root@sunrise1# init 6 

Once the system has booted on the new ABE, list the configuration of the boot environment:


root@sunrise1# lufslist s10u4
               boot environment name: s10u4
               This boot environment is currently active.
               This boot environment will be active on next system boot.

Filesystem              fstype    device size Mounted on          Mount Options
----------------------- -------- ------------ ------------------- --------------
/dev/md/dsk/d100        ufs       13811814400 /                   logging
/dev/dsk/c0t1d0s1       swap       4255727616 -                   -
/dev/dsk/c1t4d0s0       ufs        4289863680 /zones              logging  


root@sunrise1# lufslist d0

               boot environment name: d0
               This boot environment is currently active.
               This boot environment will be active on next system boot.

Filesystem              fstype    device size Mounted on          Mount Options
----------------------- -------- ------------ ------------------- --------------
/dev/md/dsk/d0          ufs       13811814400 /                   logging
/dev/md/dsk/d1          swap       4255727616 -                   -
/dev/dsk/c1t9d0s4       ufs       36415636992 /zones              logging


At this point, you're ready to run the luupgrade command. However, if you encountered problems with lucreate, you may find the lustatus(1M) utility very useful for checking the state of the boot environments.


In my case everything went as planned. Here is the output of the lustatus (1M) utility from my server.


To display the status of the boot environments:


root@sunrise1# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
d0                         yes      no     no        yes    -
s10u4                      yes      yes    yes       no     -
 

Rename the PBE to s10u3:


root@sunrise1# lurename -e d0 -n s10u3

To display the status of the boot environments again:


root@sunrise1# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
s10u3                      yes      no     no        yes    -
s10u4                      yes      yes    yes       no     -

To display and list the containers/zones    


root@sunrise1 # zoneadm list -cv
  ID NAME         STATUS     PATH                 BRAND    IP
   0 global       running    /                    native   shared
   1 zoneA        running    /zones/zoneA         native   shared

To upgrade to the new Solaris release, you use the luupgrade(1M) command with the -u option. The -s option identifies the path to the installation media.


In my case, I already had the media mounted on /mnt, as shown above in the section Preparing for Live Upgrade.


I ran the date(1) command to record when I started the luupgrade(1M):


root@sunrise1# date

Wed Apr 23 12:00:00 EDT 2008


The command line would be as follows:


root@sunrise1# luupgrade -u -n s10u4 -s /mnt 

The command generated the following output:

183584 blocks
miniroot filesystem is <lofs>
Mounting miniroot at </mnt/Solaris_10/Tools/Boot>
Validating the contents of the media </mnt>.
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains <Solaris> version <10>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE <s10u4>.
Determining packages to install or upgrade for BE <s10u4>.
Performing the operating system upgrade of the BE <s10u4>.
CAUTION: Interrupting this process may leave the boot environment unstable
or unbootable.
Upgrading Solaris: 100% completed
Installation of the packages from this media is complete.
Updating package information on boot environment <s10u4>.
Package information successfully updated on boot environment <s10u4>.
Adding operating system patches to the BE <s10u4>.
The operating system patch installation is complete.
INFORMATION: The file </var/sadm/system/logs/upgrade_log> on boot
environment <s10u4> contains a log of the upgrade operation.
INFORMATION: The file </var/sadm/system/data/upgrade_cleanup> on boot
environment <s10u4> contains a log of cleanup operations required.
INFORMATION: Review the files listed above. Remember that all of the files
are located on boot environment <s10u4>. Before you activate boot
environment <s10u4>, determine if any additional system maintenance is
required or if additional media of the software distribution must be
installed.
The Solaris upgrade of the boot environment <s10u4> is complete.


I ran the date(1) command to record when the luupgrade(1M) completed.


root@sunrise1# date

Wed Apr 23 15:43:00 EDT 2008

Note: The luupgrade took approximately a total of 3 hours and 43 minutes.


We now must activate the newly created ABE by running the luactivate(1M) command.


root@sunrise1 # luactivate s10u4

**********************************************************************

The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.

**********************************************************************

In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:

1. Enter the PROM monitor (ok prompt).

2. Boot the machine to Single User mode using a different boot device
(like the Solaris Install CD or Network). Examples:

At the PROM monitor (ok prompt):
For boot to Solaris CD: boot cdrom -s
For boot to network: boot net -s

3. Mount the Current boot environment root slice to some directory (like
/mnt). You can use the following command to mount:

mount -Fufs /dev/dsk/c0t1d0s2 /mnt

4. Run <luactivate> utility with out any arguments from the current boot

 


Boot new ABE


After booting the new ABE, called s10u4, we will see that the new ABE has its new SVM metadevices along with the zones that were created when we ran the lu_create.sh script.


I suppose you could create an nvalias for s10u4 at the OBP, pointing to its correct disk path, something like this:


ok nvalias s10u4 /pci@1f,4000/scsi@3/disk@1,0:a

Or just boot the device path directly, in this case:

ok boot /pci@1f,4000/scsi@3/disk@1,0:a

Resetting ...

Sun Ultra 60 UPA/PCI (2 X UltraSPARC-II 450MHz), No Keyboard
OpenBoot 3.23, 2048 MB memory installed, Serial #14809682.
Ethernet address 8:0:20:e1:fa:52, Host ID: 80e1fa52.

Rebooting with command: boot /pci@1f,4000/scsi@3/disk@1,0:a
Boot device: /pci@1f,4000/scsi@3/disk@1,0:a File and args:
SunOS Release 5.10 Version Generic_120011-14 64-bit
Copyright 1983-2007 Sun Microsystems, Inc. All rights reserved.
Use is subject to license terms.
TSI: gfxp0 is GFX8P @ 1152x900
Hostname: sunrise1
Configuring devices.
Loading smf(5) service descriptions: 27/27
/dev/rdsk/c1t4d0s0 is clean

sunrise1 console login: root
Password:
Apr 23 18:13:48 sunrise1 login: ROOT LOGIN /dev/console
Last login: Tue Apr 22 14:28:09 on console
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
You have mail.
Sourcing //.profile-EIS.....


root@sunrise1 # cat /etc/release
Solaris 10 8/07 s10s_u4wos_12b SPARC
Copyright 2007 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 16 August 2007
 


Now that the new ABE, s10u4, is booted, you'll see below that the new environment has the new metadevices (d100 for / and d101 for swap).


root@sunrise1 # more /etc/vfstab
#live-upgrade:<Tue Apr 22 12:14:27 EDT 2008> updated boot environment <s10u4>
#device         device          mount           FS      fsck    mount   mount
#to mount       to fsck         point           type    pass    at boot options
#
fd      -       /dev/fd fd      -       no      -
/proc   -       /proc   proc    -       no      -
/dev/md/dsk/d100        /dev/md/rdsk/d100       /       ufs     1       no      logging
#live-upgrade:<Tue Apr 22 12:14:27 EDT 2008>:<s10u4>#  /dev/md/dsk/d1        -       -       swap    -       no      -
/dev/md/dsk/d101        -       -       swap    -       no      -     
/dev/dsk/c1t4d0s0       /dev/rdsk/c1t4d0s0      /zones  ufs     2       yes     logging
/dev/dsk/c1t2d0s0      /dev/rdsk/c1t2d0s0      /solaris-stuff  ufs     2       yes     logging
/devices        -       /devices        devfs   -       no      -
ctfs    -       /system/contract        ctfs    -       no      -
objfs   -       /system/object  objfs   -       no      -
swap    -       /tmp    tmpfs   -       yes     -

 
root@sunrise1 # df -h
Filesystem             size   used  avail capacity  Mounted on
/dev/md/dsk/d100        13G   4.7G   7.8G    38%    /
/devices                 0K     0K     0K     0%    /devices
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                   4.6G   1.4M   4.6G     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
fd                       0K     0K     0K     0%    /dev/fd
swap                   4.6G    48K   4.6G     1%    /tmp
swap                   4.6G    48K   4.6G     1%    /var/run
/dev/dsk/c1t4d0s0      3.9G   614M   3.3G    16%    /zones

As you can see, the new BE also has the container/zone:


root@sunrise1 # zoneadm list -cv
  ID NAME         STATUS     PATH                 BRAND    IP
   0 global       running    /                    native   shared
   1 zoneA        running    /zones/zoneA         native   shared


Log into the zone in the new BE


root@sunrise1 # zlogin zoneA
[Connected to zone 'zoneA' pts/1]
Last login: Mon Apr 21 16:15:41 on console
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
# cd /
# ls -al
total 1004
drwxr-xr-x 18 root root 512 Apr 23 15:41 .
drwxr-xr-x 18 root root 512 Apr 23 15:41 ..
drwx------ 3 root root 512 Apr 21 16:15 .sunw
lrwxrwxrwx 1 root root 9 Apr 22 12:00 bin -> ./usr/bin
drwxr-xr-x 12 root root 1024 Apr 23 17:44 dev
drwxr-xr-x 73 root sys 4096 Apr 23 17:47 etc
drwxr-xr-x 2 root sys 512 Nov 23 09:58 export
dr-xr-xr-x 1 root root 1 Apr 23 17:46 home
drwxr-xr-x 7 root bin 5632 Apr 23 15:03 lib
drwxr-xr-x 2 root sys 512 Nov 22 22:32 mnt
dr-xr-xr-x 1 root root 1 Apr 23 17:46 net
drwxr-xr-x 6 root sys 512 Dec 18 12:15 opt
drwxr-xr-x 54 root sys 2048 Apr 23 13:58 platform
dr-xr-xr-x 81 root root 480032 Apr 23 18:26 proc
drwxr-xr-x 2 root sys 1024 Apr 23 14:16 sbin
drwxr-xr-x 4 root root 512 Apr 23 15:41 system
drwxr-xr-x 5 root root 396 Apr 23 17:48 tmp
drwxr-xr-x 41 root sys 1024 Apr 23 15:34 usr
drwxr-xr-x 43 root sys 1024 Apr 23 15:41 var

# ifconfig -a
lo0:1: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
qfe0:1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
inet 192.168.1.27 netmask ffffff00 broadcast 192.168.1.255



Success!! :)  Give it a try!!


Well there you go! Live Upgrade using Solaris Volume Manager with Solaris Containers/Zones deployed!!


Well, until next time. My next post will be on patching using Live Upgrade.



Comments:

Hi,

Thanks for placing this extensive example on Live Upgrade and Zones.
I do have some remarks and found a bug.

I did try Live Upgrade just like you described it, and this way it works fine.
However, the way we use Solaris 10 and non-global zones is somewhat different than in the example. Every non-global zone has its own root volume and /var volume; these volumes are SVM soft partitions on a LUN in a diskset, so when I do a Live Upgrade I have to copy these volumes to new volumes in the diskset.

My Test system looks like this.
>df -hZ
Filesystem size used avail capacity Mounted on
/dev/md/dsk/d0 7.9G 2.5G 5.3G 33% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 8.3G 1.3M 8.3G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
/platform/sun4u-us3/lib/libc_psr/libc_psr_hwcap1.so.1
7.9G 2.5G 5.3G 33% /platform/sun4u-us3/lib/libc_psr.so.1
/platform/sun4u-us3/lib/sparcv9/libc_psr/libc_psr_hwcap1.so.1
7.9G 2.5G 5.3G 33% /platform/sun4u-us3/lib/sparcv9/libc_psr.so.1
fd 0K 0K 0K 0% /dev/fd
/dev/md/dsk/d30 5.9G 635M 5.2G 11% /var
swap 8.3G 32K 8.3G 1% /tmp
swap 8.3G 48K 8.3G 1% /var/run
/dev/md/dsk/d4002 93M 1.0M 90M 2% /home
/dev/md/dsk/d4003 7.6M 1.0M 6.5M 14% /APPL
/dev/md/dsk/d4006 1.9G 788M 1.1G 41% /patchset
/dev/md/dsk/d4007 3.9G 2.0G 1.8G 53% /zones/szd035
/dev/md/dsk/d4004 55M 1.0M 53M 2% /opt/SUNWsymon
/dev/md/dsk/d4005 998M 2.9M 975M 1% /opt/SUNWn1sps/N1_Service_Provisioning_System/agent/data/rsrcMgr
/dev/md/dsk/d4013 7.6M 1.0M 6.5M 14% /zones/szd035/root/APPL
/zones/szd035/dev 3.9G 2.0G 1.8G 53% /zones/szd035/root/dev
/etc/nodename 7.9G 2.5G 5.3G 33% /zones/szd035/root/etc/chassis
/dev/md/dsk/d4012 93M 1.0M 90M 2% /zones/szd035/root/home
/dev/md/dsk/d4011 1.0G 140M 906M 14% /zones/szd035/root/opt/SUNWn1sps
/opt/SUNWsymon 55M 1.0M 53M 2% /zones/szd035/root/opt/SUNWsymon
/platform 7.9G 2.5G 5.3G 33% /zones/szd035/root/platform
/sbin 7.9G 2.5G 5.3G 33% /zones/szd035/root/sbin
/dev/md/dsk/d4009 45M 1.0M 43M 3% /zones/szd035/root/usr/local
/dev/md/dsk/d4008 1.9G 427M 1.4G 23% /zones/szd035/root/var
/dev/md/dsk/d4010 93M 1.0M 90M 2% /zones/szd035/root/var/local
proc 0K 0K 0K 0% /zones/szd035/root/proc
ctfs 0K 0K 0K 0% /zones/szd035/root/system/contract
mnttab 0K 0K 0K 0% /zones/szd035/root/etc/mnttab
objfs 0K 0K 0K 0% /zones/szd035/root/system/object
swap 8.3G 264K 8.3G 1% /zones/szd035/root/etc/svc/volatile
/zones/szd035/root/platform/sun4u-us3/lib/libc_psr/libc_psr_hwcap1.so.1
7.9G 2.5G 5.3G 33% /zones/szd035/root/platform/sun4u-us3/lib/libc_psr.so.1
/zones/szd035/root/platform/sun4u-us3/lib/sparcv9/libc_psr/libc_psr_hwcap1.so.1
7.9G 2.5G 5.3G 33% /zones/szd035/root/platform/sun4u-us3/lib/sparcv9/libc_psr.so.1
fd 0K 0K 0K 0% /zones/szd035/root/dev/fd
swap 8.3G 32K 8.3G 1% /zones/szd035/root/tmp
swap 8.3G 24K 8.3G 1% /zones/szd035/root/var/run

Make Volumes for ABE ZONE

>metainit d4030 -p d40 4g
d4030: Soft Partition is setup
>metainit d4031 -p d40 6g
d4031: Soft Partition is setup

Prepare ABE disk c1t3d0

root
metainit -f d4 1 1 c1t3d0s0
metainit d9 -m d4

var
metainit -f d34 1 1 c1t3d0s3
metainit d39 -m d34

Run lucreate

lucreate -c BE1 -n BE13 -C /dev/dsk/c1t0d0s0 -m /:/dev/md/dsk/d9:ufs -m /var:/dev/md/dsk/d39:ufs -m -:/dev/dsk/c1t3d0s1:swap -m /zones:/dev/md/dsk/d4030:ufs -m /var:/dev/md/dsk/d4031:ufs:szd035

lufslist -n BE13
boot environment name: BE13
This boot environment is currently active.
This boot environment will be active on next system boot.

Filesystem fstype device size Mounted on Mount Options
----------------------- -------- ------------ ------------------- --------------
/dev/dsk/c1t3d0s1 swap 2151776256 - -
/dev/md/dsk/d9 ufs 8591474688 / -
/dev/md/dsk/d4030 ufs 4294967296 /zones -
/dev/md/dsk/d39 ufs 6444908544 /var -
/dev/md/dsk/d4002 ufs 104857600 /home logging
/dev/md/dsk/d4003 ufs 10485760 /APPL logging
/dev/md/dsk/d4004 ufs 62914560 /opt/SUNWsymon logging
/dev/md/dsk/d4005 ufs 1082130432 /opt/SUNWn1sps/N1_Service_Provisioning_System/agent/data/rsrcMgr logging
/dev/md/dsk/d4006 ufs 2155872256 /patchset logging
/dev/md/dsk/d4007 ufs 4299161600 /zones/szd035 logging

zone <szd035> within boot environment <BE13>
/dev/md/dsk/d4009 ufs 52428800 /usr/local nodevices
/dev/md/dsk/d4010 ufs 104857600 /var/local nodevices
/dev/md/dsk/d4011 ufs 1153433600 /opt/SUNWn1sps nodevices
/dev/md/dsk/d4012 ufs 104857600 /home nodevices
/dev/md/dsk/d4013 ufs 10485760 /APPL nodevices
/dev/md/dsk/d4031 ufs 6442450944 /var nodevices

So far so good..
In the lucreate command I used /zones as the path for all zones. There is only one zone, so that goes fine.
But when I have more than one zone, I like to keep them separated. The zone path then becomes /zones/szd035.

lucreate -c BE1 -n BE9 -C /dev/dsk/c1t0d0s0 -m /:/dev/md/dsk/d9:ufs -m /var:/dev/md/dsk/d39:ufs -m -:/dev/dsk/c1t3d0s1:swap -m /zones/szd035:/dev/md/dsk/d4030:ufs -m /var:/dev/md/dsk/d4031:ufs:szd035

The Lucreate command ends like this and the ABE is not usable.

Creating compare databases for boot environment <BE9>.
Creating compare database for file system </zones/szd035>.
Creating compare database for file system </var>.
Creating compare database for file system </>.
Updating compare databases on boot environment <BE9>.
Making boot environment <BE9> bootable.
ERROR: Unable to remount ABE <BE9>: cannot make ABE bootable
ERROR: no boot environment is mounted on root device </dev/md/dsk/d9>
Making the ABE <BE9> bootable FAILED.
ERROR: Unable to make boot environment <BE9> bootable.
ERROR: Unable to populate file systems on boot environment <BE9>.
ERROR: Cannot make file systems for boot environment <BE9>.

This looks like a Bug to me, do you agree or have any ideas on this?

Posted by Hans on May 08, 2008 at 06:49 AM EDT #

Sorry, typo in the email address

Posted by Hans on May 08, 2008 at 06:51 AM EDT #

What should happen if you create a zone in the PBE AFTER you create the ABE? I tried this and, after switching to the ABE, the ABE could NOT see the newly created zone!!!

Posted by Stewart on June 30, 2008 at 04:53 PM EDT #
