The Live Upgrade Experience


After many hours of pondering whether to start a blog, I'm finally taking the plunge and joining the ranks of the Sun bloggers. For my first post I'd like to share my experience with Live Upgrade.


The Live Upgrade feature of the Solaris operating environment enables you to maintain multiple operating system images on a single system.  An image, called a boot environment or BE, represents a set of operating system and application software packages.  The BEs might contain different operating system and/or application versions.


On a system with the Solaris Live Upgrade software, your currently booted OS environment is referred to as your active, or current, BE.  You have one active, or current, BE; all others are inactive.  You can perform any number of modifications to inactive BEs on the same system, then boot from one of those BEs.  If there is a failure or some undesired behavior in the newly booted BE, Live Upgrade software makes it easy for you to fall back to the previously running BE.


I've found that Solaris Live Upgrade has many capabilities.  Upgrading a system to a new Solaris release couldn't be easier. It involves three basic commands.


lucreate(1M) – create a new boot environment
luupgrade(1M) – install, upgrade, and perform other functions on software on a boot environment
luactivate(1M) – activate a boot environment
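
To give a feel for the whole flow before diving into the details, here is a minimal sketch of an upgrade from start to finish; the BE names, the target slice, and the media mount point are just placeholders, and the real run is shown step by step later in this post.


# 1. copy the running root (/) onto a spare slice as a new BE
root@sunrise1 # lucreate -c currentBE -n newBE -m /:c0t1d0s0:ufs
# 2. upgrade the inactive BE from the Solaris media mounted on /mnt
root@sunrise1 # luupgrade -u -n newBE -s /mnt
# 3. mark the new BE as the one to boot next
root@sunrise1 # luactivate newBE
# 4. reboot with init (not reboot/halt) to switch over
root@sunrise1 # init 6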


Some notes on patching using the Live Upgrade method


Solaris Live Upgrade is not limited to OS upgrades. In fact, it can be used to manage downtime and risk when patching a system as well.


Because you are applying the patches to the inactive boot environment, patching has only minimal impact on the currently running environment. The production applications can continue to run as the patches get applied to the inactive environment. System Administrators can now avoid taking the system down to single-user mode, which is the standard practice for applying the Recommended Patch Clusters.


You can boot the newly patched environment, test your applications, and if you are not satisfied, well, just reboot to the original environment. As of Solaris 10 8/07, patching Solaris Containers with Solaris Live Upgrade is fully supported. This can have a dramatic impact on decreasing downtime during patching.
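
Since I'll cover patching in detail in a future post, here is just a hedged sketch of what that flow might look like: -t is the luupgrade option for adding patches (as opposed to -u for an OS upgrade), and the BE name, patch directory, and patch IDs below are placeholders only.


# create a copy of the running BE on a spare slice (placeholder slice)
root@sunrise1 # lucreate -c sol10u3 -n sol10u3-patched -m /:c0t1d0s0:ufs
# apply the patches to the inactive copy while production keeps running
root@sunrise1 # luupgrade -t -n sol10u3-patched -s /var/tmp/patches 123456-01 123457-01
# switch to the patched BE at the next reboot
root@sunrise1 # luactivate sol10u3-patched
root@sunrise1 # init 6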


Live Upgrade prerequisites when preparing to patch your systems.


System Type Used in Exercise



  • SunFire 220R

  • 2 x 450 MHz UltraSPARC II processors with 4 MB of cache

  • 2048 MB of memory

  • 2 x internal 18 GB SCSI drives

  • Sun StorEdge D1000


Preparing for Live Upgrade



  • I'm starting with a freshly installed Solaris 10 11/06 system; its root file system will serve as the primary boot environment.  I'll begin by logging into the root account and patching the system with the latest Solaris 10 Recommended Patch Cluster, downloaded via SunSolve.

  • I will also install the required patches from the former Sun InfoDoc 72099, now Sun InfoDoc 206844. This document provides information about the minimum patch requirements for a system on which Solaris Live Upgrade software will be used.

  • It is imperative that you ensure the target system meets these patch requirements before attempting to use Solaris Live Upgrade software on your system (a quick way to check is shown just after this list).
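
Here is the quick check I mentioned; the patch ID below is a placeholder, so substitute the IDs that the InfoDoc lists for your release.


root@sunrise1 # showrev -p | grep 123456
# or equivalently
root@sunrise1 # patchadd -p | grep 123456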


To install the latest Solaris Live Upgrade packages you will use a script called liveupgrade20.  The script runs silently and installs the latest Solaris Live Upgrade packages. If you run the following command without the -noconsole and -nodisplay options, you will see the GUI install tool.  As you will see, I ran it with those options.


root@sunrise1 #  pkgrm SUNWlur SUNWluu
root@sunrise1 # mount -o ro -F hsfs `lofiadm -a /apps/solaris-images/s10u4/
SPARC/solarisdvd.iso` /mnt
root@sunrise1 # cd /mnt/Solaris_10/Tools/Installers
root@sunrise1 # ./liveupgrade20 -noconsole -nodisplay

Note: This will install the following packages: SUNWluu, SUNWlur, and SUNWlucfg.
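
If you want a quick sanity check that the packages actually landed, something like this will do:


root@sunrise1 # pkginfo SUNWlucfg SUNWlur SUNWluu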


For the purpose of this exercise, I wanted to see if performing a Live Upgrade of my Solaris 10 11/06 system with four non-global zones would work as documented. I created these four sparse-root zones with a Perl script I called s10-zone-create.   Feel free to use it at your own risk!
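
I'm not reproducing the script itself here, but as a rough sketch of what it does under the hood, creating one sparse-root zone by hand boils down to something like the following (the zoneA.cfg file name is just a placeholder, and a plain zonecfg create already inherits lib, platform, sbin, and usr from the global zone):


root@sunrise1 # cat zoneA.cfg
create
set zonepath=/zones/zoneA
add net
set address=192.168.1.27
set physical=qfe0
end
verify
commit
root@sunrise1 # zonecfg -z zoneA -f zoneA.cfg
root@sunrise1 # zoneadm -z zoneA install
root@sunrise1 # zoneadm -z zoneA boot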


The Perl script generated the following output.  Note: I ran the script once for each zone I wanted to create.


root@sunrise1 # ./s10-zone-create 

Solaris 10 Zone Creator

------------------------------------------------------------------------------

What is the name of the new zone to be created?  zoneA

What is the name of the directory to be used for this zone? [zoneA]

What is the name of the ethernet interface to bind the IP address to? (ex: bge0) qfe0

Do you want to inherit standard directories (lib,platform,sbin,usr) from the global zone? [yN] y

Please verify the following information:

Zone name: zoneA

Zone Directory: zoneA

Zone IP Address: 192.168.1.27

Ethernet Interface: qfe0

Inherit Directories: y

Are these entries correct? [yN] y

Creating the zone...

Done!

Installing the zone, this will take awhile ...

Preparing to install zone <zoneA>

Creating list of files to copy from the global zone.

Copying <2543> files to the zone. 

Initializing zone product registry.

Determining zone package initialization order.

Preparing to initialize <1059> packages on the zone.

Zone <zoneA> is initialized.

Done!

Now booting the zone...

Done!

Zone setup complete, connect to the virtual console with the following command:

-> zlogin -C -e @ zoneA <-   *Exit by typing @.



As instructed, I logged onto the zone's console using the zlogin (1M) command above. I do this because once the zone has been booted, the normal system identification process for a newly installed Solaris OS instance is started.  You will be asked configuration questions concerning the naming service, timezone, and other system parameters that should be answered as appropriate for your site.  I've omitted that output here just to save some time.
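
If you would rather skip those interactive questions entirely (I didn't bother for this small exercise), one common approach is to drop a sysidcfg file into the zone before its first boot. A minimal sketch, with placeholder values you would adjust for your site:


root@sunrise1 # cat /zones/zoneA/root/etc/sysidcfg
system_locale=C
terminal=vt100
network_interface=PRIMARY { hostname=zoneA }
security_policy=NONE
name_service=NONE
nfs4_domain=dynamic
timezone=US/Eastern
root_password=<encrypted-password-string>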


Before starting the actual Live Upgrade process, I ran the following command just so that you can see that my four zones are running.


root@sunrise1 # zoneadm list -cv

  ID NAME     STATUS     PATH            BRAND    IP
   0 global   running    /               native   shared
   1 zoneA    running    /zones/zoneA    native   shared
   2 zoneB    running    /zones/zoneB    native   shared
   3 zoneC    running    /zones/zoneC    native   shared
   4 zoneD    running    /zones/zoneD    native   shared


 Now for the Live Upgrade


Now we are ready to make a copy of the root (/) file system.  Figure 1 below shows what I started with. I used the two internal disks, and although they are not very fast, I created a partition on the second disk the same size as the root (/) partition on the first disk.


Figure 1  Before running the lucreate command


The object of this next step is to create a copy of the current root (/) file system so that you have what is pictured in Figure 2 below.


Figure 2  After running the lucreate command


You will need to name both the current (active) boot environment and the copy (the inactive boot environment).  The partition you will use for the copy must not appear in /etc/vfstab; in other words, it should not have a file system in use on it.  In this case I've named the active environment "sol10u3".


Because the goal of this exercise is to upgrade to Solaris 10 8/07, I've named the new inactive boot environment "sol10u4". You will also need to specify that you want to make a copy of the root (/) file system and where that new partition is located. For my exercise I used c0t1d0s0.  Also note that I've specified the file system type as UFS.


The last three pieces of information are concatenated for the -m argument.
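
The general form is -m mountpoint:device:fs_options, and the option can be repeated. I only have a single root slice here, but if, say, /var lived on its own slice, the command would simply carry an extra -m, something like this (the second slice is a placeholder):


root@sunrise1 # lucreate -c sol10u3 -n sol10u4 -m /:c0t1d0s0:ufs -m /var:c0t1d0s1:ufs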



To create the new (inactive) boot environment, I issued the following command:


root@sunrise1 # lucreate -c sol10u3 -n sol10u4 -m /:c0t1d0s0:ufs

The command generated output similar to the following. The time to completion for this command will vary, depending on the speed of the system and its disks. In my case my SunFire 220R isn't very fast!! :(


Wed Oct 24 10:28:27 EDT 2007
root@sunrise1 # lucreate -c sol10u3 -n sol10u4 -m /:c0t1d0s0:ufs
Discovering physical storage devices
Discovering logical storage devices
Cross referencing storage devices with boot environment configurations
Determining types of file systems supported
Validating file system requests
The device name <c0t1d0s0> expands to device path </dev/dsk/c0t1d0s0>
Preparing logical storage devices
Preparing physical storage devices
Configuring physical storage devices
Configuring logical storage devices
Analyzing system configuration.
Comparing source boot environment <sol10u3> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Searching /dev for possible boot environment filesystem devices

Updating system configuration files.
The device </dev/dsk/c0t1d0s0> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <sol10u4>.
Source boot environment is <sol10u3>.
Creating boot environment <sol10u4>.
Creating file systems on boot environment <sol10u4>.
Creating <ufs> file system for </> in zone <global> on </dev/dsk/c0t1d0s0>.
Mounting file systems for boot environment <sol10u4>.
Calculating required sizes of file systems for boot environment <sol10u4>.
Populating file systems on boot environment <sol10u4>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
Creating shared file system mount points.
Copying root of zone <zoneA>.
Copying root of zone <zoneB>.
Copying root of zone <zoneC>.
Copying root of zone <zoneD>.
Creating compare databases for boot environment <sol10u4>.
Creating compare database for file system </>.
Updating compare databases on boot environment <sol10u4>.
Making boot environment <sol10u4> bootable.
Population of boot environment <sol10u4> successful.
Creation of boot environment <sol10u4> successful.
root@sunrise1 # date
Wed Oct 24 11:26:16 EDT 2007

The lucreate took a total of approximately 58 minutes.



At this point, you're ready to run the luupgrade command; however, if you encountered problems with the lucreate, you may find it very useful to run the lustatus (1M) utility to see the state of the boot environments.


In my case everything went as planned. Here is the output of the lustatus (1M) utility from my server.


root@sunrise1 # lustatus

Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
sol10u3                    yes      yes    no        no     -
sol10u4                    yes      no     yes       no     -

After the new boot environment is created, you can then begin the upgrade procedure.


As shown in the example below, I will upgrade from the Solaris 10 11/06 release. After completing this next step you will have the situation depicted in Figure 3.



Figure 3  After running the luupgrade command



To upgrade to the new Solaris release, you will use the luupgrade (1M) command with the -u option. The -s option identifies the path to the media. In my case I already had it mounted on /mnt, as shown above in the section Preparing for Live Upgrade.


 root@sunrise1 # date

Wed Oct 24 11:58:51 EDT 2007 

The command line would be as follows:


root@sunrise1 # luupgrade -u -n sol10u4 -s /mnt

The command generated the following output:

183584 blocks
miniroot filesystem is <lofs>
Mounting miniroot at </mnt/Solaris_10/Tools/Boot>
Validating the contents of the media </mnt>.
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains <Solaris> version <10>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE <sol10u4>.
Determining packages to install or upgrade for BE <sol10u4>.
Performing the operating system upgrade of the BE <sol10u4>.
CAUTION: Interrupting this process may leave the boot environment unstable or unbootable.
Upgrading Solaris: 16% completed
Upgrading Solaris: 31% completed
Upgrading Solaris: 47% completed
Upgrading Solaris: 62% completed
Upgrading Solaris: 78% completed
Upgrading Solaris: 93% completed
Upgrading Solaris: 100% completed
Installation of the packages from this media is complete.
Updating package information on boot environment <sol10u4>.
Package information successfully updated on boot environment <sol10u4>.
Adding operating system patches to the BE <sol10u4>.
The operating system patch installation is complete.
INFORMATION: The file </var/sadm/system/logs/upgrade_log> on boot environment <sol10u4> contains a
log of the upgrade operation.
INFORMATION: The file </var/sadm/system/data/upgrade_cleanup> on boot environment <sol10u4> contains a
log of cleanup operations required.
INFORMATION: Review the files listed above. Remember that all of the files are located on boot environment
<sol10u4>. Before you activate boot environment <sol10u4>, determine if any additional system maintenance is required or
if additional media of the software distribution must be installed.
The Solaris upgrade of the boot environment <sol10u4> is complete.
root@sunrise1 # date
Wed Oct 24 18:05:27 EDT 2007

The luupgrade took a total of approximately 6 hours and 7 minutes.

The luupgrade process took a considerable amount of time because of the speed of my server and its SCSI disks. Yup!! It's an old timer, like myself.  :)
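
The INFORMATION lines above point at log files that live on the new boot environment rather than the running one. One way to review them before activating (a sketch; /a is an arbitrary mount point) is to mount the inactive BE with lumount (1M):


root@sunrise1 # lumount sol10u4 /a
root@sunrise1 # more /a/var/sadm/system/logs/upgrade_log
root@sunrise1 # more /a/var/sadm/system/data/upgrade_cleanup
root@sunrise1 # luumount sol10u4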


Now I have created a new (inactive) boot environment and upgraded it to the Solaris 10 8/07 release.  In the next two steps, you will first tell Solaris to use the new environment the next time you boot, and then you will reboot the system so that it comes up running the new environment, as shown below in Figure 4.



Figure 4  After running the luactivate command and rebooting




At this point you're ready to indicate which boot environment is to be used the next time you reboot. In my case the name is sol10u4.


The command line is as follows:


root@sunrise1 # luactivate sol10u4

The output of the luactivate command is as follows:


**********************************************************************

The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.

**********************************************************************

In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:

1. Enter the PROM monitor (ok prompt).

2. Change the boot device back to the original boot environment by typing:


     setenv boot-device /pci@1f,4000/scsi@3/disk@0,0:a


3. Boot to the original boot environment by typing:

     boot

**********************************************************************

Activation of boot environment <sol10u4> successful.


Please note the warning in the output above: the proper command to use when rebooting is "init" or "shutdown".


One final note: if you are finished with a boot environment and no longer need it, make sure that you dispose of it properly. The proper way is to use the ludelete (1M) command. If you destroy the boot environment through some other command or mechanism (e.g., newfs or rm -rf), you risk leaving your system unable to boot.
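
For example, once I'm comfortable running on sol10u4 and decide the old environment has to go, the cleanup would look something like this (you can't delete the BE you're currently running, so this only applies after booting into sol10u4):


root@sunrise1 # lustatus
root@sunrise1 # ludelete sol10u3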


Oh yeah!!  I was quite surprised that Live Upgrade was nice enough to change my OBP boot-device entry for me. Wow, that's kool!!


boot-device /pci@1f,4000/scsi@3/disk@1,0:a   (previously boot-device /pci@1f,4000/scsi@3/disk@0,0:a)
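
You can confirm the new setting from within Solaris without dropping to the ok prompt; the output should now show the path to the second disk, which holds sol10u4.


root@sunrise1 # eeprom boot-device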

Reboot your system


root@sunrise1 # init 6

Success!!!  :)  Give it a try!!



What was kool about this entire exercise was, for one, the fact that it worked the first time as documented, and that for customers the reboot would be the only downtime they would experience using the Live Upgrade method.


On a personal note, I'd like to thank Bob Netherton, Jeff Victor, Steffen Weiberle, and Linda Kately for their insight on this subject. Thank you so much!  Your knowledge is incredible!!


Well, until next time.  My next post will be on patching using Live Upgrade.  Bye for now.


Any questions, feel free to call me or send me an email.






