Live Upgrade Survival Guide

When I started blogging about Live Upgrade, it was always my intention to post a list of tips. In this companion piece to Common Live Upgrade Problems, I will take a look at several proactive things you can do to make your Live Upgrade experience go more smoothly. Some of these are documented, although not always as obviously as I would like. Others are common sense. A few might surprise you.

Since this is getting to be a long article, here are the tips, with direct links down to the explanation and examples.

  1. Keep your patching and packaging utilities up to date
  2. Check the log files
  3. ZFS pool and file system versioning
  4. Use ZFS for your root file system
  5. Don't save the patch backout files
  6. Start using Live Upgrade immediately after initial installation
  7. Install the Live Upgrade packages from the upgrade media
  8. Use the installcluster script instead of luupgrade -t
  9. Keep your boot configurations simple
  10. Keeping /var/tmp clean
Without any further delay, here are my Live Upgrade Survival Tips.

1. Always make sure your patching and packaging utilities are up to date

This is the most frequent source of beginners' troubles with Live Upgrade, and it is completely unnecessary. As such, if you call me or ask for help over email, my first question to you will be "Have you applied the prerequisite patches ? What about 121430/121431 ?" If your answer is, "I don't know", my response will be "Ok then. I'll wait while you check and apply what is out of date. Call me back when you have finished - long pause - if you are still having troubles."

Live Upgrade frequently stresses the installation tools. New versions are supplied on the update media, but we continue to fix corner cases, even after an update is released. It is important to check for any patches related to the patching or packaging tools and apply them before performing any Live Upgrade activities.

Previously, you had to dig through Infodoc 72099, which was later rewritten as Infodoc 206844. Today, this document lives on as Solaris Live Upgrade Software Patch Requirements. It is a much better read, but it is still an intimidating list of patches to sort through. To ease the effort on system administrators, we now include these patches in the Solaris 10 recommended patch cluster, along with a safe way to install them in the current boot environment.

Note: it is still worth checking the status of the Live Upgrade patch itself (121430 SPARC or 121431 x86).
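A quick way to see which revision of that patch is already installed is showrev(1M). A minimal check, using the x86 patch ID (substitute 121430 on SPARC):

# showrev -p | grep 121431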

In this example, I'm taking a system from Solaris 10 10/08 (u6) to Solaris 10 10/09 (u8). I am already in the directory where the patch cluster was downloaded and unpacked.

# lofiadm -a /export/iso/s10/s10u8-b08a-x86.iso
/dev/lofi/1
# mount -o ro -F hsfs /dev/lofi/1 /mnt
# pkgadd -d /mnt/Solaris_10/Product SUNWluu SUNWlur SUNWlucfg

# ./installcluster --apply-prereq --s10cluster
Setup .


Recommended OS Cluster Solaris 10 x86 (2011.06.17)

Application of patches started : 2011.06.29 11:19:11

Applying 120901-03 ( 1 of 11) ... skipped
Applying 121334-04 ( 2 of 11) ... skipped
Applying 119255-81 ( 3 of 11) ... skipped
Applying 119318-01 ( 4 of 11) ... skipped
Applying 121297-01 ( 5 of 11) ... skipped
Applying 138216-01 ( 6 of 11) ... skipped
Applying 122035-05 ( 7 of 11) ... skipped
Applying 127885-01 ( 8 of 11) ... skipped
Applying 145045-03 ( 9 of 11) ... skipped
Applying 142252-02 (10 of 11) ... skipped
Applying 125556-10 (11 of 11) ... skipped

Application of patches finished : 2011.06.29 11:19:13


Following patches were skipped :
 Patches already applied
 120901-03     119318-01     138216-01     127885-01     142252-02
 121334-04     121297-01     122035-05     145045-03     125556-10
 119255-81

Installation of prerequisite patches complete.

Install log files written :
  /var/sadm/install_data/s10x_rec_cluster_short_2011.06.29_11.19.11.log
  /var/sadm/install_data/s10x_rec_cluster_verbose_2011.06.29_11.19.11.log
After installing the new Live Upgrade packages from the installation media, I ran the installcluster script from the latest recommended patch cluster. The --apply-prereq argument tells the script to install just the required Live Upgrade patches in the current boot environment. Since I have run several live upgrades previously from this boot environment, it is not surprising that all of the patches had already been applied. Your mileage will vary.

The --s10cluster argument is the current patch cluster password. The intent is to make you read the included README, if for no other reason than to obtain the latest cluster password.

# lucreate -n s10u8-baseline
Checking GRUB menu...

Population of boot environment <s10u8-baseline> successful.
Creation of boot environment <s10u8-baseline> successful.

# luupgrade -u -s /mnt -n s10u8-baseline

Things will always go better when you have the proper versions of the patching and packaging utilities. This is not just a Live Upgrade survival tip, but a good one for general system maintenance.

2. Always check the logs. Always, always, always

How many problems could we prevent if we just read the documentation or took a look at the logs left behind after the maintenance activity finishes ? Repetitive success with Live Upgrade may lull you into a false sense of security. Things frequently work so well that if the final output from the command is not proclaiming the end of civilization, we move on to the next step. Bzzzzt. Not so fast.

Patches

For patching, the situation is rather simple. Look at the summary output from luupgrade (or the installcluster script) and see if any patches failed to install properly. If you missed this, you can always go back into the patch logs themselves to see what happened.
# lumount s10u9-2011-06-23 /mnt
# grep -i failed /mnt/var/sadm/patch/*/log
# grep -i error /mnt/var/sadm/patch/*/log
/mnt/var/sadm/patch/118668-32/log:compress(1) returned error code 2
/mnt/var/sadm/patch/119314-42/log:compress(1) returned error code 2
/mnt/var/sadm/patch/119314-42/log:compress(1) returned error code 2
/mnt/var/sadm/patch/119314-42/log:compress(1) returned error code 2
/mnt/var/sadm/patch/124939-04/log:compress(1) returned error code 2
So no patches failed to install, but there were a few errors. A closer look at the log files will tell us that these are harmless, caused when the existing patch backout files failed to compress. That's fine, they were already compressed.
# cat /mnt/var/sadm/patch/119314-42/log

Installation of  was successful.

This appears to be an attempt to install the same architecture and
version of a package which is already installed.  This installation
will attempt to overwrite this package.

/.alt.s10u9-2011-06-23-undo/var/sadm/pkg/SUNWlvmg/save/119314-42/undo: -- file u
nchanged
compress(1) returned error code 2
The SUNWlvmg backout package will not be compressed.
Continuing to process backout package.

Installation of  was successful.

Upgrades

Upgrades are a bit more tricky because there are two different classes of problems: packages that failed to install and configuration files that couldn't be properly upgraded.

The easiest to see are packages that failed to install. These packages are clearly identified at the end of the output from luupgrade. In case you missed them, we will tell you about them again if you try to luactivate(1M) a boot environment where some packages failed to install.

As with patching, if you missed the messages, you can look back at the upgrade log file in the alternate boot environment. You can find it at /var/sadm/system/logs/upgrade_log.

# lumount s10u9-2011-06-23 /mnt
# tail -18 /mnt/var/sadm/system/logs/upgrade_log
Installation of  was successful.

The messages printed to the screen by this upgrade have been saved to:

	/a/var/sadm/system/logs/upgrade_log

After this system is rebooted, the upgrade log can be found in the file:

	/var/sadm/system/logs/upgrade_log


Please examine the file:

	/a/var/sadm/system/data/upgrade_cleanup

It contains a list of actions that may need to be performed to complete
the upgrade.  After this system is rebooted, this file can be found at:

	/var/sadm/system/data/upgrade_cleanup

After performing cleanup actions, you must reboot the system.
	- Environment variables (/etc/default/init)
Updating package information on boot environment .
Package information successfully updated on boot environment .
Adding operating system patches to the BE .
There may be cases where an upgrade isn't able to process a configuration file that has been customized. In that case, the upgrade process will either preserve the original, saving the new configuration file under a different name, or the reverse, saving the existing file under a new name and installing a new one. How can you tell which of these happened ?

Check the upgrade_cleanup log file. It is so important that we mention it twice as luupgrade finishes its output. Here is a snippet from an upgrade from Solaris 10 10/09 to Solaris 10 9/10.

# lumount s10u9-baseline /mnt
# cat /mnt/var/sadm/system/data/upgrade_cleanup

..... lots of output removed for readability ....

/a/kernel/drv/e1000g.conf: existing file preserved, the new version was installed as /a/kernel/drv/e1000g.conf.new

/etc/snmp/conf/snmpd.conf: existing file renamed to /etc/snmp/conf/snmpd.conf~10

/a/etc/mail/sendmail.cf: existing file renamed to /a/etc/mail/sendmail.cf.old
/a/etc/mail/submit.cf: existing file renamed to /a/etc/mail/submit.cf.old

Sendmail has been upgraded to version 8.14.4 .
After you reboot, you may want to run
/usr/sbin/check-hostname
and
/usr/sbin/check-permissions ALL
These two shell-scripts will check for common
misconfigurations and recommend corrective
action, or report if things are OK.

In this example, we see several different actions taken by the installer.

In the case of /kernel/drv/e1000g.conf (the e1000 driver configuration file), the original contents were preserved and a new default file was installed at /kernel/drv/e1000g.conf.new. Let's see what differences exist between the two files.

# lumount s10u9-baseline /mnt
# diff /mnt/kernel/drv/e1000g.conf /mnt/kernel/drv/e1000g.conf.new
# Copyright 2010 Sun Microsystems, Inc.  All rights reserved.
11c11
< # ident	"@(#)e1000g.conf	1.4	06/03/06 SMI"
---
> # ident	"@(#)e1000g.conf	1.5	10/01/12 SMI"
41,45c41,51
<         # These are maximum frame limits, not the actual ethernet frame
<         # size. Your actual ethernet frame size would be determined by
<         # protocol stack configuration (please refer to ndd command man pages)
<         # For Jumbo Frame Support (9k ethernet packet) 
<         # use 3 (upto 16k size frames)
---
> 	#
> 	# These are maximum frame limits, not the ethernet payload size
> 	# (usually called MTU).  Your actual ethernet MTU is determined by frame
> 	# size limit and protocol stack configuration (please refer to ndd
> 	# command man pages)
> 	#
> 	# For Jumbo Frame Support (9k ethernet packet) use 3 (upto 16k size
> 	# frames).  On PCH adapter type (82577 and 82578) you can configure up
> 	# to 4k size frames.  The 4k size is only allowed at 1gig speed, so if
> 	# you select 4k frames size, you cannot force or autonegotiate the
> 	# 10/100 speed options.
The differences in the two files are just comments. That is a common case, and not unexpected since I had not modified the e1000g driver configuration file.

For /etc/snmp/conf/snmpd.conf, the situation was the reverse. The existing copy was saved with a new file extension of ~10. A quick look shows these two files to be identical.
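Verifying that yourself is a one-liner; diff prints nothing when the two files are identical. This assumes the alternate boot environment is still mounted at /mnt from the earlier lumount:

# diff /mnt/etc/snmp/conf/snmpd.conf /mnt/etc/snmp/conf/snmpd.conf~10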

The last example is from our friend sendmail. Since this upgrade includes a new version of sendmail, it is reasonable to expect several differences in the old and new configuration files.

# diff /mnt/etc/mail/sendmail.cf /mnt/etc/mail/sendmail.cf.old

236c236,237
< O DaemonPortOptions=Name=MTA
---
> O DaemonPortOptions=Name=MTA-v4, Family=inet
> O DaemonPortOptions=Name=MTA-v6, Family=inet6
281c282
< # key for shared memory; 0 to turn off, -1 to auto-select
---
> # key for shared memory; 0 to turn off
284,285c285
< # file to store auto-selected key for shared memory (SharedMemoryKey = -1)
< #O SharedMemoryKeyFile
---
As with an earlier example, the output was truncated to improve readability. In this case, I would take all of my local modifications to sendmail.cf and apply those to the new configuration file. Note that the log file suggests running two scripts after I make these modifications to check for common errors.
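A reasonable workflow is to diff the saved copy against the new file, fold any local settings into the new sendmail.cf, and then run the two check scripts the log recommends. A sketch, run from the new boot environment after it has been activated and booted:

# diff /etc/mail/sendmail.cf.old /etc/mail/sendmail.cf
# /usr/sbin/check-hostname
# /usr/sbin/check-permissions ALL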

There are several other actions the installer can take. To learn more about those, take a look at the top portion of the upgrade_cleanup file where they are all explained in great detail, including recommended actions for the system administrator.

3. Watch your ZFS Pool and File System Version Numbers

Thanks to John Kotches and Craig Bell for bringing this one up in the comments. This one is a bit sneaky and it can catch you totally unaware. As such, I've included this pretty high up in the list of survival tips.

New ZFS pool and file system functionality may be added with each Solaris release. These new capabilities are identified by the ZFS zpool and file system version numbers. To find out what versions you are running, and what capabilities they provide, use the corresponding upgrade -v commands. Yes, it is a bit disconcerting at first, using an upgrade command not to upgrade, but to determine which features exist.

Here is an example of each output, for your reference.

# zpool upgrade -v
This system is currently running ZFS pool version 31.

The following versions are supported:

VER  DESCRIPTION
---  --------------------------------------------------------
 1   Initial ZFS version
 2   Ditto blocks (replicated metadata)
 3   Hot spares and double parity RAID-Z
 4   zpool history
 5   Compression using the gzip algorithm
 6   bootfs pool property
 7   Separate intent log devices
 8   Delegated administration
 9   refquota and refreservation properties
 10  Cache devices
 11  Improved scrub performance
 12  Snapshot properties
 13  snapused property
 14  passthrough-x aclinherit
 15  user/group space accounting
 16  stmf property support
 17  Triple-parity RAID-Z
 18  Snapshot user holds
 19  Log device removal
 20  Compression using zle (zero-length encoding)
 21  Deduplication
 22  Received properties
 23  Slim ZIL
 24  System attributes
 25  Improved scrub stats
 26  Improved snapshot deletion performance
 27  Improved snapshot creation performance
 28  Multiple vdev replacements
 29  RAID-Z/mirror hybrid allocator
 30  Encryption
 31  Improved 'zfs list' performance

For more information on a particular version, including supported releases,
see the ZFS Administration Guide.


# zfs upgrade -v
The following filesystem versions are supported:

VER  DESCRIPTION
---  --------------------------------------------------------
 1   Initial ZFS filesystem version
 2   Enhanced directory entries
 3   Case insensitive and File system unique identifier (FUID)
 4   userquota, groupquota properties
 5   System attributes

For more information on a particular version, including supported releases,
see the ZFS Administration Guide.

In this particular example, the kernel supports up to zpool version 31 and ZFS version 5.

Where you can run into trouble is when you create a pool or file system and then fall back to an older boot environment that doesn't support those particular versions. The survival tip is to keep your zpool and zfs versions at a level that is compatible with the oldest boot environment that you will ever fall back to. A corollary to this is that you can upgrade your pools and file systems once you have deleted the last boot environment that is limited to that particular version.

Your first question is probably, "what versions of ZFS go with the particular Solaris releases ?" Here is a table of Solaris releases since 10/08 (u6) and their corresponding zpool and zfs version numbers.

Solaris Release          ZPOOL Version    ZFS Version
Solaris 10 10/08 (u6)    10               3
Solaris 10 5/09 (u7)     10               3
Solaris 10 10/09 (u8)    15               4
Solaris 10 9/10 (u9)     22               4
Solaris 10 8/11 (u10)    29               5
Solaris 10 1/13 (u11)    32               5
Solaris 11 11/11 (ga)    33               5
Solaris 11.1             34               6

Note that these versions apply both to the release itself and to a system that has been patched up to that same level. In other words, a Solaris 10 10/08 system with the latest recommended patch cluster installed might be at the 8/11 (u10) level. You can always use zpool upgrade -v and zfs upgrade -v to make sure.
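If you just want to see where a particular system stands, you can also query the version properties directly instead of reading the entire feature list. A quick sketch, assuming the default root pool name of rpool:

# zpool get version rpool
# zfs get -r version rpool/ROOT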

Now you are wondering how you create a pool or file system at a version different than the default for your Solaris release. Fortunately, ZFS is flexible enough to allow us to do exactly that. Here is an example.

# zpool create testpool testdisk

# zpool get version testpool
NAME      PROPERTY  VALUE    SOURCE
testpool  version   31       default

# zfs get version testpool
NAME      PROPERTY  VALUE    SOURCE
testpool  version   5        -

This pool and associated top level file system can only be accessed on a Solaris 11 system. Let's destroy it and start again, this time making it possible to access it on a Solaris 10 10/09 system (zpool version 15, zfs version 4). We can use the -o version= and -O version= when the pool is created to accomplish this.
# zpool destroy testpool
# zpool create -o version=15 -O version=4 testpool testdisk
# zfs create testpool/data

# zpool get version testpool
NAME      PROPERTY  VALUE    SOURCE
testpool  version   15       local

# zfs get -r version testpool
NAME      PROPERTY  VALUE    SOURCE
testpool  version   4        -
testpool/data  version   4        -

In this example, we created the pool explicitly at version 15, and using -O to pass zfs file system creation options to the top level dataset, we set that to version 4. To make things easier, new file systems created in this pool will be at version 4, inheriting that from the parent, unless overridden by -o version= at the time the file system is created.
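For example, on a pool whose top level dataset is at the current default version, you could still create an individual file system that an older boot environment can mount by overriding the version at creation time. The pool and dataset names here are purely illustrative:

# zfs create -o version=4 bigpool/compat-data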

The last remaining task is to look at how you might upgrade a pool and file system when you have removed an old boot environment. We will go back to our previous example, where we have a version 15 pool and a version 4 dataset. We have removed the Solaris 10 10/09 boot environment and now the oldest is Solaris 10 8/11 (u10). That supports version 29 pools and version 5 file systems. We will use zpool/zfs upgrade -V to set the specific versions to 29 and 5 respectively.

# zpool upgrade -V 29 testpool
This system is currently running ZFS pool version 31.

Successfully upgraded 'testpool' from version 15 to version 29

# zpool get version testpool
NAME      PROPERTY  VALUE    SOURCE
testpool  version   29       local

# zfs upgrade -V 5 testpool
1 filesystems upgraded

# zfs get -r version testpool
testpool       version   5        -
testpool/data  version   4        -

That didn't go quite as expected, or did it ? The pool was upgraded as expected, as was the top level dataset. But testpool/data is still at version 4. It initially inherited that version from the parent when it was created. When using zfs upgrade, only the datasets listed are upgraded. If we wanted the entire pool of file systems to be upgraded, we should have used -r for recursive.
# zfs upgrade -V 5 -r testpool
1 filesystems upgraded
1 filesystems already at this version

# zfs get -r version testpool
NAME           PROPERTY  VALUE    SOURCE
testpool       version   5        -
testpool/data  version   5        -

Now, that's more like it.

For review, the tip is to keep your shared ZFS datasets and pools at the lowest versions supported by the oldest boot environments you plan to use. You can always use upgrade -v to see what versions are available for use, and by using -o version= and -O version=, you can create new pools and datasets that are accessible by older boot environments. This last tip can also come in handy if you are moving pools between systems that might be at different versions.

4. Use ZFS as your root file system

While Live Upgrade can take away a lot of the challenges of patching and upgrading Solaris systems, one small obstacle can make it nearly impossible to deploy - a lack of suitable disk slices. Disk sizes are growing much faster than Solaris itself, so on any relatively modern system there should be adequate space on the internal disks to hold at least two, if not more, boot environments. That can also include a plethora of zones, if sparse root zones are used.

The problem is generally not space, but disk slices (partitions). With a regular disk label, there is a limit of 8 slices (0-7). One of these (slice 2) is taken by the disk utilities to record the size of the disk, so it is not available for our use. Take another for the first swap area, one more for the Solaris Volume Manager (SVM) metadata, or two if you are using Veritas encapsulated root disks. Pretty soon, you run out of slices. Of course, this assumes that you didn't use the entire boot disk to store things such as local data, home directories, backup configuration data, etc.

In other words, if you didn't plan on using Live Upgrade before provisioning the system, it is unlikely that you will have the necessary slices or space available to start using it later. Perhaps in an upcoming posting, I will put together a little cookbook to give some ideas on how to work around this.

The proper long term answer is to use ZFS for your root file system. As we can see in the Solaris 11 Express release notes, ZFS is now integrated with the new packaging and installation tools to simplify system maintenance. All of the capabilities of Live Upgrade are just built in and they work right out of the box. The key to making all of that work smoothly is the ability to rely on certain ZFS features being available for the root file system (snapshot, clone).

Beginning with Solaris 10 10/08, ZFS has been an optional choice for the root file system. Thanks to some early adopters that have helped sort out the corner cases, ZFS is an excellent choice for use as a root file system. In fact, I would go a bit further and suggest that ZFS is the recommended root file system today.

By using ZFS, the disk slice challenges have just gone away. The only question that remains is whether or not the root pool has enough space to hold the alternate boot environment, but even that has a different look with ZFS. Instead of copying the source boot environment, Live Upgrade makes a clone, saving both time and space. The new boot environment only needs enough disk space to hold the changes between boot environments, not the entire Solaris installation.

Time for another example.

# zfs list -r panroot/ROOT
NAME                                                USED  AVAIL  REFER  MOUNTPOINT
panroot/ROOT                                       36.7G  5.06G    18K  legacy
panroot/ROOT/s10u6_baseline                        10.6M  5.06G  6.92G  /
panroot/ROOT/s10u8-baseline                        34.7M  5.06G  7.08G  /
panroot/ROOT/s10u9-2011-06-23                      1.46G  5.06G  7.73G  /
panroot/ROOT/s10u9-2011-06-23-undo                 1.48G  5.06G  7.66G  /mnt
panroot/ROOT/s10u9-baseline                        12.7G  5.06G  7.43G  /
panroot/ROOT/s10x_u6wos_07b                         119M  5.06G  3.87G  /
Each of these ZFS datasets corresponds to a separate boot environment - a bootable Solaris installation. The space required to keep the extra boot environments around is the sum of the dataset used space plus that of its snapshots (not shown in this example). For this single disk configuration, it would be impossible to hold this many full copies of Solaris, but because the boot environments are clones that share unchanged blocks, a df(1) shows that I have space for at least this many, if not more.
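If you want to include the snapshot space in that calculation, list the snapshots under the same container dataset. A quick sketch, using the pool from this example:

# zfs list -t snapshot -r panroot/ROOT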

If you are using ZFS as your root file system, you are just one command away from being able to enjoy all of the benefits of Live Upgrade.
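That one command is lucreate. A minimal sketch, with an illustrative boot environment name:

# lucreate -n s10u9-2011-07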

5. Don't save patch backout files

At first you might think this is a curious recommendation, but stick with me for a few moments.

One of the most important features of Live Upgrade is maintaining a safe fallback in case you run into trouble with a patch or upgrade. Rather than performing surgery on a malfunctioning boot environment, perhaps doing more harm with each patch backed out, why not boot back to a known safe configuration ? One luactivate and an init 0 and you are back to a known operating configuration, where you can take your time performing forensic analysis of your troubled boot environment.

That would make all of those undo.Z files littering up /var/sadm/patch somewhat extraneous. And that gets us to the next reason for not saving the backout files: space - but not in the way you are thinking. Sure, the new boot environment is larger with all of those files lying around, but how much are we talking about ?

More than you think. Quite a bit more, actually.

Here is an example where I have installed the June 23, 2011 recommended patch cluster on a Solaris 10 9/10 system, with and without backout files.

# zfs list -r panroot/ROOT | grep s10u9
panroot/ROOT/s10u9-2011-06-23                      1.46G  4.06G  7.73G  /
panroot/ROOT/s10u9-2011-06-23-undo                 2.53G  4.06G  8.66G  /mnt
panroot/ROOT/s10u9-baseline                        12.7G  4.06G  7.43G  /
That's a gigabyte of difference between the boot environments with and without the undo.Z files. Surely there must be some other explanation. Let's see.
# lumount s10u9-2011-06-23-undo /mnt
# find /mnt/var/sadm/patch -name undo.Z -print | xargs -n1 rm -f 
# zfs list -r panroot/ROOT | grep s10u9
panroot/ROOT/s10u9-2011-06-23                      1.46G  5.06G  7.73G  /
panroot/ROOT/s10u9-2011-06-23-undo                 1.46G  5.06G  7.66G  /mnt
panroot/ROOT/s10u9-baseline                        12.7G  5.06G  7.43G  /
If it were just this one gigabyte, I might not be making such a big deal about it. But did you ever think about those zones you are deploying ? As the zone installer runs through all of those packages for the new non-global zone, it copies all of the applicable undo.Z files, if they are present. This compounds the space problem.

In this example, before removing the undo.Z files, I created a zone on each boot environment, so that I can see the space difference. Remember that these are sparse root zones, and should only be around 100MB in size.

# zfs list -r panroot/zones
NAME                              USED  AVAIL  REFER  MOUNTPOINT
panroot/zones                     761M  4.53G    22K  /zones
panroot/zones/with-undo-files     651M  4.53G   651M  /zones/with-undo-files
panroot/zones/without-undo-files  111M  4.53G   111M  /zones/without-undo-files
That's right - there's a 540MB difference between the two zones, and the only difference is whether or not the patch backout files were preserved. Throw in a couple of dozen zones, and this becomes more than just a nuisance. Not only does it take more space and time to create the zones, it also impacts the zone backups. All so that you can keep around files that you will never use.

When you run the installcluster script, don't forget the -d flag. If you prefer luupgrade -t, the magic sequence is -O "-d".
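Putting that together with the commands used elsewhere in this article, the two forms look something like this; the boot environment name and patch directory are the ones from my examples, so adjust to taste:

# ./installcluster -d -B zippy --s10cluster
# luupgrade -t -n zippy -s /export/patches/10_Recommended-2011-06-23/patches -O "-d"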

6. Start using Live Upgrade immediately after installation

This tip is largely influenced by how you provision your systems, and the frequency with which you might wipe the configuration and start again. My primary system is something of a lab experiment, but it isn't too dissimilar from many development environments I have seen.

Right after I installed Solaris 10 from the 10/08 media, I created a second boot environment, preserving the initial pristine configuration. Rather than reinstalling Solaris from media or a jumpstart server, I could just boot back to the original boot environment, delete the remaining boot environments, and in just a few moments be back to square one.

Another useful boot environment to preserve is the initial customization, done immediately after installation. Users are added, security settings are changed, and a handful of software packages are installed. Preserving this system baseline can be very useful, should your system need to be refreshed in a hurry. In my case, that did happen at 34,000 ft, somewhere over Ohio - but that's a story for another day.

If a system is to live through multiple upgrades, it might be a good idea to encode the Solaris release and the patch cluster in the boot environment name. A taxonomy that works for me is <release>-<patch cluster>. For example, s10u9-baseline would be the initial upgrade to Solaris 10 9/10, and s10u9-2011-06-23 would be that same release, but patched using the June 23, 2011 patch cluster.

Putting this all together, we have something like this.

# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
s10x_u6wos_07b             yes      no     no        yes    -
s10u6-baseline             yes      no     no        yes    -
s10u8-baseline             yes      no     no        yes    -
s10u9-baseline             yes      yes    yes       no     -
s10u9-2011-06-23           yes      no     no        yes    -

The thing I like about this arrangement is that I can quickly jump to a particular Solaris release when a customer asks me if a particular feature exists, or whether some patch has been integrated. I can see how this might be useful for some development environments as well.

7. Remember to install the LU packages from the upgrade media

Using Live Upgrade for an upgrade has an additional step over using it for patching. When performing an upgrade, the Live Upgrade packages from the installation media need to be installed in the current boot environment. After doing this, it is still necessary to check for prerequisite patches, especially if several months have passed since the update was released.

Prior to Solaris 10 8/07 (u4), including Solaris 8 and 9, there were only two Live Upgrade packages: SUNWluu and SUNWlur. Solaris 10 8/07 (u4) and later have a third package, SUNWlucfg. These packages can be found in the Product directory on the installation media.

Here is an example.

# mount -o ro -F hsfs `lofiadm -a /export/iso/s10/s10u9-ga-x86.iso` /mnt
# pkgadd -d /mnt/Solaris_10/Product SUNWluu SUNWlur SUNWlucfg
# cd 
# ./installcluster --apply-prereq --s10cluster
Now we are ready to use lucreate and luupgrade -u to create a new boot environment and upgrade it to the release on the installation media.
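Those two steps, using names consistent with the earlier examples, look like this:

# lucreate -n s10u9-baseline
# luupgrade -u -n s10u9-baseline -s /mnt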

8. Use the installcluster script from the Solaris patch cluster

It would be perfectly acceptable to unpack a Solaris recommended patch cluster and then use luupgrade -t to install the patches into an alternate boot environment. Live Upgrade will build a patch order file based on the metadata in all of the patches, and will generally do the right thing.

Occasionally, it might be more convenient to do things in a slightly different order, or to handle patch installation errors just a bit better. That's what the installcluster script does. For corner cases where the patch order file might be incorrectly generated, the script builds its own installation order, working around some inconvenient situations. It also does a better job with error handling, perhaps trying a different way or sequence to install a problematic patch.

The most important difference between the two installation methods is how they report their progress. Let's take a look at the two and see which you like better. First, luupgrade -t:

# lucreate -n zippy
# luupgrade -t -s /export/patches/10_Recommended-2011-06-23/patches -n zippy
Validating patches...

Loading patches installed on the system...

Done!

Loading patches requested to install.

Architecture for package SUNWstaroffice-core01 from directory SUNWstaroffice-core01.i in patch 120186-22 differs from the package installed on the system.
Version of package SUNWmcosx from directory SUNWmcosx in patch 121212-02 differs from the package installed on the system.
Version of package SUNWmcos from directory SUNWmcos in patch 121212-02 differs from the package installed on the system.
..... lots of similar output deleted .......

The following requested patches are already installed on the system
Requested patch 113000-07 is already installed on the system.
Requested patch 117435-02 is already installed on the system.

..... more output deleted .......

The following requested patches do not update any packages installed on the system
No Packages from patch 121212-02 are installed on the system.
No Packages from patch 125540-06 are installed on the system.
No Packages from patch 125542-06 are installed on the system.

Checking patches that you specified for installation.

Done!

..... yet more output deleted .....

Approved patches will be installed in this order:

118668-32 118669-32 119281-25 119314-42 119758-20 119784-18 119813-13 119901-11
119907-18 120186-22 120544-22 121429-15 122912-25 123896-22 124394-11 124939-04
125138-28 125139-28 125216-04 125333-17 125732-06 126869-05 136999-10 137001-08
137081-05 138624-04 138823-08 138827-08 140388-02 140861-02 141553-04 143318-03
143507-02 143562-09 143600-10 143616-02 144054-04 144489-17 145007-02 145125-02
145797-01 145802-06 146020-01 146280-01 146674-01 146773-01 146803-02 146859-01
146862-01 147183-01 147228-01 147218-01 145081-04 145201-06

Checking installed patches...
Installing patch packages...

Patch 118668-32 has been successfully installed.
See /a/var/sadm/patch/118668-32/log for details

Patch packages installed:
  SUNWj5cfg
  SUNWj5dev
  SUNWj5dmo
  SUNWj5man
  SUNWj5rt

Checking installed patches...
Installing patch packages...

Patch 118669-32 has been successfully installed.
See /a/var/sadm/patch/118669-32/log for details

Patch packages installed:
  SUNWj5dmx
  SUNWj5dvx
  SUNWj5rtx

Checking installed patches...
Executing prepatch script...
Installing patch packages...

Patch 119281-25 has been successfully installed.
See /a/var/sadm/patch/119281-25/log for details
Executing postpatch script...

Patch packages installed:
  SUNWdtbas
  SUNWdtdst
  SUNWdtinc
  SUNWdtma
  SUNWdtmad
  SUNWmfrun

Checking installed patches...
Executing prepatch script...
Installing patch packages...

Patch 119314-42 has been successfully installed.
I think you get the picture. To gauge progress, you have to keep scrolling back to the list of packages, and find the one luupgrade is currently working on. After just a few minutes, the scroll buffer of your terminal window will be exhausted and you will be left guessing how long the operation will take to complete.

Let's compare this to the output from the installcluster script. Note the use of -d from an earlier recommendation.

# lucreate -n zippy
# ./installcluster -d -B zippy --s10cluster
Setup ..............


Recommended OS Cluster Solaris 10 x86 (2011.06.17)

Application of patches started : 2011.07.06 00:25:07

Applying 120901-03 (  1 of 216) ... skipped
Applying 121334-04 (  2 of 216) ... skipped
Applying 119255-81 (  3 of 216) ... skipped
Applying 119318-01 (  4 of 216) ... skipped
Applying 121297-01 (  5 of 216) ... skipped
Applying 138216-01 (  6 of 216) ... skipped
Applying 122035-05 (  7 of 216) ... skipped
Applying 127885-01 (  8 of 216) ... skipped
Applying 145045-03 (  9 of 216) ... skipped
Applying 142252-02 ( 10 of 216) ... skipped
Applying 125556-10 ( 11 of 216) ... skipped
Applying 140797-01 ( 12 of 216) ... skipped
Applying 113000-07 ( 13 of 216) ... skipped
Applying 117435-02 ( 14 of 216) ... skipped
Applying 118344-14 ( 15 of 216) ... skipped
Applying 118668-32 ( 16 of 216) ... success
Applying 118669-32 ( 17 of 216) ... success
Applying 118778-14 ( 18 of 216) ... skipped
Applying 121182-05 ( 19 of 216) ... skipped
Notice the nice clean output. You can always tell where you are in the installation process (nn out of 216) and there is not a lot of extra information cluttering up the controlling terminal.

9. Keep it Simple

This will be the most difficult and controversial of the survival tips, and that's why I have saved it for last. Remember that Live Upgrade must work across three releases and all the various patch combinations and clusters. At the very least, it stresses the installation programs and patching tools.

For a UFS root system, the administrator has a lot of control over where the various file systems are laid out. All it takes is enough -m lines, or if that becomes too unwieldy, a list of slices in a control file passed by -M.
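For reference, a UFS lucreate with a couple of -m lines might look something like this; the device names are purely illustrative:

# lucreate -n s10u9-baseline -m /:/dev/dsk/c1t1d0s0:ufs -m /var:/dev/dsk/c1t1d0s3:ufs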

ZFS provides a significant simplification of the Solaris file systems, and it is expected that system administrators will take advantage of this. Of the Solaris directories (/, /usr, /etc, /var, /opt, /kernel, /platform, /bin, /sbin, /lib, /dev, /devices), only /var is allowed to be broken out into its own dataset. Many legacy operational procedures, some dating back to SunOS 4.x days, will have /usr, /usr/local, /opt, /var and /var/crash split into different file systems. Not only is this not a recommended practice for a ZFS root system, it may actually prevent the use of Live Upgrade. If forced to choose between Live Upgrade and my old configuration habits, I will take Live Upgrade every time.

There will be more

I hope to occasionally revise this article, adding new tips, or reworking some of the current ones. Yes, that would make this more of a Wiki type of document than your typical blog, and that might be where this ends up some day. Until then, feel free to bookmark this page and return to it as often as you need to, especially as you plan out your Live Upgrade activities.

If you have some tips and suggestions, please leave them in the comments. If I can work them up with good examples, I'll add them to the page (with full credit, of course).

Comments:

These are great tips, Bob. I have been happily using LU with ZFS root on my Ultra 25 since U6 shipped. My current U9 BE is still running on the same zpool from 2008. I've LU'ed it dozens of times.

You might recall that I patch with lumount and "pca -R". This has worked reliably for me, even if it wasn't your recommendation. =-) LU issues are few since the U6 days. I wish I could use it everywhere.

One more possible tip would be to take care with zpool upgrade, since you need to be able to boot the oldest BE in the pool. I think there's a warning now, but that's something that users should plan for.

As for patch backout info, that might be an issue for some shops. It's rare that I've had to back out a patch; but, it has happened a couple of times. I can revert BE's, but I also like having more options.

I specify an alternate backout data path to keep /var smaller. I also enable lzjb compression, which helps a bit (even with .Z) and is quite safe for the boot environment. Don't use gzip if you want to boot.

There are a very few undo.Z's that take up most of the space, such as Java SE patches. I usually feel comfortable removing these, since the Java packages can easily be reinstalled.

Bottom line: Even though I keep backout data from hundreds of patches between updates, my modest 72GB zpool has plenty of room left. I only keep a few BE's at a time, as well.

So maybe that's one more tip... if you haven't used an old BE for a long time, evaluate whether you still need it, and clean up where appropriate. -cheers, CSB

P.S. it looks like there have been some recent fixes regarding lucreate with flash archives. I need to play with that some more. Parting thought: I wonder if Solaris 11 will ever get anything like flars?

Posted by Craig S. Bell on July 13, 2011 at 04:06 PM CDT #

Thank-you. The information was very good and helpful. Becoming more cautious of the gzip. Using the previous BEs as recommended.

Some questions I have regarding Live Update BEs and zfs mirrored root pools. (I'm assuming it may also apply to rpool snapshots). I'm using Solaris 10/U9.

After an initial installboot(sparc) or installgrub(x86) created on a zfs mirrored disk, will any Live Update BEs created afterwards, be automatically created or mirrored, onto the mirrored rpool filesystem disk?

Or will the installboot(sparc) or installgrub (x86)have to be re-ran again, on the mirrored disk, to place the new BE(s) onto the mirrored disk(s)?

Does the same analogy apply to zfs snapshots created afterwards?

Hope to hear from you soon, and thank-you for your time.

Bill A.

Posted by Bill A on August 16, 2011 at 11:41 AM CDT #

Bob, LU does not play well on our servers and we wanted to remove it. What I'm unsure of is if Live Upgrade is needed to stand up a zone. There is a SUNWluzone "Live Upgrade (zones support)" package that the SUNWzoneu package depends on (SUNWzoneu is "Solaris Zones (Usr)". If I remove all, can i still create zones? thanks! sfawbush@csc.com

Posted by sfawbush@csc.com on October 12, 2011 at 09:49 AM CDT #

My experience with liveupgrade on zfs is not going quite good with Sun.
Am on Solaris 10 10/09, with containers root path on ZFS too.
I have global and container root on ZFS. Also zone's var is a different data set rpool/testzon1/var. Now seems this is issue, if i merge this with / of container, lucreate works fine. But I cannot take risk of merging zone's var to zone's /, as it may shutdown my local zone if /var/core is full.
I did apply latest lu packages and 121430-57 /121430-67, but same issue.

# zfs list
NAME USED AVAIL REFER MOUNTPOINT
rpool 34.1G 32.8G 112K /rpool
rpool/ROOT 6.95G 32.8G 21K legacy
rpool/ROOT/globalbe 6.95G 32.8G 6.86G /
rpool/ROOT/globalbe/var 94.4M 32.8G 94.4M /var
rpool/dump 4.00G 32.8G 4.00G -
rpool/home 26.5K 32.8G 26.5K /home
rpool/seedzone 5.06G 32.8G 5G /rpool/seedzone
rpool/seedzone@SUNWzone1 2.48M - 5.06G -
rpool/swap 16G 48.8G 16K -
rpool/testzon1 75.4M 32.8G 5G /rpool/testzon1
rpool/testzon1/var 42.5M 32.8G 42.5M legacy

Pls assist if you have some info.

Thanks

Posted by Vimal on November 08, 2011 at 05:06 PM CST #

Love Live Upgrade. Hate Oracle / Sun patch packaging. Maybe I just don't get it, but 121430/1 seems to be critical yet is not included in the patch cluster / set. It's so critical that if you happen to be migrating from UFS with meta disks to ZFS using level 66 or 68 on a Sparc box your system will not boot. How can this patch make it out the door of Oracle like this? Not once, but twice! Also, how can it not be included and applied when the "--apply-prereq" flag is used? I can't quite believe that migration from UFS with meta's to ZFS is exactly a corner case. This still needs work.

Posted by guest on November 21, 2011 at 08:39 AM CST #

As usual I'm about 6 months late to the game... Story of my busy life.

One tip missing with zfs roots. Watch your pool versions carefully -- and don't upgrade your zfs version until you are absolutely certain you aren't going back to the oldest BE that supports that zfs.

I ran into a problem reverting when I made the mistake of updating the pool with a newer version which is (of course) incompatible with the older boot environment.

Posted by John Kotches on December 19, 2011 at 10:36 AM CST #

Another tip...

Watch your ZFS versioning for ZFS root systems. If you upgrade the pool to a version of ZFS unsupported by earlier BEs you cannot reboot into the older BE.

So once you upgrade it, you might as well blow away any boot environments dependent on the older on disk version.

Posted by John Kotches on December 19, 2011 at 11:11 AM CST #

LU needs to honor /opt as a separate zfs fs. just like it honors /var as a separate zfs fs. /opt is where third party applications reside and it should not fill up / partition.

Thanks

Posted by Asif Iqbal on January 06, 2012 at 07:21 AM CST #

I have some problems with live upgrade in an environment with one shared zone.

Here is the configuration of the zone:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE zone PUBLIC "-//Sun Microsystems Inc//DTD Zones//EN" "file:///usr/share/lib/xml/dtd/zonecfg.dtd.1">
<!--
DO NOT EDIT THIS FILE. Use zonecfg(1M) instead.
-->
<zone name="zone-3" zonepath="/export/zones/zone-3" autoboot="true" ip-type="exclusive">
<inherited-pkg-dir directory="/lib"/>
<inherited-pkg-dir directory="/platform"/>
<inherited-pkg-dir directory="/sbin"/>
<inherited-pkg-dir directory="/usr"/>
<network address="" physical="bnxe1"/>
<filesystem special="/data/zone-3" directory="/usr/local" type="lofs"/>
</zone>

At first the creation of a new boot environment is successful.

The installation of the Recommended Patches fails:

root@hdvr-tls-1:/data/10_x86_Recommended# ./installpatchset -B s10x_u8wos_X86-RP-05012012 --s10patchset
ERROR: Failed to mount boot environment 's10x_u8wos_X86-RP-05012012'.
root@hdvr-tls-1:/data/10_x86_Recommended# lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
s10x_u8wos_X86 yes yes yes no -
s10x_u8wos_X86-RP-05012012 yes no no yes -

The delete of the second BE (with RP) fails too.

root@hdvr-tls-1:/data/10_x86_Recommended# ludelete s10x_u8wos_X86-RP-05012012
Checking if last BE on any disk...
ERROR: cannot mount '/.alt.s10x_u8wos_X86-RP-05012012': directory is not empty
ERROR: cannot mount mount point </.alt.s10x_u8wos_X86-RP-05012012> device <rpool/ROOT/s10x_u8wos_X86-RP-05012012>
ERROR: cannot mount root device <rpool/ROOT/s10x_u8wos_X86-RP-05012012> to mount point </.alt.s10x_u8wos_X86-RP-05012012> for boot environment <s10x_u8wos_X86-RP-05012012>
ERROR: cannot mount boot environment by name <s10x_u8wos_X86-RP-05012012>
ERROR: Failed to mount BE <s10x_u8wos_X86-RP-05012012>.
ERROR: Failed to mount BE <s10x_u8wos_X86-RP-05012012>.
cat: cannot open /tmp/.lulib.luclb.dsk.8566.s10x_u8wos_X86-RP-05012012
ERROR: This boot environment <s10x_u8wos_X86-RP-05012012> is the last BE on the above disk.
ERROR: Deleting this BE may make it impossible to boot from this disk.
ERROR: However you may still boot solaris if you have BE(s) on other disks.
ERROR: You *may* have to change boot-device order in the BIOS to accomplish this.
ERROR: If you still want to delete this BE <s10x_u8wos_X86-RP-05012012>, please use the force option (-f).
Unable to delete boot environment.

zfs list shows this result:
NAME USED AVAIL REFER MOUNTPOINT
data 7.12G 12.4G 7.11G /data
rpool 10.6G 8.95G 36K /rpool
rpool/ROOT 6.69G 8.95G 21K legacy
rpool/ROOT/s10x_u8wos_08a 6.69G 8.95G 4.47G /
rpool/ROOT/s10x_u8wos_08a@s10x_u8wos_X86-RP-05012012 95K - 4.47G -
rpool/ROOT/s10x_u8wos_08a/var 2.22G 8.95G 2.22G /var
rpool/ROOT/s10x_u8wos_08a/var@s10x_u8wos_X86-RP-05012012 326K - 2.22G -
rpool/ROOT/s10x_u8wos_X86-RP-05012012 311K 8.95G 4.46G /
rpool/ROOT/s10x_u8wos_X86-RP-05012012/var 144K 8.95G 2.22G /var
rpool/ROOT/s10x_u8wos_X86-RP-05012012/zoneds 42K 8.95G 21K /zoneds
rpool/ROOT/s10x_u8wos_X86-RP-05012012/zoneds/hdvr-tls-3-s10x_u8wos_X86-RP-05012012 21K 8.95G 21K /zoneds/hdvr-tls-3-s10x_u8wos_X86-RP-05012012
rpool/dump 1.50G 8.95G 1.50G -
rpool/export 423M 8.95G 413M /export
rpool/export/home 10.3M 8.95G 10.3M /export/home
rpool/swap 2G 11.0G 16K -
root@hdvr-tls-1:/data# ll /
So how can I do a liveupgrade and at first delete the already created boot environments.

Man thanks

Wolfram

Posted by guest on February 01, 2012 at 08:32 AM CST #

Usually after doing a shutdown -y -g60 -i6 for live upgrade, I then end up having to do another reboot to update the OBP

eg cd /var/pca/142707-01
# ./unix.flash-update.SunFire440.sh

Is there an easy way to consolidate OBP and live upgrade into one reboot?

I know I could do an luactivate, and then set auto-boot? to false and shutdown -y -g0 -i0, but I'm a bit worried the OBP update could overwrite/disrupt the luactivate settings or live upgrade stop/start scripts.

Posted by Paul on February 28, 2012 at 02:16 AM CST #

Hi there.

I'm a beginner with LU and am playing with it on a Solaris 10 8/07 u4 VM with respect to patching the o/s, not upgrading.
The current root "/" is 4.2GB with 858M available. I know, not much but its just a play VM.
I've patched the PBE with the prerequisite LU Starter Patchset bundle and I can create the LU ABE to a new VM disk. But when I patch it with the 10_10_x86_0811_patchset it fills the ABE "/" filesystem.
The syntax I'm running is:

luupgrade -n NEW2_BE -t -s /admin/patches/10_x86_0811_patchset/patches/ `cat patch_order`

I'd like to disable the disable the patch "save mode".
You talk about this in section 5 above. Is the syntax you refer to crafted like so:

luupgrade -O -d NEW2_BE -t -s /admin/patches/10_recommended/patches/ `cat \ patch_order`

What would the syntax look like if I used the "installcluster" script?

Thanks in advance.
Gary

Posted by guest on May 02, 2012 at 07:09 PM CDT #

Hi Gary.

Fortunately, the installcluster script syntax is even less complicated.

# ./installcluster -d ..... rest of the command line .....

Posted by Bob Netherton on June 04, 2012 at 10:40 AM CDT #

Hi,
I'm in the middle of a live upgrade from U8 to U11 and got these errors in the log:
# grep -i fail /a/var/sadm/system/logs/upgrade_log
ERROR: attribute verification of </a/usr/bin/tip> failed
Installation of <SUNWcsu> partially failed.
ERROR: attribute verification of </a/var/spool/locks> failed
ERROR: attribute verification of </a/var/adm/aculog> failed
Installation of <SUNWcsr> partially failed.
ERROR: attribute verification of </a/usr/bin/cu> failed
ERROR: attribute verification of </a/usr/bin/uucp> failed
ERROR: attribute verification of </a/usr/bin/uuglist> failed
ERROR: attribute verification of </a/usr/bin/uuname> failed
ERROR: attribute verification of </a/usr/bin/uustat> failed
ERROR: attribute verification of </a/usr/bin/uux> failed
ERROR: attribute verification of </a/usr/lib/uucp> failed
ERROR: attribute verification of </a/usr/lib/uucp/bnuconvert> failed
ERROR: attribute verification of </a/usr/lib/uucp/remote.unknown> failed
ERROR: attribute verification of </a/usr/lib/uucp/uucheck> failed
ERROR: attribute verification of </a/usr/lib/uucp/uucico> failed
ERROR: attribute verification of </a/usr/lib/uucp/uucleanup> failed
ERROR: attribute verification of </a/usr/lib/uucp/uusched> failed
ERROR: attribute verification of </a/usr/lib/uucp/uuxqt> failed
Installation of <SUNWbnuu> partially failed.
ERROR: attribute verification of </a/etc/uucp> failed
ERROR: attribute verification of </a/var/spool/uucp> failed
ERROR: attribute verification of </a/var/spool/uucp/.Corrupt> failed
ERROR: attribute verification of </a/var/spool/uucp/.Workspace> failed
ERROR: attribute verification of </a/var/spool/uucp/.Xqtdir> failed
ERROR: attribute verification of </a/var/spool/uucppublic> failed
ERROR: attribute verification of </a/var/uucp> failed
ERROR: attribute verification of </a/var/uucp/.Admin> failed
ERROR: attribute verification of </a/var/uucp/.Log> failed
ERROR: attribute verification of </a/var/uucp/.Log/uucico> failed
ERROR: attribute verification of </a/var/uucp/.Log/uucp> failed
ERROR: attribute verification of </a/var/uucp/.Log/uux> failed
ERROR: attribute verification of </a/var/uucp/.Log/uuxqt> failed
ERROR: attribute verification of </a/var/uucp/.Old> failed
ERROR: attribute verification of </a/var/uucp/.Sequence> failed
ERROR: attribute verification of </a/var/uucp/.Status> failed
ERROR: attribute verification of </a/etc/uucp/Config> failed
ERROR: attribute verification of </a/etc/uucp/Devconfig> failed
ERROR: attribute verification of </a/etc/uucp/Devices> failed
ERROR: attribute verification of </a/etc/uucp/Dialcodes> failed
ERROR: attribute verification of </a/etc/uucp/Grades> failed
ERROR: attribute verification of </a/etc/uucp/Limits> failed
ERROR: attribute verification of </a/etc/uucp/Permissions> failed
ERROR: attribute verification of </a/etc/uucp/Poll> failed
ERROR: attribute verification of </a/etc/uucp/Sysfiles> failed
ERROR: attribute verification of </a/etc/uucp/Systems> failed
ERROR: attribute verification of </a/etc/uucp/Dialers> failed
Installation of <SUNWbnur> partially failed.
The following packages failed to install correctly
havud301 #
havud301 # grep -i error /a/var/sadm/system/logs/upgrade_log
pkgadd: ERROR: unable to open </mnt/Solaris_10/Product/SUNWcsu/reloc/usr/bin/tip> for reading: (2) No such file or directory
ERROR: attribute verification of </a/usr/bin/tip> failed
pkgadd: ERROR: unable to create package object </a/var/spool/locks>.
pkgadd: ERROR: unable to create package object </a/var/spool/locks>.
ERROR: attribute verification of </a/var/spool/locks> failed
ERROR: attribute verification of </a/var/adm/aculog> failed
/a/var/log/swupas/swupas.error.log preserved
pkgadd: ERROR: unable to create package object </a/usr/lib/uucp>.
pkgadd: ERROR: unable to open </mnt/Solaris_10/Product/SUNWbnuu/reloc/usr/bin/cu> for reading: (2) No such file or directory
pkgadd: ERROR: unable to open </mnt/Solaris_10/Product/SUNWbnuu/reloc/usr/bin/uucp> for reading: (2) No such file or directory
pkgadd: ERROR: unable to open </mnt/Solaris_10/Product/SUNWbnuu/reloc/usr/bin/uuglist> for reading: (2) No such file or directory
pkgadd: ERROR: unable to open </mnt/Solaris_10/Product/SUNWbnuu/reloc/usr/bin/uuname> for reading: (2) No such file or directory
pkgadd: ERROR: unable to open </mnt/Solaris_10/Product/SUNWbnuu/reloc/usr/bin/uustat> for reading: (2) No such file or directory
pkgadd: ERROR: unable to open </mnt/Solaris_10/Product/SUNWbnuu/reloc/usr/bin/uux> for reading: (2) No such file or directory
pkgadd: ERROR: unable to create package object </a/usr/lib/uucp>.
pkgadd: ERROR: unable to open </mnt/Solaris_10/Product/SUNWbnuu/reloc/usr/lib/uucp/bnuconvert> for reading: (2) No such file or directo ry
pkgadd: ERROR: unable to open </mnt/Solaris_10/Product/SUNWbnuu/reloc/usr/lib/uucp/remote.unknown> for reading: (2) No such file or dir ectory
pkgadd: ERROR: unable to open </mnt/Solaris_10/Product/SUNWbnuu/reloc/usr/lib/uucp/uucheck> for reading: (2) No such file or directory
pkgadd: ERROR: unable to open </mnt/Solaris_10/Product/SUNWbnuu/reloc/usr/lib/uucp/uucico> for reading: (2) No such file or directory
pkgadd: ERROR: unable to open </mnt/Solaris_10/Product/SUNWbnuu/reloc/usr/lib/uucp/uucleanup> for reading: (2) No such file or director y
pkgadd: ERROR: unable to open </mnt/Solaris_10/Product/SUNWbnuu/reloc/usr/lib/uucp/uusched> for reading: (2) No such file or directory
pkgadd: ERROR: unable to open </mnt/Solaris_10/Product/SUNWbnuu/reloc/usr/lib/uucp/uuxqt> for reading: (2) No such file or directory
ERROR: attribute verification of </a/usr/bin/cu> failed
ERROR: attribute verification of </a/usr/bin/uucp> failed
ERROR: attribute verification of </a/usr/bin/uuglist> failed
ERROR: attribute verification of </a/usr/bin/uuname> failed
ERROR: attribute verification of </a/usr/bin/uustat> failed
ERROR: attribute verification of </a/usr/bin/uux> failed
ERROR: attribute verification of </a/usr/lib/uucp> failed
ERROR: attribute verification of </a/usr/lib/uucp/bnuconvert> failed
ERROR: attribute verification of </a/usr/lib/uucp/remote.unknown> failed
ERROR: attribute verification of </a/usr/lib/uucp/uucheck> failed
ERROR: attribute verification of </a/usr/lib/uucp/uucico> failed
ERROR: attribute verification of </a/usr/lib/uucp/uucleanup> failed
ERROR: attribute verification of </a/usr/lib/uucp/uusched> failed
ERROR: attribute verification of </a/usr/lib/uucp/uuxqt> failed
pkgadd: ERROR: unable to create package object </a/etc/uucp>.
pkgadd: ERROR: unable to create package object </a/var/spool/uucp>.
pkgadd: ERROR: unable to create package object </a/var/spool/uucp/.Corrupt>.
pkgadd: ERROR: unable to create package object </a/var/spool/uucp/.Workspace>.
pkgadd: ERROR: unable to create package object </a/var/spool/uucp/.Xqtdir>.
pkgadd: ERROR: unable to create package object </a/var/spool/uucppublic>.
pkgadd: ERROR: unable to create package object </a/var/uucp>.
pkgadd: ERROR: unable to create package object </a/var/uucp/.Admin>.
pkgadd: ERROR: unable to create package object </a/var/uucp/.Log>.
pkgadd: ERROR: unable to create package object </a/var/uucp/.Log/uucico>.
pkgadd: ERROR: unable to create package object </a/var/uucp/.Log/uucp>.
pkgadd: ERROR: unable to create package object </a/var/uucp/.Log/uux>.
pkgadd: ERROR: unable to create package object </a/var/uucp/.Log/uuxqt>.
pkgadd: ERROR: unable to create package object </a/var/uucp/.Old>.
pkgadd: ERROR: unable to create package object </a/var/uucp/.Sequence>.
pkgadd: ERROR: unable to create package object </a/var/uucp/.Status>.
ERROR: attribute verification of </a/etc/uucp> failed
ERROR: attribute verification of </a/var/spool/uucp> failed
ERROR: attribute verification of </a/var/spool/uucp/.Corrupt> failed
ERROR: attribute verification of </a/var/spool/uucp/.Workspace> failed
ERROR: attribute verification of </a/var/spool/uucp/.Xqtdir> failed
ERROR: attribute verification of </a/var/spool/uucppublic> failed
ERROR: attribute verification of </a/var/uucp> failed
ERROR: attribute verification of </a/var/uucp/.Admin> failed
ERROR: attribute verification of </a/var/uucp/.Log> failed
ERROR: attribute verification of </a/var/uucp/.Log/uucico> failed
ERROR: attribute verification of </a/var/uucp/.Log/uucp> failed
ERROR: attribute verification of </a/var/uucp/.Log/uux> failed
ERROR: attribute verification of </a/var/uucp/.Log/uuxqt> failed
ERROR: attribute verification of </a/var/uucp/.Old> failed
ERROR: attribute verification of </a/var/uucp/.Sequence> failed
ERROR: attribute verification of </a/var/uucp/.Status> failed
ERROR: attribute verification of </a/etc/uucp/Config> failed
ERROR: attribute verification of </a/etc/uucp/Devconfig> failed
ERROR: attribute verification of </a/etc/uucp/Devices> failed
ERROR: attribute verification of </a/etc/uucp/Dialcodes> failed
ERROR: attribute verification of </a/etc/uucp/Grades> failed
ERROR: attribute verification of </a/etc/uucp/Limits> failed
ERROR: attribute verification of </a/etc/uucp/Permissions> failed
ERROR: attribute verification of </a/etc/uucp/Poll> failed
ERROR: attribute verification of </a/etc/uucp/Sysfiles> failed
ERROR: attribute verification of </a/etc/uucp/Systems> failed
ERROR: attribute verification of </a/etc/uucp/Dialers> failed

I checked in the iso image and from /mnt/Solaris_10/Product/SUNWbnuu/reloc there is nothing. I replaced the iso with one I've used successfully before and there is still nothing below that point.
Any ideas? I've just kicked the upgrade off again and I'm tempted to try it as those packages are already installed on the existing PBE.

Regards,
Keith.

Posted by guest on July 16, 2013 at 07:19 AM CDT #

