Solaris 10 Live Upgrade to a Disk Partition
By jhawk on Oct 15, 2006
At some point, you may want to capture an existing installation of Solaris 10. One feature I've found useful for this is Solaris Live Upgrade. It lets you create a backup image of one or more file systems in an alternate boot environment, and it keeps track of where everything lives should you need to boot back into that saved state.
This is useful for a couple of reasons. First, you can capture the installed state of a given system; should something go wrong later, for example a bad patch, you can simply boot back to that known-good state.
In the example below, I format a local disk to be my backup disk, running the format utility to repartition the drive so that a single slice (set up as the "free hog") takes the whole spindle.
# format -e
select disk 2
partition> modify: set slice 6 as the free hog; set the remaining slices to 0
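Before laying down a file system, it's worth confirming the new slice layout. One way (this is Solaris-specific, and the device name here just follows the example above) is to print the volume table of contents from the raw device:

```shell
# Print the VTOC for the repartitioned disk. Slice 2 conventionally
# covers the whole disk; after the modify step, slice 6 should hold
# essentially all of the disk's sectors.
prtvtoc /dev/rdsk/c1t1d0s2
```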
Next, I put a UFS file system on that slice. We need this because lucreate populates the new boot environment with cpio, which requires a mountable target file system.
# newfs /dev/dsk/c1t1d0s6
newfs: construct a new file system /dev/rdsk/c1t1d0s6: (y/n)? y
Warning: 2496 sector(s) in last cylinder unallocated
/dev/rdsk/c1t1d0s6: 143349312 sectors in 23332 cylinders of 48 tracks, 128 sectors
69994.8MB in 1459 cyl groups (16 c/g, 48.00MB/g, 5824 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 98464, 196896, 295328, 393760, 492192, 590624, 689056, 787488, 885920,
Initializing cylinder groups:
super-block backups for last 10 cylinder groups at:
142447776, 142546208, 142644640, 142743072, 142841504, 142939936, 143038368,
143136800, 143235232, 143333664
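As a sanity check on the newfs output above, the reported size follows directly from the sector count: with 512-byte sectors, 143349312 sectors works out to roughly 69995 MB, matching the 69994.8MB figure newfs printed. A quick bit of shell arithmetic confirms it:

```shell
# 143349312 sectors * 512 bytes/sector, expressed in MiB (integer part).
# This should agree with the 69994.8MB that newfs reported.
echo $((143349312 * 512 / 1048576))
```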
The final step is the Live Upgrade itself. In this case, I'm copying the whole "/" file system onto my newly created slice /dev/dsk/c1t1d0s6. The -m option tells lucreate the mount point, the target device, and the file system type (ufs), and -n names the new boot environment "newbootenv".
# lucreate -n newbootenv -m /:/dev/dsk/c1t1d0s6:ufs
Discovering physical storage devices
Discovering logical storage devices
Cross referencing storage devices with boot environment configurations
Determining types of file systems supported
Validating file system requests
Preparing logical storage devices
Preparing physical storage devices
Configuring physical storage devices
Configuring logical storage devices
Analyzing system configuration.
No name for current boot environment.
INFORMATION: The current boot environment is not named - assigning name <c1t0d0s0>.
Current boot environment is named <c1t0d0s0>.
Creating initial configuration for primary boot environment <c1t0d0s0>.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment.
PBE configuration successful: PBE name <c1t0d0s0> PBE Boot Device </dev/dsk/c1t0d0s0>.
Comparing source boot environment <c1t0d0s0> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Searching /dev for possible boot environment filesystem devices
Updating system configuration files.
The device </dev/dsk/c1t1d0s6> is not a root device for any boot environment.
Creating configuration for boot environment <newbootenv>.
Source boot environment is <c1t0d0s0>.
Creating boot environment <newbootenv>.
Creating file systems on boot environment <newbootenv>.
Creating <ufs> file system for </> on </dev/dsk/c1t1d0s6>.
Mounting file systems for boot environment <newbootenv>.
Calculating required sizes of file systems for boot environment <newbootenv>.
Populating file systems on boot environment <newbootenv>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
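Once lucreate finishes populating the new boot environment, the usual follow-up is to verify it and, when you actually want to switch, activate it. A sketch of the standard Live Upgrade commands for this (run on the Solaris host; nothing here is specific to my setup beyond the "newbootenv" name):

```shell
# List all boot environments and their completion/activation status
lustatus

# Mark newbootenv as the environment to boot at the next reboot
luactivate newbootenv

# Live Upgrade documentation recommends init or shutdown (not reboot)
# so its activation hooks run during the shutdown sequence
init 6
```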