Tuesday Apr 29, 2014

My own private crash-n-burn farm: using kernel zones for speedy testing

I've spent most of the last two years working on a complete rewrite of the ON consolidation build system for Solaris 12. (We called it 'Project Lullaby' because we were putting nightly to sleep). This was a massive effort for our team of 5, and when I pushed the changes at the end of February we wound up with about 121k lines of change over close to 6000 files. Most of those were Makefiles (so you can understand why I'm now scarred!).

We had to do an incredible amount of testing for this project. Introducing new patterns and paradigms for the complete Makefile hierarchy meant that we had to be very careful to ensure that we didn't break the OS. To accomplish this we used (and overloaded) the resources of the DIY test group and also made use of a feature which is now available in 11.2 - kernel zones.

Kernel zones are a type-2 hypervisor, so you can run a separate kernel in them. If you've used non-global zones (ngz) on Solaris in the past, you'll recall the niggle of having to keep those ngz in sync with the global zone when it comes to SRUs and releases.

Using kernel zones offered us several advantages: I could run tests whenever I wanted on my desktop system (a newish quad-core Intel Core i5 with 32GB of RAM), I could quickly test updates of the newly built bits, I could keep the zone at the same revision while booting the global zone with a new build, and (this is my favourite) I could suspend the zone while rebooting the global zone.

Our testing of Lullaby in kernel zones had two components: first, does it actually boot? And second, assuming I could boot the kz with Lullaby-built bits, could I then build the workspace inside the kz and boot those new bits in that same kernel zone?

Creating a kernel zone is very, very easy:

limoncello: # zonecfg -z crashs12 create -t SYSsolaris-kz
limoncello: # zoneadm -z crashs12 install -x install-size=40g
limoncello: # zoneadm -z crashs12 boot

I could have used one of the example templates (eg /usr/share/auto_install/sc_profiles/sc_sample.xml) but for this use-case I just logged in and created the necessary users, groups, automount entries and installed compilers by hand. (Meaning pkg install rather than tar xf).
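For the record, the by-hand configuration amounted to something like the sketch below. The package, user, group and automount names are illustrative, not a record of what I actually typed:

```shell
# inside the freshly installed kernel zone (all names illustrative)
zlogin crashs12

pkg install developer/gcc developer/build/onbld   # compilers + build tools
groupadd staff
useradd -g staff -m -d /export/home/jmcp -s /usr/bin/bash jmcp

# automount map entry so workspaces served by the global zone are reachable
echo "/ws	auto_ws" >> /etc/auto_master
svcadm restart svc:/system/filesystem/autofs
```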

To start with, I ensured that crashs12 was running the same development build as my global zone, but I removed the various hardware drivers I had no need for.

The very first test I ran in crashs12 was of libc and the linker subsystem. Building libc is rather tricky from a make(1S) point of view, since several of its files are generated at build time rather than kept under source control. The linker is even more complex - there's a reason we refer to Rod and Ali as the 'linker aliens'! Once I had my fresh kz configured appropriately, I created a new BE, mounted it, blatted the linker and libc bits onto it, and rebooted. I was really, really happy to see the kz come up and give me a login prompt.
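The BE shuffle for that test goes roughly like this with beadm(1m); the BE name and the proto-area paths below are invented for the example:

```shell
# create a throwaway BE, overlay the freshly built bits, boot it
beadm create libc-test
beadm mount libc-test /mnt

# blat the new linker and libc bits from the build proto area onto the BE
cp /ws/lullaby/proto/root_i386/lib/ld.so.1   /mnt/lib/
cp /ws/lullaby/proto/root_i386/lib/libc.so.1 /mnt/lib/

beadm umount libc-test
beadm activate libc-test
init 6
```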

Several weeks after that we got to the point of successful full builds, so I installed the Lullaby-built bits and rebooted:

root@crashs12:~# pkg publisher
PUBLISHER                    TYPE     STATUS P LOCATION
nightly                      origin   online F file:///net/limoncello/space/builds/jmcp/lul-jmcp/packages/i386/nightly-nd//repo.osnet/
extra (non-sticky, disabled) origin   online F file:///space/builds/jmcp/test-lul-lul/packages/i386/nightly/repo.osnet-internal/
solaris (non-sticky)         origin   online T http://internal/repo
root@crashs12:~# pkg update --be-name lul-test-1
root@crashs12:~# reboot

This booted, too, but I couldn't get any network-related tests to work. Couldn't ssh in or out. Couldn't for the life of me work out what I'd done wrong in the build, so I asked the linker aliens and Roger for help - they were quick to realise that in my changes to the libsocket Makefiles, I'd missed the filter option. Once I fixed that, things were back on track.

Now that Lullaby is in the gate and I'm working on my next project, I'm still using crashs12 for spinning up a quick test "system" and I'm migrating my 11.1 Virtualbox environment to an 11.2 kernel zone. The 11.2 zone, incidentally, was configured and installed in about 4 minutes using an example AI profile (see above) and a unified archive.

Kernel zones: you know you want them.

Thursday Oct 06, 2011

Announcement: inaugural meeting of Brisbane Oracle Solaris User Group

I'm delighted to announce the inaugural meeting of the Brisbane Oracle Solaris User Group (BrOSUG).

Join us for pizza as we discuss Installation and Packaging in Solaris 11.

Many of you are familiar with Solaris Jumpstart, SysV packages and patches. Solaris 11 changes all of this. Come along and find out how it has changed, and why.

What: Brisbane Oracle Solaris User Group (BrOSUG).
When: 17 October 2011, 12:30pm - 2:30pm
Where: Oracle House, 300 Ann St, Brisbane.
(Come to reception on level 14).

Please confirm your intention to attend with:

James McPherson James dot McPherson-AT-oracle-DOT-com
+61 7 3031 7173

Mark Farmer Mark dot Farmer-AT-oracle-DOT-com
+61 7 3031 7106

Thursday Oct 16, 2008

On stmsboot(1m)

When I started working with SAS (November 2006), our group's project (MPxIO support in mpt(7d)) was already off and running. The part that I was responsible for was booting - which meant fixing stmsboot(1m). Initially I was disappointed that I'd been given what I thought was such a small part of the problem to work on, but I quickly realised that there was a lot more to it than my first impression revealed.

Since we were under some pretty tight time pressure, I didn't really have time to do a redesign of stmsboot to make it more sustainable. The expected arrival of ZFS root meant that there was also some uncertainty about how that would tie in - nothing was nailed down, so I had to make some guesses and keep my eyes peeled for when ZFS root eventuated and then see what further changes needed to be made. We putback those changes into snv_63, and had a few followups in subsequent builds, and all seemed ok.

Then in February 2008 there was a thread on storage-discuss about how to obtain a particular device's lun number after running devfsadm -C (or boot -r, for that matter). A little digging showed that it would indeed be possible to provide that information - if you were willing to make use of a scsi_vhci ioctl() or two. Unfortunately that means using HBA-private data, so it's quite unsupportable. But it got me thinking, and I logged 6673281 stmsboot needs more clues as a placeholder.

Then a short while later I noticed that the -L and -l options to stmsboot(1m) were broken as of snv_83 (nobody had worked on stmsboot(1m) since I made my changes in build 63). Since this is an essential part of the actual interface, I figured it was important enough to log (6673278 stmsboot -L/l is broken on snv_83 and later), but I was unable to do much about it until I got Pluggable fwflash(1m) out of the way first. I was also annoyed to find that there were problems with updating /etc/vfstab (6691090 stmsboot -d failed to update /etc/vfstab with non-mpxio device paths). Things were not looking good, and I was watching code rot for real. Staggering!

The kicker was 6707555 stmsboot is lost in a ZFS root world, and so I knew what I had to do: redesign and rewrite stmsboot from scratch.

The Redesign

I started with 4 guiding principles:

  • require only one reboot

  • listing of MPxIO-enabled devices should be *fast*

  • minimise filesystem-dependent lookups, and

  • use libdevinfo and devlinks as much as possible.

I then looked at the overall effects that we needed to achieve with the stmsboot(1m) command:

  • enable MPxIO for all MPxIO-capable devices

  • enable MPxIO for specific MPxIO-capable drivers

  • enable MPxIO for specific MPxIO-capable HBA ports

  • disable MPxIO for all MPxIO-capable devices

  • disable MPxIO for specific MPxIO-capable drivers

  • disable MPxIO for specific MPxIO-capable HBA ports

  • update MPxIO settings for all MPxIO-capable drivers

  • update MPxIO settings for specific MPxIO-capable drivers

  • list the mapping between non-MPxIO and MPxIO-enabled devices

  • list device guids, if available

What does the old code do?

The code makes use of a shell script (/usr/bin/stmsboot), a private binary (/lib/mpxio/stmsboot_util) and an SMF service (/lib/svc/method/mpxio-upgrade) which runs on reboot.

The private binary does the heavy lifting, providing a way for the shell script and SMF service to determine what a device's new MPxIO or non-MPxIO mapping is. It also walks the device link entries in /dev/rdsk when called with the -L or -l $controller options, printing any device mappings. Finally, the private binary handles the task of re-writing /etc/vfstab.

The shell script (stmsboot) is the user interface part of the facility. Its chief task is to edit the driver.conf(4) files for the supported drivers (fp(7d) and mpt(7d)), and to set the eeprom bootpath variable on the x86/x64 platform when disabling or updating MPxIO configurations. (Failing to do this would prevent an x86/x64 host from booting.) The shell script also makes backup copies of modified files, and creates a file with instructions on how to recover a system which fails to boot properly after running the stmsboot script.
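To illustrate the two x86/x64-critical pieces the script touches (the bootpath value below is made up, not from any real system):

```shell
# Illustrative only: what the stmsboot script adjusts on x86/x64.

# the global MPxIO switch lives in the driver.conf(4) file;
# "no" here means MPxIO is *enabled* for fp(7d)
grep '^mpxio-disable' /kernel/drv/fp.conf

# when disabling MPxIO, the firmware must be pointed back at the
# non-MPxIO physical path of the boot disk (value is made up)
eeprom bootpath='/pci@0,0/pci1022,7450@2/pci1000,3060@3/sd@0,0:a'
```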

The SMF service is armed by the stmsboot script, and runs on reboot. It mounts /usr and root as read-write, invokes the private /lib/mpxio/stmsboot_util binary to rewrite the /etc/vfstab, updates the dump configuration and any SVM metadevice (/dev/md) device mappings, and then (in the old form) reboots the system.

What has changed

The new design makes use of a private cache of device data (stored as an nvlist) gathered from libdevinfo(3LIB) functions, and obviates the requirement for a second reboot: the vfstab rewriting function is now reliable because we use the kernel's concept of which devices it has attached, so we're always consistent. In addition, the new design provides a significant improvement in execution time when listing device mappings - rather than trawling through device links on disk, we use libdevinfo functions and our private cache to provide the required information.

The data that we store in the cache for each device attached to an MPxIO-capable controller is:

  • its devid (eg, id1,sd@n5000cca00510a7cc)

  • its physical path (eg, /pci@0,0/pci10de,5c@9/pci108e,4131@1/sd@0,0)

  • its devlink path (eg, /dev/dsk/c2t0d0, which becomes c2t0d0)

  • its MPxIO-enabled devlink path (eg, /dev/rdsk/c3t500000E011637CF0d0,
    which becomes c3t500000E011637CF0d0)

  • whether MPxIO is enabled for the device in the running system (as a boolean_t B_TRUE or B_FALSE)

These are stored as nvlist properties:

#define NVL_DEVID "nvl-devid"
#define NVL_PATH "nvl-path"
#define NVL_PHYSPATH "nvl-physpath"
#define NVL_MPXPATH "nvl-mpxiopath"
#define NVL_MPXEN "nvl-mpxioenabled"

When we've found an MPxIO-capable device, we check whether it exists in our cached version, and if not, we create an nvlist containing the above properties and keyed off the device's devid. This nvlist is added to the global nvlist. In order to speed operations later, we also add some inverse mappings to the global nvlist:

devfspath -> devid
current devlink path -> devid
current MPxIO-enabled path -> devid
device physical path -> devid

This allows us to search for any of those paths and get the appropriate devid back, the nvlist of which we can then query for the desired properties.
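The lookup scheme is easy to model with shell associative arrays. This is a toy sketch of the idea only - the real code is C against libnvpair - and every name in it is invented:

```shell
#!/bin/bash
# Toy model of the stmsboot_util cache (all names invented).
# The per-device record is keyed by devid; the extra tables map
# each path form back to that devid.
declare -A physpath_of by_devlink by_physpath

devid="id1,sd@n5000cca00510a7cc"
physpath="/pci@0,0/pci10de,5c@9/pci108e,4131@1/sd@0,0"
devlink="c2t0d0"

# forward property, keyed by devid
physpath_of[$devid]=$physpath

# inverse mappings: any path form resolves back to the devid
by_devlink[$devlink]=$devid
by_physpath[$physpath]=$devid

# look up by devlink, then query the devid-keyed record for a property
found=${by_devlink[c2t0d0]}
echo "physical path: ${physpath_of[$found]}"
```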

When the mpxio-upgrade service is invoked, we need to determine the mapping for the root device in the currently running system and mount that device read-write in order to continue with the boot process. We do this by reading the entry for root in /etc/vfstab and finding the physical path of that device in the running system. We mount /devices/$physicalpath as read-write, then re-invoke stmsboot_util to find the devlink (/dev/dsk...) path for root, which we then remount. This two-remount approach is required because the devlink facility is not available at this early stage of the boot process (devfsadm is not running yet), so it is the only way to determine what the root device is and mount it read-write.

Once root and /usr have been remounted, we can then invoke stmsboot_util to re-write the vfstab. This is a fairly simple process of scanning through each line of the file and finding those which start with /dev/dsk, determining their mapping in the current system, and re-writing that line. As a safeguard, the new version of the vfstab is written to /etc/mpxio, and we let the mpxio-upgrade script take care of copying that file to /etc/vfstab. Once the vfstab has been updated, we run dumpadm, and if necessary, metadevadm. Finally, we re-generate the system's boot archive - which in fact is the longest single operation of all!
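The scan-and-rewrite pass can be sketched like this. The real stmsboot_util does its mapping lookups against the nvlist cache; this toy version uses a hard-coded awk table, and all the device names in it are made up:

```shell
# Build a sample vfstab fragment, then rewrite the /dev/dsk entries
# using a mapping table (device names here are entirely made up).
cat > /tmp/vfstab.sample <<'EOF'
/dev/dsk/c2t0d0s0	/dev/rdsk/c2t0d0s0	/	ufs	1	no	-
/dev/dsk/c2t1d0s7	/dev/rdsk/c2t1d0s7	/export	ufs	2	yes	-
EOF

awk '
BEGIN {
	# old controller path -> MPxIO-enabled path (stand-in for the cache)
	map["c2t0d0"] = "c3t500000E011637CF0d0"
	map["c2t1d0"] = "c3t500000E011637CF1d0"
}
# only lines whose first field is a /dev/dsk path get rewritten
$1 ~ /^\/dev\/dsk\// { for (old in map) gsub(old, map[old]) }
{ print }
' /tmp/vfstab.sample > /tmp/vfstab.new

cat /tmp/vfstab.new
```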

After this, we can disable the svc:/system/device/mpxio-upgrade:default service and exit.

When the mpxio-upgrade script exits, the svc:/system/filesystem/usr:default service takes over and the boot process completes normally - with the new device mappings already active and working. No second reboot required!

I'm not going to claim that the new form of stmsboot(1m) is a beautiful thing, but I do believe that the architecture and implementation that it has now are much more solid and should be easier to extend in the future if required.

Update (17 October 2008, 07:08 Australia/Brisbane): Jason's comment reminded me that I should have mentioned - I pushed these changes into build snv_99 and you can see them in the changelog.

See also links:

Solaris Express SAN Configuration and Multipathing Guide

Solaris ZFS Administration Guide

Linker and Libraries Guide


devfsadm(1m) stmsboot(1m) zfs(1m) zpool(1m)

libdevinfo(3LIB) libnvpair(3LIB) libdevid(3LIB)

fp(7d) mpt(7d) scsi_vhci(7d)


Saturday Jul 19, 2008

mega_sas project page now live

I've helped out with the mega_sas project here and there over the last 7 or so months, and so I'm really glad to see that the project page is now live and unhidden.

We'll be adding to the content on the page as time goes by, updating progress towards the project goals which we outlined in the project proposal email.

Friday Jul 11, 2008

Another (large) step along the road

Just saw this in my mailbox from mjnelson:

Author: mjnelson
Repository: /hg/onnv/onnv-gate
Latest revision: 935563142864dcc57ff3b20395393f5f0323921f
Total changesets: 1
Log message:
6538468 add Mercurial support to ON developer tools
6658967 /etc/publickey entries get removed on upgrade
Portions of 6538468 contributed by Rich Lowe.
Portions of 6538468 contributed by Mike Gerdts.
[extensive list of files changed removed]

Along with a copy (or 6!) of the associated flag day message:

Flag day: ON Developer Tools upgraded to support Mercurial

There's more associated doco to come, of course, but from now on if you're building OpenSolaris (ok, the ON consolidation) from the source then you should really get up to speed on how your workspace management tools behave.

This is all in preparation for the close of build 96: from the open of build 97 the ON gate will be managed with Mercurial.... as a precursor to the eventual move of the gate outside Sun's network and opening up the contribution process yet more.

So we're getting there, bit by bit by bit.

Tuesday Jul 08, 2008

Better late than never - a ZFS bringover-like util

So... it's a little late, but better late than never. Now that I've got my u40m2 re-configured and redone my local source code repositories (not hg repos... yet), I figured it was time to make the other part of what I've mentioned to customers a reality.

The first part is this: bringing over the source from $GATE, wx init, cd $SRC, /usr/bin/make setup on both my UltraSPARC and x64 buildboxen, and then a zfs snapshot followed by two zfs clone ops so that I can build for both architectures in the same workspace at the same time.

Yes, this is a really ugly workaround for "Should be able to build for sparc and x86 in a single workspace", and while I'm the RE for that bug, it's probably not going to be fixed for a while.

So here's the afore-mentioned "other part": a kinda-sorta replacement for bringover, using ZFS snapshots and clones. Both Bill and DarrenM have mentioned something of this in the past, and you know what - the script I just hacked together is about 3 lines of content, 1 line of #! magic and 16 lines of arg checking.

Herewith is the script. No warranties, guarantees or anything. Use at your own risk. It works for me, but your mileage may vary. Suggestions and improvements cheerfully accepted.
#!/bin/ksh -p
# The contents of this file are subject to the terms of the
# Common Development and Distribution License (the "License").
# You may not use this file except in compliance with the License.
# You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
# or http://www.opensolaris.org/os/licensing.
# See the License for the specific language governing permissions
# and limitations under the License.
# When distributing Covered Code, include this CDDL HEADER in each
# file and include the License file at usr/src/OPENSOLARIS.LICENSE.
# If applicable, add the following below this CDDL HEADER, with the
# fields enclosed by brackets "[]" replaced with your own identifying
# information: Portions Copyright [yyyy] [name of copyright owner]
# Copyright 2008 Sun Microsystems, Inc.  All rights reserved.
# Use is subject to license terms.
# Version number? if this needs a rev.... there's something 
# really, really wrong

# we use the following process to snapshot, clone
# and mount an up to date image of $GATE ::
# zfs snapshot sink/$GATE@$wsname  {$GATE is onnv-gate|on10-feature-patch|on10-patch}
# zfs clone sink/$GATE@wsname sink/src/$wsname  ## ignore "failed to mount"
# zfs set mountpoint=/scratch/src/build/{rfes|bugs}/$wsname sink/src/$wsname

# arg1 is "b" or "r" - bug or rfe
# arg2 is $GATE - onnv-gate, on10-feature-patch, on10-patch
# arg3 is wsname

# first, some sanity checking of the args

if [ "$1" != "b" -a "$1" != "r" ]; then
   echo "Is this a bug (b) or an rfe (r) ?"
   exit 1;

if [ "$2" != "onnv-gate" -a "$2" != "on10-feature-patch" -a "$2" != "on10-patch" ]; then
    echo "unknown / invalid gate specified ($2). Please choose one of "
    echo "onnv-gate, on10-feature-patch or on10-patch."
    exit 2;


if [ "$1" = "b" ]; then


# ASSUMPTION1: our $GATE is a dataset under pool "sink"
# ASSUMPTION2: we have another dataset called "sink/src"
# ASSUMPTION3: our user has delegated admin privileges, and can mount
#              a cloned snapshot under /scratch/src/.....

zfs snapshot sink/$GATE@$WSNAME
zfs clone sink/$GATE@$WSNAME sink/src/$WSNAME >> /dev/null 2>&1
zfs set mountpoint=/scratch/src/build/$BR/$WSNAME sink/src/$WSNAME
exit 0

Note the ASSUMPTIONx lines - they're specific to my workstation, you will almost definitely want to change them to suit your system.

Friday Jul 04, 2008

Oh, if only I'd had

Back when I got my first real break as a sysadmin, one of my first tasks was to upgrade the Uni's finance office server, a SparcServer 1000 running Solaris 2.5 with a gaggle of external unipacks and multipacks for Oracle 7.$mumble. I organised an outage with the DBAs and the Finance stakeholders, practiced installing Solaris 2.6 on a new system (we'd just got an E450), and at the appointed time on the Saturday morning I rocked up and got to work on my precisely specified upgrade plan.

That all went swimmingly (though sloooooooowly) until the time came to reboot after the final SDS 4.1 mirror had been created. The primary system board decided that it really didn't like me, and promptly died along with the boot prom.


At that point I didn't know all that much about the innards of the SS1000 otherwise I probably would have just engaged in some swaptronics with the other three boards. However, I was green, nervous, and - by that point - very tired of sitting in a cold, loud machine room for 12 hours. Turned the box off, rang the local Sun support office and left a message (we didn't have weekend coverage on any of our systems then), rang my boss and the primary stakeholder in the Finance unit and went home.

Come Monday morning, all hell broke loose - the Accounts groups were unable to do any work, and the DBAs had to do a very quick enable of the DR system so I could get time to work on the problem with Sun. The "quick enable" took around 4 hours, if I'm remembering it correctly. Fortunately for me, not only were the DBAs quite sympathetic and very quick to help, but Miriam on the support phone number (who later hired me) was able to diagnose the problem and organise a service call to replace the faulty board. She also calmed me down, which I really, really appreciated. (Thankyou Miriam!)

So ... why am I dredging this up? Because I've just done a LiveUpgrade (LU) from Solaris Nevada build 91 to build 93, with ZFS root, and it took me a shade under 90 minutes. Total. Including the post-installation reboot. Not only would I have gone all gooey at the idea of being able to do something like LU back in that job, but if I could have done it with ZFS and not had to reconfigure all the uni- and multi-pack devices I probably could have had the whole upgrade done in around 4 or 5 hours rather than 12. (Remember, of course, that while the SS1000 could take quite a few cpus, they were still very very very very sloooooooooow).

Here's a transcript of this evening's upgrade:

# uname -a
SunOS gedanken 5.11 snv_91 i86pc i386 i86xpv

(remove the snv_91 LU packages)
pkgrm SUNWlu... packages from snv_91
(add the snv_93 LU packages)
pkgadd SUNWlu... packages from snv_93

(Create my LU config)
# lucreate -n snv_93 -p rpool
Checking GRUB menu...
Analyzing system configuration.
No name for current boot environment.
INFORMATION: The current boot environment is not named - assigning name .
Current boot environment is named .
Creating initial configuration for primary boot environment .
The device  is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name  PBE Boot Device .
Comparing source boot environment  file systems with the file 
system(s) you specified for the new boot environment. Determining which 
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment .
Source boot environment is .
Creating boot environment .
Cloning file systems from boot environment  to create boot environment .
Creating snapshot for  on .
Creating clone for  on .
Setting canmount=noauto for  in zone  on .
Saving existing file  in top level dataset for BE  as //boot/grub/menu.lst.prev.
File  propagation successful
Copied GRUB menu from PBE to ABE
No entry for BE  in GRUB menu
Population of boot environment  successful.
Creation of boot environment  successful.
-bash-3.2# zfs list
NAME                             USED  AVAIL  REFER  MOUNTPOINT
rpool                           50.0G   151G    35K  /rpool
rpool/ROOT                      7.06G   151G    18K  legacy
rpool/ROOT/snv_91               7.06G   151G  7.06G  /
rpool/ROOT/snv_91@snv_93        71.5K      -  7.06G  -
rpool/ROOT/snv_93                128K   151G  7.06G  /tmp/.alt.luupdall.2695
rpool/WinXP-Host0-Vol0          3.57G   151G  3.57G  -
rpool/WinXP-Host0-Vol0@install  4.74M      -  3.57G  -
rpool/dump                      4.00G   151G  4.00G  -
rpool/export                    7.47G   151G    19K  /export
rpool/export/home               7.47G   151G  7.47G  /export/home
rpool/gate                      5.86G   151G  5.86G  /opt/gate
rpool/hometools                 2.10G   151G  2.10G  /opt/hometools
rpool/optcsw                     225M   151G   225M  /opt/csw
rpool/optlocal                  1.20G   151G  1.20G  /opt/local
rpool/scratch                   14.4G   151G  14.4G  /scratch
rpool/swap                         4G   155G  64.6M  -

# lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
snv_91                     yes      yes    yes       no     -         
snv_93                     yes      no     no        yes    -         

Golly, that was so easy! Here I was rtfming for the LU with UFS syntax.... not needed at all.

# time luupgrade -u -s /media/SOL_11_X86 -n snv_93

No entry for BE  in GRUB menu
Copying failsafe kernel from media.
Uncompressing miniroot
Uncompressing miniroot archive (Part2)
13367 blocks
Creating miniroot device
miniroot filesystem is 
Mounting miniroot at 
Mounting miniroot Part 2 at 
Validating the contents of the media .
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains  version <11>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE .
Checking for GRUB menu on ABE .
Saving GRUB menu on ABE .
Checking for x86 boot partition on ABE.
Determining packages to install or upgrade for BE .
Performing the operating system upgrade of the BE .
CAUTION: Interrupting this process may leave the boot environment unstable 
or unbootable.
Upgrading Solaris: 100% completed
Installation of the packages from this media is complete.
Restoring GRUB menu on ABE .
Adding operating system patches to the BE .
The operating system patch installation is complete.
ABE boot partition backing deleted.
PBE GRUB has no capability information.
PBE GRUB has no versioning information.
ABE GRUB is newer than PBE GRUB. Updating GRUB.
GRUB update was successful.
Configuring failsafe for system.
Failsafe configuration is complete.
INFORMATION: The file  on boot 
environment  contains a log of the upgrade operation.
INFORMATION: The file  on boot 
environment  contains a log of cleanup operations required.
WARNING: <3> packages failed to install properly on boot environment .
INFORMATION: The file  on 
boot environment  contains a list of packages that failed to 
upgrade or install properly.
INFORMATION: Review the files listed above. Remember that all of the files 
are located on boot environment . Before you activate boot 
environment , determine if any additional system maintenance is 
required or if additional media of the software distribution must be 
The Solaris upgrade of the boot environment  is partially complete.
Installing failsafe
Failsafe install is complete.

real    83m24.299s
user    13m33.199s
sys     24m8.313s

# zfs list
NAME                             USED  AVAIL  REFER  MOUNTPOINT
rpool                           52.5G   148G  36.5K  /rpool
rpool/ROOT                      9.56G   148G    18K  legacy
rpool/ROOT/snv_91               7.07G   148G  7.06G  /
rpool/ROOT/snv_91@snv_93        18.9M      -  7.06G  -
rpool/ROOT/snv_93               2.49G   148G  5.53G  /tmp/.luupgrade.inf.2862
rpool/WinXP-Host0-Vol0          3.57G   148G  3.57G  -
rpool/WinXP-Host0-Vol0@install  4.74M      -  3.57G  -
rpool/dump                      4.00G   148G  4.00G  -
rpool/export                    7.47G   148G    19K  /export
rpool/export/home               7.47G   148G  7.47G  /export/home
rpool/gate                      5.86G   148G  5.86G  /opt/gate
rpool/hometools                 2.10G   148G  2.10G  /opt/hometools
rpool/optcsw                     225M   148G   225M  /opt/csw
rpool/optlocal                  1.20G   148G  1.20G  /opt/local
rpool/scratch                   14.4G   148G  14.4G  /scratch
rpool/swap                         4G   152G  64.9M  -
-bash-3.2# lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
snv_91                     yes      yes    yes       no     -         
snv_93                     yes      no     no        yes    -         

# luactivate snv_93
System has findroot enabled GRUB
Generating boot-sign, partition and slice information for PBE 
Saving existing file  in top level dataset for BE  as //etc/bootsign.prev.
WARNING: <3> packages failed to install properly on boot environment .
INFORMATION:  on boot 
environment  contains a list of packages that failed to upgrade or 
install properly. Review the file before you reboot the system to 
determine if any additional system maintenance is required.

Generating boot-sign for ABE 
Saving existing file  in top level dataset for BE  as //etc/bootsign.prev.
Generating partition and slice information for ABE 
Copied boot menu from top level dataset.
Generating direct boot menu entries for PBE.
Generating xVM menu entries for PBE.
Generating direct boot menu entries for ABE.
Generating xVM menu entries for ABE.
Disabling splashimage
Re-enabling splashimage
No more bootadm entries. Deletion of bootadm entries is complete.
Changing GRUB menu default setting to <0>
Done eliding bootadm entries.


The target boot environment has been activated. It will be used when you 
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You 
MUST USE either the init or the shutdown command when you reboot. If you 
do not use either init or shutdown, the system will not boot using the 
target BE.


In case of a failure while booting to the target BE, the following process 
needs to be followed to fallback to the currently working boot environment:

1. Boot from Solaris failsafe or boot in single user mode from the Solaris 
Install CD or Network.

2. Mount the Parent boot environment root slice to some directory (like 
/mnt). You can use the following command to mount:

     mount -Fzfs /dev/dsk/c1t0d0s0 /mnt

3. Run  utility with out any arguments from the Parent boot 
environment root slice, as shown below:


4. luactivate, activates the previous working boot environment and 
indicates the result.

5. Exit Single User mode and reboot the machine.


Modifying boot archive service
Propagating findroot GRUB for menu conversion.
File  propagation successful
File  propagation successful
File  propagation successful
File  propagation successful
Deleting stale GRUB loader from all BEs.
File  deletion successful
File  deletion successful
File  deletion successful
Activation of boot environment  successful.

# date
Friday,  4 July 2008  9:45:41 PM EST

# init 6
propagating updated GRUB menu
Saving existing file  in top level dataset for BE  as //boot/grub/menu.lst.prev.
File  propagation successful
File  propagation successful
File  propagation successful
File  propagation successful

Here I reboot and then login.

# lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
snv_91                     yes      no     no        yes    -         
snv_93                     yes      yes    yes       no     -         

# lufslist -n snv_91
               boot environment name: snv_91

Filesystem              fstype    device size Mounted on          Mount Options
----------------------- -------- ------------ ------------------- --------------
/dev/zvol/dsk/rpool/swap swap       4294967296 -                   -
rpool/ROOT/snv_91       zfs          20630528 /                   -

# lufslist -n snv_93
               boot environment name: snv_93
               This boot environment is currently active.
               This boot environment will be active on next system boot.

Filesystem              fstype    device size Mounted on          Mount Options
----------------------- -------- ------------ ------------------- --------------
/dev/zvol/dsk/rpool/swap swap       4294967296 -                   -
rpool/ROOT/snv_93       zfs       10342821376 /                   -
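The "device size" column is raw bytes, which makes the numbers hard to eyeball. A portable awk conversion (a sketch, fed the swap and snv_93 root sizes from the output above):

```shell
# Convert lufslist's raw byte counts to gigabytes
# (values taken from the lufslist output above).
for bytes in 4294967296 10342821376; do
  gb=$(awk -v b="$bytes" 'BEGIN { printf "%.1f", b / (1024 * 1024 * 1024) }')
  echo "$bytes bytes = $gb GB"
done
```

So the swap zvol is exactly 4.0 GB and the snv_93 root dataset is about 9.6 GB.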

Cor! That was so easy I think I need to fall off my chair.

Thinking about this for a moment, I needed just 6 commands and around 90 minutes to upgrade my laptop. If only I'd had this technology available to me back then.

Finally, let me send a massive, massive thank you to the install team and the ZFS team for all their hard work to get these technologies integrated and working pretty darned smoothly together.

Wednesday Oct 31, 2007

Today is a very good day



Today I'm ecstatic to be able to announce that the S10 patches for our backport are finally available on sunsolve.sun.com. We've delivered PSARC 2006/703 MPxIO extension for Serial Attached SCSI, and (my personal favourite) PSARC 2007/046 stmsboot(1M) extension for mpt(7D).

The patches that you need to install are

sparc: 125081-10
(We recommend that on sparc you also install 127747-01, due to 6466248)

x86/x64: 125082-10


The full list of RFEs and bugs is as follows:

6443044 add mpxio support to SAS mpt driver
6502231 stmsboot needs to support SAS devices
6544226 mpt needs mdb module

6242789 primary path comes up as standby instead online even if auto-failback is enabled
6442215 mpt.conf maybe overwritten because filetype within SUNWckr package is 'f'
6449836 stmsboot -d failed to boot if several LUNs or targets map to same partition
6510425 properties "flow_control" and "queue" in mpt.conf are useless
6525558 untagged command unlikely to be sent to HBA during heavy I/O
6541750 CAM5.1.1b2: 2530, MPT2: Vdbench bailed out after I pull ctlr-A out
6545198 build should allow architecture-dependent class action scripts
6546164 stmsboot does not remove sun4u SMF service, erroneously lists parallel SCSI HBAs
6548867 mpxio-upgrade script has fatally mis-defined variable
6550585 mpt driver has a memory leak in mpt_send_tur
6550591 mpt should not print unnecessary messages
6550849 WARNING: mpt TEST_UNIT_READY failure
6554029 mpt should get maxdevice from portfacts, not IOCfacts
6554556 stmsboot's privilege message is not quite correct
6556832 after ctlr brought online, some paths failed to come back
6560371 mpt hangs during ST2530 firmware upgrade
6566097 mpt: sd targets under mpt are not power-manageable
6566815 changes for 6502231 broke g11n in stmsboot
6531069 SCSI2 (tc_mhioctkown test cases) testing are showing UNRESOLVED results for ST2530
6546465 mpt: kernel panic due to NULL pointer reference in an error code path
6556852 mpt needs to support Sun Fire x4540 platform
6588204 mpt_check_scsi_io_error() incorrectly tests IOCStatus register
6588278 mpt driver doesn't check GUID of LUN when the path online
6591973 panic in mdi_pi_free() when remapping devices
6613189 T125082-09 and T125081-09 don't work - missing misc/scsi module from deliverables

As an interesting side note, during the development process we stumbled across

6566270 Seagate Savvio 10k1 disks do not enumerate under scsi_vhci

You'll probably see this if you have a Galaxy or T2000/T1000 system. (Unfortunately you need a service contract to view the bug report due to its category).


And on a personal note, I'd like to thank the other members of our team for working so well together - with Greg in Melbourne, Javen and Dolpher up in Beijing, test teams in Beijing, Menlo Park, Broomfield and San Diego and yours truly in Sydney (and now Brisbane) - we have truly been a virtual team. I reckon we've demonstrated that physical distance does not get in the way of designing, developing, testing and (most importantly) delivering good software that provides solutions for our customers.



Sunday Oct 07, 2007

Opening up OpenSolaris just a little bit more


For the last few weeks I've been working with Jason King (jbk on #opensolaris) to integrate his clean-room re-implementation of libdisasm for SPARC.

Today, having received RTI approval, passed all the tests and checks, and run many, many nightly builds, I was able to putback the changes to the ON gate. The heads-up message is here.


The putback comments are

PSARC/2007/507 Unencumbered libdisasm for Sparc
6596739 need non-encumbered libdisasm for sparc
6396410 Update dis for preferred assembly language syntax
4751282 fp conversion ops decode registers incorrectly
4767086 fmovrq registers decoded wrong
4767091 pixel compare source registers decoded wrong
4767154 Registers for fmul8x16, fmul8sux16, fmul8ulx16 decoded wrong
4658958 dis misrepresents invalid opcodes
6193412 Support for new Olympus B/C instructions needed in disassemblers


I expect that there will be a few followup putbacks as people find edge cases, but the great thing about this putback is that *you* can make those changes if you want. You don't have to depend on Sun doing it for you :-)


Thank you, Jason - you've helped make OpenSolaris more open.

Technorati tags: OpenSolaris Jason King OpenSPARC UltraSPARC Solaris disassembler jbk libdisasm

Saturday Sep 22, 2007

A load off my mind

This morning I awoke and saw that the gatekeepers had given me the ok to putback our wad of changes to the Solaris 10 patch and feature gates. The gates opened less than a day ago, US/Pacific time, so I'm very happy that we were amongst the first to get in.

So after a few hours on the phone with the rest of our team, making absolutely darned sure that we had everything correct and doing a last-minute merge+test with the feature gate, I putback. I don't think I've been this nervous since the day I got married!

We expect to see patches for this backport in about a month - a coupla weeks after the close of the patch gate for this build. Not sure what the patch IDs will be, when I find out I'll mention them here.

The PSARC fasttracks we integrated are

PSARC 2006/703 MPxIO extension for Serial Attached SCSI (SAS) on mpt(7D)
PSARC 2007/046 stmsboot(1M) extension for mpt(7D)

And the primary bugids are

6443044 add mpxio support to SAS mpt driver
6502231 stmsboot needs to support SAS devices
6544226 mpt needs mdb module

(bugs.opensolaris.org won't show the updates until tomorrow).

Our little project team is very happy and despite the fact that we're in Beijing, Brisbane and Melbourne I think we might all go off to a pub to celebrate.


I work at Oracle in the Solaris group. The opinions expressed here are entirely my own, and neither Oracle nor any other party necessarily agrees with them.

