Thursday Feb 20, 2014

Pre-work for upcoming Solaris 11 Hands on Workshops

Over the next few weeks, I will be hosting several Solaris 11 hands on workshops. Some of these will be public events at an Oracle office while others will be private sessions for a specific customer. If you are planning on attending any of these sessions, there are a few items of pre-work that will help us get the workshop started on time.

Enable VT-x

If you will be using VirtualBox to run your workshop lab guest, the hardware virtualization feature for your CPU must be enabled. For AMD systems, AMD-V is on by default and there may not be a setting to turn it off. For Intel systems, this is controlled by a BIOS setting, and almost always defaults to disabled. The BIOS setting varies from vendor to vendor, but is generally found in the System or CPU settings. If you don't see it there, try looking in security. If you still can't find it, search for your brand of laptop and "enable vt" using your favorite search engine.

On newer Intel systems, you may be given choices for CPU virtualization (VT-x) and data/IO (VT-d). You only need to enable VT-x. Some laptops will require a complete power cycle after changing this setting, including removing the battery.
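
If you want to check in advance whether your CPU supports hardware virtualization at all, here is a quick test from a running Linux host (a minimal sketch; Windows and Mac users will need a vendor utility instead). A non-zero count means the CPU advertises VT-x (vmx) or AMD-V (svm); the BIOS setting still controls whether it is actually enabled.

    $ egrep -c '(vmx|svm)' /proc/cpuinfo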

If you have a company laptop that does not allow you to change the BIOS settings, you might ask your employer if they can provide you one for the day that is not locked down.

Note: Enabling hardware virtualization is a requirement to complete the workshop.

Download and Install VirtualBox

Since this will be primarily a hands on lab, you are encouraged to bring a laptop. The labs will all be run in a Solaris guest machine, so your laptop will also need a virtualization application, such as VMware or VirtualBox. We recommend VirtualBox and will be supplying the lab materials as a VirtualBox guest appliance. You can download VirtualBox for free at VirtualBox.org. Binaries are available for Windows, MacOS, Solaris and most Linux distributions.

After installing VirtualBox, you should also install the VirtualBox Extension Pack. It is not required for the lab, but should you continue to use the guest machine after the workshop, you might find some of its features very useful.
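
If you prefer the command line, the Extension Pack can also be installed with VBoxManage once VirtualBox itself is in place (a sketch; the file name below is only an example and will vary with the version you download):

    $ VBoxManage extpack install Oracle_VM_VirtualBox_Extension_Pack-4.3.6.vbox-extpack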

Don't Forget Your Power Adapters

Since you will be running Solaris as a guest operating system, your host's power management features might not be very effective, and you may find yourself with a drained battery before the morning is over. Please remember to bring your power adapter and cables. An external mouse, while not required, is generally a welcome device as you cut and paste text between windows.

That should be about it. Please leave a comment if you have any questions. I am looking forward to seeing you at one of these, or a future Solaris event.


Thursday Jul 11, 2013

VirtualBox 4.2.16 is now available

On July 4, 2013, the VirtualBox development team released version 4.2.16 and it is available for download.

This is a maintenance release for version 4.2 and contains a relatively small number of updates, but it includes one important fix for appliance importing. Here is the list from the official Changelog.

  • OVF/OVA: don't crash on import if no manifest is used (4.2.14 regression; bug #11895)
  • GUI: do not restore the current snapshot if we power-off after a Guru Mediation
  • Storage: fixed a crash when hotplugging an empty DVD drive to the VM
  • Storage: fixed a crash when a guest read from a DVD drive attached to the SATA controller under certain circumstances
  • EFI: don't fail with 64-bit guests on 32-bit hosts (bug #11456)
  • Autostart: fixed VM startup on OS X
  • Windows hosts: native Windows 8 controls
  • Windows hosts: restore native style on Vista 32
  • Windows hosts / guests: Windows 8.1 adaptions (bug #11899)
  • Mac OS X hosts: after removing VirtualBox with VirtualBox_Uninstall.tool, remove it from the pkgutil --pkgs list as well

    The full changelog can be found here.

    You can download binaries for Solaris, Linux, Windows and MacOS hosts at
    http://www.virtualbox.org/wiki/Downloads


    Monday Jun 24, 2013

    VirtualBox 4.2.14 is now available

    The VirtualBox development team has just released version 4.2.14, and it is now available for download. This is a maintenance release for version 4.2 and contains quite a few fixes. Here is the list from the official Changelog.

  • VMM: another TLB invalidation fix for non-present pages
  • VMM: fixed a performance regression (4.2.8 regression; bug #11674)
  • GUI: fixed a crash on shutdown
  • GUI: prevent stuck keys under certain conditions on Windows hosts (bugs #2613, #6171)
  • VRDP: fixed a rare crash on the guest screen resize
  • VRDP: allow to change VRDP parameters (including enabling/disabling the server) if the VM is paused
  • USB: fixed passing through devices on Mac OS X host to a VM with 2 or more virtual CPUs (bug #7462)
  • USB: fixed hang during isochronous transfer with certain devices (4.1 regression; Windows hosts only; bug #11839)
  • USB: properly handle orphaned URBs (bug #11207)
  • BIOS: fixed function for returning the PCI interrupt routing table (fixes NetWare 6.x guests)
  • BIOS: don't use the ENTER / LEAVE instructions in the BIOS as these don't work in the real mode as set up by certain guests (e.g. Plan 9 and QNX 4)
  • DMI: allow to configure DmiChassisType (bug #11832)
  • Storage: fixed lost writes if iSCSI is used with snapshots and asynchronous I/O (bug #11479)
  • Storage: fixed accessing certain VHDX images created by Windows 8 (bug #11502)
  • Storage: fixed hang when creating a snapshot using Parallels disk images (bug #9617)
  • 3D: seamless + 3D fixes (bug #11723)
  • 3D: version 4.2.12 was not able to read saved states of older versions under certain conditions (bug #11718)
  • Main/Properties: don't create a guest property for non-running VMs if the property does not exist and is about to be removed (bug #11765)
  • Main/Properties: don't forget to make new guest properties persistent after the VM was terminated (bug #11719)
  • Main/Display: don't lose seamless regions during screen resize
  • Main/OVF: don't crash during import if the client forgot to call Appliance::interpret() (bug #10845)
  • Main/OVF: don't create invalid appliances by stripping the file name if the VM name is very long (bug #11814)
  • Main/OVF: don't fail if the appliance contains multiple file references (bug #10689)
  • Main/Metrics: fixed Solaris file descriptor leak
  • Settings: limit depth of snapshot tree to 250 levels, as more will lead to decreased performance and may trigger crashes
  • VBoxManage: fixed setting the parent UUID on diff images using sethdparentuuid
  • Linux hosts: work around for not crashing as a result of automatic NUMA balancing which was introduced in Linux 3.8 (bug #11610)
  • Windows installer: force the installation of the public certificate in background (i.e. completely prevent user interaction) if the --silent command line option is specified
  • Windows Additions: fixed problems with partial install in the unattended case
  • Windows Additions: fixed display glitch with the Start button in seamless mode for some themes
  • Windows Additions: Seamless mode and auto-resize fixes
  • Windows Additions: fixed trying to retrieve new auto-logon credentials if current ones were not processed yet
  • Windows Additions installer: added the /with_wddm switch to select the experimental WDDM driver by default
  • Linux Additions: fixed setting own timed out and aborted texts in information label of the lightdm greeter
  • Linux Additions: fixed compilation against Linux 3.2.0 Ubuntu kernels (4.2.12 regression as a side effect of the Debian kernel build fix; bug #11709)
  • X11 Additions: reduced the CPU load of VBoxClient in drag'and'drop mode
  • OS/2 Additions: made the mouse wheel work (bug #6793)
  • Guest Additions: fixed problems copying and pasting between two guests on an X11 host (bug #11792)

    The full changelog can be found here.

    You can download binaries for Solaris, Linux, Windows and MacOS hosts at
    http://www.virtualbox.org/wiki/Downloads


    Saturday Apr 13, 2013

    VirtualBox 4.2.12 is now available

    The VirtualBox development team has just released version 4.2.12, and it is now available for download. This is a maintenance release for version 4.2 and contains the following fixes.

  • VMM: fixed a Guru Meditation on putting Linux guest CPU online if nested paging is disabled
  • VMM: invalidate TLB entries even for non-present pages
  • GUI: Multi-screen support: fixed a crash on visual-mode change
  • GUI: Multi-screen support: disabled guest-screens should now remain disabled on visual-mode change
  • GUI: Multi-screen support: handle host/guest screen plugging/unplugging in different visual-modes
  • GUI: Multi-screen support: seamless mode: fixed a bug when empty seamless screens were represented by fullscreen windows
  • GUI: Multi-screen support: each machine window in multi-screen configuration should have correct menu-bar now (Mac OS X hosts)
  • GUI: Multi-screen support: machine window View menu should have correct content in seamless/fullscreen mode now (Mac OS X hosts)
  • GUI: VM manager: vertical scroll-bars should be now updated on content/window resize
  • GUI: VM settings: fixed crash on machine state-change event
  • GUI: don't show warnings about enabled or disabled mouse integration if the VM was restored from a saved state
  • Virtio-net: properly announce that the guest has to handle partial TCP checksums
  • Storage: Fixed incorrect alignment of VDI images causing disk size changes when using snapshots
  • Audio: fixed broken ALSA & PulseAudio on some Linux hosts due to invalid symbol resolution
  • PS/2 keyboard: re-apply keyboard repeat delay and rate after a VM was restored from a saved state
  • BIOS: updated DMI processor information table (type 4): corrected L1 & L2 cache table handles
  • Timekeeping: fix several issues which can lead to incorrect time, Solaris guests sporadically showed time going briefly back to Jan 1 1970
  • Main/Metrics: disk metrics are collected properly when software RAID, symbolic links or rootfs are used on Linux hosts
  • VBoxManage: don't stay paused after a snapshot was created and the VM was running before
  • VBoxManage: introduced controlvm nicpromisc
  • VBoxManage: don't crash on controlvm guestmemoryballoon if the VM isn't running
  • VBoxHeadless: don't filter guest property events as this would affect all clients
  • Guest control: prevent double CR in the output generated by guest commands and do NLS conversion
  • Linux hosts / guests: fixed build errors on Linux 3.5 and newer kernels if the CONFIG_UIDGID_STRICT_TYPE_CHECKS config option is enabled
  • Linux Additions: handle fall-back to VESA driver on RedHat-based guests if vboxvideo cannot be loaded
  • Linux Additions: RHEL/OEL/CentOS 6.4 compile fix
  • Linux Additions: Debian Linux kernel 3.2.0-4 (3.2.39) compile fix
  • Linux Additions: added auto-logon support for Linux guests using LightDM as the display manager
  • Windows Additions: Support for multimonitor. Dynamic enable/disable of secondary virtual monitors. Support for XPDM/WDDM based guests
  • X11 Additions: support X.Org Server 1.14

    The full changelog can be found here.

    You can download binaries for Solaris, Linux, Windows and MacOS hosts at
    http://www.virtualbox.org/wiki/Downloads


    Thursday Dec 20, 2012

    pkg fix is my friend - a followup

    We bloggers appreciate questions and comments about what we post, whether privately in email or attached as comments to some article. In my last post, a reader asked a set of questions that were so good, I didn't want them to get lost down in the comments section. A big thanks to David Lange for asking these questions. I shall try to answer them here (perhaps with a bit more detail than you might have wanted).

    Does the pkg fix reinstall binaries if the hash or chksum doesn't match?

    Yes, it does. Let's actually see this in action, and then we will take a look at where it is getting the information required to correct the error.

    Since I'm working on a series of Solaris 11 Automated Installer (AI) How To articles, installadm seems a good choice to damage, courtesy of the random number generator.

    # ls /sbin/install*
    /sbin/install             /sbin/installadm-convert  /sbin/installf
    /sbin/installadm          /sbin/installboot         /sbin/installgrub
    
    # cd /sbin
    # mv installadm installadm-
    
    # dd if=/dev/random of=/sbin/installadm bs=8192 count=32
    0+32 records in
    0+32 records out
    
    # ls -la installadm*
    -rw-r--r--   1 root     root       33280 Dec 18 18:50 installadm
    -r-xr-xr-x   1 root     bin        12126 Dec 17 08:36 installadm-
    -r-xr-xr-x   1 root     bin        74910 Dec 17 08:36 installadm-convert
    
    OK, that should do it. Unless I am terribly unlucky, those random bytes will produce something that doesn't match the stored hash value of the installadm binary.

    This time, I will begin the repair process with a pkg verify, just to see what is broken.

    # pkg verify installadm
    PACKAGE                                                                 STATUS 
    pkg://solaris/install/installadm                                         ERROR
    
    	file: usr/sbin/installadm
    		Group: 'root (0)' should be 'bin (2)'
    		Mode: 0644 should be 0555
    		Size: 33280 bytes should be 12126
    		Hash: 2e862c7ebd5dce82ffd1b30c666364f23e9118b5 
                         should be 68374d71b9cb91b458a49ec104f95438c9a149a7
    
    For clarity, I have removed all of the compiled python module errors. Most of these have been corrected in Solaris 11.1, but you may see these occasionally when doing a pkg verify.

    Since we have a real package error, let's correct it.

    # pkg fix installadm
    Verifying: pkg://solaris/install/installadm                     ERROR          
    
    	file: usr/sbin/installadm
    		Group: 'root (0)' should be 'bin (2)'
    		Mode: 0644 should be 0555
    		Size: 33280 bytes should be 12126
    		Hash: 2e862c7ebd5dce82ffd1b30c666364f23e9118b5 
                         should be 68374d71b9cb91b458a49ec104f95438c9a149a7
    Created ZFS snapshot: 2012-12-19-00:51:00
    Repairing: pkg://solaris/install/installadm                  
                                                                                   
    
    DOWNLOAD                                  PKGS       FILES    XFER (MB)
    Completed                                  1/1       24/24      0.1/0.1
    
    PHASE                                        ACTIONS
    Update Phase                                   24/24 
    
    PHASE                                          ITEMS
    Image State Update Phase                         2/2 
    
    We can now run installadm as if it was never damaged.
    # installadm list
    
    Service Name     Alias Of       Status  Arch   Image Path 
    ------------     --------       ------  ----   ---------- 
    default-i386     solaris11-i386 on      x86    /install/solaris11-i386
    solaris11-i386   -              on      x86    /install/solaris11-i386
    solaris11u1-i386 -              on      x86    /install/solaris11u1-i386
    
    Oh, if you are wondering about that hash, it is a SHA1 checksum.
    # digest -a sha1 /usr/sbin/installadm
    68374d71b9cb91b458a49ec104f95438c9a149a7
    
    

    If so does IPS keep the installation binaries in a depot or have to point to the originating depot to fix the problem?

    IPS does keep a local cache of package attributes. Before diving into some of these details, it should be known that some, if not all of these, are private details of the current implementation of IPS, and can change in the future. Always consult the command and configuration file man pages before using any of these in scripts. In this case, the relevant information would be in pkg(5) (i.e. man -s 5 pkg).

    Our first step is to identify which publisher has provided the package that is currently installed. In my case, there is only one (solaris), but in a large and mature enterprise deployment, there could be many publishers.

    # pkg info installadm
              Name: install/installadm
           Summary: installadm utility
       Description: Automatic Installation Server Setup Tools
          Category: System/Administration and Configuration
             State: Installed
         Publisher: solaris
           Version: 0.5.11
     Build Release: 5.11
            Branch: 0.175.0.0.0.2.1482
    Packaging Date: October 19, 2011 12:26:24 PM 
              Size: 1.04 MB
              FMRI: pkg://solaris/install/installadm@0.5.11,5.11-0.175.0.0.0.2.1482:20111019T122624Z
    
    From this we have learned that the actual package name is install/installadm and the publisher is, in fact, solaris. We have also learned that the version of installadm comes from the original Solaris 11 GA release (5.11-0.175.0.0). That will allow us to take a look at some of the configuration files (private interface warning still in effect).

    Note: Since package names contain slashes (/), we will have to encode them as %2F to keep the shell from interpreting them as a directory delimiter.

    # cd /var/pkg/publisher/solaris/pkg/install%2Finstalladm
    # ls -la
    drwxr-xr-x   2 root     root           4 Dec 18 00:55 .
    drwxr-xr-x 818 root     root         818 Dec 17 08:36 ..
    -rw-r--r--   1 root     root       25959 Dec 17 08:36
                0.5.11%2C5.11-0.175.0.0.0.2.1482%3A20111019T122624Z
    -rw-r--r--   1 root     root       26171 Dec 18 00:55
                0.5.11%2C5.11-0.175.0.13.0.3.0%3A20121026T213106Z
    
    The file 0.5.11%2C5.11-0.175.0.0.0.2.1482%3A20111019T122624Z is the one we are interested in.
    # digest -a sha1 /usr/sbin/installadm
    68374d71b9cb91b458a49ec104f95438c9a149a7
    
    # grep 68374d71b9cb91b458a49ec104f95438c9a149a7 *
    file 68374d71b9cb91b458a49ec104f95438c9a149a7
    chash=a5c14d2f8cc854dbd4fa15c3121deca6fca64515 group=bin mode=0555 
    owner=root path=usr/sbin/installadm pkg.csize=3194 pkg.size=12126
    
    
    That's how IPS knows our version of installadm has been tampered with. Since the damage involves more than just changed file attributes, pkg fix has to download a new copy of the damaged files, in this case from the solaris publisher (or one of its mirrors). To keep from making things worse, it also takes a ZFS snapshot of the current boot environment first, in case things go terribly wrong - which they do not.

    Armed with this information, we can use some other IPS features, such as searching by binary hash.

    # pkg search -r 68374d71b9cb91b458a49ec104f95438c9a149a7
    INDEX                                    ACTION VALUE               PACKAGE
    68374d71b9cb91b458a49ec104f95438c9a149a7 file   usr/sbin/installadm 
                     pkg:/install/installadm@0.5.11-0.175.0.0.0.2.1482
    
    ... or by name
    # pkg search -r installadm
    INDEX       ACTION VALUE                      PACKAGE
    basename    dir    usr/lib/installadm         pkg:/install/installadm@0.5.11-0.175.0.0.0.2.1482
    basename    dir    var/installadm             pkg:/install/installadm@0.5.11-0.175.0.0.0.2.1482
    basename    file   usr/sbin/installadm        pkg:/install/installadm@0.5.11-0.175.0.0.0.2.1482
    pkg.fmri    set    solaris/install/installadm pkg:/install/installadm@0.5.11-0.175.0.0.0.2.1482
    pkg.summary set    installadm utility         pkg:/install/installadm@0.5.11-0.175.0.0.0.2.1482
    
    And finally...
    # pkg contents -m installadm
    
    ..... lots of output truncated ......
    
    file 68374d71b9cb91b458a49ec104f95438c9a149a7 chash=a5c14d2f8cc854dbd4fa15c3121deca6fca64515 
    group=bin mode=0555 owner=root path=usr/sbin/installadm pkg.csize=3194 pkg.size=12126
    
    There is our information using a public and stable interface. Now you know, not only where IPS caches the information, but a predictable way to retrieve it, should you ever need to do so.
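
    If you ever want to spot-check a single file by hand, the same two commands can be combined (reusing the installadm example from above; the grep pattern is simply the file's path). If the two hashes match, the file has not been tampered with.

    # pkg contents -m installadm | grep path=usr/sbin/installadm
    # digest -a sha1 /usr/sbin/installadm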

    As with the verify and fix operations, this is much more helpful than the SVR4 packaging commands in Solaris 10 and earlier.

    Given that customers might come up with their own ideas of keeping pkgs at various levels, could they be shooting themselves in the foot and creating such a customized OS that it causes problems?

    Stephen Hahn has written quite a bit on the origins of IPS, both on his archived Sun blog and on the OpenSolaris pkg project page. While it is a fascinating and useful read, the short answer is that IPS helps prevent this from happening - certainly much more so than the previous packaging system.

    The assistance comes in several ways.

    Full packages: Since IPS delivers full packages only, that eliminates one of the most confusing and frustrating aspects of the legacy Solaris packaging system. Every time you update a package with IPS, you get a complete version of the software, the way it was assembled and tested at Oracle (and presumably other publishers as well). No more patch order files and, perhaps more important, no more complicated scripts to automate the patching process.

    Dependencies: A rich dependency mechanism allows the package maintainer to guarantee that other related software is at a compatible version. This includes incorporations, which protect large groups of software, such as the basic desktop, GNOME, auto-install and the userland tools. Although not a part of dependencies, facets allow for the control of optional software components - locales being a good example.
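
    As a quick illustration of facets, optional components such as locales can be switched on or off with pkg change-facet (a minimal sketch; facet.locale.de is just an example facet name):

    # pkg change-facet facet.locale.de=False
    # pkg facet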

    Boot environments: Solaris 10 system administrators can enjoy many of the benefits of IPS boot environment integration by using Live Upgrade and ZFS as a root file system. IPS takes this to the next level by automatically performing important operations, such as upgrading the pkg package when needed or taking a snapshot before performing any risky actions.
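
    The boot environment integration can also be driven by hand when you want an explicit safety net (a small sketch; the BE name is arbitrary):

    # beadm create pre-update        (save the current BE before a risky change)
    # beadm list                     (show all boot environments and which is active)
    # beadm activate pre-update      (roll back: boot the saved BE on next reboot)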

    Expanding your question just a bit, IPS provides one new capability that should make updates much more predictable. If there is some specific component that an application requires, its version can be locked within a range. Here is an example, albeit a rather contrived one.

    # pkg list -af jre-6
    NAME (PUBLISHER)                                  VERSION                    IFO
    runtime/java/jre-6                                1.6.0.37-0.175.1.2.0.3.0   ---
    runtime/java/jre-6                                1.6.0.35-0.175.1.0.0.24.1  ---
    runtime/java/jre-6                                1.6.0.35-0.175.0.11.0.4.0  ---
    runtime/java/jre-6                                1.6.0.33-0.175.0.10.0.2.0  ---
    runtime/java/jre-6                                1.6.0.33-0.175.0.9.0.2.0   ---
    runtime/java/jre-6                                1.6.0.32-0.175.0.8.0.4.0   ---
    runtime/java/jre-6                                1.6.0.0-0.175.0.0.0.2.0    i--
    
    Suppose that we have an application that is tied to version 1.6.0.0 of the java runtime. You can lock it at that version and IPS will prevent you from applying any upgrade that would change it. In this example, an attempt to upgrade to SRU8 (which introduces version 1.6.0.32 of jre-6) will fail.
    # pkg freeze -c "way cool demonstration of IPS" jre-6@1.6.0.0
    runtime/java/jre-6 was frozen at 1.6.0.0
    
    # pkg list -af jre-6
    NAME (PUBLISHER)                                  VERSION                    IFO
    runtime/java/jre-6                                1.6.0.37-0.175.1.2.0.3.0   ---
    runtime/java/jre-6                                1.6.0.35-0.175.1.0.0.24.1  ---
    runtime/java/jre-6                                1.6.0.35-0.175.0.11.0.4.0  ---
    runtime/java/jre-6                                1.6.0.33-0.175.0.10.0.2.0  ---
    runtime/java/jre-6                                1.6.0.33-0.175.0.9.0.2.0   ---
    runtime/java/jre-6                                1.6.0.32-0.175.0.8.0.4.0   ---
    runtime/java/jre-6                                1.6.0.0-0.175.0.0.0.2.0    if-
    
    # pkg update --be-name s11ga-sru08  entire@0.5.11-0.175.0.8
    
    What follows is a lengthy set of complaints about not being able to satisfy all of the constraints, conveniently pointing back to our frozen package.

    But wait, there's more. IPS can figure out the latest update it can apply that satisfies the frozen package constraint. In this example, it should find SRU7.

    # pkg update --be-name s11ga-sru07
                Packages to update:  89
           Create boot environment: Yes
    Create backup boot environment:  No
    
    DOWNLOAD                                  PKGS       FILES    XFER (MB)
    Completed                                89/89   3909/3909  135.7/135.7
    
    PHASE                                        ACTIONS
    Removal Phase                                720/720 
    Install Phase                                889/889 
    Update Phase                               5066/5066 
    
    PHASE                                          ITEMS
    Package State Update Phase                   178/178 
    Package Cache Update Phase                     89/89 
    Image State Update Phase                         2/2 
    
    A clone of solaris exists and has been updated and activated.
    On the next boot the Boot Environment s11ga-sru07 will be
    mounted on '/'.  Reboot when ready to switch to this updated BE.
    
    
    ---------------------------------------------------------------------------
    NOTE: Please review release notes posted at:
    
    http://www.oracle.com/pls/topic/lookup?ctx=E23824&id=SERNS
    ---------------------------------------------------------------------------
    
    When the system is rebooted, a quick look shows that we are indeed running with SRU7.

    Perhaps we were too restrictive in locking down jre-6 to version 1.6.0.0. In this example, we will loosen the constraint to any 1.6.0 version, but prohibit upgrades that change it to 1.6.1. Note that I did not have to unfreeze the package as a new pkg freeze will replace the preceding one.

    # pkg freeze jre-6@1.6.0
    runtime/java/jre-6 was frozen at 1.6.0
    
    # pkg list -af jre-6
    NAME (PUBLISHER)                                  VERSION                    IFO
    runtime/java/jre-6                                1.6.0.37-0.175.1.2.0.3.0   -f-
    runtime/java/jre-6                                1.6.0.35-0.175.1.0.0.24.1  -f-
    runtime/java/jre-6                                1.6.0.35-0.175.0.11.0.4.0  -f-
    runtime/java/jre-6                                1.6.0.33-0.175.0.10.0.2.0  -f-
    runtime/java/jre-6                                1.6.0.33-0.175.0.9.0.2.0   -f-
    runtime/java/jre-6                                1.6.0.32-0.175.0.8.0.4.0   -f-
    runtime/java/jre-6                                1.6.0.0-0.175.0.0.0.2.0    if-
    
    This shows that all of the listed versions are available for upgrade (i.e., they all satisfy the frozen package constraint).

    Once again, IPS gives us a wonderful capability that is missing in the legacy packaging system.

    When you perform a pkg update on a system are we guaranteed a highly tested configuration that has gone thru multiple regression tests?

    Short answer: yes.

    For the details, I will turn your attention to our friend, Gerry Haskins, and his two excellent blogs: The Patch Corner (Solaris 10 and earlier) and Solaris 11 Maintenance Lifecycle. Both are excellent reads and I encourage everybody to add them to your RSS reader of choice.

    Of particular note is Gerry's presentation, Solaris 11 Customer Maintenance Lifecycle, which goes into some great detail about patches, upgrades and the like. If you dig back to around the time that Solaris 10 9/10 (u9) was released, you will find links to a pair of interesting documents titled Oracle Integrated Stack - Complete, Trusted Enterprise Solutions and Trust Your Enterprise Deployments to the Oracle Product Stack: The integrated platform that's been developed, tested and certified to get the job done. These documents describe several test environments, including the Oracle Certification Environment (OCE) and Oracle Automated Stress Test (OAST). All Solaris 10 patches and Solaris 11 package updates (including Oracle Solaris Cluster) are put through these tests prior to release. The result is a higher confidence that patches will not introduce stability or performance problems, negating the old practice of putting a release or patch bundle on the shelf while somebody else finds all of the problems. Local testing on your own equipment is still a necessary practice, but you are able to move more quickly to a new release thanks to these additional testing environments.

    If I am allowed to ask a follow up question, it would be something like, "what can I do proactively to keep my system as current as possible and reduce the risks of bad patch or package interactions?"

    That is where the Critical Patch Updates come into play. Solaris 11 Support Repository Updates (SRU) come out approximately once per month. Every third one (generally) is special and becomes the CPU for Solaris. If you have a regular cadence for applying CPUs or Patch Set Updates (PSU) for your other Oracle software, choose the corresponding SRU that has been designated as that quarter's CPU. You can find this information in My Oracle Support (MOS), on the Oracle Technology Network (OTN), or just read Gerry's blog in mid-January, April, July and October.
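
    If you are not sure which SRU a system is currently running, the version of the entire incorporation is the quickest indicator (a small sketch; the Branch field encodes the update and SRU level - for example, 0.175.0.8 corresponds to the SRU8 update used earlier in this post):

    # pkg list entire
    # pkg info entire | grep Branch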

    Thanks again to David Lange for asking such good questions. I hope the answers helped.

    Tuesday Dec 11, 2012

    Solaris 11 pkg fix is my new friend

    While putting together some examples of the Solaris 11 Automated Installer (AI), I managed to really mess up my system, to the point where AI was completely unusable. This was my fault as a combination of unfortunate incidents left some remnants that were causing problems, so I tried to clean things up. Unsuccessfully. Perhaps that was a bad idea (OK, it was a terrible idea), but this is Solaris 11 and there are a few more tricks in the sysadmin toolbox.

    Here's what I did.

    # rm -rf /install/*
    # rm -rf /var/ai
    
    # installadm create-service -n solaris11-x86 --imagepath /install/solaris11-x86 \
                     -s solaris-auto-install@5.11-0.175.0
    
    Warning: Service svc:/network/dns/multicast:default is not online.
       Installation services will not be advertised via multicast DNS.
    
    Creating service from: solaris-auto-install@5.11-0.175.0
    DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
    Completed                                1/1       130/130  264.4/264.4    0B/s
    
    PHASE                                          ITEMS
    Installing new actions                       284/284
    Updating package state database                 Done 
    Updating image state                            Done 
    Creating fast lookup database                   Done 
    Reading search index                            Done 
    Updating search index                            1/1 
    
    Creating i386 service: solaris11-x86
    
    Image path: /install/solaris11-x86
    
    So far so good. Then comes an oops.....
    setup-service[168]: cd: /var/ai//service/.conf-templ: [No such file or directory]
                                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    
    This is where you generally say a few things to yourself, and then promise to quit deleting configuration files and directories when you don't know what you are doing. Then you recall that the new Solaris 11 packaging system has some ability to correct common mistakes (like the one I just made). Let's give it a try.
    # pkg fix installadm
    Verifying: pkg://solaris/install/installadm                     ERROR
            dir: var/ai
                    Group: 'root (0)' should be 'sys (3)'
            dir: var/ai/ai-webserver
                    Missing: directory does not exist
            dir: var/ai/ai-webserver/compatibility-configuration
                    Missing: directory does not exist
            dir: var/ai/ai-webserver/conf.d
                    Missing: directory does not exist
            dir: var/ai/image-server
                    Group: 'root (0)' should be 'sys (3)'
            dir: var/ai/image-server/cgi-bin
                    Missing: directory does not exist
            dir: var/ai/image-server/images
                    Group: 'root (0)' should be 'sys (3)'
            dir: var/ai/image-server/logs
                    Missing: directory does not exist
            dir: var/ai/profile
                    Missing: directory does not exist
            dir: var/ai/service
                    Group: 'root (0)' should be 'sys (3)'
            dir: var/ai/service/.conf-templ
                    Missing: directory does not exist
            dir: var/ai/service/.conf-templ/AI_data
                    Missing: directory does not exist
            dir: var/ai/service/.conf-templ/AI_files
                    Missing: directory does not exist
            file: var/ai/ai-webserver/ai-httpd-templ.conf
                    Missing: regular file does not exist
            file: var/ai/service/.conf-templ/AI.db
                    Missing: regular file does not exist
            file: var/ai/image-server/cgi-bin/cgi_get_manifest.py
                    Missing: regular file does not exist
    Created ZFS snapshot: 2012-12-11-21:09:53
    Repairing: pkg://solaris/install/installadm                  
    Creating Plan (Evaluating mediators): |
    
    DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
    Completed                                1/1           3/3      0.0/0.0    0B/s
    
    PHASE                                          ITEMS
    Updating modified actions                      16/16
    Updating image state                            Done 
    Creating fast lookup database                   Done 
    
    In just a few moments, IPS found the missing files and incorrect ownerships/permissions. Instead of reinstalling the system, or falling back to an earlier Live Upgrade boot environment, I was able to create my AI services and now all is well.
    # installadm create-service -n solaris11-x86 --imagepath /install/solaris11-x86 \
                       -s solaris-auto-install@5.11-0.175.0
    Warning: Service svc:/network/dns/multicast:default is not online.
       Installation services will not be advertised via multicast DNS.
    
    Creating service from: solaris-auto-install@5.11-0.175.0
    DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
    Completed                                1/1       130/130  264.4/264.4    0B/s
    
    PHASE                                          ITEMS
    Installing new actions                       284/284
    Updating package state database                 Done 
    Updating image state                            Done 
    Creating fast lookup database                   Done 
    Reading search index                            Done 
    Updating search index                            1/1 
    
    Creating i386 service: solaris11-x86
    
    Image path: /install/solaris11-x86
    
    Refreshing install services
    Warning: mDNS registry of service solaris11-x86 could not be verified.
    
    Creating default-i386 alias
    
    Setting the default PXE bootfile(s) in the local DHCP configuration
    to:
    bios clients (arch 00:00):  default-i386/boot/grub/pxegrub
    
    
    Refreshing install services
    Warning: mDNS registry of service default-i386 could not be verified.
    
    # installadm create-service -n solaris11u1-x86 --imagepath /install/solaris11u1-x86 \
                        -s solaris-auto-install@5.11-0.175.1
    Warning: Service svc:/network/dns/multicast:default is not online.
       Installation services will not be advertised via multicast DNS.
    
    Creating service from: solaris-auto-install@5.11-0.175.1
    DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
    Completed                                1/1       514/514  292.3/292.3    0B/s
    
    PHASE                                          ITEMS
    Installing new actions                       661/661
    Updating package state database                 Done 
    Updating image state                            Done 
    Creating fast lookup database                   Done 
    Reading search index                            Done 
    Updating search index                            1/1 
    
    Creating i386 service: solaris11u1-x86
    
    Image path: /install/solaris11u1-x86
    
    Refreshing install services
    Warning: mDNS registry of service solaris11u1-x86 could not be verified.
    
    # installadm list
    
    Service Name    Alias Of      Status  Arch   Image Path 
    ------------    --------      ------  ----   ---------- 
    default-i386    solaris11-x86 on      i386   /install/solaris11-x86
    solaris11-x86   -             on      i386   /install/solaris11-x86
    solaris11u1-x86 -             on      i386   /install/solaris11u1-x86
    
    
    
    This is way, way better than pkgchk -f in Solaris 10. I'm really beginning to like this new IPS packaging system.
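
    For comparison, the closest thing in Solaris 10 looks roughly like this (a sketch; SUNWcsu is just an example package name). pkgchk can verify a package and correct file attributes, but it cannot pull down replacement file contents the way pkg fix does.

    # pkgchk SUNWcsu        (verify the package)
    # pkgchk -f SUNWcsu     (correct file attributes only)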

    Wednesday Aug 22, 2012

    VirtualBox 4.2 Release Candidate 2 is available for testing

    Release Candidate 2 of VirtualBox 4.2 is now available for testing. This release is made available for testers and early adopters, and should not be used on production or critical systems.

    Version 4.2 will be a major update and contains the following new features.

  • Improved Windows 8 support, in particular many 3D-related fixes
  • Ability to group virtual machines
  • Expert mode for wizards
  • Allow more settings to be modified while a guest is running
  • Support for up to 36 network cards
  • Limiting network IO bandwidth
  • Ability to start VMs during system boot on Linux, OS X and Solaris
  • New experimental support for Drag'n'drop from the host to Linux guests. Support for more guests and for guest-to-host is planned. (bug #81)
  • Host parallel port pass through on Windows platform
  • New API functions for controlling guests (see the SDK documentation)

    In addition to the new functionality, the following bugs have been fixed since the last beta release.

  • Mac OS X hosts: sign application and installer to avoid warnings on Mountain Lion
  • VMM: improved VM context switch performance for Intel CPUs using nested paging
  • VMM: added support for FlushByASID features of AMD CPUs (Bulldozer and newer)
  • VMM: fixed unreal mode handling on older CPUs with VT-x (gPXE, Solaris 7/8/9; bug #9941)
  • VMM: fixed MP tables for I/O APIC interrupt routing relevant for ancient SMP guests (e.g. old OS/2 releases)
  • VMM: support recent VIA CPUs (bug #10005)
  • GUI: network operations manager
  • GUI: allow taking screenshots of the current VM window content
  • GUI: allow automatically sorting of the VM list
  • GUI: allow starting of headless VMs from the GUI
  • GUI: allow reset, shutdown and poweroff from the Manager window
  • GUI: allow to globally limit the maximum screen resolution for guests
  • GUI: show the full medium part on hovering the list of recently used ISO images
  • GUI: do not create additional folders when a new machine has a separator character in its name (bug #6541)
  • GUI: don't crash on terminate if the settings dialog is still open (bug #9973)
  • Snapshots: fixed a crash when restoring an old snapshot when powering off a VM (bug #10491)
  • Settings: sanitise the name of VM folders and settings file (bug #10549)
  • Settings: allow to store the iSCSI initiator secret encrypted
  • E1000: 802.1q VLAN support
  • Storage: implemented burning of audio CDs in passthrough mode
  • Storage: implemented support for discarding unused image blocks through TRIM for SATA and IDE and UNMAP for SCSI when using VDI images
  • Storage: added support for QED images
  • Storage: added support for QCOW (full support for v1 and readonly support for v2 images)
  • Storage: added readonly support for VHDX images
  • Solaris additions: added support for X.org Server 1.11 and 1.12
  • Windows hosts: no need to recreate host-only adapters after a VirtualBox update
  • Windows hosts: updated toolchain; make the source code compatible to VC 2010 and enable some security-related compiler options
  • NAT: improvements for the built-in TFTP server (bugs #7385, #10286)

    Please refer to the VirtualBox 4.2 Release Candidate 2 Changelog for a complete list of changes and enhancements.

    Binaries for Windows, MacOS, Linux and Solaris can be downloaded here.

    Important Note: Please do not use this VirtualBox Release Candidate on production machines. Release Candidates are intended for early evaluation and testing purposes only.

    Report problems or issues at the VirtualBox Beta Forum.


    Tuesday Aug 21, 2012

    VirtualBox 4.2 Release Candidate 1 is available for testing

    Release Candidate 1 of VirtualBox 4.2 is now available for testing. This release is made available for testers and early adopters, and should not be used on production or critical systems.

    Version 4.2 will be a major update and contains the following new features.

  • Improved Windows 8 support, in particular many 3D-related fixes
  • Ability to group virtual machines
  • Expert mode for wizards
  • Allow more settings to be modified while a guest is running
  • Support for up to 36 network cards
  • Limiting network IO bandwidth
  • Ability to start VMs during system boot on Linux, OS X and Solaris
  • New experimental support for Drag'n'drop from the host to Linux guests. Support for more guests and for guest-to-host is planned. (bug #81)
  • Host parallel port pass through on Windows platform

    In addition to the new functionality, the following bugs have been fixed since the last beta release.

  • Mac OS X hosts: sign application and installer to avoid warnings on Mountain Lion
  • VMM: improved VM context switch performance for Intel CPUs using nested paging
  • VMM: added support for FlushByASID features of AMD CPUs (Bulldozer and newer)
  • VMM: fixed unreal mode handling on older CPUs with VT-x (gPXE, Solaris 7/8/9; bug #9941)
  • VMM: fixed MP tables for I/O APIC interrupt routing relevant for ancient SMP guests (e.g. old OS/2 releases)
  • VMM: support recent VIA CPUs (bug #10005)
  • GUI: network operations manager
  • GUI: allow taking screenshots of the current VM window content
  • GUI: allow automatically sorting of the VM list
  • GUI: allow starting of headless VMs from the GUI
  • GUI: allow reset, shutdown and poweroff from the Manager window
  • GUI: allow to globally limit the maximum screen resolution for guests
  • GUI: show the full medium part on hovering the list of recently used ISO images
  • GUI: do not create additional folders when a new machine has a separator character in its name (bug #6541)
  • GUI: don't crash on terminate if the settings dialog is still open (bug #9973)
  • Snapshots: fixed a crash when restoring an old snapshot when powering off a VM (bug #10491)
  • Settings: sanitise the name of VM folders and settings file (bug #10549)
  • Settings: allow to store the iSCSI initiator secret encrypted
  • E1000: 802.1q VLAN support
  • Storage: implemented burning of audio CDs in passthrough mode
  • Storage: implemented support for discarding unused image blocks through TRIM for SATA and IDE and UNMAP for SCSI when using VDI images
  • Storage: added support for QED images
  • Storage: added support for QCOW (full support for v1 and readonly support for v2 images)
  • Storage: added readonly support for VHDX images
  • Solaris additions: added support for X.org Server 1.11 and 1.12
  • Windows hosts: no need to recreate host-only adapters after a VirtualBox update
  • Windows hosts: updated toolchain; make the source code compatible to VC 2010 and enable some security-related compiler options

    Please refer to the VirtualBox 4.2 Release Candidate 1 Changelog for a complete list of changes and enhancements.

    Binaries for Windows, MacOS, Linux and Solaris can be downloaded here.

    Important Note: Please do not use this VirtualBox Release Candidate on production machines. Release Candidates are intended for early evaluation and testing purposes only.

    Report problems or issues at the VirtualBox Beta Forum.


    VirtualBox 4.1.20 is now available

    VirtualBox 4.1.20 has been released and is now available. This is a maintenance release for version 4.1 and contains the following fixes.

  • VMM: fixed a crash under rare circumstances for VMs running without hardware virtualization
  • VMM: fixed a code analysis bug for certain displacement instructions for VMs running without hardware virtualization
  • VMM: fixed an interpretation bug for TPR read instructions under rare conditions (AMD-V only)
  • Snapshots: fixed a crash when restoring an old snapshot when powering off a VM (bugs #9604, #10491)
  • VBoxSVC: be more tolerant against environment variables with strange encodings (bug #8780)
  • VGA: fixed wrong access check which might cause a crash under certain conditions
  • NAT: final fix for crashes under rare conditions (bug #10513)
  • Virtio-net: fixed the problem with receiving of GSO packets in Windows XP guests causing packet loss in host-to-VM transfers
  • HPET: several fixes (bugs #10170, #10306)
  • Clipboard: disable the clipboard by default for new VMs
  • BIOS: the PCI BIOS was not properly detected with the chipset type set to ICH9 (bugs #9301, #10327)
  • Mac OS X hosts: adaptions to Mountain Lion
  • Linux Installer: fixes for Gentoo Linux (bug #10642)
  • Linux guests: fixed mouse integration on Fedora 17 guests (bug #2306)
  • Linux Additions: compile fixes for RHEL/CentOS 6.3 (bug #10756)
  • Linux Additions: compile fixes for Linux 3.5-rc1 and Linux 3.6-rc1 (bug #10709)
  • Solaris host: fixed a guru meditation while allocating large pages (bug #10600)
  • Solaris host: fixed possible kernel panics while freeing memory
  • Solaris Installer: fixed missing icon for menu and desktop shortcuts

    The full changelog can be found here.

    You can download binaries for Windows, OS X (Intel Mac), Linux and Solaris hosts at
    http://www.virtualbox.org/wiki/Downloads


    Wednesday Aug 15, 2012

    Pre-work for Upcoming Solaris 11 Boot Camps

    Over the next few weeks, I will be hosting some Solaris 11 hands on workshops. Some of these will be public events at an Oracle office while others will be private sessions for a specific customer.

    The public sessions I'm hosting are

    Note: there is also an identical Solaris 11 session hosted by my colleague, Pavel Anni, in Broomfield, Colorado on August 23.

    If you are planning on attending any of these sessions (including Pavel's), there are several things you can do in advance that will help not only you, but your fellow attendees.

    Enable VT-x or AMD-V on your Laptop

    If you will be using VirtualBox to host your workshop guest image, you need to enable the hardware virtualization feature. This is typically controlled by a BIOS setting, and where you find it varies by laptop manufacturer. If you do not find it in the system or CPU settings, try looking in security. If you are given the choice of VT-x and VT-d, you only need to enable VT-x.

    If you have a company laptop that does not allow you to change the BIOS settings, you might ask your employer if they can provide you one for the day that is not locked down.

    Note: Enabling hardware virtualization is a requirement to complete the workshop.

    Download and Install VirtualBox

    Since this will be primarily a hands on lab, you are encouraged to bring a laptop. The labs will all be run in a Solaris guest machine, so your laptop will also need a virtualization application, such as VMware or VirtualBox. We recommend VirtualBox. You can download a free copy at VirtualBox.org. Binaries are available for Windows, MacOS, Solaris and most Linux distributions.

    After installing VirtualBox, you should also install the VirtualBox Extension Pack. It is not required for the lab, but should you continue to use the guest machine after the workshop, you might find some of its features very useful.

    Download a Solaris 11 VM Appliance from the Oracle Technology Network (OTN)

    You can download a pre-built Solaris 11 guest image directly from the Oracle Technology Network. Here is a link to the VM download page. Accept the license and download the latest Solaris 11 VirtualBox guest image.

    Once downloaded, you can use the VirtualBox VM import function to create a usable guest. Clicking File -> Import Appliance on the VirtualBox main window will launch the import wizard. Select the file you just downloaded and in a few minutes you will have a bootable Solaris 11 guest. The import process should look something like this.


    Click image to enlarge
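
    If you prefer the command line to the GUI wizard, the same import can be done with VBoxManage (a sketch; substitute the name of the .ova file you actually downloaded from OTN):

    $ VBoxManage import OracleSolaris11_VM.ova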

    Configure the Solaris Guest

    The first time you boot the Solaris 11 guest, you will be required to complete a short configuration dialog. Once you have specified all of the items on the page, press F2 to advance to the next screen.

    The introduction screen looks like this.



    Click image to enlarge

    On the second page, specify the host name and default network setup. The default name of solaris is used throughout the lab. For the network setup, select Automatic.



    Click image to enlarge

    The next item in the initial system configuration is the timezone. That does not matter for the hands on labs. If you are experiencing poor weather, I have found that setting the system to Aruba time can be helpful.

    The final step is to set the root password and set up the initial user. To stay consistent with the lab handouts, set the root password to oracle2011. The initial user should be specified as lab and its password should be oracle1.



    Click image to enlarge

    Finally, you will be presented with a summary screen, which should look something like this. When satisfied, press F2 to complete.



    Click image to enlarge

    The Solaris 11 VM image from the Oracle Technology Network has the VirtualBox Guest Additions already installed. This enables keyboard and mouse integration as well as resizable and seamless windows.

    Set up a Local Repository

    To complete the zone installation labs in the workshop, you will need access to the Oracle public Solaris 11 repository, which means you must also have wireless network access. This does not always work well in a workshop with 30 or 40 users stressing the local wireless access point. To make this easier, you can create your own customized package repository in your newly imported Solaris 11 guest. My colleague, Pavel Anni, has supplied this excellent set of instructions on how to do that.

    1. Create a directory or a ZFS file system to hold your local repository.

    # mkdir /repo
    or 
    # zfs create -o mountpoint=/repo -o compress=gzip rpool/repo
    
    2. Create an empty repository in it
    # pkgrepo create /repo
    
    3. Create a text file 'zone-pkgs.txt' with the list of necessary packages. That list should look like this (cut and paste is your best friend).
    
    pkg://solaris/compress/bzip2
    pkg://solaris/compress/gzip
    pkg://solaris/compress/p7zip
    pkg://solaris/compress/unzip
    pkg://solaris/compress/zip
    pkg://solaris/consolidation/SunVTS/SunVTS-incorporation
    pkg://solaris/consolidation/X/X-incorporation
    pkg://solaris/consolidation/admin/admin-incorporation
    pkg://solaris/consolidation/cacao/cacao-incorporation
    pkg://solaris/consolidation/cde/cde-incorporation
    pkg://solaris/consolidation/cns/cns-incorporation
    pkg://solaris/consolidation/dbtg/dbtg-incorporation
    pkg://solaris/consolidation/desktop/desktop-incorporation
    pkg://solaris/consolidation/desktop/gnome-incorporation
    pkg://solaris/consolidation/gfx/gfx-incorporation
    pkg://solaris/consolidation/install/install-incorporation
    pkg://solaris/consolidation/ips/ips-incorporation
    pkg://solaris/consolidation/java/java-incorporation
    pkg://solaris/consolidation/jdmk/jdmk-incorporation
    pkg://solaris/consolidation/l10n/l10n-incorporation
    pkg://solaris/consolidation/ldoms/ldoms-incorporation
    pkg://solaris/consolidation/man/man-incorporation
    pkg://solaris/consolidation/nspg/nspg-incorporation
    pkg://solaris/consolidation/nvidia/nvidia-incorporation
    pkg://solaris/consolidation/osnet/osnet-incorporation
    pkg://solaris/consolidation/sfw/sfw-incorporation
    pkg://solaris/consolidation/sic_team/sic_team-incorporation
    pkg://solaris/consolidation/solaris_re/solaris_re-incorporation
    pkg://solaris/consolidation/sunpro/sunpro-incorporation
    pkg://solaris/consolidation/ub_javavm/ub_javavm-incorporation
    pkg://solaris/consolidation/userland/userland-incorporation
    pkg://solaris/consolidation/vpanels/vpanels-incorporation
    pkg://solaris/consolidation/xvm/xvm-incorporation
    pkg://solaris/crypto/ca-certificates
    pkg://solaris/database/sqlite-3
    pkg://solaris/developer/base-developer-utilities
    pkg://solaris/developer/debug/mdb
    pkg://solaris/developer/macro/cpp
    pkg://solaris/diagnostic/cpu-counters
    pkg://solaris/diagnostic/snoop
    pkg://solaris/diagnostic/tcpdump
    pkg://solaris/driver/serial/asy
    pkg://solaris/driver/storage/cmdk
    pkg://solaris/driver/storage/mpt
    pkg://solaris/driver/x11/xsvc
    pkg://solaris/editor/vim/vim-core
    pkg://solaris/entire
    pkg://solaris/group/system/solaris-small-server
    pkg://solaris/library/database/gdbm
    pkg://solaris/library/expat
    pkg://solaris/library/libffi
    pkg://solaris/library/libidn
    pkg://solaris/library/libmilter
    pkg://solaris/library/libtecla
    pkg://solaris/library/libxml2
    pkg://solaris/library/libxslt
    pkg://solaris/library/ncurses
    pkg://solaris/library/nspr
    pkg://solaris/library/perl-5/sun-solaris-512
    pkg://solaris/library/python-2/cherrypy-26
    pkg://solaris/library/python-2/lxml-26
    pkg://solaris/library/python-2/m2crypto-26
    pkg://solaris/library/python-2/mako-26
    pkg://solaris/library/python-2/ply-26
    pkg://solaris/library/python-2/pybonjour-26
    pkg://solaris/library/python-2/pycurl-26
    pkg://solaris/library/python-2/pyopenssl-26
    pkg://solaris/library/python-2/python-extra-26
    pkg://solaris/library/python-2/simplejson-26
    pkg://solaris/library/readline
    pkg://solaris/library/security/nss
    pkg://solaris/library/security/openssl
    pkg://solaris/library/security/trousers
    pkg://solaris/library/zlib
    pkg://solaris/media/cdrtools
    pkg://solaris/media/xorriso
    pkg://solaris/naming/ldap
    pkg://solaris/network/bridging
    pkg://solaris/network/dns/bind
    pkg://solaris/network/ipfilter
    pkg://solaris/network/open-fabrics
    pkg://solaris/network/ping
    pkg://solaris/network/rsync
    pkg://solaris/network/ssh
    pkg://solaris/network/ssh/ssh-key
    pkg://solaris/package/pkg
    pkg://solaris/package/pkg/zones-proxy
    pkg://solaris/package/svr4
    pkg://solaris/release/name
    pkg://solaris/release/notices
    pkg://solaris/runtime/perl-512
    pkg://solaris/runtime/python-26
    pkg://solaris/security/nss-utilities
    pkg://solaris/security/sudo
    pkg://solaris/security/tcp-wrapper
    pkg://solaris/service/file-system/nfs
    pkg://solaris/service/network/dns/mdns
    pkg://solaris/service/network/smtp/sendmail
    pkg://solaris/service/network/ssh
    pkg://solaris/service/security/gss
    pkg://solaris/service/security/kerberos-5
    pkg://solaris/shell/bash
    pkg://solaris/shell/ksh
    pkg://solaris/system/boot-environment-utilities
    pkg://solaris/system/boot/wanboot
    pkg://solaris/system/core-os
    pkg://solaris/system/data/terminfo/terminfo-core
    pkg://solaris/system/data/timezone
    pkg://solaris/system/device-administration
    pkg://solaris/system/dtrace
    pkg://solaris/system/dtrace/dtrace-toolkit
    pkg://solaris/system/fault-management
    pkg://solaris/system/fault-management/smtp-notify
    pkg://solaris/system/file-system/autofs
    pkg://solaris/system/file-system/hsfs
    pkg://solaris/system/file-system/nfs
    pkg://solaris/system/file-system/pcfs
    pkg://solaris/system/file-system/udfs
    pkg://solaris/system/file-system/ufs
    pkg://solaris/system/file-system/zfs
    pkg://solaris/system/install
    pkg://solaris/system/install/configuration
    pkg://solaris/system/install/locale
    pkg://solaris/system/kernel
    pkg://solaris/system/kernel/platform
    pkg://solaris/system/kernel/secure-rpc
    pkg://solaris/system/kernel/security/gss
    pkg://solaris/system/library
    pkg://solaris/system/library/boot-management
    pkg://solaris/system/library/c++-runtime
    pkg://solaris/system/library/gcc-3-runtime
    pkg://solaris/system/library/iconv/utf-8
    pkg://solaris/system/library/install
    pkg://solaris/system/library/libpcap
    pkg://solaris/system/library/math
    pkg://solaris/system/library/openmp
    pkg://solaris/system/library/security/gss
    pkg://solaris/system/library/security/gss/diffie-hellman
    pkg://solaris/system/library/security/gss/spnego
    pkg://solaris/system/library/security/libsasl
    pkg://solaris/system/library/security/rpcsec
    pkg://solaris/system/library/storage/libdiskmgt
    pkg://solaris/system/library/storage/scsi-plugins
    pkg://solaris/system/linker
    pkg://solaris/system/locale
    pkg://solaris/system/manual
    pkg://solaris/system/manual/locale
    pkg://solaris/system/network
    pkg://solaris/system/network/nis
    pkg://solaris/system/network/routing
    pkg://solaris/system/prerequisite/gnu
    pkg://solaris/system/resource-mgmt/resource-caps
    pkg://solaris/system/resource-mgmt/resource-pools
    pkg://solaris/system/system-events
    pkg://solaris/system/zones
    pkg://solaris/system/zones/brand/brand-solaris
    pkg://solaris/terminal/luit
    pkg://solaris/terminal/resize
    pkg://solaris/text/doctools
    pkg://solaris/text/doctools/ja
    pkg://solaris/text/groff/groff-core
    pkg://solaris/text/less
    pkg://solaris/text/spelling-utilities
    pkg://solaris/web/curl
    pkg://solaris/web/wget
    pkg://solaris/x11/header/x11-protocols
    pkg://solaris/x11/library/libfontenc
    pkg://solaris/benchmark/iperf
    
    4. Populate your local repository with the required packages. At present, it is not possible to do this in parallel, so the packages must be received one at a time. Depending on your network speed, this step could take 2 to 3 hours.
    # for f in `cat zone-pkgs.txt` ; \
      do pkgrecv -s http://pkg.oracle.com/solaris/release -d /repo $f ; \
      echo $f ; \
      done
    # pkgrepo rebuild -s /repo
    
    5. Check if you really have 167 packages. (If you have downloaded and installed the archive, it might be more; we have added the apache and iperf packages for our demo purposes.)
    # pkgrepo info -s file:///repo
    
    6. Set up and enable package repository service in the global zone:
    # svccfg -s application/pkg/server setprop pkg/inst_root=/repo   
    # svcprop -p pkg/inst_root application/pkg/server   (Just checking...)
    # svcadm refresh application/pkg/server 
    # svcadm enable application/pkg/server 
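
    Before moving on, an optional sanity check that the depot service came up and is serving the repository (the HTTP address here is the global zone address used in the next step):
    # svcs application/pkg/server          (should report online)
    # pkgrepo info -s http://10.0.2.15/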
    
    7. Switch repositories (disable the all existing ones and mirrors and enable the local one):
    # pkg set-publisher -G '*' -M '*' -g http://10.0.2.15/ solaris
    
    Note that it should use your global zone's IP address (in this case, provided automatically by VirtualBox). All of the zones you create will then use this address and be able to install packages from the global zone. It won't work if you set your repository's HTTP address to just http://localhost.
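
    To confirm that the switch took effect, a quick look at the publisher configuration (from the global zone, or later from inside a zone) should show the origin pointing at your local repository rather than pkg.oracle.com:
    # pkg publisher
    # pkg publisher solaris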

    Download zoneplot

    The zones portion of the hands on lab will make use of two utilities that are not included in Solaris. You will need to download both Pavel Anni's zoneplot and Andreas Bernauer's Gnuplot driver utility.

    Optional: Return your Solaris publisher to the Oracle default repository

    When you have completed all of the labs, you can restore the original Oracle default repository.
    # pkg set-publisher -G '*' -g http://pkg.oracle.com/solaris/release -P solaris
    
    That should be about it. Please leave a comment if you have any questions. I am looking forward to seeing you at one of these, or a future Solaris event.

    Technocrati Tags:

    Friday Jan 13, 2012

    Live Upgrade, /var/tmp and the Ever Growing Boot Environments

    Even if you are a veteran Live Upgrade user, you might be caught by surprise when your new ZFS root pool starts filling up, and you have no idea where the space is going. I tripped over this one while installing different versions of StarOffice and OpenOffice and forgot that they left a rather large parcel behind in /var/tmp. When recently helping a customer through some Live Upgrade issues, I noticed that they were downloading patch clusters into /var/tmp and then I remembered that I used to do that too.

    And then stopped. This is why. What follows has been added to the list of Common Live Upgrade Problems, as Number 3.

    Let's start with a clean installation of Solaris 10 10/09 (u8).

    # df -k /
    Filesystem                       kbytes    used   avail capacity  Mounted on
    rpool/ROOT/s10x_u8wos_08a      20514816 4277560 13089687    25%    /
    
    
    So far, so good. Solaris is just a bit over 4GB. Another 3GB is used by the swap and dump devices. That should leave plenty of room for half a dozen or so patch cycles (assuming 1GB each) and an upgrade to the next release.

    Now, let's put on the latest recommended patch cluster. Note that I am following the suggestions in my Live Upgrade Survival Guide, installing the prerequisite patches and the LU patch before actually installing the patch cluster.

    # cd /var/tmp
    # scp patchserver:/export/patches/10_x86_Recommended-2012-01-05.zip .
    # unzip -qq 10_x86_Recommended-2012-01-05.zip
    
    # scp patchserver:/export/patches/121431-69.zip .
    # unzip 121431-69
    
    # cd 10_x86_Recommended
    # ./installcluster --apply-prereq --passcode (you can find this in README)
    
    # patchadd -M /var/tmp 121431-69
    
    # lucreate -n s10u8-2012-01-05
    # ./installcluster -d -B s10u8-2012-01-05 --passcode
    
    # luactivate s10u8-2012-01-05
    # init 0
    
    
    After the new boot environment is activated, let's upgrade to the latest release of Solaris 10. In this case, it will be Solaris 10 8/11 (u10).

    Yes, this does seem like an awful lot is happening in a short period of time. I'm trying to demonstrate a situation that really does happen when you forget something as simple as a patch cluster clogging up /var/tmp. Think of this as one of those time lapse video sequences you might see in a nature documentary.

    # pkgrm SUNWluu SUNWlur SUNWlucfg
    # pkgadd -d /cdrom/sol_10_811_x86  SUNWluu SUNWlur SUNWlucfg
    # patchadd -M /var/tmp 121431-69
    
    # lucreate -n s10u10-baseline
    # echo "autoreg=disable" > /var/tmp/no-autoreg
    # luupgrade -u -s /cdrom/sol_10_811_x86 -k /var/tmp/no-autoreg -n s10u10-baseline
    # luactivate s10u10-baseline
    # init 0
    
    As before, everything went exactly as expected. Or I thought so, until I logged in the first time and checked the free space in the root pool.
    # df -k /
    Filesystem                       kbytes    used   avail capacity  Mounted on
    rpool/ROOT/s10u10-baseline     20514816 10795038 2432308    82%    /
    
    Where did all of the space go ? Back of the napkin calculations of 4.5GB (s10u8) + 4.5GB (s10u10) + 1GB (patch set) + 3GB (swap and dump) = 13GB. 20GB pool - 13GB used = 7GB free. But there's only 2.4GB free ?

    This is about the time that I smack myself on the forehead and realize that I put the patch cluster in /var/tmp. Old habits die hard. This is not a problem; I can just delete it, right ?

    Not so fast.

    # du -sh /var/tmp
     5.4G   /var/tmp
    
    # du -sh /var/tmp/10*
     3.8G   /var/tmp/10_x86_Recommended
     1.5G   /var/tmp/10_x86_Recommended-2012-01-05.zip
    
    # rm -rf /var/tmp/10*
    
    # du -sh /var/tmp
     3.4M   /var/tmp
    
    
    Imagine the look on my face when I check the pool free space, expecting to see 7GB.
    # df -k /
    Filesystem                      kbytes    used   avail capacity  Mounted on
    rpool/ROOT/s10u10-baseline    20514816 5074262 2424603    68%    /
    
    
    We are getting closer. At least my root filesystem size is reasonable (5GB vs 11GB). But the free space hasn't changed at all.

    Once again, I smack myself on the forehead. The patch cluster is also in the other two boot environments. All I have to do is get rid of them too, and I'll get my free space back.

    # lumount s10u8-2012-01-05 /mnt
    # rm -rf /mnt/var/tmp/10_x86_Recommended*
    # luumount s10u8-2012-01-05
    
    # lumount s10x_u8wos_08a /mnt
    # rm -rf /mnt/var/tmp/10_x86_Recommended*
    # luumount s10x_u8wos_08a
    
    Surely, that will get my free space reclaimed, right ?
    # df -k /
    Filesystem                    kbytes    used   avail capacity  Mounted on
    rpool/ROOT/s10u10-baseline  20514816 5074265 2429261    68%    /
    
    
    This is when I smack myself on the forehead for the third time in one afternoon. Just getting rid of them in the boot environments is not sufficient. It would be if I were using UFS as a root filesystem, but lucreate will use the ZFS snapshot and cloning features when used on a ZFS root. So the patch cluster is in the snapshot, and the oldest one at that.
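
    If you want to see exactly where the space is hiding, ZFS will show you directly. A quick check, using the dataset names from this example, might look like this:
    # zfs list -t snapshot -r rpool/ROOT           (the lucreate snapshots still reference the patch cluster)
    # zfs get -r usedbysnapshots rpool/ROOT        (space held by each dataset's snapshots)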

    Let's try this all over again, but this time I will put the patches somewhere else that is not part of a boot environment. If you are thinking of using root's home directory, think again - it is part of the boot environment. If you are running out of ideas, let me suggest that /export/patches might be a good place to put them.

    Doing the exercise again, with the patches in /export/patches, I get similar results (to be expected), but with one significant difference. This time the patches are in a shared ZFS dataset (/export) and can be deleted.

    # lustatus
    Boot Environment           Is       Active Active    Can    Copy      
    Name                       Complete Now    On Reboot Delete Status    
    -------------------------- -------- ------ --------- ------ ----------
    s10x_u8wos_08a             yes      no     no        yes    -         
    s10u8-2012-01-05           yes      no     no        yes    -         
    s10u10-baseline            yes      yes    yes       no     -         
    
    # df -k /
    Filesystem                      kbytes    used   avail capacity  Mounted on
    rpool/ROOT/s10u10-baseline    20514816 5184578 2445140    68%    /
    
    
    # df -k /export
    Filesystem                      kbytes    used   avail capacity  Mounted on
    rpool/export                  20514816 5606384 2445142    70%    /export
    
    
    This time, when I delete them, the disk space will be reclaimed.
    # rm -rf /export/patches/10_x86_Recommended*
    
    # df -k /
    Filesystem                      kbytes    used   avail capacity  Mounted on
    rpool/ROOT/s10u10-baseline    20514816 5184578 8048050    40%    /
    
    
    Now, that's more like it. With this free space, I can continue to patch and maintain my system as I had originally planned - estimating a few hundred MB to 1.5GB per patch set.

    The moral to the story is that even if you follow all of the best practices and recommendations, you can still be tripped up by old habits when you don't consider their consequences. And when you do, don't feel bad. Many best practices come from exercises just like this one.

    Technocrati Tags:

    Wednesday Jul 06, 2011

    Live Upgrade Survival Guide

    When I started blogging about Live Upgrade, it was always my intention to post a list of tips. In this companion piece to Common Live Upgrade Problems, I will take a look at several proactive things you can do to make your Live Upgrade experience go more smoothly. Some of these are documented, although not always as obviously as I would like. Others are common sense. A few might surprise you.

    Since this is getting to be a long article, here are the tips, with direct links down to the explanation and examples.

    1. Keep your patching and packaging utilities up to date
    2. Check the log files
    3. ZFS pool and file system versioning
    4. Use ZFS for your root file system
    5. Don't save the patch backout files
    6. Start using Live Upgrade immediately after initial installation
    7. Remember to install the LU packages from the upgrade media
    8. Use the installcluster script instead of luupgrade -t
    9. Keep your boot configurations simple
    10. Keeping /var/tmp clean
    Without any further delay, here are my Live Upgrade Survival Tips.

    1. Always make sure your patching and packaging utilities are up to date

    This is the most frequent source of beginners' troubles with Live Upgrade, and it is completely unnecessary. As such, if you call me or ask for help over email, my first question to you will be "Have you applied the prerequisite patches ? What about 121430/121431 ?" If your answer is, "I don't know", my response will be "Ok then. I'll wait while you check and apply what is out of date. Call me back when you have finished - long pause - if you are still having troubles."

    Live upgrade frequently stresses the installation tools. New versions are supplied on the update media, but we continue to fix corner cases, even after an update is released. It is important to check for any patches related to patching or packaging tools and update them before performing any Live Upgrade activities.

    Previously, you had to dig through Infodoc 72099; then it was rewritten as Infodoc 206844. Today, this document lives on as Solaris Live Upgrade Software Patch Requirements. It is a much better read, but it is still an intimidating list of patches to sort through. To ease the effort on system administrators, we now include these patches in the Solaris 10 recommended patch cluster, along with a safe way to install them in the current boot environment.

    Note: it is still worth checking the status of the Live Upgrade patch itself (121430 SPARC or 121431 x86).
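
    Checking which revision you currently have installed only takes a moment (x86 patch shown; substitute 121430 on SPARC):
    # patchadd -p | grep 121431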

    In this example, I'm taking a system from Solaris 10 10/08 (u6) to Solaris 10 10/09 (u8). I am already in the directory where the patch cluster was downloaded and unpacked.

    # lofiadm -a /export/iso/s10/s10u8-b08a-x86.iso
    /dev/lofi/1
    # mount -o ro -F hsfs /dev/lofi/1 /mnt
    # pkgadd -d /mnt/Solaris_10/Product SUNWluu SUNWlur SUNWlucfg
    
    # ./installcluster --apply-prereq --s10cluster
    Setup .
    
    
    Recommended OS Cluster Solaris 10 x86 (2011.06.17)
    
    Application of patches started : 2011.06.29 11:19:11
    
    Applying 120901-03 ( 1 of 11) ... skipped
    Applying 121334-04 ( 2 of 11) ... skipped
    Applying 119255-81 ( 3 of 11) ... skipped
    Applying 119318-01 ( 4 of 11) ... skipped
    Applying 121297-01 ( 5 of 11) ... skipped
    Applying 138216-01 ( 6 of 11) ... skipped
    Applying 122035-05 ( 7 of 11) ... skipped
    Applying 127885-01 ( 8 of 11) ... skipped
    Applying 145045-03 ( 9 of 11) ... skipped
    Applying 142252-02 (10 of 11) ... skipped
    Applying 125556-10 (11 of 11) ... skipped
    
    Application of patches finished : 2011.06.29 11:19:13
    
    
    Following patches were skipped :
     Patches already applied
     120901-03     119318-01     138216-01     127885-01     142252-02
     121334-04     121297-01     122035-05     145045-03     125556-10
     119255-81
    
    Installation of prerequisite patches complete.
    
    Install log files written :
      /var/sadm/install_data/s10x_rec_cluster_short_2011.06.29_11.19.11.log
      /var/sadm/install_data/s10x_rec_cluster_verbose_2011.06.29_11.19.11.log
    After installing the new Live Upgrade packages from the installation media, I ran the installcluster script from the latest recommended patch cluster. The --apply-prereq argument tells the script just to install the required Live Upgrade patches in the current boot environment. Since I have run several live upgrades previously from this boot environment, it is not surprising that all of the patches had already been applied. Your mileage will vary.

    The --s10cluster argument is the current patch cluster passcode. The intent is to make you read the included README, if for no other reason than to obtain the latest cluster passcode.

    # lucreate -n s10u8-baseline
    Checking GRUB menu...
    
    Population of boot environment  successful.
    Creation of boot environment  successful.
    
    # luupgrade -u -s /mnt -n s10u8-baseline
    
    
    Things will always go better when you have the proper versions of the patching and packaging utilities. This is not just a Live Upgrade survival tip, but a good one for general system maintenance.

    2. Always check the logs. Always, always, always

    How many problems could we prevent if we just read the documentation or took a look at the logs left after the maintenance activity finishes ? Repetitive success with Live Upgrade may lull you into a false sense of security. Things frequently work so well that if the final output from the command is not proclaiming the end of civilization, we move on to the next step. Bzzzzt. Not so fast.

    Patches

    For patching, the situation is rather simple. Look at the summary output from luupgrade (or the installcluster script) and see if any patches failed to install properly. If you missed this, you can always go back into the patch logs themselves to see what happened.
    # lumount s10u9-2011-06-23 /mnt
    # grep -i failed /mnt/var/sadm/patch/*/log
    # grep -i error /mnt/var/sadm/patch/*/log
    /mnt/var/sadm/patch/118668-32/log:compress(1) returned error code 2
    /mnt/var/sadm/patch/119314-42/log:compress(1) returned error code 2
    /mnt/var/sadm/patch/119314-42/log:compress(1) returned error code 2
    /mnt/var/sadm/patch/119314-42/log:compress(1) returned error code 2
    /mnt/var/sadm/patch/124939-04/log:compress(1) returned error code 2
    
    So no patches failed to install, but there were a few errors. A closer look at the log files will tell us that these are harmless, caused when the existing patch backout files failed to compress. That's fine, they were already compressed.
    # cat /mnt/var/sadm/patch/119314-42/log
    
    Installation of  was successful.
    
    This appears to be an attempt to install the same architecture and
    version of a package which is already installed.  This installation
    will attempt to overwrite this package.
    
    /.alt.s10u9-2011-06-23-undo/var/sadm/pkg/SUNWlvmg/save/119314-42/undo: -- file unchanged
    compress(1) returned error code 2
    The SUNWlvmg backout package will not be compressed.
    Continuing to process backout package.
    
    Installation of  was successful.
    

    Upgrades

    Upgrades are a bit more tricky because there are two different classes of problems: packages that failed to install and configuration files that couldn't be properly upgraded.

    The easiest to spot are packages that failed to install. These packages are clearly identified at the end of the output from luupgrade. In case you missed them, we will tell you about them again if you try to luactivate(1M) a boot environment where some packages failed to install.

    As with patching, if you missed the messages, you can look back at the upgrade log file in the alternate boot environment. You can find it at /var/sadm/system/logs/upgrade_log.

    # lumount s10u9-2011-06-23 /mnt
    # tail -18 /mnt/var/sadm/system/logs/upgrade_log
    Installation of  was successful.
    
    The messages printed to the screen by this upgrade have been saved to:
    
    	/a/var/sadm/system/logs/upgrade_log
    
    After this system is rebooted, the upgrade log can be found in the file:
    
    	/var/sadm/system/logs/upgrade_log
    
    
    Please examine the file:
    
    	/a/var/sadm/system/data/upgrade_cleanup
    
    It contains a list of actions that may need to be performed to complete
    the upgrade.  After this system is rebooted, this file can be found at:
    
    	/var/sadm/system/data/upgrade_cleanup
    
    After performing cleanup actions, you must reboot the system.	- Environment variables (/etc/default/init)
    Updating package information on boot environment .
    Package information successfully updated on boot environment .
    Adding operating system patches to the BE .
    
    There may be cases where an upgrade isn't able to process a configuration file that has been customized. In that case, the upgrade process will either preserve the original, saving the new configuration file under a different name, or the reverse, saving the existing file under a new name and installing a new one. How can you tell which of these happened ?

    Check the upgrade_cleanup log file. It is so important that we mention it twice as luupgrade finishes its output. Here is a snippet from an upgrade from Solaris 10 10/09 to Solaris 10 9/10.

    # lumount s10u9-baseline /mnt
    # cat /mnt/var/sadm/system/data/upgrade_cleanup
    
    ..... lots of output removed for readability ....
    
    /a/kernel/drv/e1000g.conf: existing file preserved, the new version was installed as /a/kernel/drv/e1000g.conf.new
    
    /etc/snmp/conf/snmpd.conf: existing file renamed to /etc/snmp/conf/snmpd.conf~10
    
    /a/etc/mail/sendmail.cf: existing file renamed to /a/etc/mail/sendmail.cf.old
    /a/etc/mail/submit.cf: existing file renamed to /a/etc/mail/submit.cf.old
    
    Sendmail has been upgraded to version 8.14.4 .
    After you reboot, you may want to run
    /usr/sbin/check-hostname
    and
    /usr/sbin/check-permissions ALL
    These two shell-scripts will check for common
    misconfigurations and recommend corrective
    action, or report if things are OK.
    
    
    In this example, we see several different actions taken by the installer.

    In the case of /kernel/drv/e1000g.conf (the e1000g driver configuration file), the original contents were preserved and a new default file was installed at /kernel/drv/e1000g.conf.new. Let's see what differences exist between the two files.

    # lumount s10u9-baseline /mnt
    # diff /mnt/kernel/drv/e1000g.conf /mnt/kernel/drv/e1000g.conf.new
    # Copyright 2010 Sun Microsystems, Inc.  All rights reserved.
    11c11
    < # ident	"@(#)e1000g.conf	1.4	06/03/06 SMI"
    ---
    > # ident	"@(#)e1000g.conf	1.5	10/01/12 SMI"
    41,45c41,51
    <         # These are maximum frame limits, not the actual ethernet frame
    <         # size. Your actual ethernet frame size would be determined by
    <         # protocol stack configuration (please refer to ndd command man pages)
    <         # For Jumbo Frame Support (9k ethernet packet) 
    <         # use 3 (upto 16k size frames)
    ---
    > 	#
    > 	# These are maximum frame limits, not the ethernet payload size
    > 	# (usually called MTU).  Your actual ethernet MTU is determined by frame
    > 	# size limit and protocol stack configuration (please refer to ndd
    > 	# command man pages)
    > 	#
    > 	# For Jumbo Frame Support (9k ethernet packet) use 3 (upto 16k size
    > 	# frames).  On PCH adapter type (82577 and 82578) you can configure up
    > 	# to 4k size frames.  The 4k size is only allowed at 1gig speed, so if
    > 	# you select 4k frames size, you cannot force or autonegotiate the
    > 	# 10/100 speed options.
    
    The differences in the two files are just comments. That is a common case, and not unexpected since I had not modified the e1000g driver configuration file.

    For /etc/snmp/conf/snmpd.conf, the situation was the reverse. The existing copy was saved with a new file extension of ~10. A quick look shows these two files to be identical.
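
    That quick look is nothing fancier than a diff of the two copies, using the same lumount mount point as before:
    # diff /mnt/etc/snmp/conf/snmpd.conf /mnt/etc/snmp/conf/snmpd.conf~10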

    The last example is from our friend sendmail. Since this upgrade includes a new version of sendmail, it is reasonable to expect several differences in the old and new configuration files.

    # diff /mnt/etc/mail/sendmail.cf /mnt/etc/mail/sendmail.cf.old
    
    236c236,237
    < O DaemonPortOptions=Name=MTA
    ---
    > O DaemonPortOptions=Name=MTA-v4, Family=inet
    > O DaemonPortOptions=Name=MTA-v6, Family=inet6
    281c282
    < # key for shared memory; 0 to turn off, -1 to auto-select
    ---
    > # key for shared memory; 0 to turn off
    284,285c285
    < # file to store auto-selected key for shared memory (SharedMemoryKey = -1)
    < #O SharedMemoryKeyFile
    ---
    
    As with an earlier example, the output was truncated to improve readability. In this case, I would take all of my local modifications to sendmail.cf and apply those to the new configuration file. Note that the log file suggests running two scripts after I make these modifications to check for common errors.

    There are several other actions the installer can take. To learn more about those, take a look at the top portion of the upgrade_cleanup file where they are all explained in great detail, including recommended actions for the system administrator.

    3. Watch your ZFS Pool and File System Version Numbers

    Thanks to John Kotches and Craig Bell for bringing this one up in the comments. This one is a bit sneaky and it can catch you totally unaware. As such, I've included this pretty high up in the list of survival tips.

    ZFS pool and file system functionality may be added with a Solaris release. These new capabilities are identified in the ZFS zpool and file system version numbers. To find out what versions you are running, and what capabilities they provide, use the corresponding upgrade -v commands. Yes, it is a bit disconcerting at first, using an upgrade command, not to upgrade, but to determine which features exist.

    Here is an example of each output, for your reference.

    # zpool upgrade -v
    This system is currently running ZFS pool version 31.
    
    The following versions are supported:
    
    VER  DESCRIPTION
    ---  --------------------------------------------------------
     1   Initial ZFS version
     2   Ditto blocks (replicated metadata)
     3   Hot spares and double parity RAID-Z
     4   zpool history
     5   Compression using the gzip algorithm
     6   bootfs pool property
     7   Separate intent log devices
     8   Delegated administration
     9   refquota and refreservation properties
     10  Cache devices
     11  Improved scrub performance
     12  Snapshot properties
     13  snapused property
     14  passthrough-x aclinherit
     15  user/group space accounting
     16  stmf property support
     17  Triple-parity RAID-Z
     18  Snapshot user holds
     19  Log device removal
     20  Compression using zle (zero-length encoding)
     21  Deduplication
     22  Received properties
     23  Slim ZIL
     24  System attributes
     25  Improved scrub stats
     26  Improved snapshot deletion performance
     27  Improved snapshot creation performance
     28  Multiple vdev replacements
     29  RAID-Z/mirror hybrid allocator
     30  Encryption
     31  Improved 'zfs list' performance
    
    For more information on a particular version, including supported releases,
    see the ZFS Administration Guide.
    
    
    # zfs upgrade -v
    The following filesystem versions are supported:
    
    VER  DESCRIPTION
    ---  --------------------------------------------------------
     1   Initial ZFS filesystem version
     2   Enhanced directory entries
     3   Case insensitive and File system unique identifier (FUID)
     4   userquota, groupquota properties
     5   System attributes
    
    For more information on a particular version, including supported releases,
    see the ZFS Administration Guide.
    
    
    In this particular example, the kernel supports up to zpool version 31 and ZFS version 5.

    Where you can run into trouble with this is when you create a pool or file system and then fall back to a boot environment that is older and doesn't support those particular versions. The survival tip is to keep your zpool and zfs versions at a level that is compatible with the oldest boot environment that you will ever fall back to. A corollary to this is that you can upgrade your pools and file systems once you have deleted the last boot environment that is limited to that particular version.

    Your first question is probably, "what versions of ZFS go with the particular Solaris releases ?" Here is a table of Solaris releases since 10/08 (u6) and their corresponding zpool and zfs version numbers.

    Solaris Release           ZPOOL Version   ZFS Version
    Solaris 10 10/08 (u6)     10              3
    Solaris 10 5/09 (u7)      10              3
    Solaris 10 10/09 (u8)     15              4
    Solaris 10 9/10 (u9)      22              4
    Solaris 10 8/11 (u10)     29              5
    Solaris 10 1/13 (u11)     32              5
    Solaris 11 11/11 (ga)     33              5
    Solaris 11.1              34              6

    Note that these versions apply whether the system was freshly installed at that release or has been patched up to the same level. In other words, a Solaris 10 10/08 system with the latest recommended patch cluster installed might be at the 8/11 (u10) level. You can always use zpool upgrade -v and zfs upgrade -v to make sure.
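
    If you just want to compare what the running kernel supports against what an existing pool and dataset are actually using, a quick check along these lines will do (rpool is simply the usual root pool name):
    # zpool upgrade -v | head -1          (pool version this kernel runs)
    # zpool get version rpool             (version the pool is actually at)
    # zfs get version rpool/ROOT          (version the dataset is actually at)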

    Now you are wondering how you create a pool or file system at a version different than the default for your Solaris release. Fortunately, ZFS is flexible enough to allow us to do exactly that. Here is an example.

    # zpool create testpool testdisk
    
    # zpool get version testpool
    NAME      PROPERTY  VALUE    SOURCE
    testpool  version   31       default
    
    # zfs get version testpool
    NAME      PROPERTY  VALUE    SOURCE
    testpool  version   5        -
    
    
    This pool and associated top level file system can only be accessed on a Solaris 11 system. Let's destroy it and start again, this time making it possible to access it on a Solaris 10 10/09 system (zpool version 15, zfs version 4). We can use the -o version= and -O version= when the pool is created to accomplish this.
    # zpool destroy testpool
    # zpool create -o version=15 -O version=4 testpool testdisk
    # zfs create testpool/data
    
    # zpool get version testpool
    NAME      PROPERTY  VALUE    SOURCE
    testpool  version   15       local
    
    # zfs get -r version testpool
    NAME      PROPERTY  VALUE    SOURCE
    testpool  version   4        -
    testpool/data  version   4        -
    
    
    In this example, we created the pool explicitly at version 15, and using -O to pass zfs file system creation options to the top level dataset, we set that to version 4. To make things easier, new file systems created in this pool will be at version 4, inheriting that from the parent, unless overridden by -o version= at the time the file system is created.

    The last remaining task is to look at how you might upgrade a pool and file system when you have removed an old boot environment. We will go back to our previous example where we have a version 15 pool and 4 dataset. We have removed the Solaris 10 10/09 boot environment and now the oldest is Solaris 10 8/11 (u10). That supports version 29 pools and version 5 file systems. We will use zpool/zfs upgrade -V to set the specific versions to 29 and 5 respectively.

    # zpool upgrade -V 29 testpool
    This system is currently running ZFS pool version 31.
    
    Successfully upgraded 'testpool' from version 15 to version 29
    
    # zpool get version testpool
    NAME      PROPERTY  VALUE    SOURCE
    testpool  version   29       local
    
    # zfs upgrade -V 5 testpool
    1 filesystems upgraded
    
    # zfs get -r version testpool
    testpool       version   5        -
    testpool/data  version   4        -
    
    
    That didn't go quite as expected, or did it ? The pool was upgraded as expected, as was the top level dataset. But testpool/data is still at version 4. It initially inherited that version from the parent when it was created. When using zfs upgrade, only the datasets listed are upgraded. If we wanted the entire pool of file systems to be upgraded, we should have used -r for recursive.
    # zfs upgrade -V 5 -r testpool
    1 filesystems upgraded
    1 filesystems already at this version
    
    # zfs get -r version testpool
    NAME           PROPERTY  VALUE    SOURCE
    testpool       version   5        -
    testpool/data  version   5        -
    
    
    Now, that's more like it.

    For review, the tip is to keep your shared ZFS datasets and pools at the lowest versions supported by the oldest boot environments you plan to use. You can always use upgrade -v to see what versions are available for use, and by using -o version= and -O version=, you can create new pools and datasets that are accessible by older boot environments. This last tip can also come in handy if you are moving pools around between systems that might be at different versions.

    4. Use ZFS as your root file system

    While Live Upgrade can take away a lot of the challenges of patching and upgrading Solaris systems, one small obstacle can make it nearly impossible to deploy - adequate disk slices. Disk sizes are growing much faster than the size of Solaris, so on any relatively modern system, there should be adequate space on the internal disks to hold at least two, if not more, boot environments. This can also include a plethora of zones, if sparse root zones are used.

    The problem is generally not space, but disk slices (partitions). With a regular disk label, there is a limit of 8 partitions (0-7). One of these (slice 2) is taken by the disk utilities to record the size of the disk, so it is not available for our use. Take another for the first swap area, one more for the Solaris Volume Manager (SVM) or two if you are using Veritas encapsulated root disks. Pretty soon, you run out of slices. Of course this assumes that you didn't use the entire boot disk to store things such as local data, home directories, backup configuration data, etc.

    In other words, if you didn't plan on using Live Upgrade before provisioning the system, it is unlikely that you will have the necessary slices or space available to start using it later. Perhaps in an upcoming posting, I will put together a little cookbook to give some ideas on how to work around this.

    The proper long term answer is to use ZFS for your root file system. As we can see in the Solaris 11 Express release notes, ZFS is now integrated with the new packaging and installation tools to simplify system maintenance. All of the capabilities of Live Upgrade are just built in and they work right out of the box. The key to making all of that work smoothly is the ability to rely on certain ZFS features being available for the root file system (snapshot, clone).

    Beginning with Solaris 10 10/08, ZFS has been an optional choice for the root file system. Thanks to some early adopters that have helped sort out the corner cases, ZFS is an excellent choice for use as a root file system. In fact, I would go a bit further and suggest that ZFS is the recommended root file system today.

    By using ZFS, the disk slice challenges have just gone away. The only question that remains is whether or not the root pool has enough space to hold the alternate boot environment, but even that has a different look with ZFS. Instead of copying the source boot environment, Live Upgrade makes a clone, saving both time and space. The new boot environment only needs enough disk space to hold the changes between boot environments, not the entire Solaris installation.

    Time for another example.

    # zfs list -r panroot/ROOT
    NAME                                                USED  AVAIL  REFER  MOUNTPOINT
    panroot/ROOT                                       36.7G  5.06G    18K  legacy
    panroot/ROOT/s10u6_baseline                        10.6M  5.06G  6.92G  /
    panroot/ROOT/s10u8-baseline                        34.7M  5.06G  7.08G  /
    panroot/ROOT/s10u9-2011-06-23                      1.46G  5.06G  7.73G  /
    panroot/ROOT/s10u9-2011-06-23-undo                 1.48G  5.06G  7.66G  /mnt
    panroot/ROOT/s10u9-baseline                        12.7G  5.06G  7.43G  /
    panroot/ROOT/s10x_u6wos_07b                         119M  5.06G  3.87G  /
    
    Each of these ZFS datasets corresponds to a separate boot environment - a bootable Solaris installation. The space required to keep the extra boot environments around is the sum of the dataset used space plus that of its snapshot (not shown in this example). For this single disk configuration, it would be impossible to hold this many full copies of Solaris, but thanks to cloning, a df(1) shows that I have space for at least this many, if not more.
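
    The snapshots that lucreate made are not part of the listing above; to include them in the space accounting, list them explicitly:
    # zfs list -t snapshot -r panroot/ROOT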

    If you are using ZFS as your root file system, you are just one command away from being able to enjoy all of the benefits of Live Upgrade.

    5. Don't save patch backout files

    At first you might think this is a curious recommendation, but stick with me for a few moments.

    One of the most important features of Live Upgrade is maintaining a safe fall back in case you run into troubles with a patch or upgrade. Rather than performing surgery on a malfunctioning boot environment, perhaps doing more harm with each patch backed out, why not boot back to a known safe configuration ? One luactivate and an init 0 and you are back to a known operating configuration where you can take your time performing forensic analysis of your troubled boot environment.

    That would make all of those undo.Z files littering up /var/sadm/patch somewhat extraneous. And that gets us to the next reason for not saving the backout files, space - but not what you are thinking. Sure, the new boot environment is larger with all of those files laying around, but how much are we talking about ?

    More than you think. Quite a bit more, actually.

    Here is an example where I have installed the June 23, 2011 recommended patch cluster on a Solaris 10 9/10 system, with and without backout files.

    # zfs list -r panroot/ROOT | grep s10u9
    panroot/ROOT/s10u9-2011-06-23                      1.46G  4.06G  7.73G  /
    panroot/ROOT/s10u9-2011-06-23-undo                 2.53G  4.06G  8.66G  /mnt
    panroot/ROOT/s10u9-baseline                        12.7G  4.06G  7.43G  /
    
    That's a gigabyte of difference between the boot environment with and without the undo.Z files. Surely there must be some other explanation. Let's see.
    # lumount s10u9-2011-06-23-undo /mnt
    # find /mnt/var/sadm/patch -name undo.Z -print | xargs -n1 rm -f 
    # zfs list -r panroot/ROOT | grep s10u9
    panroot/ROOT/s10u9-2011-06-23                      1.46G  5.06G  7.73G  /
    panroot/ROOT/s10u9-2011-06-23-undo                 1.46G  5.06G  7.66G  /mnt
    panroot/ROOT/s10u9-baseline                        12.7G  5.06G  7.43G  /
    
    If it was just this one gigabyte, I might not be making such a big deal about it. Did you ever think about those zones you are deploying ? As the zone installer runs through all of those packages for the new non-global zone, it copies all of the applicable undo.Z files, if they are present. This compounds the space problem.

    In this example, before removing the undo.Z files, I created a zone on each boot environment, so that I can see the space difference. Remember that these are sparse root zones, and should only be around 100MB in size.

    # zfs list -r panroot/zones
    NAME                              USED  AVAIL  REFER  MOUNTPOINT
    panroot/zones                     761M  4.53G    22K  /zones
    panroot/zones/with-undo-files     651M  4.53G   651M  /zones/with-undo-files
    panroot/zones/without-undo-files  111M  4.53G   111M  /zones/without-undo-files
    
    That's right - there's a 540MB difference between the two zones, and the only difference is whether or not the patch backout files were preserved. Throw in a couple of dozen zones, and this becomes more than just a nuisance. Not only does it take more space and time to create the zones, it also impacts the zone backups. All so that you can keep around files that you will never use.

    When you run the installcluster script, don't forget the -d flag. If you prefer luupgrade -t, the magic sequence is -O "-d".
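
    Putting that together, a sketch of each invocation, reusing the boot environment name and patch directory from my other examples, would look like this:
    # ./installcluster -d -B s10u9-2011-06-23 --s10cluster

    # luupgrade -t -n s10u9-2011-06-23 -O "-d" \
        -s /export/patches/10_Recommended-2011-06-23/patches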

    6. Start using Live Upgrade immediately after installation

    This tip is largely influenced by how you provision your systems, and the frequency in which you might wipe the configuration and start again. My primary system is something of a lab experiment, but isn't too dissimilar from many development environments I have seen.

    Right after I installed Solaris 10 from the 10/08 media, I created a second boot environment, preserving the initial pristine configuration. Rather than reinstalling Solaris from media or a jumpstart server, I would just boot back to the original boot environment, delete the remaining boot environments, and in just a few moments, be back to square one.
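
    A minimal sketch of that flow, using my own boot environment names (the initial install created s10x_u6wos_07b, and all later work happens in the clone):
    # lucreate -n s10u6-baseline          (clone of the pristine installation)
    # luactivate s10u6-baseline
    # init 0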

    Another useful boot environment to preserve is the initial customization, done immediately after installation. Users are added, security settings are changed, and a handful of software packages are installed. Preserving this system baseline can be very useful, should your system need to be refreshed in a hurry. In my case, that did happen at 34,000 ft, somewhere over Ohio - but that's a story for another day.

    If a system is to live through multiple upgrades, it might be a good idea to encode the Solaris release and the patch cluster in the boot environment name. A taxonomy that works for me is <release>-<patch cluster or baseline>. For example, s10u9-baseline would be the initial upgrade to Solaris 10 9/10, and s10u9-2011-06-23 would be that same release, but patched using the June 23, 2011 patch cluster.

    Putting this all together, we have something like this.

    # lustatus
    Boot Environment           Is       Active Active    Can    Copy
    Name                       Complete Now    On Reboot Delete Status
    -------------------------- -------- ------ --------- ------ ----------
    s10x_u6wos_07b             yes      no     no        yes    -
    s10u6-baseline             yes      no     no        yes    -
    s10u8-baseline             yes      no     no        yes    -
    s10u9-baseline             yes      yes    yes       no     -
    s10u9-2011-06-23           yes      no     no        yes    -
    
    
    The thing I like about this arrangement is that I can quickly jump to a particular Solaris release when a customer asks me if a particular feature exists, or some patch has been integrated. I can see how this might be useful for some development environments as well.

    7. Remember to install the LU packages from the upgrade media

    Using Live Upgrade for an upgrade requires an additional step compared to using it for patching. When performing an upgrade, the Live Upgrade packages from the installation media need to be installed in the current boot environment. After doing this, it is still necessary to check for prerequisite patches, especially if several months have passed since the update was released.

    Prior to Solaris 10 8/07 (u4), including Solaris 8 and 9, there were only two Live Upgrade packages: SUNWluu and SUNWlur. Solaris 10 8/07 (u4) and later add a third package, SUNWlucfg. These packages can be found in the Product directory on the installation media.

    Here is an example.

    # mount -o ro -F hsfs `lofiadm -a /export/iso/s10/s10u9-ga-x86.iso` /mnt
    # pkgadd -d /mnt/Solaris_10/Product SUNWluu SUNWlur SUNWlucfg
    # cd 
    # ./installcluster --apply-prereq --s10cluster
    
    Now we are ready to use lucreate and luupgrade -u to create a new boot environment and upgrade it to the release on the installation media.

    8. Use the installcluster script from the Solaris patch cluster

    It would be perfectly acceptable to unpack a Solaris recommended patch cluster and then use luupgrade -t to install the patches into an alternate boot environment. Live Upgrade will build a patch order file based on the metadata in all of the patches, and will generally do the right thing.

    Occasionally, it might be more convenient to do things in a slightly different order, or to handle patch installation errors just a bit better. That's what the installcluster script does. For corner cases where the patch order file might be incorrectly generated, the script builds its own installation order, working around some inconvenient situations. It also does a better job with error handling, perhaps trying a different way or sequence to install a problematic patch.

    The most important difference between the two installation methods is how they report their progress. Let's take a look at the two and see which you like better. First, luupgrade -t:

    # lucreate -n zippy
    # luupgrade -t -s /export/patches/10_Recommended-2011-06-23/patches -n zippy
    Validating patches...
    
    Loading patches installed on the system...
    
    Done!
    
    Loading patches requested to install.
    
    Architecture for package SUNWstaroffice-core01 from directory SUNWstaroffice-core01.i in patch 120186-22 differs from the package installed on the system.
    Version of package SUNWmcosx from directory SUNWmcosx in patch 121212-02 differs from the package installed on the system.
    Version of package SUNWmcos from directory SUNWmcos in patch 121212-02 differs from the package installed on the system.
    ..... lots of similar output deleted .......
    
    The following requested patches are already installed on the system
    Requested patch 113000-07 is already installed on the system.
    Requested patch 117435-02 is already installed on the system.
    
    ..... more output deleted .......
    
    The following requested patches do not update any packages installed on the system
    No Packages from patch 121212-02 are installed on the system.
    No Packages from patch 125540-06 are installed on the system.
    No Packages from patch 125542-06 are installed on the system.
    
    Checking patches that you specified for installation.
    
    Done!
    
    ..... yet more output deleted .....
    
    Approved patches will be installed in this order:
    
    118668-32 118669-32 119281-25 119314-42 119758-20 119784-18 119813-13 119901-11
    119907-18 120186-22 120544-22 121429-15 122912-25 123896-22 124394-11 124939-04
    125138-28 125139-28 125216-04 125333-17 125732-06 126869-05 136999-10 137001-08
    137081-05 138624-04 138823-08 138827-08 140388-02 140861-02 141553-04 143318-03
    143507-02 143562-09 143600-10 143616-02 144054-04 144489-17 145007-02 145125-02
    145797-01 145802-06 146020-01 146280-01 146674-01 146773-01 146803-02 146859-01
    146862-01 147183-01 147228-01 147218-01 145081-04 145201-06
    
    Checking installed patches...
    Installing patch packages...
    
    Patch 118668-32 has been successfully installed.
    See /a/var/sadm/patch/118668-32/log for details
    
    Patch packages installed:
      SUNWj5cfg
      SUNWj5dev
      SUNWj5dmo
      SUNWj5man
      SUNWj5rt
    
    Checking installed patches...
    Installing patch packages...
    
    Patch 118669-32 has been successfully installed.
    See /a/var/sadm/patch/118669-32/log for details
    
    Patch packages installed:
      SUNWj5dmx
      SUNWj5dvx
      SUNWj5rtx
    
    Checking installed patches...
    Executing prepatch script...
    Installing patch packages...
    
    Patch 119281-25 has been successfully installed.
    See /a/var/sadm/patch/119281-25/log for details
    Executing postpatch script...
    
    Patch packages installed:
      SUNWdtbas
      SUNWdtdst
      SUNWdtinc
      SUNWdtma
      SUNWdtmad
      SUNWmfrun
    
    Checking installed patches...
    Executing prepatch script...
    Installing patch packages...
    
    Patch 119314-42 has been successfully installed.
    
    I think you get the picture. To gauge progress, you have to keep scrolling back to the list of packages, and find the one luupgrade is currently working on. After just a few minutes, the scroll buffer of your terminal window will be exhausted and you will be left guessing how long the operation will take to complete.

    Let's compare this to the output from the installcluster script. Note the use of -d from an earlier recommendation.

    # lucreate -n zippy
    # ./installcluster -d -B zippy --s10cluster
    Setup ..............
    
    
    Recommended OS Cluster Solaris 10 x86 (2011.06.17)
    
    Application of patches started : 2011.07.06 00:25:07
    
    Applying 120901-03 (  1 of 216) ... skipped
    Applying 121334-04 (  2 of 216) ... skipped
    Applying 119255-81 (  3 of 216) ... skipped
    Applying 119318-01 (  4 of 216) ... skipped
    Applying 121297-01 (  5 of 216) ... skipped
    Applying 138216-01 (  6 of 216) ... skipped
    Applying 122035-05 (  7 of 216) ... skipped
    Applying 127885-01 (  8 of 216) ... skipped
    Applying 145045-03 (  9 of 216) ... skipped
    Applying 142252-02 ( 10 of 216) ... skipped
    Applying 125556-10 ( 11 of 216) ... skipped
    Applying 140797-01 ( 12 of 216) ... skipped
    Applying 113000-07 ( 13 of 216) ... skipped
    Applying 117435-02 ( 14 of 216) ... skipped
    Applying 118344-14 ( 15 of 216) ... skipped
    Applying 118668-32 ( 16 of 216) ... success
    Applying 118669-32 ( 17 of 216) ... success
    Applying 118778-14 ( 18 of 216) ... skipped
    Applying 121182-05 ( 19 of 216) ... skipped
    
    Notice the nice clean output. You can always tell where you are in the installation process (nn out of 216) and there is not a lot of extra information cluttering up the controlling terminal.

    9. Keep it Simple

    This will be the most difficult and controversial of the survival tips, and that's why I have saved it for last. Remember that Live Upgrade must work across three releases and all of the various patch combinations and clusters. At the very least, it stresses the installation programs and patching tools.

    For a UFS root system, the administrator has a lot of control where the various file systems are laid out. All it takes is enough -m lines, or if that becomes too unwieldy, a list of slices in a control file passed by -M.
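
    For reference, a UFS-style invocation with explicit -m lines might look something like this (the device names are placeholders, not a recommendation):
    # lucreate -n s10u9-baseline \
        -m /:/dev/dsk/c0t1d0s0:ufs \
        -m /var:/dev/dsk/c0t1d0s3:ufs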

    ZFS provides a significant simplification of the Solaris file systems, and it is expected that system administrators will take advantage of this. Of the Solaris directories (/, /usr, /etc, /var, /opt, /kernel, /platform, /bin, /sbin, /lib, /dev, /devices), only /var is allowed to be broken out into its own dataset. Many legacy operational procedures, some dating back to SunOS 4.x days, will have /usr, /usr/local, /opt, /var and /var/crash split into different file systems. Not only is this not a recommended practice for a ZFS root system, it may actually prevent the use of Live Upgrade. If forced to choose between Live Upgrade and my old configuration habits, I will take Live Upgrade every time.

    There will be more

    I hope to occasionally revise this article, adding new tips, or reworking some of the current ones. Yes, that would make this more of a Wiki type of document than your typical blog, and that might be where this ends up some day. Until then, feel free to bookmark this page and return to it as often as you need to, especially as you plan out your Live Upgrade activities.

    If you have some tips and suggestions, please leave them in the comments. If I can work them up with good examples, I'll add them to the page (with full credit, of course). Technocrati Tags:

    Thursday Jun 30, 2011

    Common Live Upgrade Problems

    As I have worked with customers deploying Live Upgrade in their environments, several problems seem to surface over and over. With this blog article, I will try to collect these troubles, as well as suggest some workarounds. If this sounds like the beginnings of a Wiki, you would be right. At present, there is not enough material for one, so we will use this blog for the time being. I do expect new material to be posted on occasion, so if you wish to bookmark it for future reference, a permanent link can be found here.

    To help with your navigation, here is an index of the common problems.

    1. lucreate(1M) copies a ZFS root rather than making a clone
    2. luupgrade(1M) and the Solaris autoregistration file
    3. Watch out for an ever growing /var/tmp
    Without any further delay, here are some common Live Upgrade problems.

    lucreate(1M) copies a ZFS root rather than making a clone

    This was introduced in Solaris 10 10/09 (u8) and the root of the problem is a duplicate entry in the source boot environment's ICF configuration file. Prior to u8, a ZFS root file system was not included in /etc/vfstab, since the mount is implicit at boot time. Starting with u8, the root file system is included in /etc/vfstab, and when the boot environment is scanned to create the ICF file, a duplicate entry is recorded. Here's what the error looks like.
    # lucreate -n s10u9-baseline
    Checking GRUB menu...
    System has findroot enabled GRUB
    Analyzing system configuration.
    Comparing source boot environment  file systems with the
    file system(s) you specified for the new boot environment. Determining
    which file systems should be in the new boot environment.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    Creating configuration for boot environment .
    Source boot environment is .
    Creating boot environment .
    Creating file systems on boot environment .
    Creating  file system for  in zone  on .
    
    The error indicator -----> /usr/lib/lu/lumkfs: test: unknown operator zfs
    
    Populating file systems on boot environment .
    Checking selection integrity.
    Integrity check OK.
    Populating contents of mount point .
    
    This should not happen ------> Copying.
    
    Ctrl-C and cleanup
    
    If you weren't paying close attention, you might not even know this is an error. The symptoms are lucreate times that are way too long due to the extraneous copy, or the one that alerted me to the problem, the root file system is filling up - again thanks to a redundant copy.

    This problem has already been identified and corrected, and a patch (121431-58 or later for x86, 121430-57 for SPARC) is available. Unfortunately, this patch has not yet made it into the Solaris 10 Recommended Patch Cluster. Applying the prerequisite patches from the latest cluster is a recommendation from the Live Upgrade Survival Guide blog, so an additional step will be required until the patch is included. Let's see how this works.

    # patchadd -p | grep 121431
    Patch: 121429-13 Obsoletes: Requires: 120236-01 121431-16 Incompatibles: Packages: SUNWluzone
    Patch: 121431-54 Obsoletes: 121436-05 121438-02 Requires: Incompatibles: Packages: SUNWlucfg SUNWluu SUNWlur
    
    # unzip 121431-58
    # patchadd 121431-58
    Validating patches...
    
    Loading patches installed on the system...
    
    Done!
    
    Loading patches requested to install.
    
    Done!
    
    Checking patches that you specified for installation.
    
    Done!
    
    
    Approved patches will be installed in this order:
    
    121431-58
    
    
    Checking installed patches...
    Executing prepatch script...
    Installing patch packages...
    
    Patch 121431-58 has been successfully installed.
    See /var/sadm/patch/121431-58/log for details
    Executing postpatch script...
    
    Patch packages installed:
      SUNWlucfg
      SUNWlur
      SUNWluu
    
    # lucreate -n s10u9-baseline
    Checking GRUB menu...
    System has findroot enabled GRUB
    Analyzing system configuration.
    INFORMATION: Unable to determine size or capacity of slice .
    Comparing source boot environment  file systems with the
    file system(s) you specified for the new boot environment. Determining
    which file systems should be in the new boot environment.
    INFORMATION: Unable to determine size or capacity of slice .
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    Creating configuration for boot environment .
    Source boot environment is .
    Creating boot environment .
    Cloning file systems from boot environment  to create boot environment .
    Creating snapshot for  on .
    Creating clone for  on .
    Setting canmount=noauto for  in zone  on .
    Saving existing file  in top level dataset for BE  as //boot/grub/menu.lst.prev.
    Saving existing file  in top level dataset for BE  as //boot/grub/menu.lst.prev.
    Saving existing file  in top level dataset for BE  as //boot/grub/menu.lst.prev.
    File  propagation successful
    Copied GRUB menu from PBE to ABE
    No entry for BE  in GRUB menu
    Population of boot environment  successful.
    Creation of boot environment  successful.
    
    This time it took just a few seconds. A cursory examination of the offending ICF file (/etc/lu/ICF.3 in this case) shows that the duplicate root file system entry is now gone.
    # cat /etc/lu/ICF.3
    s10u8-baseline:-:/dev/zvol/dsk/panroot/swap:swap:8388608
    s10u8-baseline:/:panroot/ROOT/s10u8-baseline:zfs:0
    s10u8-baseline:/vbox:pandora/vbox:zfs:0
    s10u8-baseline:/setup:pandora/setup:zfs:0
    s10u8-baseline:/export:pandora/export:zfs:0
    s10u8-baseline:/pandora:pandora:zfs:0
    s10u8-baseline:/panroot:panroot:zfs:0
    s10u8-baseline:/workshop:pandora/workshop:zfs:0
    s10u8-baseline:/export/iso:pandora/iso:zfs:0
    s10u8-baseline:/export/home:pandora/home:zfs:0
    s10u8-baseline:/vbox/HardDisks:pandora/vbox/HardDisks:zfs:0
    s10u8-baseline:/vbox/HardDisks/WinXP:pandora/vbox/HardDisks/WinXP:zfs:0
    
This error can show up in a slightly different form. When activating a new boot environment, propagation of the bootloader and configuration files may fail with an error indicating that an old boot environment could not be mounted. That prevents the activation from taking place, and you will find yourself booting back into the old BE.

Again, the root cause is the root file system entry in /etc/vfstab. Even though the mount-at-boot flag is set to no, it confuses lumount(1M) as it cycles through the boot environments during the propagation phase. To correct this problem, boot back to the offending boot environment and remove the vfstab entry for /.
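As a sketch (the dataset name below is hypothetical; substitute whatever your ZFS root dataset is actually called), the offending line in /etc/vfstab looks something like this and can simply be deleted or commented out, since a ZFS root file system does not need a vfstab entry at all:

    #device                     device   mount  FS    fsck  mount    mount
    #to mount                   to fsck  point  type  pass  at boot  options
    rpool/ROOT/s10u8-baseline   -        /      zfs   -     no       -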

    lucreate(1M) and the new (Solaris 10 10/09 and later) autoregistration file

This one is actually mentioned in the Oracle Solaris 10 9/10 release notes. I know, I hate it when that happens too.

    Here's what the "error" looks like.

    # luupgrade -u -s /mnt -n s10u9-baseline
    
    System has findroot enabled GRUB
    No entry for BE  in GRUB menu
    Copying failsafe kernel from media.
    61364 blocks
    miniroot filesystem is 
    Mounting miniroot at 
    ERROR:
            The auto registration file <> does not exist or incomplete.
            The auto registration file is mandatory for this upgrade.
            Use -k  argument along with luupgrade command.
            autoreg_file is path to auto registration information file.
            See sysidcfg(4) for a list of valid keywords for use in
            this file.
    
            The format of the file is as follows.
    
                    oracle_user=xxxx
                    oracle_pw=xxxx
                    http_proxy_host=xxxx
                    http_proxy_port=xxxx
                    http_proxy_user=xxxx
                    http_proxy_pw=xxxx
    
            For more details refer "Oracle Solaris 10 9/10 Installation
            Guide: Planning for Installation and Upgrade".
    
    
    As with the previous problem, this is also easy to work around. Assuming that you don't want to use the auto-registration feature at upgrade time, create a file that contains just autoreg=disable and pass the filename on to luupgrade.

    Here is an example.

    # echo "autoreg=disable" > /var/tmp/no-autoreg
    # luupgrade -u -s /mnt -k /var/tmp/no-autoreg -n s10u9-baseline
     
    System has findroot enabled GRUB
    No entry for BE  in GRUB menu
    Copying failsafe kernel from media.
    61364 blocks
    miniroot filesystem is 
    Mounting miniroot at 
    #######################################################################
     NOTE: To improve products and services, Oracle Solaris communicates
     configuration data to Oracle after rebooting.
    
     You can register your version of Oracle Solaris to capture this data
     for your use, or the data is sent anonymously.
    
     For information about what configuration data is communicated and how
     to control this facility, see the Release Notes or
     www.oracle.com/goto/solarisautoreg.
    
     INFORMATION: After activated and booted into new BE ,
     Auto Registration happens automatically with the following Information
    
    autoreg=disable
    #######################################################################
    Validating the contents of the media .
    The media is a standard Solaris media.
    The media contains an operating system upgrade image.
    The media contains  version <10>.
    Constructing upgrade profile to use.
    Locating the operating system upgrade program.
    Checking for existence of previously scheduled Live Upgrade requests.
    Creating upgrade profile for BE .
    Checking for GRUB menu on ABE .
    Saving GRUB menu on ABE .
    Checking for x86 boot partition on ABE.
    Determining packages to install or upgrade for BE .
    Performing the operating system upgrade of the BE .
    CAUTION: Interrupting this process may leave the boot environment unstable
    or unbootable.
    
The Live Upgrade operation now proceeds as expected. Once the system upgrade is complete, we can manually register the system. If you want to do a hands-off registration during the upgrade, see the Oracle Solaris Auto Registration section of the Oracle Solaris Release Notes for instructions on how to do that.

    /var/tmp and the ever growing boot environment

    Let's start with a clean installation of Solaris 10 10/09 (u8).
    # df -k /
    Filesystem                       kbytes    used   avail capacity  Mounted on
    rpool/ROOT/s10x_u8wos_08a      20514816 4277560 13089687    25%    /
    
    
    So far, so good. Solaris is just a bit over 4GB. Another 3GB is used by the swap and dump devices. That should leave plenty of room for half a dozen or so patch cycles (assuming 1GB each) and an upgrade to the next release.
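If you want to see exactly where that initial space goes, the ZFS accounting is easy to check. The pool name here matches this example, so substitute your own root pool name if it differs (output omitted):

    # zfs list -r rpool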

    Now, let's put on the latest recommended patch cluster. Note that I am following the suggestions in my Live Upgrade Survival Guide, installing the prerequisite patches and the LU patch before actually installing the patch cluster.

    # cd /var/tmp
    # wget patchserver:/export/patches/10_x86_Recommended-2012-01-05.zip .
    # unzip -qq 10_x86_Recommended-2012-01-05.zip
    
    # wget patchserver:/export/patches/121431-69.zip
    # unzip 121431-69
    
    # cd 10x_Recommended
    # ./installcluster --apply-prereq --passcode (you can find this in README)
    
    # patchadd -M /var/tmp 121431-69
    
    # lucreate -n s10u8-2012-01-05
    # ./installcluster -d -B s10u8-2012-01-05 --passcode
    
    # luactivate s10u8-2012-01-05
    # init 0
    
    
    After the new boot environment is activated, let's upgrade to the latest release of Solaris 10. In this case, it will be Solaris 10 8/11 (u10).

    Yes, this does seem like an awful lot is happening in a short period of time. I'm trying to demonstrate a situation that really does happen when you forget something as simple as a patch cluster clogging up /var/tmp. Think of this as one of those time lapse video sequences you might see in a nature documentary.

    # pkgrm SUNWluu SUNWlur SUNWlucfg
    # pkgadd -d /cdrom/sol_10_811_x86  SUNWluu SUNWlur SUNWlucfg
    # patchadd -M /var/tmp 121431-69
    
# lucreate -n s10u10-baseline
    # echo "autoreg=disable" > /var/tmp/no-autoreg
    # luupgrade -u -s /cdrom/sol_10_811_x86 -k /var/tmp/no-autoreg -n s10u10-baseline
    # luactivate s10u10-baseline
    # init 0
    
As before, everything went exactly as expected. Or so I thought, until I logged in for the first time and checked the free space in the root pool.
    # df -k /
    Filesystem                       kbytes    used   avail capacity  Mounted on
    rpool/ROOT/s10u10-baseline     20514816 10795038 2432308    82%    /
    
Where did all of the space go? A quick back-of-the-napkin calculation: 4.5GB (s10u8) + 4.5GB (s10u10) + 1GB (patch set) + 3GB (swap and dump) = 13GB. 20GB pool - 13GB used = 7GB free. But there's only 2.4GB free?

This is about the time that I smack myself on the forehead and realize that I put the patch cluster in /var/tmp. Old habits die hard. This is not a problem; I can just delete it, right?

    Not so fast.

    # du -sh /var/tmp
     5.4G   /var/tmp
    
    # du -sh /var/tmp/10*
     3.8G   /var/tmp/10_x86_Recommended
     1.5G   /var/tmp/10_x86_Recommended-2012-01-05.zip
    
    # rm -rf /var/tmp/10*
    
    # du -sh /var/tmp
     3.4M   /var/tmp
    
    
    Imagine the look on my face when I check the pool free space, expecting to see 7GB free.
    # df -k /
    Filesystem                      kbytes    used   avail capacity  Mounted on
    rpool/ROOT/s10u10-baseline    20514816 5074262 2424603    68%    /
    
    
    We are getting closer, I suppose. At least my root filesystem size is reasonable (5GB vs 11GB). But the free space hasn't changed at all.

Once again, I smack myself on the forehead. The patch cluster is also in the other two boot environments. All I have to do is get rid of them too, and I'll get my free space back. Right?

    # lumount s10u8-2012-01-05 /mnt
    # rm -rf /mnt/var/tmp/10_x86_Recommended*
    # luumount s10u8-2012-01-05
    
    # lumount s10x_u8wos_08a /mnt
    # rm -rf /mnt/var/tmp/10_x86_Recommended*
    # luumount s10x_u8wos_08a
    
    Surely, the free space will now be 7GB.
    # df -k /
    Filesystem                    kbytes    used   avail capacity  Mounted on
    rpool/ROOT/s10u10-baseline  20514816 5074265 2429261    68%    /
    
    
This is when I smack myself on the forehead for the third time in one afternoon. Just getting rid of the patches in the boot environments is not sufficient. It would be if I were using UFS as a root filesystem, but on a ZFS root, lucreate uses ZFS snapshots and clones. So the patch cluster is still held in the snapshots, and in the oldest one at that.
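To see where the space is hiding, list the snapshots under the root pool. The snapshot names are whatever lucreate generated on your system, so treat the dataset names here as illustrative (output omitted):

    # zfs list -t snapshot -r rpool/ROOT
    # zfs get -r usedbysnapshots rpool/ROOT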

    Let's try this all over again, but this time I will put the patches somewhere else that is not part of a boot environment. If you are thinking of using root's home directory, think again - it is part of the boot environment. If you are running out of ideas, let me suggest that /export/patches might be a good place to put them.
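Here is a minimal sketch, assuming the root pool is named rpool and /export is already its own shared dataset, as it is on this system; the patches dataset is just a suggested name:

    # zfs create rpool/export/patches
    # cd /export/patches
    # unzip -qq /path/to/10_x86_Recommended-2012-01-05.zip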

    Doing the exercise again, with the patches in /export/patches, I get similar results (to be expected), but this time the patches are in a shared ZFS dataset (/export).

    # lustatus
    Boot Environment           Is       Active Active    Can    Copy      
    Name                       Complete Now    On Reboot Delete Status    
    -------------------------- -------- ------ --------- ------ ----------
    s10x_u8wos_08a             yes      no     no        yes    -         
    s10u8-2012-01-05           yes      no     no        yes    -         
    s10u10-baseline            yes      yes    yes       no     -         
    
    # df -k /
    Filesystem                      kbytes    used   avail capacity  Mounted on
    rpool/ROOT/s10u10-baseline    20514816 5184578 2445140    68%    /
    
    
    # df -k /export
    Filesystem                      kbytes    used   avail capacity  Mounted on
    rpool/export                  20514816 5606384 2445142    70%    /export
    
    
    This means that I can delete them, and reclaim the space.
    # rm -rf /export/patches/10_x86_Recommended*
    
    # df -k /
    Filesystem                      kbytes    used   avail capacity  Mounted on
    rpool/ROOT/s10u10-baseline    20514816 5184578 8048050    40%    /
    
    
    Now, that's more like it. With this free space, I can continue to patch and maintain my system as I had originally planned - estimating a few hundred MB to 1.5GB per patch set.
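If you are certain you will never need to boot back into the older boot environments, you can reclaim even more space by deleting them; on a ZFS root, ludelete(1M) should also clean up the clone and snapshot datasets behind the BE (make sure the current Live Upgrade patches are applied first). Using the BE names from the lustatus output above:

    # ludelete s10x_u8wos_08a
    # ludelete s10u8-2012-01-05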


    Monday Nov 30, 2009

    VirtualBox 3.1 has been released

    VirtualBox 3.1 has been released and is now available for download. You can get binaries for Windows, OS X (Intel Mac), Linux and Solaris hosts at http://www.virtualbox.org/wiki/Downloads

Version 3.1 is a major update and contains the following new features:
    • Live Migration of a VM session from one host to another (Teleportation; see the command sketch after this list)
    • VM states can now be restored from arbitrary snapshots and new snapshots can be taken from any snapshots (Branched Snapshots)
    • 2D video acceleration for Windows guests using the host video hardware for overlay stretching and color conversion
    • CD/DVD drives can be attached to arbitrary storage controllers
    • More than one CD/DVD drive per guest VM
    • The network attachment type can be changed while a VM is running
    • New experimental USB support for OpenSolaris hosts (OpenSolaris/Nevada build 124 and later)
    • Performance improvements for PAE and AMD64 guests when using non-nested paging (VT-x and AMD-V only)
    • Experimental support for EFI (Extensible Firmware Interface)
    • Support for paravirtualized network adapters (virtio-net)
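
If you want to try teleportation, the basic flow is to configure the target VM to wait for an incoming migration, then tell the running source VM to teleport to it. Both ends need compatible VM settings and access to the same virtual disks; the VM name, host name and port below are only placeholders.

On the target host, enable the teleporter and start the VM (it will wait for the incoming teleport):

    VBoxManage modifyvm "solaris10-lab" --teleporter on --teleporterport 6000
    VBoxManage startvm "solaris10-lab"

Then, on the source host, send the running VM over:

    VBoxManage controlvm "solaris10-lab" teleport --host targethost --port 6000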

    There is also a long list of bugs fixed in this new release. Please see the Changelog for a complete list.



    Wednesday Nov 25, 2009

    VirtualBox 3.1 Beta 3 now available for download

    The third beta release of VirtualBox 3.1 is now available for testing. You can download the binaries for all platforms at http://download.virtualbox.org/virtualbox/3.1.0_BETA3/

    In addition to the new features introduced with Beta 1 (live migration, branched snapshots, new OpenSolaris USB support, 2D video acceleration, network changes while VM is running), the following are noteworthy changes since Beta 2.
    • VMM: fixes slow OpenSolaris booting (3.1.0 Beta 1 regression)
    • GUI: prevent hints being sent for guest-initiated resizes
    • GUI: fix for saved mouse shape data which was incorrectly updated
    • Xml compatibility fixes with 3.0.x (CpuIdTree element)
    • Storage/VMDK: fix buffers for really huge VMDK files split into 2G pieces
    • EFI: added 64-bit firmware
    • 2D support: Saved state save/restore fixes
    • 2D support: fixed VM reset issues
    • Snapshots: suppresses the automatic resetting of immutable disk images on VM power-up if the machine has snapshots AND the current snapshot is an online snapshot, to avoid data corruption (the machine would otherwise be reset to a state which is equivalent of a poweroff of a running machine without a proper shutdown)
    • Guest Additions: fixed a rare guest crash with 2 or more guest CPUs
    • Windows Additions: improved file version lookup for guest OS information
    • Windows Additions: disabled some debug output
    • Linux Additions: fixed guests with Linux 2.4 kernels
    • Linux Additions: removed debugging symbols to save space (3.1.0 Beta 2 regression)
    • X11 Additions: fixed a bug which re-enabled the graphics (and dynamic resizing) capability when disabling seamless
    • X11 Additions: added more default resolutions for old guests
    • X11 Additions: workaround a bug in X.Org 1.3 which caused dynamic resizing to fail


    Please refer to the VirtualBox 3.1 Beta 3 Changelog for a complete list of changes and enhancements.

    Important Note: Please do not use this VirtualBox Beta release on production machines. A VirtualBox Beta release should be considered a bleeding-edge release meant for early evaluation and testing purposes.

    Report problems or issues at the VirtualBox Beta Forum.

    About

    Bob Netherton is a Principal Sales Consultant for the North American Commercial Hardware group, specializing in Solaris, Virtualization and Engineered Systems. Bob is also a contributing author of Solaris 10 Virtualization Essentials.

This blog will contain information about all three, but will primarily focus on topics for Solaris system administrators.

Please follow me on Twitter or Facebook, or send me email.
