Tuesday Mar 10, 2015

Maintaining Configuration Files in Solaris 11.2


Have you used Solaris 11 and wondered how to maintain customized system configuration files? In the past, and on other Unix/Linux systems, maintaining these configuration files was fraught with peril: extra bolt-on tools were needed to track changes, verify that inappropriate changes had not been made, and fix the files when something broke them.

A combination of features added to Solaris 10 and 11 addresses those problems. This blog entry describes the current state of related features, and demonstrates the method that was designed and implemented to automatically deploy and track changes to configuration files, verify consistency, and fix configuration files that "broke." Further, these new features are tightly integrated with the Solaris Service Management Facility introduced in Solaris 10 and the packaging system introduced in Solaris 11.


Solaris 10 added the Service Management Facility, which significantly improved on the old, unreliable pile of scripts in /etc/rc#.d directories. This also allowed us to move from the old model of system configuration information stored in ASCII files to a database of configuration information. The latter change reduces the risk associated with manual or automated modifications of text files. Each modification is the result of a command that verifies the correctness of the change before applying it. That verification process greatly reduces the opportunities for a mistake that can be very difficult to troubleshoot.
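
For example, a value that once lived in an /etc file is now changed with a command that SMF validates before applying. A small sketch of such a change - the node-name property of the identity:node service, per my understanding of that service, is shown here only as an illustration:

# svccfg -s svc:/system/identity:node setprop config/nodename = astring: newname
# svcadm refresh svc:/system/identity:node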

During updates to Solaris 10 and 11 we continued to move configuration files into SMF service properties. However, some configuration files remain, and we wanted to provide better integration between the Solaris 11 packaging facility (IPS) and those remaining configuration files. This blog entry demonstrates some of that integration, using features added up through Solaris 11.1.

Many Solaris systems need customized email delivery rules. In the past, providing those rules required replacing /etc/mail/sendmail.cf with a custom file. However, this created the need to maintain that file - restoring it after a system update, verifying its integrity periodically, and potentially fixing it if someone or something broke it.


IPS provides the tools to accomplish those goals, specifically:

  1. maintain one or more versions of a configuration file in an IPS repository
  2. use IPS and AI (Automated Installer) to install, update, verify, and potentially fix that configuration file
  3. automatically perform the steps necessary to re-configure the system with a configuration file that has just been installed or updated.

The rest of this entry assumes that you understand Solaris 11 and IPS.

In this example, we want to deliver a custom sendmail.cf file to multiple systems. We will do that by creating a new IPS package that contains just one configuration file. We need to create the "precursor" to a sendmail.cf file (sendmail.mc) that will be expanded by sendmail when it starts. We also need to create a custom manifest for the package. Finally, we must create an SMF service profile, which will cause Solaris to understand that a new sendmail configuration is available and should be integrated into its database of configuration information.

Here are the steps in more detail.

  1. Create a directory ("mypkgdir") that will hold the package manifest and a directory ("contents") for package contents.
    $ mkdir -p mypkgdir/contents
    $ cd mypkgdir
    Then create the configuration file that you want to deploy with this package. For this example, we simply copy an existing configuration file.
    $ cp /etc/mail/cf/cf/sendmail.mc contents/custom_sm.mc
  2. Create a manifest file in mypkgdir/sendmail-config.p5m: (the entity that owns the computers is the fictional corporation Consolidated Widgets, Inc.)
    set name=pkg.fmri value=pkg://cwi/site/sendmail-config@8.14.9,1.0
    set name=com.cwi.info.name value=Solaris11sendmail
    set name=pkg.description value="ConWid sendmail.mc file for Solaris 11, accepts only local connections."
    set name=com.cwi.info.description value="Sendmail configuration"
    set name=pkg.summary value="Sendmail configuration"
    set name=variant.opensolaris.zone value=global value=nonglobal
    set name=com.cwi.info.version value=8.14.9
    set name=info.classification value=org.opensolaris.category.2008:System/Core
    set name=org.opensolaris.smf.fmri value=svc:/network/smtp:sendmail
    depend fmri=pkg://solaris/service/network/smtp/sendmail type=require
    file custom_sm.mc group=mail mode=0444 owner=root \
       path=etc/mail/cf/cf/custom_sm.mc
    file custom_sm_mc.xml group=mail mode=0444 owner=root \
       path=lib/svc/manifest/site/custom_sm_mc.xml        \
       restart_fmri=svc:/system/manifest-import:default   \
       refresh_fmri=svc:/network/smtp:sendmail            \
       restart_fmri=svc:/network/smtp:sendmail
    The "depend" line tells IPS that the package smtp/sendmail must already be installed on this system. If it isn't, Solaris will install that package before proceeding to install this package.
    The line beginning "file custom_sm.mc" gives IPS detailed metadata about the configuration file, and indicates the full pathname - within an image - at which the macro should be stored. The last line specifies the local file name of of the service profile (more on that later), and the location to store it during package installation. It also lists three actuators: SMF services to refresh (re-configure) or restart at the end of package installation. The first of those imports new manifests and service profiles. Importing the service profile changes the property path_to_sendmail_mc. The other two re-configure and restart sendmail. Those two actions expand and then use the new configuration file - the goal of this entire exercise!

  3. Create a service profile:
    $ svcbundle -o contents/custom_sm_mc.xml -s bundle-type=profile \
        -s service-name=network/smtp -s instance-name=sendmail -s enabled=true \
        -s instance-property=config:path_to_sendmail_mc:astring:/etc/mail/cf/cf/custom_sm.mc
    That command creates the file custom_sm_mc.xml, which describes the profile. The sole purpose of that profile is to set the sendmail service property "config/path_to_sendmail_mc" to the name of the new sendmail macro file.
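    Once the package has been installed on a client (shown later in this entry), you can confirm that the profile took effect with svcprop. This check is my addition, not part of the original procedure:
    # svcprop -p config/path_to_sendmail_mc svc:/network/smtp:sendmail
    /etc/mail/cf/cf/custom_sm.mc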

  4. Verify correctness of the manifest. In this example, the Solaris repository is mounted at /mnt/repo1. For most systems, "-r" will be followed by the repository's URI, e.g. http://pkg.oracle.com/solaris/release/ or a data center's repository.
    $ pkglint -c /tmp/pkgcache -r /mnt/repo1 sendmail-config.p5m
    Lint engine setup...
    Starting lint run...
    As usual, the lack of output indicates success.

  5. Create the package and make it available in a repo to a test IPS client.
    Note: The documentation explains these steps in more detail.
    Note: this example stores a repo in /var/tmp/cwirepo. This will work, but I am not suggesting that you place repositories in /var/tmp. You should place a repo in a directory that is publicly available.
    $ pkgrepo create /var/tmp/cwirepo
    $ pkgrepo -s /var/tmp/cwirepo set publisher/prefix=cwi
    $ pkgsend -s /var/tmp/cwirepo publish -d contents sendmail-config.p5m
    $ pkgrepo verify -s /var/tmp/cwirepo
    Initiating repository verification.
    $ pkgrepo info -s /var/tmp/cwirepo
    PUBLISHER PACKAGES STATUS           UPDATED
    cwi       1        online           2015-03-05T16:39:13.906678Z
    $ pkgrepo list -s /var/tmp/cwirepo
    PUBLISHER NAME                                          O VERSION
    cwi       site/sendmail-config                            8.14.9,1.0:20150305T163913Z
    $ pkg list -afv -g /var/tmp/cwirepo
    FMRI                                                                         IFO
    pkg://cwi/site/sendmail-config@8.14.9,1.0:20150305T163913Z                   ---

With all of that, you can use the usual IPS packaging commands. I tested this by adding the "cwi" publisher to a running native Solaris Zone and making the repo available as a loopback mount:

# zlogin testzone mkdir /var/tmp/cwirepo
# zonecfg -rz testzone
zonecfg:testzone> add fs
zonecfg:testzone:fs> set dir=/var/tmp/cwirepo
zonecfg:testzone:fs> set special=/var/tmp/cwirepo
zonecfg:testzone:fs> set type=lofs
zonecfg:testzone:fs> end
zonecfg:testzone> commit
zone 'testzone': Checking: Mounting fs dir=/var/tmp/cwirepo
zone 'testzone': Applying the changes
zonecfg:testzone> exit
# zlogin testzone
root@testzone:~# pkg set-publisher -g /var/tmp/cwirepo cwi
root@testzone:~# pkg info -r sendmail-config
          Name: site/sendmail-config
       Summary: Sendmail configuration
   Description: ConWid sendmail.mc file for Solaris 11, accepts only local
      Category: System/Core
         State: Not installed
     Publisher: cwi
       Version: 8.14.9
 Build Release: 1.0
        Branch: None
Packaging Date: March  5, 2015 08:14:22 PM
          Size: 1.59 kB
          FMRI: pkg://cwi/site/sendmail-config@8.14.9,1.0:20150305T201422Z

root@testzone:~#  pkg install site/sendmail-config
           Packages to install:  1
            Services to change:  2
       Create boot environment: No
Create backup boot environment: No
DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
Completed                                1/1           2/2      0.0/0.0    0B/s

PHASE                                          ITEMS
Installing new actions                         12/12
Updating package state database                 Done
Updating package cache                           0/0
Updating image state                            Done
Creating fast lookup database                   Done
Updating package cache                           2/2

root@testzone:~# pkg verify  site/sendmail-config
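
As with pkglint, no output from "pkg verify" means that the installed contents match the package. If someone later modifies or deletes the delivered file, the package can repair it; this repair step is my addition to the example:

root@testzone:~# pkg fix site/sendmail-config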

Installation of that package causes several effects. Obviously, the custom sendmail configuration file custom_sm.mc is placed into the directory /etc/mail/cf/cf. The sendmail daemon is restarted, automatically expanding that file into a sendmail.cf file and using it. I have noticed that, on occasion, it is necessary to refresh and restart the sendmail service manually.
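
When that happens, the standard SMF commands bring the new configuration into effect; nothing here is specific to this package:

# svcadm refresh svc:/network/smtp:sendmail
# svcadm restart svc:/network/smtp:sendmail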


The result of all of that is an easily maintained configuration file. These concepts can be used with other configuration files, and can be extended to more complex sets of configuration files.
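
For example, a package that delivers a configuration file for some other service would differ only in the file action and its actuators. The fragment below is purely illustrative - the file name, path, and service FMRI are invented for this sketch:

file myapp.conf group=sys mode=0644 owner=root \
   path=etc/myapp/myapp.conf \
   refresh_fmri=svc:/site/myapp:default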

For more information, see these documents:


I appreciate the assistance of Dave Miner, John Beck, and Scott Dickson, who helped me understand the details of these features. However, I am responsible for any errors.

Monday Dec 08, 2014

Provisioning Solaris 11

Oracle Solaris 11 introduced the Image Packaging System (IPS), a new software packaging system in which Solaris software components are delivered. It replaces the System V Release 4 ("SVR4") packaging system used by Solaris 2.0 through Solaris 10. If you are learning Solaris 11, learning about IPS is a must! The links below will take you to the documents, videos, blog entries, and other artifacts that I think are the most important ones for getting started.
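
If you are coming from SVR4 packaging, this rough mapping of everyday commands may help; it is my summary, not an official equivalence table:

$ pkg search wireshark                  # find a package (no direct SVR4 equivalent)
$ pkg install diagnostic/wireshark      # roughly equivalent to pkgadd
$ pkg info diagnostic/wireshark         # roughly equivalent to pkginfo -l
$ pkg update                            # roughly equivalent to patching/upgrading with patchadd
$ pkg uninstall diagnostic/wireshark    # roughly equivalent to pkgrm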

Blog Entries, Screencasts


How-to Guides, Other Papers

Official Documentation

Tuesday Aug 26, 2014

Migratory Solaris Kernel Zones

The Introduction

Oracle Solaris 11.2 introduced Oracle Solaris Kernel Zones. Kernel Zones (KZs) offer a midpoint between traditional operating system virtualization and virtual machines. They exhibit the low overhead and low management effort of Solaris Zones, and add the best parts of the independence of virtual machines.

A Kernel Zone is a type of Solaris Zone that runs its own Solaris kernel. This gives each Kernel Zone complete independence of software packages, as well as other benefits.

One of the more interesting new abilities that Kernel Zones bring to Solaris Zones is the ability to "pause" a running KZ and "resume" it on a different computer - or the same computer, if you prefer.

Of what value is the ability to "pause" a zone? One potential use is moving a workload from a smaller computer (with too few CPUs, or insufficient RAM) to a larger one. Some workloads do not maintain much state, and can restart quickly, and so they wouldn't benefit from suspend/resume. Others, such as static (read-only) databases, may take 30 minutes to start and obtain good performance. The ability to suspend, but not stop, the workload and its operating system can be very valuable.

Another possible use of this ability is the staging of multiple KZs which have already booted and, perhaps, have started to run a workload. Instead of booting in a few minutes, the workload can continue from a known state in just a few seconds. Further, the suspended zone can be "unpaused" on the computer of your choice. Suspended kernel zones are like a nest of dozing ants, waiting to take action at a moment's notice.

This blog entry shows the steps to create and move a KZ, highlighting both the Solaris iSCSI implementation as well as kernel zones and their suspend/resume feature. Briefly, the steps are:

  1. Create shared storage
  2. Make shared storage available to both computers - the one that will run the zone, at first, as well as the computer on which the zone will be resumed.
  3. Configure the zone on each system.
  4. Install the zone on one system.
  5. "Warm migrate" the zone by pausing it, and then, on the other computer, resuming it.

Links to relevant documentation and blogs are provided at the bottom.

The Method

The Kernel Zones suspend/resume feature requires the use of storage accessible by multiple computers. However, neither Kernel Zones nor suspend/resume requires a specific type of shared storage. In Solaris 11.2, the only types of shared storage that support zones are iSCSI and Fiber Channel. This blog entry uses iSCSI.

The example below uses three computers. One is the iSCSI target, i.e. the storage server. The other two run the KZ, one at a time. All three systems run Solaris 11.2, although the iSCSI features shown below also work on earlier updates of Solaris 11, on a ZFS Storage Appliance (the current family shares the brand name ZS3), or on another type of iSCSI target.

In the commands shown below, the prompt "storage1#" indicates commands that would be entered into the iSCSI target. Similarly, "node1#" indicates commands that you would enter into the first computer that will run the kernel zone. The few commands preceded by the prompt "bothnodes#" must be run on both node1 and node2. The name of the kernel zone is "ant1".

For simplicity, the example below ignores security concerns. (More about security below.)

Finally, note that these commands should be run by a non-root user who prefaces each command with the pseudo-command "sudo". ;-)

Step 1. Provide shared storage for the kernel zone. The zone only needs one device for its zpool. Redundancy is provided by the zpool in the iSCSI target. (For a more detailed explanation, see the link to the COMSTAR documentation in the section "The Links" below.)

storage1# pkg install group/feature/storage-server           # Install necessary software.
storage1# svcadm enable stmf:default                         # Enable that software.
storage1# zfs create rpool/zvols                             # A dataset for the zvol.
storage1# zfs create -V 20g rpool/zvols/ant1                 # Create a zvol as backing store.
storage1# stmfadm create-lu /dev/zvol/rdsk/rpool/zvols/ant1  # Create a back-end LUN.
Logical unit created: 600144F068D1CD00000053ECD3D20001
storage1# stmfadm list-lu
LU Name: 600144F068D1CD00000053ECD3D20001
storage1# stmfadm add-view 600144F068D1CD00000053ECD3D20001
storage1# stmfadm list-view -l 600144F068D1CD00000053ECD3D20001
View Entry: 0
Host group : All
Target Group : All
LUN : Auto

storage1# svcadm enable -r svc:/network/iscsi/target:default # Enable the target service.
storage1# svcs -l iscsi/target
fmri svc:/network/iscsi/target:default
name iscsi target
enabled true
state online
next_state none
state_time August 10, 2014 03:58:50 PM EST
logfile /var/svc/log/network-iscsi-target:default.log
restarter svc:/system/svc/restarter:default
manifest /lib/svc/manifest/network/iscsi/iscsi-target.xml
dependency require_any/error svc:/milestone/network (online)
dependency require_all/none svc:/system/stmf:default (online)
storage1# itadm create-target
Target iqn.1986-03.com.sun:02:238d10b8-cca8-ef7a-e095-e1132d91c4a5
successfully created
storage1# itadm list-target -v
TARGET NAME                                                  STATE    SESSIONS
iqn.1986-03.com.sun:02:238d10b8-cca8-ef7a-e095-e1132d91c4a5  online   0
        alias:                  -
        auth:                   none (defaults)
        targetchapuser:         -
        targetchapsecret:       unset
        tpg-tags:               default

Step 2A. Configure initiators. Configuring iSCSI on the two iSCSI initiators uses exactly the same commands on each, so I'll just list them once.

bothnodes# svcadm enable network/iscsi/initiator
bothnodes# iscsiadm modify discovery --sendtargets enable # The simplest discovery method.
bothnodes# iscsiadm add discovery-address    # IP address of the storage server.
At this point, the initiator will automatically discover all of the iSCSI LUNs offered by that target. One way to view the list of them is with the format(1M) command.
bothnodes# format
Searching for disks...done

       1. c0t600144F068D1CD00000053ECD3D20001d0 

Step 2B. On each of the two computers that will host the zone, identify the Storage Uniform Resource Identifiers ("SURI") - see suri(5) for more information. This command tells you the SURI of that LUN, in each of multiple formats. We'll need this SURI to specify the storage for the kernel zone.

bothnodes# suriadm lookup-uri c0t600144F068D1CD00000053ECD3D20001d0
Step 2C. When you suspend a kernel zone, its RAM pages must be stored temporarily in a file. In order to resume the zone on a different computer, the "suspend file" must be on storage that both computers can access. For this example, we'll use an NFS share. (Another iSCSI LUN could be used instead.) The method shown below is not particularly secure, although the suspended image is first encrypted. Secure methods would require the use of other Solaris features, but they are not the topic of this blog entry.
storage1# zfs create -p rpool/export/suspend
storage1# zfs set share.nfs=on rpool/export/suspend
That share must be made available on both nodes, with appropriate permissions.
node1# mkdir /mnt/suspend
node1# mount -F nfs storage1:/export/suspend /mnt/suspend
node2# mkdir /mnt/suspend
node2# mount -F nfs storage1:/export/suspend /mnt/suspend
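Note that these mount commands do not persist across a reboot. If you want the share mounted automatically, one option - my addition, not part of the original steps - is an /etc/vfstab entry on each node:

storage1:/export/suspend  -  /mnt/suspend  nfs  -  yes  -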
Step 3. Configure a kernel zone, using the iSCSI LUN and a system profile. You can configure a kernel zone very easily. The only required settings are the name and the use of the kernel zone template. The name of the latter is SYSsolaris-kz. That template specifies a VNIC, 2GB of dedicated RAM, 1 virtual CPU, and local storage that will be configured automatically when the zone is installed. We need shared storage instead of local storage, so one of the first steps will be deleting the local storage resource. That device will have an ID number of zero. After deleting that resource, we add the LUN, using the SURI determined earlier.
node1# zonecfg -z ant1
  Use 'create' to begin configuring a new zone.
  zonecfg:ant1> create -t SYSsolaris-kz
  zonecfg:ant1> remove device id=0
  zonecfg:ant1> add device
  zonecfg:ant1:device> set storage=iscsi://house/luname.naa.600144f068d1cd00000053ecd3d20001
  zonecfg:ant1:device> set bootpri=0
  zonecfg:ant1:device> info
        match not specified
        storage: iscsi://house/luname.naa.600144f068d1cd00000053ecd3d20001
        id: 1
        bootpri: 0
  zonecfg:ant1:device> end
  zonecfg:ant1> add suspend
  zonecfg:ant1:suspend> set path=/mnt/suspend/ant1.sus
  zonecfg:ant1:suspend> end
  zonecfg:ant1> exit
We can create a reusable configuration profile.
node1# sysconfig create-profile  -o ant1
[The usual sysconfig conversation ensues...]

Step 4. Install and boot the kernel zone.

node1# zoneadm -z ant1 install -c ant1/sc_profile.xml
Progress being logged to /var/log/zones/zoneadm.20140815T155143Z.ant1.install
pkg cache: Using /var/pkg/publisher.
 Install Log: /system/volatile/install.15996/install_log
 AI Manifest: /tmp/zoneadm15390.W_a4NE/devel-ai-manifest.xml
  SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml
Installation: Starting ...

        Creating IPS image
        Installing packages from:
                origin:  http://pkg.oracle.com/solaris/release/
        The following licenses have been accepted and not displayed.
        Please review the licenses for the following packages post-install:
        Package licenses may be viewed using the command:
          pkg info --license 

DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
Completed                            483/483   64276/64276  543.7/543.7    0B/s

PHASE                                          ITEMS
Installing new actions                   87530/87530
Updating package state database                 Done
Updating package cache                           0/0
Updating image state                            Done
Creating fast lookup database                   Done
Installation: Succeeded
        Done: Installation completed in 207.564 seconds.

node1# zoneadm -z ant1 boot

Step 5. With all of the hard work behind us, we can "warm migrate" the zone. The first step is preparation of the destination system - "node2" in our example - by applying the zone's configuration to the destination.

node1# zonecfg -z ant1 export -f /mnt/suspend/ant1.cfg
node2# zonecfg -z ant1 -f /mnt/suspend/ant1.cfg
The "detach" operation does not delete anything. It merely tells node1 to cease considering the zone to be usable.
node1# zoneadm -z ant1 suspend
node1# zoneadm -z ant1 detach

A separate "resume" sub-command for zoneadm was not necessary. The "boot" sub-command fulfills that purpose.

node2# zoneadm -z ant1 attach
node2# zoneadm -z ant1 boot
Of course, "warm migration" is different from "live migration" in one important respect: the duration of the service outage. Live migration achieves a service outage that lasts a small fraction of a second. In one experiment, warm migration of a kernel zone created a service outage that lasted 30 seconds. It's not live migration, but is an important step forward, compared to other types of Solaris Zones.

The Notes

  1. This example used a zpool as back-end storage. That zpool provided data redundancy, so additional redundancy was not needed within the kernel zone. If unmirrored devices (e.g. physical disks) were specified in zonecfg, then data redundancy should be achieved within the zone. Fortunately, you can specify two devices in zonecfg, and "zoneadm ... install" will automatically mirror them (a sketch follows these notes).
  2. In a simple network configuration, the steps above create a kernel zone that has normal network access. More complicated networks may require additional steps, such as VLAN configuration, etc.
  3. Some steps regarding file permissions on the NFS mount were omitted for clarity. This is one of the security weaknesses of the steps shown above. All of the weaknesses can be addressed by using additional Solaris features. These include, but are not limited to, iSCSI features (iSNS, CHAP authentication, RADIUS, etc.), NFS security features (e.g. NFS ACLs, Kerberos, etc.), RBAC, etc.
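
Here is a minimal zonecfg sketch of that two-device configuration, following the pattern used for ant1 above. The zone name and the two LUN names are placeholders, not LUNs created earlier in this example:

node1# zonecfg -z ant2
  zonecfg:ant2> create -t SYSsolaris-kz
  zonecfg:ant2> remove device id=0
  zonecfg:ant2> add device
  zonecfg:ant2:device> set storage=iscsi://storage1/luname.naa.600144f0aaaa0001
  zonecfg:ant2:device> set bootpri=0
  zonecfg:ant2:device> end
  zonecfg:ant2> add device
  zonecfg:ant2:device> set storage=iscsi://storage1/luname.naa.600144f0bbbb0002
  zonecfg:ant2:device> set bootpri=0
  zonecfg:ant2:device> end
  zonecfg:ant2> exit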

The Links

Thursday Apr 24, 2014

Oracle Solaris 11.2 Launch

On April 29th, Oracle will launch Oracle Solaris 11.2. This version will add significant new features that reinforce Solaris' position as the leading cloud OS.

These new features:

  • Further increase the flexibility of Solaris system virtualization
  • Simplify the creation of private and public clouds
  • Add unique software-defined networking (SDN) capabilities
  • Reduce management effort via OpenStack integration

To attend the launch event live in New York City, or view the live webcast, visit http://oracle.com/goto/solaris-11-2.

Tuesday Jul 16, 2013


Maximize the Value, Minimize the Effort

Are you among the people migrating workloads from IBM AIX to Oracle Solaris 11? Even if you have not yet begun this migration, you will benefit from the knowledge contained in these resources:
  1. an online comparison of features
  2. the IBM AIX to Oracle Solaris Technology Mapping Guide.

Wednesday Jun 12, 2013

Comparing Solaris 11 Zones to Solaris 10 Zones

Many people have asked whether Oracle Solaris 11 uses sparse-root zones or whole-root zones. I think the best answer is "both and neither, and more" - but that's a wee bit confusing. :-) This blog entry attempts to explain that answer.

First a recap: Solaris 10 introduced the Solaris Zones feature set, way back in 2005. Zones are a form of server virtualization called "OS (Operating System) Virtualization." They improve consolidation ratios by isolating processes from each other so that they cannot interact. Each zone has its own set of users, naming services, and other software components. One of the many advantages is that there is no need for a hypervisor, so there is no performance overhead. Many data centers run tens to hundreds of zones per server!

In Solaris 10, there are two models of package deployment for Solaris Zones. One model is called "sparse-root" and the other "whole-root." Each form has specific characteristics, abilities, and limitations.

A whole-root zone has its own copy of the Solaris packages. This allows the inclusion of other software in system directories - even though that practice has been discouraged for many years. Although it is also possible to modify the Solaris content in such a zone, e.g. patching a zone separately from the rest, this was highly frowned on. :-( (More importantly, modifying the Solaris content in a whole-root zone may lead to an unsupported configuration.)

The other model is called "sparse-root." In that form, instead of copying all of the Solaris packages into the zone, the directories containing Solaris binaries are re-mounted into the zone. This allows the zone's users to access them at their normal places in the directory tree. Those are read-only mounts, so a zone's root user cannot modify them. This improves security, and also reduces the amount of disk space used by the zone - 200MB instead of the usual 3-5 GB per zone. These loopback mounts also reduce the amount of RAM used by zones because Solaris only stores in RAM one copy of a program that is in use by several zones. This model also has disadvantages. One disadvantage is the inability to add software into system directories such as /usr. Also, although a sparse-root can be migrated to another Solaris 10 system, it cannot be moved to a Solaris 11 system as a "Solaris 10 Zone."

In addition to those contrasting characteristics, here are some characteristics of zones in Solaris 10 that are shared by both packaging models:

  • A zone can modify its own configuration files in /etc.
  • A zone can be configured so that it manages its own networking, or so that it cannot modify its network configuration.
  • It is difficult to give a non-root user in the global zone the ability to boot and stop a zone, without giving that user other abilities.
  • In a zone that can manage its own networking, the root user can do harmful things like spoof other IP addresses and MAC addresses.
  • It is difficult to assign network packet processing to the same CPUs that a zone uses. This can lead to unpredictable performance and performance troubleshooting challenges.
  • You cannot run a large number of zones in one system (e.g. 50) if each manages its own networking, because that would require assigning more physical NICs than are available (e.g. 50).
  • Except when managed by Ops Center, zones cannot be safely stored on NAS.
  • Solaris 10 Zones cannot be NFS servers.
  • The fsstat command does not report statistics per zone.

Solaris 11 Zones use the new packaging system of Solaris 11. Their configuration does not offer a choice of packaging models, as Solaris 10 does. Instead, two (well, four) different models of "immutability" (changeability) are offered. The default model allows a privileged zone user to modify the zone's content. The other (three) models limit the content that can be changed: none of it, or one of two overlapping sets of configuration files. (See "Configuring and Administering Immutable Zones".)
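
The model is selected with the zone's file-mac-profile property. A minimal sketch follows, using the property values as I recall them from the Immutable Zones documentation (none, flexible-configuration, fixed-configuration, strict); the zone name is a placeholder:

# zonecfg -z myzone
zonecfg:myzone> set file-mac-profile=fixed-configuration
zonecfg:myzone> commit
zonecfg:myzone> exit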

Solaris 11 addresses many of those limitations. With the characteristics listed above in mind, the following table shows the similarities and differences between zones in Solaris 10 and in Solaris 11.

Characteristic | Solaris 10 whole-root | Solaris 10 sparse-root | Solaris 11 | Solaris 11 Immutable Zones
Each zone has a copy of most Solaris packages | Yes | No | Yes | Yes
Disk space used by a zone (typical) | 3.5 GB | 100 MB | 500 MB | 500 MB
A privileged zone user can add software to /usr | Yes | No | Yes | No
A zone can modify its Solaris programs | True | False | True | False
Each zone can modify its configuration files | Yes | Yes | Yes | No
Delegated administration | No | No | Yes | Yes
A zone can be configured to manage its own networking | Yes | Yes | Yes | Yes
A zone can be configured so that it cannot manage its own networking | Yes | Yes | Yes | Yes
A zone can be configured with resource controls | Yes | Yes | Yes | Yes
Integrated tool to measure a zone's resource consumption (zonestat) | No | No | Yes | Yes
Network processing automatically happens on that zone's CPUs | No | No | Yes | Yes
Zones can be NFS servers | No | No | Yes | Yes
Per-zone fsstat data | No | No | Yes | Yes

As you can see, the statement "Solaris 11 Zones are whole-root zones" is only true using the narrowest definition of whole-root zones: those zones which have their own copy of Solaris packaging content. But there are other valuable characteristics of sparse-root zones that are still available in Solaris 11 Zones. Also, some Solaris 11 Zones do not have some characteristics of whole-root zones.

For example, the table above shows that you can configure a Solaris 11 zone that has read-only Solaris content. And Solaris 11 takes that concept further, offering the ability to tailor that immutability. It also shows that Solaris 10 sparse-root and whole-root zones are more similar to each other than to Solaris 11 Zones.


Solaris 11 Zones are slightly different from Solaris 10 Zones. The former can achieve the goals of the latter, and they also offer features not found in Solaris 10 Zones. Solaris 11 Zones offer the best of Solaris 10 whole-root zones and sparse-root zones, and offer an array of new features that make Zones even more flexible and powerful.

Wednesday Nov 14, 2012

Webcast: New Features of Solaris 11.1 and Solaris Cluster 4.1

If you missed last week's webcast of the new features in Oracle Solaris 11.1 you can view the recording. The speakers discuss changes that improve performance and scalability, particularly for Oracle DB, and many other enhancements.

New features include Optimized Shared Memory (improves DB startup time), accelerated kernel locks (improves Oracle RAC performance and scalability), virtual memory improvements, a DTrace data collector in the DB, Zones installed on Shared Storage (simplifies migration), Data Center Bridging, and Edge Virtual Bridging.

To view the archived webcast, you must register and use the URL that you receive in e-mail.

Tuesday Nov 13, 2012

Oracle Solaris: Zones on Shared Storage

Oracle Solaris 11.1 has several new features. At oracle.com you can find a detailed list.

One of the significant new features, and the most significant new feature related to Oracle Solaris Zones, is casually called "Zones on Shared Storage" or simply ZOSS (rhymes with "moss"). ZOSS offers much more flexibility because you can store Solaris Zones on shared storage (surprise!) so that you can perform quick and easy migration of a zone from one system to another. This blog entry describes and demonstrates the use of ZOSS.

ZOSS provides complete support for a Solaris Zone that is stored on "shared storage." In this case, "shared storage" refers to fiber channel (FC) or iSCSI devices, although there is one lone exception that I will demonstrate soon. The primary intent is to enable you to store a zone on FC or iSCSI storage so that it can be migrated from one host computer to another much more easily and safely than in the past.

With this blog entry, I wanted to make it easy for you to try this yourself. I couldn't assume that you have a SAN available - which is a good thing, because neither do I! :-) What could I use, instead? [There he goes, foreshadowing again... -Ed.]

Developing this entry reinforced the lesson that the solution to every lab problem is VirtualBox. ;-) Oracle VM VirtualBox (its formal name) helps here in a couple of important ways. It offers the ability to easily install multiple copies of Solaris as guests on top of any popular system (Microsoft Windows, MacOS, Solaris, Oracle Linux (and other Linuxes) etc.). It also offers the ability to create a separate virtual disk drive (VDI) that appears as a local hard disk to a guest. This virtual disk can be moved very easily from one guest to another. In other words, you can follow the steps below on a laptop or larger x86 system.

Please note that the ability to use ZOSS to store a zone on a local disk is very useful for a lab environment, but not so useful for production. I do not suggest regularly moving disk drives among computers. [Update, 2013.01.28: Apparently the previous sentence caused some confusion. I do recommend the use of Zones on Shared Storage in production environments, when appropriate storage is used. "Appropriate storage" would include SAN or iSCSI at this point. I do not recommend using ZOSS with local disks in production because doing so would require moving the disks between computers.]

In the method I describe below, that virtual hard disk will contain the zone that will be migrated among the (virtual) hosts. In production, you would use FC or iSCSI LUNs instead. The zonecfg(1M) man page details the syntax for each of the three types of devices.
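
As a rough sketch of those forms (see suri(5) and zonecfg(1M) for the authoritative syntax; the identifiers below are placeholders), a rootzpool resource would contain one of the following storage specifications:

   add storage dev:dsk/c7t2d0                            # local device, as used in this lab
   add storage iscsi://storage1/luname.naa.600144f0aaaa  # iSCSI LUN
   add storage lu:luname.naa.600144f0bbbb                # Fibre Channel logical unit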

Why Migrate?

Why is the migration of virtual servers important? Some of the most common reasons are:
  • Moving a workload to a different computer so that the original computer can be turned off for extensive maintenance.
  • Moving a workload to a larger system because the workload has outgrown its original system.
  • If the workload runs in an environment (such as a Solaris Zone) that is stored on shared storage, you can restore the service of the workload on an alternate computer if the original computer has failed and will not reboot.
  • You can simplify lifecycle management of a workload by developing it on a laptop, migrating it to a test platform when it's ready, and finally moving it to a production system.


For ZOSS, the important new concept is named "rootzpool". You can read about it in the zonecfg(1M) man page, but here's the short version: it's the backing store (hard disk(s), or LUN(s)) that will be used to make a ZFS zpool - the zpool that will hold the zone. This zpool:

  • contains the zone's Solaris content, i.e. the root file system
  • does not contain any content not related to the zone
  • can only be mounted by one Solaris instance at a time

Method Overview

Here is a brief list of the steps to create a zone on shared storage and migrate it. The next section shows the commands and output.
  1. You will need a host system with an x86 CPU (hopefully at least a couple of CPU cores), at least 2GB of RAM, and at least 25GB of free disk space. (The steps below will not actually use 25GB of disk space, but I don't want to lead you down a path that ends in a big sign that says "Your HDD is full. Good luck!")
  2. Configure the zone on both systems, specifying the rootzpool that both will use. The best way is to configure it on one system and then copy the output of "zonecfg export" to the other system to be used as input to zonecfg. This method reduces the chances of pilot error. (It is not necessary to configure the zone on both systems before creating it. You can configure this zone in multiple places, whenever you want, and migrate it to one of those places at any time - as long as those systems all have access to the shared storage.)
  3. Install the zone on one system, onto shared storage.
  4. Boot the zone.
  5. Provide system configuration information to the zone. (In the Real World(tm) you will usually automate this step.)
  6. Shutdown the zone.
  7. Detach the zone from the original system.
  8. Attach the zone to its new "home" system.
  9. Boot the zone.
The zone can be used normally, and even migrated back, or to a different system.


The rest of this shows the commands and output. The two hostnames are "sysA" and "sysB".

Note that each Solaris guest might use a different device name for the VDI that they share. I used the device names shown below, but you must discover the device name(s) after booting each guest. In a production environment you would also discover the device name first and then configure the zone with that name. Fortunately, you can use the command "zpool import" or "format" to discover the device on the "new" host for the zone.

The first steps create the VirtualBox guests and the shared disk drive. I describe the steps here without demonstrating them.

  1. Download VirtualBox and install it using a method normal for your host OS. You can read the complete instructions.
  2. Create two VirtualBox guests, each to run Solaris 11.1. Each will use its own VDI as its root disk.
  3. Install Solaris 11.1 in each guest. To install a Solaris 11.1 guest, you can either download a pre-built VirtualBox guest and import it, or install Solaris 11.1 from the "text install" media. If you use the latter method, after booting you will not see a windowing system. To install the GUI and other important things, login and run "pkg install solaris-desktop" and take a break while it installs those important things.
  4. Life is usually easier if you install the VirtualBox Guest Additions because then you can copy and paste between the host and guests, etc. You can find the guest additions in the folder matching the version of VirtualBox you are using. You can also read the instructions for installing the guest additions.
  5. To create the zone's shared VDI in VirtualBox, you can open the storage configuration for one of the two guests, select the SATA controller, and click on the "Add Hard Disk" icon nearby. Choose "Create New Disk" and specify an appropriate path name for the file that will contain the VDI. The shared VDI must be at least 1.5 GB. Note that the guest must be stopped to do this.
  6. Add that VDI to the other guest - using its Storage configuration - so that each can access it while running. The steps start out the same, except that you choose "Choose Existing Disk" instead of "Create New Disk." Because the disk is configured on both of them, VirtualBox prevents you from running both guests at the same time.
  7. Identify device names of that VDI, in each of the guests. Solaris chooses the name based on existing devices. The names may be the same, or may be different from each other. This step is shown below as "Step 1."


In the example shown below, I make these assumptions.
  • The guest that will own the zone at the beginning is named sysA.
  • The guest that will own the zone after the first migration is named sysB.
  • On sysA, the shared disk is named /dev/dsk/c7t2d0
  • On sysB, the shared disk is named /dev/dsk/c7t3d0

(Finally!) The Steps

Step 1) Determine the name of the disk that will move back and forth between the systems.
root@sysA:~# format
Searching for disks...done

       0. c7t0d0 
       1. c7t2d0 
Specify disk (enter its number): ^D
Step 2) The first thing to do is partition and label the disk. The magic needed to write an EFI label is not overly complicated.
root@sysA:~# format -e c7t2d0
selecting c7t2d0
[disk formatted]

format> fdisk
No fdisk table exists. The default partition for the disk is:

  a 100% "SOLARIS System" partition

Type "y" to accept the default partition,  otherwise type "n" to edit the
 partition table. n
Enter Selection: 1
  G=EFI_SYS    0=Exit? f

format> label
Specify Label type[1]: 1
Ready to label disk, continue? y

format> quit

root@sysA:~# ls /dev/dsk/c7t2d0

Step 3) Configure zone1 on sysA.
root@sysA:~# zonecfg -z zone1
Use 'create' to begin configuring a new zone.
zonecfg:zone1> create
create: Using system default template 'SYSdefault'
zonecfg:zone1> set zonename=zone1
zonecfg:zone1> set zonepath=/zones/zone1
zonecfg:zone1> add rootzpool
zonecfg:zone1:rootzpool> add storage dev:dsk/c7t2d0
zonecfg:zone1:rootzpool> end
zonecfg:zone1> exit
root@sysA:~# zonecfg -z zone1 info
zonename: zone1
zonepath: /zones/zone1
brand: solaris
autoboot: false
ip-type: exclusive
        storage: dev:dsk/c7t2d0
Step 4) Install the zone. This step takes the most time, but you can wander off for a snack or a few laps around the gym - or both! (Just not at the same time...)
root@sysA:~# zoneadm -z zone1 install
Created zone zpool: zone1_rpool
Progress being logged to /var/log/zones/zoneadm.20121022T163634Z.zone1.install
       Image: Preparing at /zones/zone1/root.

 AI Manifest: /tmp/manifest.xml.RXaycg
  SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml
    Zonename: zone1
Installation: Starting ...

              Creating IPS image
Startup linked: 1/1 done
              Installing packages from:
                      origin:  http://pkg.us.oracle.com/support/
DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
Completed                            183/183   33556/33556  222.2/222.2  2.8M/s

PHASE                                          ITEMS
Installing new actions                   46825/46825
Updating package state database                 Done
Updating image state                            Done
Creating fast lookup database                   Done
Installation: Succeeded

        Note: Man pages can be obtained by installing pkg:/system/manual


        Done: Installation completed in 1696.847 seconds.

  Next Steps: Boot the zone, then log into the zone console (zlogin -C)

              to complete the configuration process.

Log saved in non-global zone as /zones/zone1/root/var/log/zones/zoneadm.20121022T163634Z.zone1.install
Step 5) Boot the Zone.
root@sysA:~# zoneadm -z zone1 boot
Step 6) Login to zone's console to complete the specification of system information.
root@sysA:~# zlogin -C zone1
Answer the usual questions and wait for a login prompt. Then you can end the console session with the usual "~." incantation.

Step 7) Shutdown the zone so it can be "moved."

root@sysA:~# zoneadm -z zone1 shutdown
Step 8) Detach the zone so that the original global zone can't use it.
root@sysA:~# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              solaris  shared
   - zone1            installed  /zones/zone1                   solaris  excl
root@sysA:~# zpool list
rpool        17.6G  11.2G  6.47G  63%  1.00x  ONLINE  -
zone1_rpool  1.98G   484M  1.51G  23%  1.00x  ONLINE  -
root@sysA:~# zoneadm -z zone1 detach
Exported zone zpool: zone1_rpool
Step 9) Review the result and shutdown sysA so that sysB can use the shared disk.
root@sysA:~# zpool list
rpool  17.6G  11.2G  6.47G  63%  1.00x  ONLINE  -
root@sysA:~# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              solaris  shared
   - zone1            configured /zones/zone1                   solaris  excl
root@sysA:~# init 0
Step 10) Now boot sysB and configure a zone with the parameters shown above in Step 3. (Again, the safest method is to use "zonecfg ... export" on sysA as described in section "Method Overview" above.) The one difference is the name of the rootzpool storage device, which was shown in the list of assumptions, and which you must determine by booting sysB and using the "format" or "zpool import" command.

When that is done, you should see the output shown next. (I used the same zonename - "zone1" - in this example, but you can choose any valid zonename you want.)

root@sysB:~# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              solaris  shared
   - zone1            configured /zones/zone1                   solaris  excl
root@sysB:~# zonecfg -z zone1 info
zonename: zone1
zonepath: /zones/zone1
brand: solaris
autoboot: false
ip-type: exclusive
        linkname: net0
        storage: dev:dsk/c7t3d0
Step 11) Attaching the zone automatically imports the zpool.
root@sysB:~# zoneadm -z zone1 attach
Imported zone zpool: zone1_rpool
Progress being logged to /var/log/zones/zoneadm.20121022T184034Z.zone1.attach
    Installing: Using existing zone boot environment
      Zone BE root dataset: zone1_rpool/rpool/ROOT/solaris
                     Cache: Using /var/pkg/publisher.
  Updating non-global zone: Linking to image /.
Processing linked: 1/1 done
  Updating non-global zone: Auditing packages.
No updates necessary for this image.

  Updating non-global zone: Zone updated.
                    Result: Attach Succeeded.
Log saved in non-global zone as /zones/zone1/root/var/log/zones/zoneadm.20121022T184034Z.zone1.attach

root@sysB:~# zoneadm -z zone1 boot
root@sysB:~# zlogin zone1
[Connected to zone 'zone1' pts/2]
Oracle Corporation      SunOS 5.11      11.1    September 2012
Step 12) Now let's migrate the zone back to sysA. Create a file in zone1 so we can verify it exists after we migrate the zone back, then begin migrating it back.
root@zone1:~# ls /opt
root@zone1:~# touch /opt/fileA
root@zone1:~# ls -l /opt/fileA
-rw-r--r--   1 root     root           0 Oct 22 14:47 /opt/fileA
root@zone1:~# exit

[Connection to zone 'zone1' pts/2 closed]
root@sysB:~# zoneadm -z zone1 shutdown
root@sysB:~# zoneadm -z zone1 detach
Exported zone zpool: zone1_rpool
root@sysB:~# init 0
Step 13) Back on sysA, check the status.
Oracle Corporation      SunOS 5.11      11.1    September 2012
root@sysA:~# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              solaris  shared
   - zone1            configured /zones/zone1                   solaris  excl
root@sysA:~# zpool list
rpool  17.6G  11.2G  6.47G  63%  1.00x  ONLINE  -
Step 14) Re-attach the zone back to sysA.
root@sysA:~# zoneadm -z zone1 attach
Imported zone zpool: zone1_rpool
Progress being logged to /var/log/zones/zoneadm.20121022T190441Z.zone1.attach
    Installing: Using existing zone boot environment
      Zone BE root dataset: zone1_rpool/rpool/ROOT/solaris
                     Cache: Using /var/pkg/publisher.
  Updating non-global zone: Linking to image /.
Processing linked: 1/1 done
  Updating non-global zone: Auditing packages.
No updates necessary for this image.

  Updating non-global zone: Zone updated.
                    Result: Attach Succeeded.
Log saved in non-global zone as /zones/zone1/root/var/log/zones/zoneadm.20121022T190441Z.zone1.attach

root@sysA:~# zpool list
rpool        17.6G  11.2G  6.47G  63%  1.00x  ONLINE  -
zone1_rpool  1.98G   491M  1.51G  24%  1.00x  ONLINE  -
root@sysA:~# zoneadm -z zone1 boot
root@sysA:~# zlogin zone1
[Connected to zone 'zone1' pts/2]
Oracle Corporation      SunOS 5.11      11.1    September 2012
root@zone1:~# zpool list
rpool  1.98G   538M  1.46G  26%  1.00x  ONLINE  -
Step 15) Check for the file created on sysB, earlier.
root@zone1:~# ls -l /opt
total 1
-rw-r--r--   1 root     root           0 Oct 22 14:47 fileA

Next Steps

Here is a brief list of some of the fun things you can try next.
  • Add space to the zone by adding a second storage device to the rootzpool. Make sure that you add it to the zone's configuration on both systems!
  • Create a new zone, specifying two disks in the rootzpool when you first configure the zone. When you install that zone, or clone it from another zone, zoneadm uses those two disks to create a mirrored pool. (Three disks will result in a three-way mirror, etc.)
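
As a starting point for the second item, here is a minimal zonecfg sketch; the zone name and the device names (c7t4d0, c7t5d0) are placeholders for whatever disks you attach:

root@sysA:~# zonecfg -z zone2
zonecfg:zone2> create
zonecfg:zone2> set zonepath=/zones/zone2
zonecfg:zone2> add rootzpool
zonecfg:zone2:rootzpool> add storage dev:dsk/c7t4d0
zonecfg:zone2:rootzpool> add storage dev:dsk/c7t5d0
zonecfg:zone2:rootzpool> end
zonecfg:zone2> exit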


Hopefully you have seen the ease with which you can now move Solaris Zones from one system to another.

Thursday Nov 08, 2012

Happy Birthday! (to Solaris and SPARC)

Oracle is celebrating the 20th and 25th anniversaries (birthdays?) of Solaris and SPARC.

You can find video highlights of the histories of SPARC and Solaris and brief (static) infographic histories of SPARC and Solaris.

Wednesday Nov 07, 2012

Today: Oracle Solaris [&Cluster] Live Webcast!

Today, Oracle is hosting a live webcast, with Q&A, discussing Solaris 11.1 and Solaris Cluster 4.1. The webcast begins at 11AM EST, but you should register before the event. (Registration is also available at http://bit.ly/RfdkvK.)

Thursday Oct 25, 2012

Oracle Solaris 11.1

Oracle Solaris 11.1 was announced at Oracle OpenWorld recently. This release added 300 new performance and feature enhancements.

My favorite new features:

  • Solaris Zones on Shared Storage
  • Support for 32 TB (!) of RAM
  • Improved Oracle RAC lock latency
  • Dynamically resize the Oracle DB SGA
  • Industry-first support for FedFS
You can learn more from the press release or by attending the Solaris 11.1 webcast on November 7.

Tuesday Oct 23, 2012

Virtual Networks in Oracle Solaris - Part 5

  A long time ago in a
  galaxy far, far away...

I wrote four blog entries to describe the new network virtualization features that were in Solaris 11 Express:

  • Part 1 introduced the concept of network virtualization and listed the basic virtual network elements.
  • Part 2 expanded on the concepts and discussed the resource management features.
  • Part 3 demonstrated the creation of some of these virtual network elements.
  • Part 4 demonstrated the network resource controls.
I had planned a final entry that added virtual routers to the list of virtual network elements, but Jeff McMeekin wrote a paper that discusses the same features. That paper is available at OTN. And this Jeff can't write any better than that Jeff...

All of the features described in those blog entries and that paper are also available in Solaris 11. It is possible that some details have changed, but the vast majority of the content is unchanged.

Tuesday Jan 31, 2012

(Solaris) Destination: Detroit

We will have another Solaris 11 Technology Forum soon. The two I hosted in New York City and Boston included over 150 attendees. Dozens more attended the session in Chicago hosted by my associate Scott Dickson. These sessions pack hundreds of details regarding the practical uses of Solaris 11 into 4 hours - plus lunch!

Next week - February 8, to be exact - I will host a session near Detroit, Michigan. You can attend by registering online. Two other Solaris experts - Dave Miner and Alex Barclay - will join me as we explain new Solaris 11 features such as the Image Packaging System and Automated Installer, network virtualization and resource controls, Immutable Zones, and ZFS Encryption and other new security features.

Registration is free, but available seating is filling up quickly, so don't wait!

Thursday Nov 10, 2011

Solaris 11 Released!!

Oracle released Solaris 11 on November 9, 2011.

You can download Solaris 11.

You can also watch videos, participate in forums, and read white papers, data sheets and documentation.

Friday Oct 28, 2011

Oracle Solaris 11 Launch

Join Oracle executives Mark Hurd and John Fowler and key Oracle Solaris Engineers and Execs at the Oracle Solaris 11 launch event in New York City, at Gotham Hall on Broadway, November 9th and learn how you can build your infrastructure with Oracle Solaris 11 to:

  • Accelerate internal, public, and hybrid cloud applications
  • Optimize application deployment with built-in virtualization
  • Achieve top performance and cost advantages with Oracle Solaris 11–based engineered systems
The launch event will also feature exclusive content for our in-person audience, including a session led by the VP of core Solaris development and his leads on Solaris 11, and a customer insights panel during lunch. We will also have a technology showcase featuring our latest systems and Solaris technologies. The Solaris executive team will also be there throughout the day to answer questions and give insights into future developments in Solaris.

Don't miss the Oracle Solaris 11 launch in New York on November 9. Register Today!


Jeff Victor writes this blog to help you understand Oracle's Solaris and virtualization technologies.

The views expressed on this blog are my own and do not necessarily reflect the views of Oracle.

