Wednesday May 07, 2014

Cloning Zones with Unified Archives

Solaris 11.2 introduces a new native archive file type, the Unified Archive. Let's take a look at cloning zones with Unified Archives.

Using Unified Archives to clone zones differs in a few ways from the dataset clone-based cloning we get with 'zoneadm clone' for non-global zones.

The main difference between using an archive and 'zoneadm clone' is that the clone archive image is prepared for redistribution. Rather than being fully copied, the origin zone is used more as a template for the creation of a new, independently deployable image.
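
For contrast, the traditional dataset-clone approach with a halted source zone looks roughly like this (the 'thing1-copy' name and temporary config file are just for illustration; the copied configuration needs its zonepath and any other identity-specific settings adjusted before use):

# zonecfg -z thing1 export -f /tmp/thing1.cfg
# zonecfg -z thing1-copy -f /tmp/thing1.cfg
# zonecfg -z thing1-copy "set zonepath=/system/zones/thing1-copy"
# zoneadm -z thing1-copy clone thing1

The resulting zone shares storage with its origin through a ZFS clone, which is fast and space-efficient but ties the new zone to the same host; the archive-based approach trades some of that speed for a fully self-contained image.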

With clone archives, various aspects of the file system are reverted to an as-installed state, and other aspects are cleaned up and sanitized. This makes for a fully portable, migratable image within the archive payload. It can also be carried to remote systems for cloning there.

To keep the images small for our examples, we'll install our zones with the new 'solaris-minimal-server' group package. This gives us a smaller zone image which has most of the core Solaris services available. The image makes for a nice starting point for application development.

One thing to note: the minimal server image doesn't include localization support. The 'system/locale' package is quite large, but we can have our cake and eat it too by using package facets: we add 'system/locale' to our minimal install and turn off all of the locales we don't need.
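
For reference, facets can also be adjusted after installation with pkg(1) from within the image in question; a quick sketch (the locale choices here are only examples):

# pkg facet
# pkg change-facet facet.locale.de=False facet.locale.de_DE=False

The first command lists the current facet settings; the second disables the German locales and removes the corresponding files from the image.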

Let's start by putting this install profile into a simple AI manifest which we'll use for our initial installation.

# cat /data/cfg/zone_mini.xml
  <?xml version="1.0" encoding="UTF-8"?>
  <!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1">
  <auto_install>
    <ai_instance name="default">
      <target>
        <logical>
          <zpool name="rpool" is_root="true">
            <filesystem name="export" mountpoint="/export"/>
            <filesystem name="export/home"/>
            <be name="solaris"/>
          </zpool>
        </logical>
      </target>
      <software type="IPS">
        <destination>
          <image>
            <!-- Specify locale facets -->
            <facet set="false">facet.locale.*</facet>
            <facet set="false">facet.locale.de</facet>
            <facet set="false">facet.locale.de_DE</facet>
            <facet set="true">facet.locale.en</facet>
            <facet set="true">facet.locale.en_US</facet>
            <facet set="false">facet.locale.es</facet>
            <facet set="false">facet.locale.es_ES</facet>
            <facet set="false">facet.locale.fr</facet>
            <facet set="false">facet.locale.fr_FR</facet>
            <facet set="false">facet.locale.it</facet>
            <facet set="false">facet.locale.it_IT</facet>
            <facet set="false">facet.locale.ja</facet>
            <facet set="false">facet.locale.ja_*</facet>
            <facet set="false">facet.locale.ko</facet>
            <facet set="false">facet.locale.ko_*</facet>
            <facet set="false">facet.locale.pt</facet>
            <facet set="false">facet.locale.pt_BR</facet>
            <facet set="false">facet.locale.zh</facet>
            <facet set="false">facet.locale.zh_CN</facet>
            <facet set="false">facet.locale.zh_TW</facet>
            <facet set="false">facet.locale.zh_TW</facet>
            <facet set="false">facet.doc</facet>
            <facet set="false">facet.doc.*</facet>
          </image>
        </destination>
        <software_data action="install">
          <name>pkg:/group/system/solaris-minimal-server</name>
          <name>pkg:/system/locale</name>
        </software_data>
      </software>
    </ai_instance>
  </auto_install>

For my purposes, I'm keeping English and unsetting the rest. You can configure your install as needed.

Ok, let's install our zone.

# zoneadm list -cv
  ID NAME     STATUS      PATH                  BRAND     IP    
  0  global   running     /                     solaris   shared
  -  thing1   configured  /system/zones/thing1  solaris   excl 

# zoneadm -z thing1 install -m /data/cfg/zone_mini.xml 

The following ZFS file system(s) have been created:
    rpool/VARSHARE/zones/thing1
Progress being logged to /var/log/zones/zoneadm.20140507T150634Z.thing1.install
       Image: Preparing at /system/zones/thing1/root.

 Install Log: /system/volatile/install.6115/install_log
 AI Manifest: /tmp/manifest.xml.Tfa46l
  SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml
    Zonename: thing1
Installation: Starting ...

        Creating IPS image
Startup linked: 1/1 done
        Installing packages from:
            solaris
                origin:  http://host.domain/solaris11/pkg
DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
Completed                            166/166   16634/16634  163.4/163.4  2.3M/s

PHASE                                          ITEMS
Installing new actions                   26917/26917
Updating package state database                 Done 
Updating package cache                           0/0 
Updating image state                            Done 
Creating fast lookup database                   Done 
Updating package cache                           1/1 
Installation: Succeeded

        Note: Man pages can be obtained by installing pkg:/system/manual

        Done: Installation completed in 185.128 seconds.

  Next Steps: Boot the zone, then log into the zone console (zlogin -C)
              to complete the configuration process.

Log saved in non-global zone as /system/zones/thing1/root/var/log/zones/zoneadm.20140507T150634Z.thing1.install

Now we've got a minimal zone. Notice the install does a lot of package work.  Since we're starting from scratch, the deployment creates a new IPS image, validates the publishers and host image, links the zone image to the host's and then installs it. The install consists of building the list of packages, downloading them, and then invoking all of the install and post-install actions for each one. It does this very quickly, but it's a lot of work so it takes some time. In this case, about 3 minutes.

A side effect of using Unified Archives to deploy Solaris systems is that the deployment time is typically quicker than with package-based installs. Since an archived system contains the system's package image, a deployment simply lays the image back down. IPS doesn't need to do all that work again, since it already did so during the deployment of the origin system.  

So, let's archive this zone up and deploy a clone of it to see how this works.

Again in the spirit of keeping things small, we can use the -e (--exclude-media) option with archiveadm create. Since we don't need a portable and transformable image for this simple example, we won't need install media. More on embedded media later.

  # archiveadm create -z thing1 -e /data/archives/thing1.uar
  Initializing Unified Archive creation resources...
  Unified Archive initialized: /data/archives/thing1.uar
  Logging to: /system/volatile/archive_log.6239
  Executing dataset discovery...
  Dataset discovery complete
  Preparing archive system image...
  Beginning archive stream creation...
  Archive stream creation complete
  Beginning final archive assembly...
  Archive creation complete

That took about a minute and a half and resulted in an archive which is just shy of 200MB. There is quite a bit of compression in the image; as we can see from the verbose output, the deployed size is nearly 1GB.

# ls -lh /data/archives/thing1.uar 
  -rw-r--r-- 1 root root  197M May  7 09:20 /data/archives/thing1.uar
  
# archiveadm info -v /data/archives/thing1.uar
  Archive Information
            Creation Time:  2014-05-07T15:18:59Z
              Source Host:  ducksiren
             Architecture:  i386
         Operating System:  Oracle Solaris 11.2 X86
         Recovery Archive:  No
                Unique ID:  31542e88-dfe9-4e96-f39f-f622f1f2fdbf
          Archive Version:  1.0
  
Deployable Systems
            'thing1'
               OS Version:  0.5.11
                OS Branch:  0.175.2.0.0.38.0
                Active BE:  solaris
                    Brand:  solaris
              Size Needed:  971MB
                Unique ID:  a4ac1faf-4b7d-c6cf-f5fc-83a3b8106298
                Root-only:  Yes

Now that we have an archive, we can deploy new zones directly from it. As always, deploying a zone takes two steps: the zone configuration is created first, and then the zone is installed.

The zonecfg and zoneadm utilities have been updated to work with Unified Archives. This allows for direct cloning of the origin configuration stored within the archive as well as installation of a new zone directly from the archive. These two steps are not tied to each other: any valid zone configuration can be installed from an archive; the configuration need not be sourced from the archive.
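
For example, a hand-built configuration works just as well as one cloned from the archive; something along these lines (the zone name 'thing3' is hypothetical):

# zonecfg -z thing3 create
# zoneadm -z thing3 install -a /data/archives/thing1.uar

Here zonecfg builds the configuration from the default template, and only the installation pulls content from the archive.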

Let's create a new zone from the archive, which will mirror the origin zone's configuration, and then install it.


# zoneadm list -cv 
    ID NAME     STATUS      PATH                  BRAND     IP     
    0  global   running     /                     solaris   shared 
    -  thing1   installed   /system/zones/thing1  solaris   excl  
  # zonecfg -z thing2 create -a /data/archives/thing1.uar
  # zoneadm list -cv  
    ID NAME     STATUS      PATH                  BRAND     IP     
    0  global   running     /                     solaris   shared 
    -  thing1   installed   /system/zones/thing1  solaris   excl  
    -  thing2   configured  /system/zones/thing2  solaris   excl

So, easy enough. The new zone 'thing2' has a configuration which is based upon the configuration of 'thing1'. Now we can install the new zone directly from the archive as well, with zoneadm.

# zoneadm -z thing2 install -a /data/archives/thing1.uar 
  
The following ZFS file system(s) have been created:
      rpool/VARSHARE/zones/thing2
  Progress being logged to /var/log/zones/zoneadm.20140507T153751Z.thing2.install 
      Installing: This may take several minutes...
   Install Log: /system/volatile/install.12268/install_log
   AI Manifest: /tmp/manifest.thing2.jqaO7x.xml
      Zonename: thing2
  Installation: Starting ...
  
        Commencing transfer of stream: a4ac1faf-4b7d-c6cf-f5fc-83a3b8106298-0.zfs to rpool/VARSHARE/zones/thing2/rpool
          Completed transfer of stream: 'a4ac1faf-4b7d-c6cf-f5fc-83a3b8106298-0.zfs' from file:///data/archives/thing1.uar
          Archive transfer completed
  Installation: Succeeded
        Zone BE root dataset: rpool/VARSHARE/zones/thing2/rpool/ROOT/solaris
                       Cache: Using /var/pkg/publisher.
  Updating image format
  Image format already current.
    Updating non-global zone: Linking to image /.
  Processing linked: 1/1 done
    Updating non-global zone: Syncing packages.
  No updates necessary for this image. (zone:thing2)
  
  Updating non-global zone: Zone updated.
                      Result: Attach Succeeded.
  
        Done: Installation completed in 104.898 seconds.
    Next Steps: Boot the zone, then log into the zone console (zlogin -C)
                to complete the configuration process.
  Log saved in non-global zone as /system/zones/thing2/root/var/log/zones/zoneadm.20140507T153751Z.thing2.install

Simple, and this one deploys in about a minute and a half. IPS still links the image into the global zone and does some validation, but most of the heavy lifting was already done during the deployment of the origin system.

This archive can be used to deploy any number of zones on any number of host systems. The only criterion for support is that the host is a supported platform of the same ISA. This means that archives can be used for all sorts of migrations and transforms, even across virtualization boundaries. More on that later, as well.

Support for kernel zones is transparent: zonecfg creates and zoneadm installs a new zone from an archive in just the same way. By the way, for a kernel zones primer and a bit more detail, check out Mike Gerdts' Zones blog.

Note that when we create the archive this time, we'll need the embedded media, which is built by default. This media is used to boot and install the new kernel zone. This all happens under the covers, of course. Just keep in mind that if you might want to deploy an archive into a kernel zone in the future, don't use --exclude-media.

Ok, let's create a clone archive of a kernel zone and build it a friend.

# zoneadm list -cv
    ID NAME     STATUS      PATH                  BRAND       IP    
    0  global   running     /                     solaris     shared
    3  vandemar running     -                     solaris-kz  excl
    -  thing1   installed   /system/zones/thing1  solaris     excl 
    -  thing2   installed   /system/zones/thing2  solaris     excl 
  
# archiveadm create -z vandemar /data/archives/vandemar.uar
  Initializing Unified Archive creation resources...
  Unified Archive initialized: /data/archives/vandemar.uar
  Logging to: /system/volatile/archive_log.15994
  Executing dataset discovery...
  Dataset discovery complete
  Creating install media for zone(s)...
  Media creation complete
  Preparing archive system image...
  Beginning archive stream creation...
  Archive stream creation complete
  Beginning final archive assembly...
  Archive creation complete
  
# zonecfg -z croup create -a /data/archives/vandemar.uar 
  # zoneadm -z croup install -a /data/archives/vandemar.uar 
  Progress being logged to
  /var/log/zones/zoneadm.20140507T184203Z.croup.install
  [Connected to zone 'croup' console]
  Boot device: cdrom1  File and args: -B install=true,auto-shutdown=true -B aimanifest=/system/shared/ai.xml
  reading module /platform/i86pc/amd64/boot_archive...done.
  reading kernel file /platform/i86pc/kernel/amd64/unix...done.
  SunOS Release 5.11 Version 11.2 64-bit
  Copyright (c) 1983, 2014, Oracle and/or its affiliates. All rights reserved.
  Remounting root read/write
  Probing for device nodes ...
  Preparing image for use
  Done mounting image
  Configuring devices.
  Hostname: solaris
  Using specified install manifest : /system/shared/ai.xml
  
solaris console login: 
  Automated Installation started
  The progress of the Automated Installation will be output to the console
  Detailed logging is in the logfile at /system/volatile/install_log
  Press RETURN to get a login prompt at any time.
  
 18:43:58    Install Log: /system/volatile/install_log
   18:43:58    Using XML Manifest: /system/volatile/ai.xml
   18:43:58    Using profile specification: /system/volatile/profile
   18:43:58    Starting installation.
   18:43:58    0% Preparing for Installation
   18:43:58    100% manifest-parser completed.
   18:43:58    100% None
   18:43:58    0% Preparing for Installation
   18:43:59    1% Preparing for Installation
   18:44:00    2% Preparing for Installation
   18:44:00    3% Preparing for Installation
   18:44:00    4% Preparing for Installation
   18:44:00    5% archive-1 completed.
   18:44:00    8% target-discovery completed.
   18:44:03    Pre-validating manifest targets before actual target selection
   18:44:03    Selected Disk(s) : c1d0
   18:44:03    Pre-validation of manifest targets completed
   18:44:03    Validating combined manifest and archive origin targets
   18:44:03    Selected Disk(s) : c1d0
   18:44:03    9% target-selection completed.
   18:44:03    10% ai-configuration completed.
   18:44:04    9% var-share-dataset completed.
   18:44:08    10% target-instantiation completed.
   18:44:08    10% Beginning archive transfer
   18:44:09    Commencing transfer of stream: 072fdc78-431e-6aa6-89d5-a0088766a4af-0.zfs to rpool
   18:44:17    36% Transferring contents
   18:44:23    67% Transferring contents
   18:44:25    78% Transferring contents
   18:44:27    87% Transferring contents
   18:44:31    Completed transfer of stream: '072fdc78-431e-6aa6-89d5-a0088766a4af-0.zfs' from file:///system/shared/uafs/OVA
   18:44:31    89% Transferring contents
   18:44:33    Archive transfer completed
   18:44:34    90% generated-transfer-1447-1 completed.
   18:44:34    90% apply-pkg-variant completed.
   18:44:34    Setting boot devices in firmware
   18:44:34    91% boot-configuration completed.
   18:44:35    91% update-dump-adm completed.
   18:44:35    92% setup-swap completed.
   18:44:35    92% device-config completed.
   18:44:37    92% apply-sysconfig completed.
   18:44:37    93% transfer-zpool-cache completed.
   18:44:44    98% boot-archive completed.
   18:44:44    98% transfer-ai-files completed.
   18:44:44    98% cleanup-archive-install completed.
   18:44:45    100% create-snapshot completed.
   18:44:45    100% None
   18:44:45    Automated Installation succeeded.
   18:44:45    You may wish to reboot the system at this time.
  Automated Installation finished successfully
  Shutdown requested. Shutting down the system
  Log files will be available in /var/log/install/ after reboot
  svc.startd: The system is coming down.  Please wait.
  svc.startd: 115 system services are now being stopped.
  syncing file systems... done
  
[NOTICE: Zone halted]
  
[Connection to zone 'croup' console closed]
           Done: Installation completed in 180.636 seconds.

And there we go. We created a kernel zone archive in a few minutes and deployed a new kernel zone from it in a few more minutes.

A handy recipe for a private NAT'd DHCP server

I have been working a lot recently with kernel zones to test various aspects of the archive library. Most of my work requires the zones to have connectivity to the IPS publishers I've configured for them, which are usually hosted remotely on my build server. So I found myself needing networking for the test zones in order for them to be useful. However, I didn't want to go through the trouble of allocating static IP addresses for simple throwaway test stands. So I started using a private DHCP server for the zones, and I set up NAT with IP filtering so that the zones could reach the outside world through the private network.

The general idea is as follows. We build an etherstub in the host global zone and hang a virtual NIC off of it. We plumb up the vnic with a private net address and run a DHCP server on that private subnet. We then configure NAT with IP filtering and IP forwarding for that subnet over to the public network, and clients on that private net can get out to the public network.

For a zone to make use of the DHCP server, it needs its net/anet 'lower-link' set to the etherstub, which is one simple zonecfg change from the default. You can put as many zones as you like on the stub, but if you're doing lots of IO intensive stuff, using more than one physical NIC might be a good idea. Your mileage may vary, of course.

The steps to build this up follow.

First, create the etherstub for the private network and create a host vnic on it for the DHCP server. Then set a private net IP address on the vnic and check the configurations are all correct so far.
# dladm create-etherstub stub0
# dladm create-vnic -l stub0 vnic0

# ipadm create-ip vnic0
# ipadm create-addr -T static -a 192.168.0.1/24 vnic0/privaddr

# dladm show-vnic vnic0
LINK        OVER       SPEED  MACADDRESS        MACADDRTYPE  VIDS
vnic0       stub0      40000  2:8:20:83:3:95    random       0

# ipadm show-addr vnic0
ADDROBJ          TYPE     STATE  ADDR
vnic0/privaddr   static   ok     192.168.0.1/24
 

Next, configure NAT and enable IP forwarding for the private network. Map your public-facing vanity net name in this step (e.g. net0, as shown below).

# cat /etc/ipf/ipnat.conf
map net0 192.168.0.0/24 -> 0/32 portmap tcp/udp auto

# ipadm set-prop -p forwarding=on ipv4
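
A quick optional check that the property took effect (the CURRENT value should read 'on'):

# ipadm show-prop -p forwarding ipv4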

Configure a simple DHCP server for the private network. Customize settings according to your preferences.
# cat /etc/inet/dhcpd4.conf
option domain-name "domain.com";                # <--- enter your DNS info here
option domain-name-servers 10.8.0.1, 10.8.0.2;  # <--- enter your DNS info here

default-lease-time 86400;

max-lease-time -1;

log-facility local7;

subnet 192.168.0.0 netmask 255.255.255.0 {
    range 192.168.0.100 192.168.0.120;
    option routers 192.168.0.1;
    option broadcast-address 192.168.0.255;
} 

Finally, turn on IP filtering and the DHCP server.
# svcadm enable svc:/network/ipfilter:default
# svcadm enable svc:/network/dhcp/server:ipv4

To make use of the new private DHCP server, just set the etherstub's name as your zone's lower-link via zonecfg and boot the zone.
# zonecfg -z some-zone "select anet id=0;set lower-link=stub0;end"

Zones booted on the etherstub should get private net IP addresses from the DHCP server and should be able to reach the public network.
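
A quick sanity check from the global zone once such a zone is up (the zone name is hypothetical; the address should fall within the 192.168.0.100-120 range defined above):

# zlogin some-zone ipadm show-addr
# zlogin some-zone ping 192.168.0.1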

This works really well for my purposes; feel free to suggest useful updates and I'll add them in.

Tuesday Apr 29, 2014

Introducing Unified Archives in Solaris 11.2

Since first writing about creation of system recovery images for Solaris 11 over two years ago, I've been working on building a core feature for Solaris to address this need. With the release of Solaris 11.2, we announce the introduction of this feature for Solaris users.

Solaris 11.2 introduces a new native archive file type called Unified Archives. Users create archives from a deployed Solaris instance; if there are zones installed within the instance, they can be included or excluded as desired.

Unified Archives are created through the use of the new archiveadm(1m) utility. To create an archive suitable for cloning a system, including by default any zones installed on that system, use the 'create' subcommand without any options.


# archiveadm create /data/archives/jacks.uar
Initializing Unified Archive creation resources... 
Unified Archive initialized: /data/archives/jacks.uar
Logging to: /system/volatile/archive_log.1660
Executing dataset discovery...
Dataset discovery complete
Creating install media for zone(s)...
Media creation complete 
Preparing archive system image...
Beginning archive stream creation...
Archive stream creation complete
Beginning final archive assembly...
Archive creation complete

To examine the content of a given archive, the 'info' subcommand is used. It shows various attributes of an archive, such as the origin system and the list of systems which are deployable from the archive.

# archiveadm info /data/archives/jacks.uar
Archive Information
      Creation Time:  2014-04-29T19:01:42Z
        Source Host:  jacks
       Architecture:  i386
   Operating System:  Oracle Solaris 11.2 X86
 Deployable Systems:  global,frost,quick,nimble

Note there are three zones installed in the host in this example, which makes for four independently deployable systems: the global zone, and the three zones 'frost', 'quick' and 'nimble'.

Any archived system from within a Unified Archive can be deployed to any supported same-ISA platform. This support includes crossing virtualization boundaries, so a Unified Archive created on a SPARC T5 LDOM can be deployed as a zone, and a zone archive can be installed to a metal system.

Unified Archives can be created of a specific installed zone, or a list of zones, by selecting them via create's -z option. Here is an example of archiving a single solaris-brand zone.


# archiveadm create -z frost /data/archives/frost.uar
Initializing Unified Archive creation resources...
Unified Archive initialized: /data/archives/frost.uar
Logging to: /system/volatile/archive_log.4983
Executing dataset discovery...
Dataset discovery complete
Creating install media for zone(s)...
Media creation complete
Preparing archive system image...
Beginning archive stream creation...
Archive stream creation complete
Beginning final archive assembly...
Archive creation complete

The 'info' subcommand has a verbose option which shows a bit more detail regarding an archive's content.

# archiveadm info -v /data/archives/frost.uar
  Archive Information
       Creation Time:  2014-04-29T19:06:38Z
         Source Host:  jacks
        Architecture:  i386
    Operating System:  Oracle Solaris 11.2 X86
    Recovery Archive:  No
           Unique ID:  0e0cd6ea-0caf-cff3-aa7c-a139c2d7ee4c
     Archive Version:  1.0

Deployable Systems
  'frost'
          OS Version:  0.5.11
           OS Branch:  0.175.2.0.0.38.0
           Active BE:  solaris
               Brand:  solaris
         Size Needed:  859MB
           Unique ID:  382c7a60-2ada-c745-e778-f149c751ebf3
            AI Media:  0.175.2_ai_x86.iso
           Root-only:  Yes

We've integrated deployment support for Unified Archives in the Solaris Automated Installer as well as the Solaris Zones utilities.

Both zonecfg(1m) and zoneadm(1m) natively support Unified Archives. Both CLIs take a -a option: zonecfg uses it to create a zone configuration based upon an archived zone, and zoneadm uses it to install the new zone with the archived content.

Note it's not necessary to create the zone configuration from the archive; that's just a feature that allows template-like deployment of zones. Any valid zone configuration can be installed from an archive.


# zonecfg -z dandy create -a /data/archives/frost.uar -z frost
# zoneadm -z dandy install -a /data/archives/frost.uar -z frost
    The following ZFS file system(s) have been created:
        rpool/VARSHARE/zones/dandy

    Progress being logged to /var/log/zones/zoneadm.20140429T200115Z.dandy.install
            Installing: This may take several minutes...
  Zone BE root dataset: rpool/VARSHARE/zones/dandy/rpool/ROOT/solaris
                 Cache: Using /var/pkg/publisher.
   Updating image format
   Image format already current.
   Updating non-global zone: Linking to image /.
   Processing linked: 1/1 done

   Updating non-global zone: Syncing packages.
   No updates necessary for this image. (zone:dandy)

   Updating non-global zone: Zone updated.
   Result: Attach Succeeded.

     Done: Installation completed in 86.108 seconds.

Next Steps: Boot the zone, then log into the zone console (zlogin -C)
            to complete the configuration process.

Log saved in non-global zone as /system/zones/dandy/root/var/log/zones/zoneadm.20140429T200115Z.dandy.install

For deployment to metal systems or LDOMs via the Solaris Automated Installer, a new software type of ARCHIVE in the AI manifest handles deployment of Unified Archives. A simple manifest, as shown below, will deploy the archived system directly from the HTTP URI listed.

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1">
    <auto_install>
      <ai_instance name="default">
        <target>
          <logical>
            <zpool name="rpool" is_root="true">
              <filesystem name="export" mountpoint="/export"/>
              <filesystem name="export/home"/>
            </zpool>
          </logical>
        </target>
        <software type="ARCHIVE">
          <source>
            <file uri="http://host.domain/path/to/archive.uar"/>
          </source>
          <software_data action="install">
            <name>global</name>
          </software_data>
        </software>
      </ai_instance>
    </auto_install>
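
To put a manifest like this into use, it would typically be added to an existing AI install service with installadm(1m); roughly as follows (the service and manifest names are placeholders):

# installadm create-manifest -n s11x86-svc -f /data/cfg/uar_deploy.xml -m uar_deploy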


I'll be posting more here with some use case walkthroughs and more feature descriptions.

Tuesday Dec 13, 2011

Solaris 11: archive creation and recovery

Solaris 11 has recently released after several years of development. Solaris 10 has been updated at regular intervals, and through these updates existing Solaris 10 installations will benefit from many of the enhancements that have come about as part of the Solaris 11 development effort. However, there exists a set of key features that are not offered in Solaris 10 updates, including considerable performance enhancements for newer hardware, enhancements to ZFS to provide deduplication and enhanced crypto support, improvements to the Solaris Zones virtualization technology, a new automated installer and an entirely new packaging system. In certain contexts these features are very compelling.

When upgrading Solaris 10 environments, Live Upgrade functionality may be used. This allows an installation of a Solaris 10 update to target an alternate boot device while the system is up and running, which minimizes downtime for the services deployed there. For a move to Solaris 11, no such live update facility exists. This is due in part to the move to ZFS as the default boot file system and the difficulties that may arise in migrating a UFS root file system, but mostly this is related to the change in packaging system, away from SVr4-style packages to the new Image Packaging System (IPS).

When it comes to system installation, there is no support in Solaris 11 for Flash Archive installation, or FLAR. This is also mostly due to the move from SVr4-style packages to IPS, but also because the problem FLAR was originally implemented to solve is largely addressed by IPS. FLAR was initially meant to simplify patch deployment. As we all know, prior to IPS, patching a Solaris system could be a bit of an undertaking. FLAR allows a system administrator to patch up a system and then create an archive of it, which can then be used to install subsequent systems. One side effect of this is that a full archive of the given system is created, which can be utilized to restore the system in case of catastrophic failure. In this way, many admins have utilized FLAR as an element of their disaster recovery plan.

With no support for FLAR in place in Solaris 11, we must work to address this gap. Currently there is no functionality in Solaris to do this, but work is underway. In the meantime, a document has been created which describes a set of steps that can be utilized to create re-deployable archives of installed systems. These archives can be stored on alternate storage or even at alternate sites, and then redeployed when needed as part of disaster recovery operations.

This document can be found here:  How to Perform System Archival and Recovery Procedures with Oracle Solaris 11

While a description of a manual procedure is not ideal, it does describe a set of operations which can be scripted if required. Otherwise, the manual steps described therein may be utilized as a stop-gap until some of the functionality finds its way into Solaris 11. As you can see after reviewing the document, the procedure is built upon ZFS file system operations, and as such shouldn't cover too much new territory for experienced system administrators. Likewise, it should be a fairly complete description for anyone who is less than familiar with Solaris administration.
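
As a rough illustration of the kind of ZFS operations the procedure builds on (not the full recipe; pool names and paths here are placeholders), archiving boils down to a recursive snapshot and a replication stream sent off-system:

# zfs snapshot -r rpool@archive
# zfs send -Rv rpool@archive | gzip > /net/backuphost/export/archives/myhost.rpool.zfs.gz

On the recovery side, after booting from install media and recreating the root pool, the stream is received back into the pool and the boot environment made bootable again, per the steps in the document:

# gzcat /net/backuphost/export/archives/myhost.rpool.zfs.gz | zfs receive -Fv rpool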

I welcome comments and corrections.

