Monday Mar 14, 2011

Migrating to Oracle Blogs

Wednesday Sep 30, 2009

Running OpenSolaris and Zones in the Amazon Cloud - Part 3

Introduction

In Part 1 of this series of tutorials on OpenSolaris and Zones in AWS we described a method for creating zones within an EC2 instance running OpenSolaris.

In Part 2 we described a method for backing up the zones using ZFS snapshots, sending a copy to a secondary EBS volume, and then performing an EBS snapshot of the secondary volume. We also provided examples of how to recover zones from our ZFS snapshots as well as how to recover from the secondary EBS volume if for some reason our primary EBS volume fails.

This is Part 3 of the series, where we explain how to save a fully configured zones environment using EBS snapshots, which can then be cloned and brought up and running in minutes. We leverage Parametrized Launches as described in a previous blog entry.

Prerequisites

  • Basic understanding of AWS EC2 including: managing EBS volumes and snapshots.
  • Basic understanding of OpenSolaris including ZFS.
  • Zones up and running as described in Part 1 of this tutorial.
  • Ability to run the EC2 command line tools.
  • Although not a requirement, the example uses OpenSolaris and EC2 Parametrized Launches to greatly simplify the launch of a new instance.

Example EC2 environment

As described in Part 1, I created three EBS volumes, one for shared software, one for zones storage, and another one for zones backup. The EC2 environment is displayed below.

AWS EC2 Environment

Our goal is to perform the following steps:

  • Halt the zones.
  • Detach the zones.
  • Create an EBS snapshot of our zones ZFS pool.
  • Create an EBS volume from this snapshot.
  • Launch a new EC2 instance and bring up the zones.

We will follow the standard procedure for preparing to migrate a non-global zone to a different machine. Instead of actually moving the zones to another EC2 instance, we put the zones into a state as if we were going to migrate them, and then take an EBS snapshot of the EBS volume that contains the zones ZFS pool. Once the EBS snapshot has been started, we simply bring our zones back up on the current EC2 instance.

Our current zones list is shown below.

root:~# zoneadm list -cv
  ID NAME             STATUS     PATH               BRAND    IP
   0 global           running    /                  native   shared
   1 zone1            running    /zones/zone1       ipkg     excl
   2 zone2            running    /zones/zone2       ipkg     excl
root:~#

Prepare the Zones for Migration and Create an EBS Snapshot

The first step is to halt the zones.

root:~# zoneadm -z zone1 halt
root:~# zoneadm -z zone2 halt

 Next, we detach the zones.

root:~# zoneadm -z zone1 detach
root:~# zoneadm -z zone2 detach

At this point the zones are in a state ready for migration. We will export the zones ZFS pool, start an EBS snapshot, re-attach the zones, and boot them. In our example the EBS volume id for the zones ZFS pool is: vol-f0c93b99

root:~# zpool export zones
root:~# ec2-create-snapshot vol-f0c93b99

SNAPSHOT  snap-c03b89a9   vol-f0c93b99    pending 2009-09-30T23:07:29+0000

root:~# zoneadm -z zone1 attach
       Global zone version: entire@0.5.11,5.11-0.111:20090514T145840Z
   Non-Global zone version: entire@0.5.11,5.11-0.111:20090514T145840Z
                Evaluation: Packages in zone1 are in sync with global zone.
Attach complete.

root:~# zoneadm -z zone1 boot
root:~# zoneadm -z zone2 attach

       Global zone version: entire@0.5.11,5.11-0.111:20090514T145840Z
   Non-Global zone version: entire@0.5.11,5.11-0.111:20090514T145840Z
                Evaluation: Packages in zone2 are in sync with global zone.
Attach complete.

root:~# zoneadm -z zone2 boot

We now have an EBS snapshot with our saved zones and have restarted the zones on the original EC2 instance. Let's keep this instance up and running until we have tested launching a new instance with our saved zones.
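
Before moving on, it is worth confirming that the EBS snapshot has finished before relying on it. Below is a small check in the same style as the wait loops used elsewhere in this series; it assumes the status is the fourth tab-separated field of the ec2-describe-snapshots output, as in the examples shown in Part 2.

# wait until the EBS snapshot of the zones pool reports completed
snapstatus=""
while ! [[ ${snapstatus} == "completed" ]]
do
    snapstatus=`ec2-describe-snapshots | egrep snap-c03b89a9 | cut -f4`
    echo "Snapshot status is: ${snapstatus}"
    sleep 10
done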

Launch a new EC2 Instance and Start our Zones

Okay, we now have our saved zones sitting out on an EBS snapshot. We will now create an EBS volume from this snapshot, attach the volume to a new instance, and start up the zones on the new instance.

In this example, we assume that we have an AMI available which has the capability to run a user-supplied script at startup. See OpenSolaris and EC2 Parametrized Launches for instructions.

The first step is to create an EBS volume from the snapshot taken in the previous section. In our example, the snapshot id is: snap-c03b89a9

root:~# ec2-create-volume --snapshot snap-c03b89a9 -z us-east-1c 
VOLUME  vol-a433c1cd  8  snap-c03b89a9  us-east-1c  creating  2009-09-30T23:29:09+0000

We note that the new volume id is: vol-a433c1cd

Next, we build the script that will do the following:

    • Attach the new volume created from our EBS snapshot.
    • Create the zones network as described in Part 1.
    • Import the ZFS pool.
    • Attach the zones.
    • Boot the zones.

    #!/usr/bin/bash

    ebsvolumeids="vol-a433c1cd"
    zpools="zones"

    ec2autorundir="/var/ec2"
    ec2keysdir="/usr/local/aws/.ec2"
    ec2keysbucket="skodell.ec2.keys"

    ntpdate 0.north-america.pool.ntp.org

    # set the environment for EC2
    # ec2autorun.setec2 file is passed
    # to instance in payload file
    . ${ec2autorundir}/ec2autorun.setec2

    # create the directory to hold the AWS keys
    if ! [[ -d ${ec2keysdir} ]]
    then
        /usr/bin/mkdir -p ${ec2keysdir}
    else
        /usr/bin/rm -r ${ec2keysdir}/*
    fi
    builtin cd ${ec2keysdir}
    chmod 700 ${ec2keysdir}

    # get the AWS keys
    for i in `s3cmd list ${ec2keysbucket}`
    do
        if ! [[ $i == "--------------------" ]]
        then
            s3cmd get ${ec2keysbucket}:$i $i
        fi
    done

    # get the ec2 instance id and instance type
    ec2instanceid=`curl http://169.254.169.254/latest/meta-data/instance-id --silent`
    ec2instancetype=`curl http://169.254.169.254/latest/meta-data/instance-type --silent`
    ec2publichostname=`curl http://169.254.169.254/latest/meta-data/public-hostname --silent`

    # set the starting device number for the ebs volumes
    case ${ec2instancetype} in
    'm1.small')
        ebsvolumedev=2
     ;;
    'm1.large')
        ebsvolumedev=3
        ;;
    'm1.xlarge')
        ebsvolumedev=5
        ;;
    *)
        ebsvolumedev=5
        ;;
    esac

    # attach the volumes
    for volid in ${ebsvolumeids}
    do
        ec2-attach-volume -i ${ec2instanceid} -d ${ebsvolumedev} ${volid}
        let ebsvolumedev=${ebsvolumedev}+1
    done

    if ! [[ ${ebsvolumeids} == "" ]]
    then
        # wait until all of the volumes report that they are attached
        ebsvolsattached=0
        while [[ ${ebsvolsattached} -eq 0 ]]
        do
            ebsvolsattached=1
            ebsvolumestatus=`ec2-describe-volumes | egrep ATTACHMENT | egrep ${ec2instanceid} | cut -f5`   
            for volstatus in ${ebsvolumestatus}
            do
                echo "Vol Status is: ${volstatus}"
                if ! [[ ${volstatus} == "attached" ]]
                then
                    ebsvolsattached=0
                fi
            done
            sleep 1
        done
    fi

    # import the zfs pools for this instance
    for zpoolid in ${zpools}
    do
        zpool import ${zpoolid}
    done

    # setup VLAN for zones
    dladm create-etherstub etherstub0
    dladm create-vnic -l etherstub0 vnic0
    dladm create-vnic -l etherstub0 vnic1
    dladm create-vnic -l etherstub0 vnic2

    dladm show-etherstub
    dladm show-vnic

    ifconfig vnic0 plumb
    ifconfig vnic0 inet 192.168.0.1 up
    ifconfig vnic0

    routeadm -u -e ipv4-forwarding

    # create the zones
    zonecfg -z zone1 create -a /zones/zone1
    zonecfg -z zone2 create -a /zones/zone2

    zoneadm -z zone1 attach
    zoneadm -z zone2 attach

    zoneadm -z zone1 boot
    zoneadm -z zone2 boot

    # start ipfilter
    cp ${ec2autorundir}/ec2-ipnat.conf /etc/ipf/ipnat.conf
    svcadm enable network/ipfilter


    Once we have the script created, we launch a new instance with this script as described in OpenSolaris and EC2 Parametrized Launches.
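
    For reference, the launch step follows the same pattern shown in that entry: bundle the script (together with the ec2autorun.setec2 file it sources) into a password-protected zip and pass it with -f. A minimal sketch, assuming your rebundled AMI id is ami-xxxxxxxx (a placeholder) and using the same availability zone as the volume created above:

    bash# zip -P "put your password here" ec2autorun.zip ec2autorun.script \
    ec2autorun.setec2
    bash# ec2-run-instances -k <my-keypair> -g <my-group> -z us-east-1c \
      -f ec2autorun.zip -t m1.small ami-xxxxxxxx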

    References

    Friday Sep 25, 2009

    OpenSolaris and EC2 Parametrized Launches Example


    Building on the excellent writeup, Using Parameterized Launches to Customize Your AMIs, in this blog entry we provide an example tailored for OpenSolaris. This is an updated version of the entry which originally appeared on the Sun EC2 blog. We will leverage this entry in Part 3 of our series on OpenSolaris Zones and EC2.

    Introduction

    It is possible to make "user data" available to an EC2 instance by specifying it when the instance is launched from the command line. From the Amazon Elastic Compute Cloud Developer Guide, we find the following options available for the ec2-run-instances command:

    -d user_data
    Data to make available to the instances. This data is read from the command line of the USER_DATA argument. If you want the data to be read from a file, see the -f option.
    Example: -d "my user data"

    -f user_data_file
    Data to make available to these instances. The data is read from the file specified by FILE_NAME. To specify user data on the command line, use the -d option.
    Example: -f data.zip
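
    On a running instance, whatever data was passed at launch can be read back from the EC2 metadata service; this is the same URL that the startup script shown later retrieves with wget. A quick manual check:

    bash# curl --silent http://169.254.169.254/latest/user-data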

    Passing data to the instance at launch time is very useful as a mechanism for customizing instances without having to create a new AMI image for each type of instance.

    For example, using this technique we can use the same AMI to run a NFS file server and a NFS client. The only difference is that for the NFS file server we share filesystems and start NFS services, and for the NFS client we mount the NFS filesystems from the server. We will use this NFS scenario for our example.


    Getting Started

    The first step is to use one of the Sun-provided OpenSolaris 2009.06 AMIs to create a custom AMI with the addition of a script that runs at startup. Alternatively, if you have already bundled a customized OpenSolaris AMI, you can add the script to one of your own.

    The following is a high level summary of what we will cover:

    1. Start a new instance, either from a Sun provided AMI or one of your own.
    2. Add a script that runs when the instance starts for the first time.
    3. Create a new AMI which includes the script that runs at startup.
    4. Demonstrate how to launch a NFS server instance and a NFS client instance, using the AMI created in step 3.

    We assume that the reader has a basic understanding of EC2 and OpenSolaris. We also use an EBS volume to provide the file system that will be served by the NFS server.

    Launch an instance of the OpenSolaris AMI of your choice and login to the new instance.

    bash# ec2-run-instances -k <my-keypair> -g <my-group> -z us-east-1b \
       -t m1.small ami-e56e8f8c


      RESERVATION     r-868289ef      578281684738               <my-group>
      INSTANCE        i-5736e13f      ami-e56e8f8c    pending    <my-keypair> 0
                      m1.small        2009-09-23T06:48:50+0000   us-east-1b
                      aki-1783627e    ari-9d6889f4               monitoring-disabled

    bash# ec2-describe-instances | grep i-5736e13f | cut -f2,3,4,6
      i-5736e13f   ami-e56e8f8c  ec2-174-129-168-111.compute-1.amazonaws.com  running

    bash# ssh -i <my-key-pair-file> root@ec2-174-129-168-111.compute-1.amazonaws.com
      login as: root
      Authenticating with public key "imported-openssh-key"
      Last login: Wed Sep 23 07:02:45 2009 from somewhere
      Sun Microsystems Inc.   SunOS 5.11      snv_111b        November 2008
    bash#


    Create the script to run at boot time with the following contents. We name this file:
    /etc/init.d/ec2autorun

    #!/bin/sh

    ec2autorundir="/var/ec2"
    ec2autorunfile="ec2autorun.input"
    ec2autorunscript="ec2autorun.script"
    ec2userdataurl="http://169.254.169.254/latest/user-data"

    # we password protect the user data file since any user on the
    # instance can retrieve it. We need to remember the password, as we
    # will use it when creating the user data zip file. This password
    # can be anything and is not the same as your AWS secret key.

    ec2password="put your password here"

    # we usually pass the user data as a ZIP archive file, but we
    # can also simply pass a plain text script file.

    zipfiletype="ZIP archive"

    case "$1" in
    'start')
            # create directory for autorun scripts and data
            if [ ! -d ${ec2autorundir} ]
            then
                /usr/bin/mkdir -p ${ec2autorundir}
            else
                # if the directory already exists we do not run the script again
                echo "`date`: ec2autorun script already ran, exiting." \
                  >> ${ec2autorundir}/${ec2autorunfile}.stdout
                exit 0
            fi

            cd ${ec2autorundir}

            # get the user data file passed to the instance when created
            /usr/bin/wget --quiet \
              --output-document=${ec2autorundir}/${ec2autorunfile} \
                ${ec2userdataurl}

            # test the file size, if zero then no user data was provided
            # when instance was started
            if [ ! -s ${ec2autorundir}/${ec2autorunfile} ]
            then
                echo "User data was not passed to instance." \
                  >> ${ec2autorunfile}.stdout
                exit 0
            fi

            # if the file is a zip file, unzip it and then run the
            # script extracted from the zip file, else run the file
            # assuming it is a script
            filetype=`file ${ec2autorundir}/${ec2autorunfile} | /usr/bin/cut -f2`
            if [ "${filetype}" = "${zipfiletype}" ]
            then
                unzip -P ${ec2password} ${ec2autorunfile} \
                  >> ${ec2autorunfile}.stdout 2>>${ec2autorunfile}.stderr

                bash ./${ec2autorunscript} \
                  >> ${ec2autorunscript}.stdout 2>>${ec2autorunscript}.stderr
            else
                bash ./${ec2autorunfile} \
                  >> ${ec2autorunfile}.stdout 2>>${ec2autorunfile}.stderr
            fi

            # set the autorun directory to be read only by root
            chmod 700 ${ec2autorundir}

            ;;
    *)
            echo "Usage: $0 { start }"
            ;;
    esac
    exit 0

    Change the permissions on the startup script to be readable only by the root user and create a link in /etc/rc3.d so that the script runs at boot time.

    bash# chmod 700 /etc/init.d/ec2autorun
    bash# ln /etc/init.d/ec2autorun /etc/rc3.d/S10ec2autorun

    Further customize the instance as you desire. For example, we use the S3Sync tools to retrieve our AWS key files from an S3 bucket; you can use any method that you prefer.

    Then, build a new AMI using the instructions found in Getting Started Guide for OpenSolaris on Amazon EC2. See the section "Rebundling and uploading OpenSolaris images on Amazon EC2".

    Launching a new NFS Server

    Using the newly created AMI from the Getting Started section above, we are now ready to use the new AMI to launch an instance to run as the NFS file server. In our example, we assume the following:

    1. The AMI id of the image we created above is: ami-2bcc2c42
    2. The AMI has the EC2 API tools installed.
    3. We have previously created an EBS volume for a NFS file system and the volume id is: vol-d58774bc.
    4. You can run EC2 API commands.
    5. You have a method for securely retrieving your AWS key files.

    First, we create a script that will be passed to a new EC2 instance and then processed using the ec2autorun script that we created and installed within our new AMI as described in the Getting Started section above.

    Create the script with the following contents. We name this file: ec2autorun.script

    #!/usr/bin/bash

    ec2ipaddr=""
    ebsvolumeids="vol-d58774bc"
    nfspool="nfsfs"

    ec2autorundir="/var/ec2"
    ec2keysdir="/usr/local/aws/.ec2"
    ec2keysbucket="mykeys.ec2.keys"

    ntpdate 0.north-america.pool.ntp.org

    # set the environment for EC2
    # the file referenced below is included in our
    # user data zip file and sets the variables needed
    # to interact with EC2.
    . ${ec2autorundir}/ec2autorun.setec2

    # create the directory to hold the keys
    if ! [[ -d ${ec2keysdir} ]]
    then
        /usr/bin/mkdir -p ${ec2keysdir}
    else
        /usr/bin/rm -r ${ec2keysdir}/*
    fi

    chmod 700 ${ec2keysdir}
    builtin cd ${ec2keysdir}

    # get the ec2 keys
    for i in `s3cmd list ${ec2keysbucket}`
    do
        if ! [[ $i == "--------------------" ]]
        then
            s3cmd get ${ec2keysbucket}:$i $i
        fi
    done

    # get the ec2 instance id and instance type
    ec2instanceid=`/usr/local/bin/meta-get instance-id`
    ec2instancetype=`/usr/local/bin/meta-get instance-type`
    ec2publichostname=`/usr/local/bin/meta-get public-hostname`

    # set the starting device number for the ebs volumes
    case ${ec2instancetype} in
    'm1.small')
        ebsvolumedev=2
        ;;
    'm1.large')
        ebsvolumedev=3
        ;;
    'm1.xlarge')
        ebsvolumedev=5
        ;;
    *)
        ebsvolumedev=5
        ;;
    esac

    # attach the volumes
    for volid in ${ebsvolumeids}
    do
        ec2-attach-volume -i ${ec2instanceid} -d ${ebsvolumedev} ${volid}
        let ebsvolumedev=${ebsvolumedev}+1
    done

    if ! [[ ${ebsvolumeids} == "" ]]
    then
        # wait until all of the volumes report that they are attached
        ebsvolsattached=0
        while [[ ${ebsvolsattached} -eq 0 ]]
        do
            ebsvolsattached=1
            ebsvolumestatus=`ec2-describe-volumes | egrep ATTACHMENT | egrep ${ec2instanceid} | cut -f5`   
            for volstatus in ${ebsvolumestatus}
            do
                echo "Vol Status is: ${volstatus}"
                if ! [[ ${volstatus} == "attached" ]]
                then
                    ebsvolsattached=0
                fi
            done
            sleep 1
        done
    fi

    # create a ZFS pool for our NFS file system
    /usr/sbin/zpool create ${nfspool} c7d2

    # create two ZFS file systems that we will share
    /usr/sbin/zfs create ${nfspool}/home
    /usr/sbin/zfs create ${nfspool}/share

    /usr/sbin/zfs set sharenfs=on ${nfspool}/home
    /usr/sbin/zfs set sharenfs=on ${nfspool}/share

    # associate the ip address
    if ! [[ ${ec2ipaddr} == "" ]]
    then
        ec2-associate-address -i ${ec2instanceid} ${ec2ipaddr}

        # wait until the ip address is assigned
        ec2addrassigned=0
        while [[ ${ec2addrassigned} -eq 0 ]]
        do
            ec2addrassigned=1
            ec2ipaddrcurrent=`/usr/local/bin/meta-get public-ipv4`
            echo "IP address current: ${ec2ipaddrcurrent}"
            if ! [[ ${ec2ipaddrcurrent} == ${ec2ipaddr} ]]
            then
                ec2addrassigned=0
            fi
            sleep 5
        done
    fi

    # start the services for this instance
    svcadm enable -r svc:/network/nfs/server:default


    Create a zip file which includes the script file created above, and any other files that you want to pass to the new instance. Since all users have access to the user data passed to an instance, we encrypt the zip file with a password.

    Note: make sure you use the exact same password that you specified in the
    /etc/init.d/ec2autorun file created in the Getting Started section.

    bash# zip -P "put your password here" ec2autorun.zip ec2autorun.script \
    ec2autorun.setec2


    Now we are ready to launch a new instance, passing the zip file as user data on the command line.

    bash# ec2-run-instances -k <my-key-pair> -g <my-group> -z us-east-1b \
      -f ec2autorun.zip -t m1.small
    ami-2bcc2c42

    RESERVATION     r-86d3c4ef      578281684738    my-group
    INSTANCE        i-ffcb1997      ami-2bcc2c42                    pending my-key-pair 0               m1.small        2009-09-26T04:57:58+0000        us-east-1b      aki-1783627e    ari-9d6889f4            monitoring-disabled

    The last thing we need to do is take note of the EC2 private host name of the new NFS server instance, since we will use it to launch our NFS clients. Once the NFS server startup has completed, we can get the private host name as follows. Note the instance id displayed when the instance was launched above.

    bash# ec2-describe-instances | grep i-3f994856 | cut -f2,5
    i-3f994856      ip-10-251-203-4.ec2.internal

    Launching a new NFS Client

    Once the NFS server is up and running we can launch any number of NFS clients. The script to launch the NFS client is much simpler since all we need to do is launch the instance and then mount the file systems from the NFS server. We use exactly the same steps that we used for the NFS server, just a different script to launch at boot time.

    Create a script with the following contents. We name this file: ec2autorun.script

    Note that in this script we include the EC2 private host name of the NFS server.

    #!/usr/bin/bash

    # client system setup

    # NFS server private host name
    nfsserver="ip-10-250-19-166.ec2.internal"

    # the name of the ZFS pool on the NFS server
    nfspool="nfsfs"

    # start the services for this instance
    svcadm enable -r svc:/network/nfs/client:default

    # mount the NFS file systems from the server

    mkdir -p /${nfspool}/home
    mkdir -p /${nfspool}/share

    mount -F nfs ${nfsserver}:/${nfspool}/home  /${nfspool}/home
    mount -F nfs ${nfsserver}:/${nfspool}/share /${nfspool}/share

    # add to vfstab file
    echo "${nfsserver}:/${nfspool}/home  - /${nfspool}/home nfs  - yes rw,xattr"   >> /etc/vfstab
    echo "${nfsserver}:/${nfspool}/share - /${nfspool}/share nfs - yes rw,xattr"   >> /etc/vfstab


    We zip this file and launch a new instance, same as above, except this time we launch with the NFS client version of the script.

    Summary

    As you can see from this simple example, EC2 Parameterized Launches provide a very flexible and powerful technique for launching custom instances from a single AMI. In combination with S3 there are many other uses, for example: patching systems, adding users and groups, and running specific services at launch time.
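
    For instance, a payload script used purely for system customization might look like the following sketch; the group, user, and service names are only illustrative:

    #!/usr/bin/bash

    # illustrative payload: add a group and a user at launch time
    /usr/sbin/groupadd builders
    /usr/sbin/useradd -g builders -d /export/home/build -m -s /usr/bin/bash build

    # enable an additional service on this instance
    svcadm enable svc:/network/ssh:default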

    All output from the scripts is captured and can be reviewed for troubleshooting purposes. Log in to one of your new instances and review the following.

    bash# cd /var/ec2
    bash# ls

    # the zip file that was passed on the ec2-run-instances
    # command line
    file ec2autorun.input

    # stderr from the /etc/init.d/ec2autorun script
    cat ec2autorun.input.stderr

    # stdout from the /etc/init.d/ec2autorun script
    cat ec2autorun.input.stdout

    # the script that was bundled in the zip file,
    # called by /etc/init.d/ec2autorun
    cat ec2autorun.script

    # stderr from ec2autorun.script
    cat ec2autorun.script.stderr

    # stdout from ec2autorun.script
    cat ec2autorun.script.stdout

    Monday Sep 21, 2009

    Running OpenSolaris and Zones in the Amazon Cloud - Part 2

    Introduction

    In Part 1 of this series of tutorials on OpenSolaris and Zones in AWS we described a method for creating zones within an EC2 instance running OpenSolaris.

    This is Part 2 of the series, where we will describe a method for backing up the zones using ZFS snapshots, sending a copy to a secondary EBS volume, and then performing an EBS snapshot of the secondary volume. We will provide an example of how to recover zones from our ZFS snapshots as well as how to recover from the secondary EBS volume if for some reason our primary EBS volume fails.

    In Part 3 we will explain how to save a fully configured environment using an AMI and EBS snapshots, which can then be cloned and brought up and running in minutes.

    Disclaimer: while the procedures described in this document have been tested, it is the responsibility of the reader to verify that these procedures work in their environment.

    Prerequisites

    • Basic understanding of AWS EC2 including: managing EBS volumes and snapshots.

    • Basic understanding of OpenSolaris including ZFS.
    • Zones up and running as described in Part 1 of this tutorial.

    Example EC2 environment

    As described in Part 1, I created three EBS volumes, one for shared software, one for zones storage, and another one for zones backup. The EC2 environment is displayed below.

    AWS EC2 Environment


    Our goal is to perform the following steps:

    • Create ZFS snapshots for zone1 and zone2.

    • Send these snapshots to our backup EBS volume.

    • Create an EBS snapshot of our backup EBS volume.

    I create ZFS snapshots on an hourly basis, and then create my EBS snapshot once a day. Pick the schedule that works for you, and make sure that you test the restore process!

    Before we run our first backup, my ZFS environment for zone storage and zone backup is shown below.

    root:~# zfs list -r zones
    NAME                   USED  AVAIL  REFER  MOUNTPOINT
    zones                 4.55G  3.26G    22K  /zones
    zones/zone1           3.91G  3.26G    22K  /zones/zone1
    zones/zone1/ROOT      3.91G  3.26G    19K  legacy
    zones/zone1/ROOT/zbe  3.91G  3.26G  3.90G  legacy
    zones/zone2            658M  3.26G    22K  /zones/zone2
    zones/zone2/ROOT       658M  3.26G    19K  legacy
    zones/zone2/ROOT/zbe   658M  3.26G   650M  legacy
    root:~#
    root:~# zfs list -r zones-backup
    NAME           USED  AVAIL  REFER  MOUNTPOINT
    zones-backup    70K  15.6G    19K  /zones-backup
    root:~#

    Backup Operations

    I wrote a simple Perl script to create ZFS snapshots and send them to a backup ZFS pool. You can get a copy of the script at: zsnap-backup.pl. Please review the script before using it to ensure it is suitable for your environment.

    The script takes the following arguments.

    root:~/bin# ./zsnap-backup -h

    usage:

        zsnap-backup [-q] -t full|inc|list -f file_system
          -s source_pool -d dest_pool

          -q : run in quiet mode
          -t : type of backup to run:
                 full : send complete snapshot to destination pool
                 inc  : send incremental snapshot to destination pool
                 list : list snapshots on source and destination pools
          -f : name of ZFS file system to backup
          -s : name of the source ZFS pool
          -d : name of the backup ZFS pool

    Before I run my first backup, let's run the zsnap-backup script with the list option.

    root:~/bin# ./zsnap-backup -t list -s zones -d zones-backup -f zone1
    cannot open 'zones-backup/zone1': dataset does not exist
    cannot open 'zones-backup/zone1': dataset does not exist
    Source File System List
    =========================
    Name     : zones/zone1
    Last Snap:
    Name     : zones/zone1/ROOT
    Last Snap:
    Name     : zones/zone1/ROOT/zbe
    Last Snap:
    =========================
    Dest File System List
    =========================
    =========================
    Source Snapshot List
    =========================
    =========================
    Destination Snapshot List
    =========================
    =========================
    root:~/bin#

    The messages "dataset does not exist" are telling me that there is no zones dataset on the destination pool yet. This will be created once the first full backup is run.

    The first backup that I perform needs to be a "full" backup, i.e. one that sends the complete ZFS snapshots to the zones-backup pool. In the example below, we back up our zone1 and zone2 file systems.

    root:~/bin# ./zsnap-backup -t full -s zones -d zones-backup -f zone1
    cannot open 'zones-backup/zone1': dataset does not exist
    cannot open 'zones-backup/zone1': dataset does not exist
    Sending: zone1@20090921-230804
    root:~/bin#
    root:~/bin# ./zsnap-backup -t full -s zones -d zones-backup -f zone2
    cannot open 'zones-backup/zone2': dataset does not exist
    cannot open 'zones-backup/zone2': dataset does not exist
    Sending: zone2@20090921-230848
    root:~/bin#

    The time to complete the full backup depends on the size of the dataset. It could take a while, so be patient. Let's list our backups again with the list option.

    root:~/bin# ./zsnap-backup -t list -s zones -d zones-backup -f zone1
    Source File System List
    =========================
    Name     : zones/zone1
    Last Snap: zones/zone1@20090921-230804
    Name     : zones/zone1/ROOT
    Last Snap: zones/zone1/ROOT@20090921-230804
    Name     : zones/zone1/ROOT/zbe
    Last Snap: zones/zone1/ROOT/zbe@20090921-230804
    =========================
    Dest File System List
    =========================
    Name     : zones-backup/zone1
    Last Snap: zones-backup/zone1@20090921-230804
    Name     : zones-backup/zone1/ROOT
    Last Snap: zones-backup/zone1/ROOT@20090921-230804
    Name     : zones-backup/zone1/ROOT/zbe
    Last Snap: zones-backup/zone1/ROOT/zbe@20090921-230804
    =========================
    Source Snapshot List
    =========================
    zones/zone1@20090921-230804
    zones/zone1/ROOT@20090921-230804
    zones/zone1/ROOT/zbe@20090921-230804
    =========================
    Destination Snapshot List
    =========================
    zones-backup/zone1@20090921-230804
    zones-backup/zone1/ROOT@20090921-230804
    zones-backup/zone1/ROOT/zbe@20090921-230804
    =========================
    root:~/bin#

    We see from the example above that we have created snapshots on the source pool zones and sent them to the destination pool zones-backup.

    Now we will perform an incremental backup. Take a look at the script and you will see that we use the -R and -I options with the zfs send command. This will cause an incremental replication stream to be generated.
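
    Under the hood, an incremental replication of this kind boils down to a zfs send piped into zfs receive. Below is a rough sketch of what that looks like for zone1, sending the changes between the full-backup snapshot taken above and a newer snapshot; the script automates the snapshot creation and this bookkeeping:

    # send everything between the earlier and the newer zone1 snapshot,
    # preserving descendant file systems (-R) and intermediate snapshots (-I),
    # and receive it into the backup pool
    zfs send -R -I zones/zone1@20090921-230804 zones/zone1@<new-snapshot> | \
        zfs receive -d -F zones-backup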

    root:~/bin# ./zsnap-backup -t inc -s zones -d zones-backup -f zone1
    Sending: zone1@20090921-231140
    root:~/bin#
    root:~/bin# ./zsnap-backup -t inc -s zones -d zones-backup -f zone2
    Sending: zone2@20090921-231158

    The incremental backup should complete in seconds. Use the list option for each file system to see the results. You can now schedule the incremental backups to run on a regular basis using cron if desired.
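
    If you do schedule the backups with cron, the entries can be as simple as the sketch below; the schedule and the path to zsnap-backup are illustrative, so adjust them to your environment:

    # root crontab: hourly incremental backups of both zones (illustrative)
    0 * * * * /root/bin/zsnap-backup -q -t inc -s zones -d zones-backup -f zone1
    5 * * * * /root/bin/zsnap-backup -q -t inc -s zones -d zones-backup -f zone2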

    At this point we have created ZFS snapshots of our zones and sent a copy to our backup pool which resides on a separate EBS volume.

    In the next step, we will create an EBS snapshot of the volume being used to store our zones-backup pool. You could use the AWS Console, the AWS Manager console, Elasticfox, etc. to perform the EBS snapshot step. In our example below, we provide the command line version. This assumes that you have the EC2 API tools installed in your environment and that you know the volume id where your zones-backup pool is located.

    In our example, the EBS volume id is: vol-f546b69c

    I have found that it is best to export the ZFS pool before creating the EBS snapshot. Remember to import it again after the EBS snapshot command has run. If you have your incremental backups running in cron, schedule the EBS snapshot to occur at a time when you know that the zsnap-backup command will not be running.

    root:~/bin# zpool export zones-backup
    root:~/bin# ec2-create-snapshot vol-f546b69c
    SNAPSHOT        snap-74bd0b1d   vol-f546b69c    pending 2009-09-22T06:14:52+0000
    root:~/bin#
    root:~/bin# zpool import zones-backup
    root:~/bin# ec2-describe-snapshots | egrep snap-74bd0b1d
    SNAPSHOT   snap-74bd0b1d   vol-f546b69c  completed  2009-09-22T06:14:52+0000  100%
    root:~/bin#
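
    If you also want this daily EBS snapshot step driven by cron, a minimal wrapper around the commands just shown might look like the sketch below; the volume id is the one from our example, and it should be scheduled at a time when zsnap-backup is idle:

    #!/usr/bin/bash

    # export the backup pool, snapshot its EBS volume, then re-import it
    backuppool="zones-backup"
    backupvolid="vol-f546b69c"

    zpool export ${backuppool}
    ec2-create-snapshot ${backupvolid}
    zpool import ${backuppool}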

    At this point we have completed the following:

    • Created ZFS snapshots of our zones

    • Sent a copy of the snapshots to a backup ZFS pool

    • Created an EBS snapshot of the volume which contains our backup ZFS pool

    Restore Operations

    In general, there are three scenarios that we want to recover from:

    • User deleted a file or a handful of files in the zone
    • User did something catastrophic such as "rm -r *" while in the /usr directory
    • The EBS volume containing the zones ZFS pool crashed

    In the examples included below, we explain how to recover from each of these scenarios. Please see Working With ZFS Snapshots for more information.

    Copying Individual Files From a Snapshot

    It is possible to copy individual files from a snapshot by changing into the hidden .zfs directory of the file system that has been snapshotted. In our example, we created a couple of test files and then took a ZFS snapshot. We then deleted these files by accident and want to recover them. From the global zone we can restore the deleted files.

    For example, we deleted two files, test1 and test2, by accident from zone1. We can then restore from our ZFS snapshot as shown below.

    zone1

    root@zone1:/var/tmp# ls -l test*
    test*: No such file or directory
    root@zone1:/var/tmp#

    global zone

    root:~# cd /zones/zone1/root/.zfs/snapshot/20090921-232400/var/tmp
    root:~# ls test*
    test1  test2
    root:~# cp test* /zones/zone1/root/var/tmp

    zone1

    root@zone1:~# cd /var/tmp
    root@zone1:/var/tmp# ls test*
    test1  test2
    root@zone1:/var/tmp#

    Rolling Back to a Previous Snapshot

    We can use rollback to recover an entire file system. In our example, we accidentally remove the entire /usr directory. To recover, we will halt the zone, perform the rollback, and then boot the zone.

    Make sure you know what you are doing and have a snapshot to roll back to before performing the following. You also need to know the name of the snapshot that you want to roll back to. We will roll back to the latest snapshot for each zone1 file system. We also perform the rollback on our zones-backup pool to ensure the two pools stay synchronized.

    zone1

    root@zone1:/usr# cd /usr
    root@zone1:/usr# rm -rf *
    root@:/usr# ls
    -bash: /usr/bin/ls: No such file or directory
    -bash: /usr/bin/hostname: No such file or directory

    global zone

    root:~# zoneadm -z zone1 halt
    root:~#
    root:~/bin# zfs rollback zones/zone1@20090921-232400
    root:~/bin# zfs rollback zones/zone1/ROOT@20090921-232400
    root:~/bin# zfs rollback zones/zone1/ROOT/zbe@20090921-232400
    root:~/bin#
    root:~/bin# zfs rollback zones-backup/zone1@20090921-232400
    root:~/bin# zfs rollback zones-backup/zone1/ROOT@20090921-232400
    root:~/bin# zfs rollback zones-backup/zone1/ROOT/zbe@20090921-232400
    root:~/bin#
    root:~/bin# zoneadm -z zone1 boot

    zone1

    root@zone1:~# cd /usr
    root@zone1:/usr# ls
    X         dict      kernel    man       perl5     sadm      spool
    adm       games     kvm       net       platform  sbin      src
    bin       gnu       lib       news      preserve  sfw       tmp
    ccs       has       local     old       proc      share     xpg4
    demo      include   mail      opt       pub       snadm
    root@zone1:/usr#

    Recovering from an EBS Volume Failure

    Next, we provide an example of how to recover from a failure of the EBS volume where the zones data is stored.

    It is important that prior to performing these steps you know what you are doing and have performed at least a full backup of the zones file systems.

    In our example scenario, the EBS volume for the zones ZFS pool has been corrupted somehow. We will perform the following steps to recover from our zones-backup pool.

    • Halt the zones

    • Export the zones pool

    • Export the zones-backup pool

    • Import the zones-backup pool renaming it to "zones"

    global zone

    root:~# zoneadm -z zone1 halt
    root:~# zoneadm -z zone2 halt
    root:~#
    root:~# zpool export zones
    root:~# zpool export zones-backup
    root:~#
    root:~# zpool import zones-backup zones
    root:~#
    root:~# zoneadm -z zone1 boot
    root:~# zoneadm -z zone2 boot

    Effectively, what we have done is boot our zones from the zones-backup pool after importing zones-backup as zones. To get our backup process going again, we can delete the old zones EBS volume, create a new EBS volume to replace zones-backup, and create a new zones-backup pool on this new volume, as sketched below.
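
    A rough outline of those re-provisioning steps from the command line is shown below; the volume ids, size, availability zone, device number, and device name are placeholders for illustration, so check the ids and the device your instance actually sees:

    # remove the failed zones volume and provision a replacement backup volume
    ec2-delete-volume <failed-zones-volume-id>
    ec2-create-volume -s 16 -z <availability-zone>
    ec2-attach-volume -i <instance-id> -d 4 <new-volume-id>

    # build a fresh backup pool on the new device (device name will vary)
    zpool create zones-backup c7d3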

    We can then perform a full backup as described above and re-start our regular incremental backup schedule.

    References

    Friday Sep 18, 2009

    Labeling items in EC2 - Yet another AWS console

    Over a year ago, prior to the release of Elasticfox and the AWS Console from Amazon, I created yet another AWS management interface.

    At that time I was participating in early access testing of Solaris and Elastic Block Store (EBS). I was tired of typing commands on the command line to start and stop volumes, create snapshots, attach volumes to instances, etc.

    I also needed a way to label my EC2 items, sort, filter, run batch snapshots, create more than one volume with a single click, etc.


    So, now that the AWS Console is available, why am I bothering to write this blog entry?

    The answer is that even though I have not updated the software in a while, my colleagues and I still use this console, as the AWS Console is still missing some basic features such as labeling.

    If you are interested, you can find more information at awsmanager.com or connect directly at https://awsmanager.com. I have made the software available if you want to install it on one of your own systems. It has been tested with the Apache and GlassFish application servers.

    Sunday Sep 06, 2009

    Running OpenSolaris and Zones in the Amazon Cloud - Part 1

    Introduction

    Now that OpenSolaris 2009.06 is available on Amazon EC2, I have been interested in setting up zones within an OpenSolaris EC2 instance utilizing the virtual networking features provided by Crossbow.

    In this tutorial I will provide a step-by-step guide describing how to get this environment up and running. We use Crossbow together with NAT to build a complete virtual network connecting multiple Zones within a Solaris EC2 host.

    Much of the networking information used in this tutorial is taken directly from the excellent article Private virtual networks for Solaris xVM and Zones with Crossbow by Nicolas Droux.

    This is Part 1 of this tutorial series. In Part 2 we will explain how to use ZFS and EBS snapshots to back up the zones. In Part 3 we will explain how to save a fully configured environment using an AMI and EBS snapshots, which can then be cloned and brought up and running in minutes.

    Prerequisites

    • Basic understanding of AWS EC2 including: managing AMIs, launching instances, managing EBS volumes and snapshots, and firewall management.

    • Basic understanding of OpenSolaris including system setup, networking, and zone management.

    Building the EC2 environment

    For this tutorial, I used the OpenSolaris 2009.06 AMI ami-e56e8f8c. I also created three EBS volumes, one for shared software, one for zones storage, and another one for zones backup. In Part 2 of this tutorial, I will explain the use of ZFS snapshots and EBS snapshots for the purposes of backing up the zones. The EC2 environment is displayed below.

    AWS EC2 Environment


    A summary of the steps is as follows, with a command-line sketch after the list:

    • Create the EBS volumes and attach them to the instance.

    • Create ZFS pools, one for the shared software, one for zones, and one for zones backup.
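
    The sketch below shows roughly what these two steps look like; the sizes, availability zone, device numbers, and device names (c7d1, c7d2, c7d3) are examples and will differ on your instance:

    # create three EBS volumes and attach them to the running instance
    ec2-create-volume -s 8 -z us-east-1b
    ec2-create-volume -s 8 -z us-east-1b
    ec2-create-volume -s 16 -z us-east-1b
    ec2-attach-volume -i <instance-id> -d 2 <sharedsw-volume-id>
    ec2-attach-volume -i <instance-id> -d 3 <zones-volume-id>
    ec2-attach-volume -i <instance-id> -d 4 <zones-backup-volume-id>

    # create one ZFS pool per attached volume (device names will vary)
    zpool create sharedsw c7d1
    zpool create zones c7d2
    zpool create zones-backup c7d3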

    When finished, I have an OpenSolaris 2009.06 EC2 instance running with three ZFS filesystems on top of three EBS volumes, as shown below.

    root:~# zfs list -r sharedsw
    NAME              USED  AVAIL  REFER  MOUNTPOINT
    sharedsw/opt     3.41G  4.12G  3.41G  /opt

    root:~# zfs list -r zones
    NAME    USED  AVAIL  REFER  MOUNTPOINT
    zones    70K  7.81G    19K  /zones

    root:~# zfs list -r zones-backup
    NAME           USED  AVAIL  REFER  MOUNTPOINT
    zones-backup    70K  7.81G    19K  /zones-backup

    Building the private network

    The next task is to create the virtual network as shown in the diagram below.

    Private Network


    We create an etherstub and three VNICs for our virtual network.

    root:~# dladm show-phys
    LINK         MEDIA                STATE      SPEED  DUPLEX    DEVICE
    xnf0         Ethernet             up         1000   full      xnf0
    root:~#
    root:~# dladm create-etherstub etherstub0
    root:~# dladm create-vnic -l etherstub0 vnic0
    root:~# dladm create-vnic -l etherstub0 vnic1
    root:~# dladm create-vnic -l etherstub0 vnic2
    root:~#
    root:~# dladm show-etherstub
    LINK
    etherstub0
    root:~# dladm show-vnic
    LINK         OVER         SPEED  MACADDRESS           MACADDRTYPE         VID
    vnic0        etherstub0   0      2:8:20:20:10:b8      random              0
    vnic1        etherstub0   0      2:8:20:c2:70:f6      random              0
    vnic2        etherstub0   0      2:8:20:15:35:ca      random              0

    Assign a static IP address to vnic0 in the global zone:

    root:~# ifconfig vnic0 plumb
    root:~# ifconfig vnic0 inet 192.168.0.1 up
    root:~# ifconfig vnic0
    vnic0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 9000 index 3
            inet 192.168.0.1 netmask ffffff00 broadcast 192.168.0.255
            ether 2:8:20:20:10:b8

    Note that the usual configuration variables (e.g. /etc/hostname.<interface>) must be populated for the configuration to persist across reboots. We must also enable IPv4 forwarding on the global zone. Run routeadm(1M) to display the current configuration, and if "IPv4 forwarding" is disabled, enable it with the following command:

    root:~# routeadm -u -e ipv4-forwarding
    root:~# routeadm
                  Configuration   Current              Current
                         Option   Configuration        System State
    ---------------------------------------------------------------
                   IPv4 routing   disabled             disabled
                   IPv6 routing   disabled             disabled
                IPv4 forwarding   enabled              enabled
                IPv6 forwarding   disabled             disabled

               Routing services   "route:default ripng:default"

    Routing daemons:

                          STATE   FMRI
                       disabled   svc:/network/routing/route:default
                       disabled   svc:/network/routing/rdisc:default
                         online   svc:/network/routing/ndp:default
                       disabled   svc:/network/routing/legacy-routing:ipv4
                       disabled   svc:/network/routing/legacy-routing:ipv6
                       disabled   svc:/network/routing/ripng:default
    root:~#
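
    As noted above, the vnic0 address does not survive a reboot unless the usual configuration files are populated. Below is a minimal sketch, assuming the standard /etc/hostname.<interface> convention and the default (non-NWAM) network/physical service; the dladm etherstub and VNIC configuration created earlier is persistent by default:

    # persist the vnic0 address across reboots
    root:~# echo "192.168.0.1 netmask 255.255.255.0 up" > /etc/hostname.vnic0

    The -e option to routeadm already records ipv4-forwarding in the persistent configuration, and -u applies it to the running system, so no extra step is needed for forwarding.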

    Next, we enable NAT on the xnf0 interface. We also want to be able to connect to the zones from the public internet, so we enable port forwarding. In EC2, make sure you open these ports within the EC2 firewall, following best practices; a command sketch follows the ipnat listing below.

    root:~# cat /etc/ipf/ipnat.conf
    map xnf0 192.168.0.0/24 -> 0/32 portmap tcp/udp auto
    map xnf0 192.168.0.0/24 -> 0/32

    rdr xnf0 0.0.0.0/0 port 22101  -> 192.168.0.101 port 22
    rdr xnf0 0.0.0.0/0 port 22102  -> 192.168.0.102 port 22
    rdr xnf0 0.0.0.0/0 port 8081   -> 192.168.0.101 port 80
    rdr xnf0 0.0.0.0/0 port 8082   -> 192.168.0.102 port 80
    rdr xnf0 0.0.0.0/0 port 40443  -> 192.168.0.102 port 443

    root:~# svcadm enable network/ipfilter
    root:~# ipnat -l
    List of active MAP/Redirect filters:
    map xnf0 192.168.0.0/24 -> 0.0.0.0/32 portmap tcp/udp auto
    map xnf0 192.168.0.0/24 -> 0.0.0.0/32
    rdr xnf0 0.0.0.0/0 port 22101 -> 192.168.0.101 port 22 tcp
    rdr xnf0 0.0.0.0/0 port 22102 -> 192.168.0.102 port 22 tcp
    rdr xnf0 0.0.0.0/0 port 8081 -> 192.168.0.101 port 80 tcp
    rdr xnf0 0.0.0.0/0 port 8082 -> 192.168.0.102 port 80 tcp
    rdr xnf0 0.0.0.0/0 port 40443 -> 192.168.0.102 port 443 tcp

    List of active sessions:
    root:~#
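
    As mentioned above, the redirected ports also need to be opened in the EC2 firewall. Below is a sketch using the EC2 API tools, assuming your security group is named <my-group>; narrow the -s source range in line with your own security practices:

    bash# ec2-authorize <my-group> -P tcp -p 22101 -s 0.0.0.0/0
    bash# ec2-authorize <my-group> -P tcp -p 22102 -s 0.0.0.0/0
    bash# ec2-authorize <my-group> -P tcp -p 8081 -s 0.0.0.0/0
    bash# ec2-authorize <my-group> -P tcp -p 8082 -s 0.0.0.0/0
    bash# ec2-authorize <my-group> -P tcp -p 40443 -s 0.0.0.0/0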

    Creating the zones

    Create and install zone1

    root:~# zonecfg -z zone1
    zone1: No such zone configured
    Use 'create' to begin configuring a new zone.
    zonecfg:zone1> create
    zonecfg:zone1> set zonepath=/zones/zone1
    zonecfg:zone1> set ip-type=exclusive
    zonecfg:zone1> add net
    zonecfg:zone1:net> set physical=vnic1
    zonecfg:zone1:net> end
    zonecfg:zone1> add fs
    zonecfg:zone1:fs> set dir=/opt
    zonecfg:zone1:fs> set special=/opt
    zonecfg:zone1:fs> set type=lofs
    zonecfg:zone1:fs> add options ro
    zonecfg:zone1:fs> end
    zonecfg:zone1> verify
    zonecfg:zone1> commit
    zonecfg:zone1> exit
    root:~#
    root:~# zoneadm -z zone1 install
    A ZFS file system has been created for this zone.
    Publisher: Using opensolaris.org (http://pkg.opensolaris.org/release/).
           Image: Preparing at /zones/zone1/root.
           Cache: Using /var/pkg/download.
    Sanity Check: Looking for 'entire' incorporation.
      Installing: Core System (output follows)
     Postinstall: Copying SMF seed repository ... done.
     Postinstall: Applying workarounds.
            Done: Installation completed in 428.065 seconds.

      Next Steps: Boot the zone, then log into the zone console
                 (zlogin -C) to complete the configuration process

    root:~#

    Create and install zone2

    root:~#
    root:~# zonecfg -z zone2
    zone2: No such zone configured
    Use 'create' to begin configuring a new zone.
    zonecfg:zone2> create
    zonecfg:zone2> set zonepath=/zones/zone2
    zonecfg:zone2> set ip-type=exclusive
    zonecfg:zone2> add net
    zonecfg:zone2:net> set physical=vnic2
    zonecfg:zone2:net> end
    zonecfg:zone2> add fs
    zonecfg:zone2:fs> set dir=/opt
    zonecfg:zone2:fs> set special=/opt
    zonecfg:zone2:fs> set type=lofs
    zonecfg:zone2:fs> add options ro
    zonecfg:zone2:fs> end
    zonecfg:zone2> verify
    zonecfg:zone2> commit
    zonecfg:zone2> exit
    root:~#
    root:~# zoneadm -z zone2 install
    A ZFS file system has been created for this zone.
      Publisher: Using opensolaris.org (http://pkg.opensolaris.org/release/).
           Image: Preparing at /zones/zone2/root.
           Cache: Using /var/pkg/download.
    Sanity Check: Looking for 'entire' incorporation.
      Installing: Core System (output follows)
     Postinstall: Copying SMF seed repository ... done.
     Postinstall: Applying workarounds.
            Done: Installation completed in 125.975 seconds.

      Next Steps: Boot the zone, then log into the zone console
                 (zlogin -C) to complete the configuration process
    root:~#

    Zone configuration

    Now that the zones are installed, we are ready to boot each zone and perform system configuration. First we boot each zone.

    root:~# zoneadm -z zone1 boot
    root:~# zoneadm -z zone2 boot

    The next step is to connect to the console for each zone and perform system configuration. The configuration params that I used are listed below. Connect to the console with the command "zlogin -C zone_name", for example: zlogin -C zone1

    zone1
    =====
    Host name for vnic1        : zone1
    IP address for vnic1       : 192.168.0.101
    System part of a subnet    : Yes
    Netmask for vnic1          : 255.255.255.0
    Enable IPv6 for vnic1      : No
    Default Route for vnic1    : Specify one
    Router IP Address for vnic1: 192.168.0.1
    Name service               : DNS
    DNS Domain name            : compute-1.internal
    DNS Server's IP address    : 172.16.0.23
    NFSv4 Domain Name          : << Value to be derived dynamically >>

    zone2
    =====
    Host name for vnic2        : zone2
    IP address for vnic2       : 192.168.0.102
    System part of a subnet    : Yes
    Netmask for vnic2          : 255.255.255.0
    Enable IPv6 for vnic2      : No
    Default Route for vnic2    : Specify one
    Router IP Address for vnic2: 192.168.0.1
    Name service               : DNS
    DNS Domain name            : compute-1.internal
    DNS Server's IP address    : 172.16.0.23
    NFSv4 Domain Name          : << Value to be derived dynamically >>

    Test connection to the zones

    Once the zones are running and have been configured, we should be able to connect to the zones from the "outside". This connection test depends on having the EC2 firewall correctly set up. In the example below, we connect to zone1 via port 22101.

    -bash-3.2$ ssh ec2-xxx-xxx-xxx-xxx.compute-1.amazonaws.com -p 22101

    login as: username
    Using keyboard-interactive authentication.
    Password:
    Last login: Sun Sep  6 22:15:38 2009 from c-24-7-37-94.hs
    Sun Microsystems Inc.   SunOS 5.11      snv_111b        November 2008

    -bash-3.2$ hostname
    zone1
    -bash-3.2$

    References

    About

    Sean ODell
