Friday Sep 25, 2009

OpenSolaris and EC2 Parameterized Launches Example

Building on the excellent writeup, Using Parameterized Launches to Customize Your AMIs, in this blog entry we provide an example tailored for OpenSolaris. This is an updated version of the entry which originally appeared on the Sun EC2 blog. We will leverage this entry in Part 3 of our series on OpenSolaris Zones and EC2.


It is possible to make "user data" available to an EC2 instance, specified when the instance is launched from the command line. From the Amazon Elastic Compute Cloud Developer Guide, we find the following options available for the ec2-run-instances command:

-d user_data
Data to make available to the instances. This data is read from the command line of the USER_DATA argument. If you want the data to be read from a file, see the -f option.
Example: -d "my user data"

-f user_data_file
Data to make available to these instances. The data is read from the file specified by FILE_NAME. To specify user data on the command line, use the -d option.
Example: -f

Passing data to the instance at launch time is very useful as a mechanism for customizing instances without having to create a new AMI image for each type of instance.

For example, using this technique we can use the same AMI to run an NFS file server and an NFS client. The only difference is that for the NFS file server we share file systems and start NFS services, and for the NFS client we mount the NFS file systems from the server. We will use this NFS scenario for our example.
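
To make the server/client idea concrete, here is a hypothetical sketch (not from the original article) of how a single boot script could branch on a role passed in the user data. The role= convention and file names are invented for illustration, and the user data is simulated with a temporary file rather than the metadata service:

```shell
#!/bin/sh
# simulate the user data that would be fetched at boot
# (file name and role= convention are illustrative only)
userdata=/tmp/ec2-user-data.$$
echo "role=nfs-server" > "$userdata"

# extract the role from the user data
role=`sed -n 's/^role=//p' "$userdata"`

case "$role" in
nfs-server) action="share file systems, enable svc:/network/nfs/server" ;;
nfs-client) action="mount file systems, enable svc:/network/nfs/client" ;;
*)          action="no role supplied, leave instance unconfigured" ;;
esac
echo "$action"
rm -f "$userdata"
```

With this pattern one AMI can play either role; only the launch-time user data differs.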

Getting Started

The first step is to use one of the Sun-provided OpenSolaris 2009.06 AMIs to create a custom AMI with the addition of a script that runs at startup. Or, if you have already bundled a customized OpenSolaris AMI, you can add the script to one of your own.

The following is a high level summary of what we will cover:

  1. Start a new instance, either from a Sun provided AMI or one of your own.
  2. Add a script that runs when the instance starts for the first time.
  3. Create a new AMI which includes the script that runs at startup.
  4. Demonstrate how to launch an NFS server instance and an NFS client instance, using the AMI created in step 3.

We assume that the reader has a basic understanding of EC2 and OpenSolaris. We also use an EBS volume to provide the file system that will be served by the NFS server.

Launch an instance of the OpenSolaris AMI of your choice and login to the new instance.

bash# ec2-run-instances -k <my-keypair> -g <my-group> -z us-east-1b \
   -t m1.small ami-e56e8f8c

  RESERVATION     r-868289ef      578281684738               <my-group>
  INSTANCE        i-5736e13f      ami-e56e8f8c    pending    <my-keypair> 0
                  m1.small        2009-09-23T06:48:50+0000   us-east-1b
                  aki-1783627e    ari-9d6889f4               monitoring-disabled

bash# ec2-describe-instances | grep i-5736e13f | cut -f2,3,4,6
  i-5736e13f   ami-e56e8f8c  running

ssh -i <my-key-pair-file> root@<instance-public-hostname>
  login as: root
  Authenticating with public key "imported-openssh-key"
  Last login: Wed Sep 23 07:02:45 2009 from somewhere
  Sun Microsystems Inc.   SunOS 5.11      snv_111b        November 2008

Create the script to run at boot time with the following contents. We name this file: /etc/init.d/ec2autorun



#!/sbin/sh

# directory and file names for the autorun input and its logs
# (these match the files reviewed under /var/ec2 in the
# troubleshooting section below)
ec2autorundir="/var/ec2"
ec2autorunfile="ec2autorun.input"
ec2autorunscript="ec2autorun.script"

# we password protect the user data file since any user can retrieve it.
# we need to remember the password as we will need it when
# creating the user data zip file. This password can be anything
# and is not the same as your AWS secret key.

ec2password="put your password here"

# we usually pass the user data as a ZIP archive file, but we
# can also simply pass a plain text script file.

zipfiletype="ZIP archive"

case "$1" in
'start')
        # create directory for autorun scripts and data
        if [ ! -d ${ec2autorundir} ]; then
            /usr/bin/mkdir -p ${ec2autorundir}
        else
            # if the directory already exists we do not run the script again
            echo "`date`: ec2autorun script already ran, exiting." \
              >> ${ec2autorundir}/${ec2autorunfile}.stdout
            exit 0
        fi

        cd ${ec2autorundir}

        # get the user data file passed to the instance when created
        # (the standard EC2 instance user data URL)
        /usr/bin/wget --quiet \
          --output-document=${ec2autorundir}/${ec2autorunfile} \
          http://169.254.169.254/latest/user-data

        # test the file size, if zero then no user data was provided
        # when the instance was started
        if [ ! -s ${ec2autorundir}/${ec2autorunfile} ]; then
            echo "User data was not passed to instance." \
              >> ${ec2autorunfile}.stdout
            exit 0
        fi

        # if the file is a zip file, unzip it and then run the
        # script extracted from the zip file, else run the file
        # assuming it is a script
        filetype=`file ${ec2autorundir}/${ec2autorunfile} | /usr/bin/cut -f2`
        if [ "${filetype}" = "${zipfiletype}" ]; then
            unzip -P ${ec2password} ${ec2autorunfile} \
              >> ${ec2autorunfile}.stdout 2>>${ec2autorunfile}.stderr

            bash ./${ec2autorunscript} \
              >> ${ec2autorunscript}.stdout 2>>${ec2autorunscript}.stderr
        else
            bash ./${ec2autorunfile} \
              >> ${ec2autorunfile}.stdout 2>>${ec2autorunfile}.stderr
        fi

        # set the autorun directory to be readable only by root
        chmod 700 ${ec2autorundir}
        ;;
*)
        echo "Usage: $0 { start }"
        ;;
esac
exit 0

Change the permissions on the startup script to be readable only by the root user and create a link in /etc/rc3.d so that the script runs at boot time.

bash# chmod 700 /etc/init.d/ec2autorun
bash# ln /etc/init.d/ec2autorun /etc/rc3.d/S10ec2autorun

Further customize the instance as you desire. For example, we use the S3Sync tools to retrieve our AWS key files from an S3 bucket; you can use any method that you prefer.

Then, build a new AMI using the instructions found in Getting Started Guide for OpenSolaris on Amazon EC2. See the section "Rebundling and uploading OpenSolaris images on Amazon EC2".

Launching a new NFS Server

Using the newly created AMI from the Getting Started section above, we are now ready to use the new AMI to launch an instance to run as the NFS file server. In our example, we assume the following:

  1. The AMI id of the image we created above is: ami-2bcc2c42
  2. The AMI has the EC2 API tools installed.
  3. We have previously created an EBS volume for an NFS file system; the volume id is: vol-d58774bc.
  4. You can run EC2 API commands.
  5. You have a method for securely retrieving your AWS key files.

First, we create a script that will be passed to a new EC2 instance and then processed using the ec2autorun script that we created and installed within our new AMI as described in the Getting Started section above.

Create the script with the following contents. We name this file: ec2autorun.script

#!/bin/bash

# site-specific settings; the original values were elided, so the
# ones below are placeholders to adjust for your environment
ec2autorundir="/var/ec2"            # same directory the ec2autorun script uses
ec2keysdir="/var/ec2/keys"          # where the AWS key files are placed
ec2keysbucket="<my-keys-bucket>"    # S3 bucket holding the AWS key files
ebsvolumeids="vol-d58774bc"         # EBS volume(s) for the NFS file system
nfspool="nfspool"                   # name of the ZFS pool to create
ec2ipaddr=""                        # optional Elastic IP address, "" to skip

# set the environment for EC2
# the file referenced below is included in our
# user data zip file and sets the variables needed
# to interact with EC2.
. ${ec2autorundir}/ec2autorun.setec2

# create the directory to hold the keys
if ! [[ -d ${ec2keysdir} ]]; then
    /usr/bin/mkdir -p ${ec2keysdir}
else
    /usr/bin/rm -r ${ec2keysdir}/*
fi

chmod 700 ${ec2keysdir}
builtin cd ${ec2keysdir}

# get the ec2 keys
for i in `s3cmd list ${ec2keysbucket}`; do
    if ! [[ $i == "--------------------" ]]; then
        s3cmd get ${ec2keysbucket}:$i $i
    fi
done

# get the ec2 instance id and instance type
ec2instanceid=`/usr/local/bin/meta-get instance-id`
ec2instancetype=`/usr/local/bin/meta-get instance-type`
ec2publichostname=`/usr/local/bin/meta-get public-hostname`

# set the starting device number for the ebs volumes
# (the original case body was elided; device 2 maps to c7d2 below)
case ${ec2instancetype} in
*) ebsvolumedev=2 ;;
esac

# attach the volumes
for volid in ${ebsvolumeids}; do
    ec2-attach-volume -i ${ec2instanceid} -d ${ebsvolumedev} ${volid}
    let ebsvolumedev=${ebsvolumedev}+1
done

if ! [[ ${ebsvolumeids} == "" ]]; then
    # loop until all of the volumes report that they are attached
    ebsvolsattached=0
    while [[ ${ebsvolsattached} -eq 0 ]]; do
        ebsvolsattached=1
        ebsvolumestatus=`ec2-describe-volumes | egrep ATTACHMENT | egrep ${ec2instanceid} | cut -f5`
        for volstatus in ${ebsvolumestatus}; do
            echo "Vol Status is: ${volstatus}"
            if ! [[ ${volstatus} == "attached" ]]; then
                ebsvolsattached=0
            fi
        done
        sleep 1
    done
fi

# create a ZFS pool for our NFS file system
/usr/sbin/zpool create ${nfspool} c7d2

# create two ZFS file systems that we will share
/usr/sbin/zfs create ${nfspool}/home
/usr/sbin/zfs create ${nfspool}/share

/usr/sbin/zfs set sharenfs=on ${nfspool}/home
/usr/sbin/zfs set sharenfs=on ${nfspool}/share

# associate the ip address
if ! [[ ${ec2ipaddr} == "" ]]; then
    ec2-associate-address -i ${ec2instanceid} ${ec2ipaddr}

    # loop until the ip address is assigned
    ec2addrassigned=0
    while [[ ${ec2addrassigned} -eq 0 ]]; do
        ec2ipaddrcurrent=`/usr/local/bin/meta-get public-ipv4`
        echo "IP address current: ${ec2ipaddrcurrent}"
        if ! [[ ${ec2ipaddrcurrent} == ${ec2ipaddr} ]]; then
            sleep 5
        else
            ec2addrassigned=1
        fi
    done
fi

# start the services for this instance
svcadm enable -r svc:/network/nfs/server:default
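
The attach-and-wait logic above is just a polling loop around ec2-describe-volumes. The same pattern can be tried locally by substituting a file for the volume status; everything here is a stand-in, and no EC2 calls are made:

```shell
#!/bin/sh
# a background job flips the simulated status to "attached" after a
# second, standing in for EC2 completing the volume attachment
status_file=/tmp/volstatus.$$
echo "attaching" > "$status_file"
( sleep 1; echo "attached" > "$status_file" ) &

attached=0
tries=0
while [ $attached -eq 0 ] && [ $tries -lt 30 ]; do
    # the real script reads this from: ec2-describe-volumes | ... | cut -f5
    volstatus=`cat "$status_file"`
    if [ "$volstatus" = "attached" ]; then
        attached=1
    else
        sleep 1
    fi
    tries=`expr $tries + 1`
done
rm -f "$status_file"
```

A bounded retry count like the one here is a good addition to the real loop, so a volume that never attaches does not hang the boot forever.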

Create a zip file which includes the script file created above, and any other files that you want to pass to the new instance. Since all users have access to the user data passed to an instance, we encrypt the zip file with a password.

Note: make sure you use the exact same password that you specified in the
/etc/init.d/ec2autorun file created in the Getting Started section.

bash# zip -P "put your password here" <user-data-zip-file> \
  ec2autorun.script ec2autorun.setec2

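A quick way to sanity-check the archive before launching is a local round trip with the same password. This sketch (file and archive names illustrative) skips quietly if zip/unzip are not installed:

```shell
#!/bin/sh
ziptest=skipped
workdir=/tmp/ec2zip.$$
mkdir -p "$workdir"
cd "$workdir" || exit 1

# a stand-in for the real ec2autorun.script
echo "echo hello from user data" > ec2autorun.script

if command -v zip >/dev/null 2>&1 && command -v unzip >/dev/null 2>&1; then
    # same -P password protection used when building the real user data
    zip -q -P "put your password here" userdata.zip ec2autorun.script
    rm ec2autorun.script
    unzip -q -P "put your password here" userdata.zip
    # the script should come back non-empty after extraction
    [ -s ec2autorun.script ] && ziptest=ok
fi
cd /tmp && rm -rf "$workdir"
```

If the extraction fails here, it will fail the same way inside the instance, where it is much harder to debug.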
Now we are ready to launch a new instance, passing the zip file as user data on the command line.

bash# ec2-run-instances -k <my-key-pair> -g <my-group> -z us-east-1b \
  -f <user-data-zip-file> -t m1.small ami-2bcc2c42

RESERVATION     r-86d3c4ef      578281684738               my-group
INSTANCE        i-ffcb1997      ami-2bcc2c42    pending    my-key-pair 0
                m1.small        2009-09-26T04:57:58+0000   us-east-1b
                aki-1783627e    ari-9d6889f4               monitoring-disabled

The last thing we need to do is take note of the EC2 private host name for the new NFS server instance, since we will use it to launch our NFS clients. Once the NFS server startup has completed, we can get the private host name as follows, using the instance id displayed when the instance was launched above.

bash# ec2-describe-instances | grep i-ffcb1997 | cut -f2,5
i-ffcb1997      ip-10-251-203-4.ec2.internal

Launching a new NFS Client

Once the NFS server is up and running we can launch any number of NFS clients. The script to launch the NFS client is much simpler since all we need to do is launch the instance and then mount the file systems from the NFS server. We use exactly the same steps that we used for the NFS server, just a different script to launch at boot time.

Create a script with the following contents. We name this file: ec2autorun.script

Note that in this script we include the EC2 private host name of the NFS server.


#!/bin/bash

# client system setup

# NFS server private host name (reported by ec2-describe-instances
# for the server instance launched above)
nfsserver=ip-10-251-203-4.ec2.internal

# the name of the ZFS pool on the NFS server (the original value was
# elided; it must match the pool created by the server script)
nfspool=nfspool

# start the services for this instance
svcadm enable -r svc:/network/nfs/client:default

# mount the NFS file systems from the server

mkdir -p /${nfspool}/home
mkdir -p /${nfspool}/share

mount -F nfs ${nfsserver}:/${nfspool}/home  /${nfspool}/home
mount -F nfs ${nfsserver}:/${nfspool}/share /${nfspool}/share

# add to vfstab file
echo "${nfsserver}:/${nfspool}/home  - /${nfspool}/home nfs  - yes rw,xattr"   >> /etc/vfstab
echo "${nfsserver}:/${nfspool}/share - /${nfspool}/share nfs - yes rw,xattr"   >> /etc/vfstab
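
The two vfstab entries follow the standard seven-field format (device to mount, device to fsck, mount point, FS type, fsck pass, mount at boot, options). Rather than hand-editing, they can be generated from the same variables the client script uses; the values below are assumptions matching the example:

```shell
#!/bin/sh
# assumed values; the real script takes these from its configuration
nfsserver=ip-10-251-203-4.ec2.internal
nfspool=nfspool
vfstab=/tmp/vfstab.$$

# one entry per shared file system:
# device - mountpoint fstype fsckpass mount-at-boot options
for fs in home share; do
    echo "${nfsserver}:/${nfspool}/${fs} - /${nfspool}/${fs} nfs - yes rw,xattr" >> "$vfstab"
done

entries=`wc -l < "$vfstab"`
cat "$vfstab"
rm -f "$vfstab"
```

Writing to a temporary file first also makes it easy to review the entries before appending them to the real /etc/vfstab.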

We zip this file and launch a new instance, same as above, except this time we launch with the NFS client version of the script.


As you can see from this simple example, the parameterized-launch capability in EC2 is a very flexible and powerful technique for launching custom instances from a single AMI. In combination with S3 there are many other uses, for example: patching systems, adding users and groups, and running specific services at launch time.

All output from the scripts is captured and can be reviewed for troubleshooting purposes. Log in to one of your new instances and review the following.

bash# cd /var/ec2

# the zip file that was passed on the ec2-run-instances
# command line
file ec2autorun.input

# stderr from the /etc/init.d/ec2autorun script
cat ec2autorun.input.stderr

# stdout from the /etc/init.d/ec2autorun script
cat ec2autorun.input.stdout

# the script that was bundled in the zip file,
# called by /etc/init.d/ec2autorun
cat ec2autorun.script

# stderr from ec2autorun.script
cat ec2autorun.script.stderr

# stdout from ec2autorun.script
cat ec2autorun.script.stdout
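
Since every step logs to paired .stdout/.stderr files, a first triage pass is simply to flag any non-empty stderr files. A local sketch, run here against a temporary directory standing in for /var/ec2:

```shell
#!/bin/sh
logdir=/tmp/ec2logs.$$
mkdir -p "$logdir"

# simulate one clean step and one failed step
: > "$logdir/ec2autorun.input.stderr"
echo "unzip: cannot find zipfile" > "$logdir/ec2autorun.script.stderr"

problems=0
for f in "$logdir"/*.stderr; do
    if [ -s "$f" ]; then
        echo "errors in $f:"
        cat "$f"
        problems=`expr $problems + 1`
    fi
done
rm -rf "$logdir"
```

Pointing the same loop at /var/ec2 on a misbehaving instance shows at a glance which stage of the autorun failed.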

Friday Sep 18, 2009

Labeling items in EC2 - Yet another AWS console

Over a year ago, prior to the release of Elasticfox and the AWS Console from Amazon, I created yet another AWS management interface.

At this time I was participating in early access testing of Solaris and Elastic Block Store (EBS). I was tired of typing commands on the command line to start and stop volumes, create snapshots, attach volumes to instances, etc.

I also needed a way to label my EC2 items, sort, filter, run batch snapshots, create more than one volume with a single click, etc.

So, now that the AWS Console is available, why am I bothering to write this blog entry?

The answer is that even though I have not updated the software in a while, some of my colleagues and I still use this console, since the AWS Console is still missing some basic features such as labeling.

If you are interested, I have made the software available in case you want to install it on one of your own systems. It has been tested with the Apache and Glassfish application servers.

Sunday Sep 06, 2009

Running OpenSolaris and Zones in the Amazon Cloud - Part 1


Now that OpenSolaris 2009.06 is available on Amazon EC2, I have been interested in setting up zones within an OpenSolaris EC2 instance utilizing the virtual networking features provided by Crossbow.

In this tutorial I will provide a step-by-step guide describing how to get this environment up and running. We use Crossbow together with NAT to build a complete virtual network connecting multiple Zones within a Solaris EC2 host.

Much of the networking information used in this tutorial is taken directly from the excellent article Private virtual networks for Solaris xVM and Zones with Crossbow by Nicolas Droux.

This is Part 1 of this tutorial series. In Part 2 we will explain how to use ZFS and AWS snapshots to back up the zones. In Part 3 we will explain how to save a fully configured environment using an AMI and EBS snapshots, which can then be cloned and brought up and running in minutes.


  • Basic understanding of AWS EC2 including: managing AMIs, launching instances, managing EBS volumes and snapshots, and firewall management.

  • Basic understanding of OpenSolaris including system setup, networking, and zone management.

Building the EC2 environment

For this tutorial, I used the OpenSolaris 2009.06 AMI ami-e56e8f8c. I also created three EBS volumes, one for shared software, one for zones storage, and another one for zones backup. In Part 2 of this tutorial, I will explain the use of ZFS snapshots and EBS snapshots for the purposes of backing up the zones. The EC2 environment is displayed below.

AWS EC2 Environment

A summary of the steps is as follows:

  • Create the EBS volumes and attach them to the instance.

  • Create ZFS pools, one for the shared software, one for zones, and one for zones backup.

When finished, I have an OpenSolaris 2009.06 EC2 instance running with three ZFS file systems on top of three EBS volumes as shown below.

root:~# zfs list -r sharedsw
NAME           USED  AVAIL  REFER  MOUNTPOINT
sharedsw/opt  3.41G  4.12G  3.41G  /opt

root:~# zfs list -r zones
NAME    USED  AVAIL  REFER  MOUNTPOINT
zones    70K  7.81G    19K  /zones

root:~# zfs list -r zones-backup
NAME           USED  AVAIL  REFER  MOUNTPOINT
zones-backup    70K  7.81G    19K  /zones-backup

Building the private network

The next task is to create the virtual network as shown in the diagram below.

Private Network

We create an etherstub and three VNICs for our virtual network.

root:~# dladm show-phys
LINK         MEDIA                STATE      SPEED  DUPLEX    DEVICE
xnf0         Ethernet             up         1000   full      xnf0
root:~# dladm create-etherstub etherstub0
root:~# dladm create-vnic -l etherstub0 vnic0
root:~# dladm create-vnic -l etherstub0 vnic1
root:~# dladm create-vnic -l etherstub0 vnic2
root:~# dladm show-etherstub
root:~# dladm show-vnic
LINK         OVER         SPEED  MACADDRESS           MACADDRTYPE         VID
vnic0        etherstub0   0      2:8:20:20:10:b8      random              0
vnic1        etherstub0   0      2:8:20:c2:70:f6      random              0
vnic2        etherstub0   0      2:8:20:15:35:ca      random              0

Assign a static IP address to vnic0 in the global zone:

root:~# ifconfig vnic0 plumb
root:~# ifconfig vnic0 inet up
root:~# ifconfig vnic0
vnic0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 9000 index 3
        inet netmask ffffff00 broadcast
        ether 2:8:20:20:10:b8

Note that the usual configuration files (e.g. /etc/hostname.vnic0) must be populated for the configuration to persist across reboots. We must also enable IPv4 forwarding in the global zone. Run routeadm(1M) to display the current configuration, and if "IPv4 forwarding" is disabled, enable it with the following command:

root:~# routeadm -u -e ipv4-forwarding
root:~# routeadm
              Configuration   Current              Current
                     Option   Configuration        System State
               IPv4 routing   disabled             disabled
               IPv6 routing   disabled             disabled
            IPv4 forwarding   enabled              enabled
            IPv6 forwarding   disabled             disabled

           Routing services   "route:default ripng:default"

Routing daemons:

                      STATE   FMRI
                   disabled   svc:/network/routing/route:default
                   disabled   svc:/network/routing/rdisc:default
                     online   svc:/network/routing/ndp:default
                   disabled   svc:/network/routing/legacy-routing:ipv4
                   disabled   svc:/network/routing/legacy-routing:ipv6
                   disabled   svc:/network/routing/ripng:default

Next, we enable NAT on the xnf0 interface. We also want to be able to connect to the zones from the public internet, so we enable port forwarding. In EC2 make sure you open these ports within the EC2 firewall, following best practices.

root:~# cat /etc/ipf/ipnat.conf
map xnf0 -> 0/32 portmap tcp/udp auto
map xnf0 -> 0/32

rdr xnf0 port 22101  -> port 22
rdr xnf0 port 22102  -> port 22
rdr xnf0 port 8081   -> port 80
rdr xnf0 port 8082   -> port 80
rdr xnf0 port 40443  -> port 443

root:~# svcadm enable network/ipfilter
root:~# ipnat -l
List of active MAP/Redirect filters:
map xnf0 -> portmap tcp/udp auto
map xnf0 ->
rdr xnf0 port 22101 -> port 22 tcp
rdr xnf0 port 22102 -> port 22 tcp
rdr xnf0 port 8081 -> port 80 tcp
rdr xnf0 port 8082 -> port 80 tcp
rdr xnf0 port 40443 -> port 443 tcp

List of active sessions:
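
Each redirect rule maps one external port on xnf0 to a service port on a zone's VNIC address. Since the zone addresses were elided above, the sketch below uses invented 10.0.2.x addresses purely to show how the rules could be generated from a table instead of being typed by hand:

```shell
#!/bin/sh
# table of external-port:zone-ip:zone-port (addresses are hypothetical)
rules=/tmp/ipnat.rules.$$
for entry in 22101:10.0.2.21:22 22102:10.0.2.22:22 \
             8081:10.0.2.21:80  8082:10.0.2.22:80; do
    ext=`echo "$entry" | cut -d: -f1`
    ip=`echo "$entry" | cut -d: -f2`
    port=`echo "$entry" | cut -d: -f3`
    echo "rdr xnf0 0.0.0.0/0 port ${ext} -> ${ip} port ${port} tcp" >> "$rules"
done

count=`wc -l < "$rules"`
cat "$rules"
rm -f "$rules"
```

The generated lines could then be reviewed and appended to /etc/ipf/ipnat.conf; keeping the port table in one place makes it easy to add a rule per new zone.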

Creating the zones

Create and install zone1

root:~# zonecfg -z zone1
zone1: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:zone1> create
zonecfg:zone1> set zonepath=/zones/zone1
zonecfg:zone1> set ip-type=exclusive
zonecfg:zone1> add net
zonecfg:zone1:net> set physical=vnic1
zonecfg:zone1:net> end
zonecfg:zone1> add fs
zonecfg:zone1:fs> set dir=/opt
zonecfg:zone1:fs> set special=/opt
zonecfg:zone1:fs> set type=lofs
zonecfg:zone1:fs> add options ro
zonecfg:zone1:fs> end
zonecfg:zone1> verify
zonecfg:zone1> commit
zonecfg:zone1> exit
root:~# zoneadm -z zone1 install
A ZFS file system has been created for this zone.
Publisher: Using (
       Image: Preparing at /zones/zone1/root.
       Cache: Using /var/pkg/download.
Sanity Check: Looking for 'entire' incorporation.
  Installing: Core System (output follows)
 Postinstall: Copying SMF seed repository ... done.
 Postinstall: Applying workarounds.
        Done: Installation completed in 428.065 seconds.

  Next Steps: Boot the zone, then log into the zone console
             (zlogin -C) to complete the configuration process


Create and install zone2

root:~# zonecfg -z zone2
zone2: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:zone2> create
zonecfg:zone2> set zonepath=/zones/zone2
zonecfg:zone2> set ip-type=exclusive
zonecfg:zone2> add net
zonecfg:zone2:net> set physical=vnic2
zonecfg:zone2:net> end
zonecfg:zone2> add fs
zonecfg:zone2:fs> set dir=/opt
zonecfg:zone2:fs> set special=/opt
zonecfg:zone2:fs> set type=lofs
zonecfg:zone2:fs> add options ro
zonecfg:zone2:fs> end
zonecfg:zone2> verify
zonecfg:zone2> commit
zonecfg:zone2> exit
root:~# zoneadm -z zone2 install
A ZFS file system has been created for this zone.
  Publisher: Using (
       Image: Preparing at /zones/zone2/root.
       Cache: Using /var/pkg/download.
Sanity Check: Looking for 'entire' incorporation.
  Installing: Core System (output follows)
 Postinstall: Copying SMF seed repository ... done.
 Postinstall: Applying workarounds.
        Done: Installation completed in 125.975 seconds.

  Next Steps: Boot the zone, then log into the zone console
             (zlogin -C) to complete the configuration process

Zone configuration

Now that the zones are installed, we are ready to boot each zone and perform system configuration. First we boot each zone.

root:~# zoneadm -z zone1 boot
root:~# zoneadm -z zone2 boot

The next step is to connect to the console for each zone and perform system configuration. The configuration parameters that I used are listed below. Connect to the console with the command "zlogin -C zone_name", for example: zlogin -C zone1

Host name for vnic1        : zone1
IP address for vnic1       :
System part of a subnet    : Yes
Netmask for vnic1          :
Enable IPv6 for vnic1      : No
Default Route for vnic1    : Specify one
Router IP Address for vnic1:
Name service               : DNS
DNS Domain name            : compute-1.internal
DNS Server's IP address    :
NFSv4 Domain Name          : << Value to be derived dynamically >>

Host name for vnic2        : zone2
IP address for vnic2       :
System part of a subnet    : Yes
Netmask for vnic2          :
Enable IPv6 for vnic2      : No
Default Route for vnic2    : Specify one
Router IP Address for vnic2:
Name service               : DNS
DNS Domain name            : compute-1.internal
DNS Server's IP address    :
NFSv4 Domain Name          : << Value to be derived dynamically >>

Test connection to the zones

Once the zones are running and have been configured, we should be able to connect to the zones from the "outside". This connection test is dependent on having the EC2 firewall correctly setup. In the example below, we connect to zone1 via port 22101.

-bash-3.2$ ssh -p 22101 <instance-public-hostname>

login as: username
Using keyboard-interactive authentication.
Last login: Sun Sep  6 22:15:38 2009 from c-24-7-37-94.hs
Sun Microsystems Inc.   SunOS 5.11      snv_111b        November 2008

-bash-3.2$ hostname
zone1


Sean O'Dell

