OpenSolaris and EC2 Parameterized Launches Example


Building on the excellent writeup Using Parameterized Launches to Customize Your AMIs, this blog entry provides an example tailored for OpenSolaris. This is an updated version of the entry that originally appeared on the Sun EC2 blog. We will leverage this entry in Part 3 of our series on OpenSolaris Zones and EC2.

Introduction

It is possible to make "user data" available to an EC2 instance by specifying it on the command line when the instance is launched. From the Amazon Elastic Compute Cloud Developer Guide, we find the following options available for the ec2-run-instances command:

-d user_data
Data to make available to the instances. The data is read from the USER_DATA argument on the command line. If you want the data to be read from a file, see the -f option.
Example: -d "my user data"

-f user_data_file
Data to make available to these instances. The data is read from the file specified by FILE_NAME. To specify user data on the command line, use the -d option.
Example: -f data.zip

Passing data to the instance at launch time is very useful as a mechanism for customizing instances without having to create a new AMI for each type of instance.
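
As a quick, hypothetical illustration of the mechanism (the string passed with -d is just a placeholder), you can pass a short piece of user data at launch and then read it back from inside the running instance via the instance metadata service, the same URL the startup script below retrieves:

bash# ec2-run-instances -k <my-keypair> -g <my-group> -d "role=nfs-server" ami-e56e8f8c

# then, from a shell on the running instance:
bash# wget --quiet --output-document=- http://169.254.169.254/latest/user-data
role=nfs-server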

For example, using this technique we can use the same AMI to run an NFS file server and an NFS client. The only difference is that for the NFS file server we share file systems and start NFS services, and for the NFS client we mount the NFS file systems from the server. We will use this NFS scenario for our example.


Getting Started

The first step is to use one of the Sun provided OpenSolaris 2009.06 AMIs to create a custom AMI with the addition of a script that runs at startup. If you have already bundled a customized OpenSolaris AMI, you can add the script to one of your own instead.

The following is a high level summary of what we will cover:

  1. Start a new instance, either from a Sun provided AMI or one of your own.
  2. Add a script that runs when the instance starts for the first time.
  3. Create a new AMI which includes the script that runs at startup.
  4. Demonstrate how to launch an NFS server instance and an NFS client instance, using the AMI created in step 3.

We assume that the reader has a basic understanding of EC2 and OpenSolaris. We also use an EBS volume to provide the file system that will be served by the NFS server.

Launch an instance of the OpenSolaris AMI of your choice and login to the new instance.

bash# ec2-run-instances -k <my-keypair> -g <my-group> -z us-east-1b \
   -t m1.small ami-e56e8f8c


  RESERVATION     r-868289ef      578281684738               <my-group>
  INSTANCE        i-5736e13f      ami-e56e8f8c    pending    <my-keypair> 0
                  m1.small        2009-09-23T06:48:50+0000   us-east-1b
                  aki-1783627e    ari-9d6889f4               monitoring-disabled

bash# ec2-describe-instances | grep i-5736e13f | cut -f2,3,4,6
  i-5736e13f   ami-e56e8f8c  ec2-174-129-168-111.compute-1.amazonaws.com  running

bash# ssh -i <my-key-pair-file> root@ec2-174-129-168-111.compute-1.amazonaws.com
  login as: root
  Authenticating with public key "imported-openssh-key"
  Last login: Wed Sep 23 07:02:45 2009 from somewhere
  Sun Microsystems Inc.   SunOS 5.11      snv_111b        November 2008
bash#


Create the script to run at boot time with the following contents. We name this file:
/etc/init.d/ec2autorun

#!/bin/sh

ec2autorundir="/var/ec2"
ec2autorunfile="ec2autorun.input"
ec2autorunscript="ec2autorun.script"
ec2userdataurl="http://169.254.169.254/latest/user-data"

# we password protect the user data zip file since any user on the
# instance can retrieve the user data. remember the password, since
# we will need it when creating the user data zip file. This password
# can be anything and is not the same as your AWS secret key.

ec2password="put your password here"

# we usually pass the user data as a ZIP archive file, but we
# can also simply pass a plain text script file.

zipfiletype="ZIP archive"

case "$1" in
'start')
        # create directory for autorun scripts and data
        if [ ! -d ${ec2autorundir} ]
        then
            /usr/bin/mkdir -p ${ec2autorundir}
        else
            # if the directory already exists we do not run the script again
            echo "`date`: ec2autorun script already ran, exiting." \
              >> ${ec2autorundir}/${ec2autorunfile}.stdout
            exit 0
        fi

        cd ${ec2autorundir}

        # get the user data file passed to the instance when created
        /usr/bin/wget --quiet \
          --output-document=${ec2autorundir}/${ec2autorunfile} \
            ${ec2userdataurl}

        # test the file size, if zero then no user data was provided
        # when instance was started
        if [ ! -s ${ec2autorundir}/${ec2autorunfile} ]
        then
            echo "User data was not passed to instance." \
              >> ${ec2autorunfile}.stdout
            exit 0
        fi

        # if the file is a zip file, unzip it and then run the
        # script extracted from the zip file, else run the file
        # assuming it is a script
        filetype=`file ${ec2autorundir}/${ec2autorunfile} | /usr/bin/cut -f2`
        if [ "${filetype}" = "${zipfiletype}" ]
        then
            unzip -P ${ec2password} ${ec2autorunfile} \
              >> ${ec2autorunfile}.stdout 2>>${ec2autorunfile}.stderr

            bash ./${ec2autorunscript} \
              >> ${ec2autorunscript}.stdout 2>>${ec2autorunscript}.stderr
        else
            bash ./${ec2autorunfile} \
              >> ${ec2autorunfile}.stdout 2>>${ec2autorunfile}.stderr
        fi

        # restrict the autorun directory so only root can access it
        chmod 700 ${ec2autorundir}

        ;;
*)
        echo "Usage: $0 { start }"
        ;;
esac
exit 0
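
If you want to sanity check the script before rebundling, you can run it by hand on the instance. Since this instance was launched without any user data, the script should simply log that fact and exit. Because the script treats an existing /var/ec2 directory as a sign that it has already run, remove the directory afterwards so the script will run again on the first boot of instances launched from the new AMI:

bash# sh /etc/init.d/ec2autorun start
bash# cat /var/ec2/ec2autorun.input.stdout
User data was not passed to instance.
bash# rm -r /var/ec2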

Change the permissions on the startup script so that it is accessible only by the root user, and create a link in /etc/rc3.d so that the script runs at boot time.

bash# chmod 700 /etc/init.d/ec2autorun
bash# ln /etc/init.d/ec2autorun /etc/rc3.d/S10ec2autorun

Further customize the instance as you desire. For example, we use the S3Sync tools to retrieve our AWS key files from an S3 bucket; you can use any method that you prefer.
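
As a hypothetical sketch of that step (the bucket name matches the one used later in this entry, the key file names are placeholders, and we assume the S3Sync s3cmd tool is already installed and configured with your AWS credentials):

bash# s3cmd list mykeys.ec2.keys
bash# s3cmd get mykeys.ec2.keys:pk-XXXX.pem /usr/local/aws/.ec2/pk-XXXX.pem
bash# s3cmd get mykeys.ec2.keys:cert-XXXX.pem /usr/local/aws/.ec2/cert-XXXX.pem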

Then, build a new AMI using the instructions found in the Getting Started Guide for OpenSolaris on Amazon EC2. See the section "Rebundling and uploading OpenSolaris images on Amazon EC2".

Launching a new NFS Server

Using the AMI newly created in the Getting Started section above, we are now ready to launch an instance to run as the NFS file server. In our example, we assume the following:

  1. The AMI id of the image we created above is: ami-2bcc2c42
  2. The AMI has the EC2 API tools installed.
  3. We have previously created an EBS volume for an NFS file system and the volume id is: vol-d58774bc (see the example after this list).
  4. You can run EC2 API commands.
  5. You have a method for securely retrieving your AWS key files.
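
Item 3 assumes the EBS volume already exists. If you still need to create one, a minimal sketch looks like the following (the size is arbitrary, and the volume must be created in the same availability zone in which you will launch the instance):

bash# ec2-create-volume -s 10 -z us-east-1b
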
First, we create a script that will be passed to a new EC2 instance and then processed using the ec2autorun script that we created and installed within our new AMI as described in the Getting Started section above.

Create the script with the following contents. We name this file: ec2autorun.script

#!/usr/bin/bash

ec2ipaddr=""
ebsvolumeids="vol-d58774bc"
nfspool="nfsfs"

ec2autorundir="/var/ec2"
ec2keysdir="/usr/local/aws/.ec2"
ec2keysbucket="mykeys.ec2.keys"

ntpdate 0.north-america.pool.ntp.org

# set the environment for EC2
# the file referenced below is included in our
# user data zip file and sets the variables needed
# to interact with EC2.
. ${ec2autorundir}/ec2autorun.setec2

# create the directory to hold the keys
if ! [[ -d ${ec2keysdir} ]]
then
    /usr/bin/mkdir -p ${ec2keysdir}
else
    /usr/bin/rm -r ${ec2keysdir}/*
fi

chmod 700 ${ec2keysdir}
builtin cd ${ec2keysdir}

# get the ec2 keys
for i in `s3cmd list ${ec2keysbucket}`
do
    if ! [[ $i == "--------------------" ]]
    then
        s3cmd get ${ec2keysbucket}:$i $i
    fi
done

# get the ec2 instance id and instance type
ec2instanceid=`/usr/local/bin/meta-get instance-id`
ec2instancetype=`/usr/local/bin/meta-get instance-type`
ec2publichostname=`/usr/local/bin/meta-get public-hostname`

# set the starting device number for the ebs volumes
case ${ec2instancetype} in
'm1.small')
    ebsvolumedev=2
    ;;
'm1.large')
    ebsvolumedev=3
    ;;
'm1.xlarge')
    ebsvolumedev=5
    ;;
*)
    ebsvolumedev=5
    ;;
esac

# attach the volumes
for volid in ${ebsvolumeids}
do
    ec2-attach-volume -i ${ec2instanceid} -d ${ebsvolumedev} ${volid}
    let ebsvolumedev=${ebsvolumedev}+1
done

if ! [[ ${ebsvolumeids} == "" ]]
then
    # wait until all of the volumes report that they are attached
    ebsvolsattached=0
    while [[ ${ebsvolsattached} -eq 0 ]]
    do
        ebsvolsattached=1
        ebsvolumestatus=`ec2-describe-volumes | egrep ATTACHMENT | egrep ${ec2instanceid} | cut -f5`   
        for volstatus in ${ebsvolumestatus}
        do
            echo "Vol Status is: ${volstatus}"
            if ! [[ ${volstatus} == "attached" ]]
            then
                ebsvolsattached=0
            fi
        done
        sleep 1
    done
fi

# create a ZFS pool for our NFS file system
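# (c7d2 is assumed here to be the Solaris device name for the EBS volume
# attached at device number 2 above on an m1.small instance; verify the
# device name with format(1M) and adjust it for your instance)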
/usr/sbin/zpool create ${nfspool} c7d2

# create two ZFS file systems that we will share
/usr/sbin/zfs create ${nfspool}/home
/usr/sbin/zfs create ${nfspool}/share

/usr/sbin/zfs set sharenfs=on ${nfspool}/home
/usr/sbin/zfs set sharenfs=on ${nfspool}/share

# associate the ip address
if ! [[ ${ec2ipaddr} == "" ]]
then
    ec2-associate-address -i ${ec2instanceid} ${ec2ipaddr}

    # wait until the ip address is assigned
    ec2addrassigned=0
    while [[ ${ec2addrassigned} -eq 0 ]]
    do
        ec2addrassigned=1
        ec2ipaddrcurrent=`/usr/local/bin/meta-get public-ipv4`
        echo "IP address current: ${ec2ipaddrcurrent}"
        if ! [[ ${ec2ipaddrcurrent} == ${ec2ipaddr} ]]
        then
            ec2addrassigned=0
        fi
        sleep 5
    done
fi

# start the services for this instance
svcadm enable -r svc:/network/nfs/server:default
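
The script above sources ${ec2autorundir}/ec2autorun.setec2, which we include in the user data zip file below. Its exact contents depend on where the EC2 API tools and key files live on your AMI; a hypothetical sketch (all paths and file names are placeholders) might look like:

# ec2autorun.setec2 - sourced by ec2autorun.script to set up the
# EC2 API tools environment; adjust the paths to match your AMI
export JAVA_HOME=/usr/java
export EC2_HOME=/usr/local/aws/ec2-api-tools
export PATH=${PATH}:${EC2_HOME}/bin
export EC2_PRIVATE_KEY=/usr/local/aws/.ec2/pk-XXXX.pem
export EC2_CERT=/usr/local/aws/.ec2/cert-XXXX.pem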


Create a zip file which includes the script file created above, and any other files that you want to pass to the new instance. Since any user on the instance can retrieve the user data, we encrypt the zip file with a password.

Note: make sure you use the exact same password that you specified in the
/etc/init.d/ec2autorun file created in the Getting Started section.

bash# zip -P "put your password here" ec2autorun.zip ec2autorun.script \
  ec2autorun.setec2


Now we are ready to launch a new instance, passing the zip file as user data on the command line.

bash# ec2-run-instances -k <my-key-pair> -g <my-group> -z us-east-1b \
  -f ec2autorun.zip -t m1.small ami-2bcc2c42

RESERVATION     r-86d3c4ef      578281684738               my-group
INSTANCE        i-ffcb1997      ami-2bcc2c42    pending    my-key-pair 0
                m1.small        2009-09-26T04:57:58+0000   us-east-1b
                aki-1783627e    ari-9d6889f4               monitoring-disabled

The last thing we need to do is take note of the EC2 private host name for the new NFS server instance, since we will use it to launch our NFS clients. Once the NFS server startup has completed, we can get the private host name as follows, using the instance id displayed when the instance was launched above.

bash# ec2-describe-instances | grep i-ffcb1997 | cut -f2,5
i-ffcb1997      ip-10-251-203-4.ec2.internal

Launching a new NFS Client

Once the NFS server is up and running we can launch any number of NFS clients. The script to launch the NFS client is much simpler since all we need to do is launch the instance and then mount the file systems from the NFS server. We use exactly the same steps that we used for the NFS server, just a different script to launch at boot time.

Create a script with the following contents. We name this file: ec2autorun.script

Note that in this script we include the EC2 private host name of the NFS server.

#!/usr/bin/bash

# client system setup

# NFS server private host name
nfsserver="ip-10-251-203-4.ec2.internal"

# the name of the ZFS pool on the NFS server
nfspool="nfsfs"

# start the services for this instance
svcadm enable -r svc:/network/nfs/client:default

# mount the NFS file systems from the server

mkdir -p /${nfspool}/home
mkdir -p /${nfspool}/share

mount -F nfs ${nfsserver}:/${nfspool}/home  /${nfspool}/home
mount -F nfs ${nfsserver}:/${nfspool}/share /${nfspool}/share

# add to vfstab file
echo "${nfsserver}:/${nfspool}/home  - /${nfspool}/home nfs  - yes rw,xattr"   >> /etc/vfstab
echo "${nfsserver}:/${nfspool}/share - /${nfspool}/share nfs - yes rw,xattr"   >> /etc/vfstab


We zip this file and launch a new instance, same as above, except this time we launch with the NFS client version of the script.
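
Concretely, that step might look like the following. Since the client script does not use the EC2 API tools, the zip file only needs the script itself, and the password must again match the one in /etc/init.d/ec2autorun:

bash# zip -P "put your password here" ec2autorun.zip ec2autorun.script
bash# ec2-run-instances -k <my-key-pair> -g <my-group> -z us-east-1b \
  -f ec2autorun.zip -t m1.small ami-2bcc2c42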

Summary

As you can see from this simple example, parameterized launches in EC2 are a very flexible and powerful technique for launching customized instances from a single AMI. In combination with S3 there are many other uses, for example: patching systems, adding users and groups, and running specific services at launch time.
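
For instance, a user data script along these lines (a hypothetical sketch; the group, user, and service names are placeholders and assume the corresponding packages are installed on the AMI) could be passed to any instance built from the same AMI:

#!/usr/bin/bash

# add a group and an administrative user at launch time
groupadd devs
useradd -m -d /export/home/jane -g devs -s /usr/bin/bash jane

# enable an additional service, for example a web server
svcadm enable -r svc:/network/http:apache22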

All output from the scripts is captured and can be reviewed for troubleshooting purposes. Log in to one of your new instances and review the following.

bash# cd /var/ec2
bash# ls

# the zip file that was passed on the ec2-run-instances
# command line
file ec2autorun.input

# stderr from the /etc/init.d/ec2autorun script
cat ec2autorun.input.stderr

# stdout from the /etc/init.d/ec2autorun script
cat ec2autorun.input.stdout

# the script that was bundled in the zip file,
# called by /etc/init.d/ec2autorun
cat ec2autorun.script

# stderr from ec2autorun.script
cat ec2autorun.script.stderr

# stdout from ec2autorun.script
cat ec2autorun.script.stdout
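
Beyond the log files, a couple of quick sanity checks can confirm the NFS setup itself: run share(1M) on the server to list the shared file systems, and check the NFS mounts on a client.

# on the NFS server: list the file systems being shared
bash# share

# on an NFS client: list the mounted NFS file systems
bash# df -k -F nfs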
