Tuesday Jul 08, 2008

Using Parameterized Launches to Customize Your AMIs - Solaris

EC2 Solaris Parameterized Launches
Solaris Parameterized Launches Example by Sean O'Dell

Building on the excellent writeup, Using Parameterized Launches to Customize Your AMIs, in this blog entry we provide an example tailored for Solaris. In the very near future Sun will be adding the capability to re-bundle OpenSolaris 2008.05 AMIs. Once this functionality is available, this example will work for both OpenSolaris 2008.05 and SXCE.

Introduction

It is possible to make "user data", specified on the command line when the instance is launched, available to an EC2 instance. From the Amazon Elastic Compute Cloud Developer Guide, we find the following options available for the ec2-run-instances command:

-d user_data
Data to make available to the instances. This data is read from the command line of the USER_DATA argument. If you want the data to be read from a file, see the -f option.
Example: -d "my user data"

-f user_data_file
Data to make available to these instances. The data is read from the file specified by FILE_NAME. To specify user data on the command line, use the -d option.
Example: -f data.zip

Passing data to the instance at launch time is very useful as a mechanism for customizing instances without having to create a new AMI image for each type of instance.
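
To see the mechanism in isolation before building the NFS example, here is a minimal sketch (the AMI id is a placeholder and the string is arbitrary): pass a short string with -d at launch time, then read it back from inside the instance via the instance metadata URL, the same URL our boot script fetches later.

bash# ec2-run-instances -k <my-key-pair> -d "hello from user data" ami-xxxxxxxx

# then, logged in to the new instance:
bash# /usr/sfw/bin/wget --quiet --output-document=- \
  http://169.254.169.254/latest/user-data
hello from user data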

For example, using this technique we can use the same AMI to run an NFS file server and an NFS client. The only difference is that for the NFS file server we share filesystems and start NFS services, and for the NFS client we mount the NFS filesystems from the server. We will use this NFS scenario for our example.


Getting Started

The first step is to use one of the Sun provided Solaris SXCE Build 79 AMIs (or the OpenSolaris 2008.05 AMI once re-bundling is available) to create a custom AMI with the addition of a script that runs at startup. We also add the S3 Sync tools which we use to interface with S3. You can use one of the Sun provided AMIs as a starting point, or if you have already bundled a new AMI, you can add to one of your own.

The following is a high level summary of what we will cover.
  1. Start a new instance, either from a Sun provided AMI or one of your own.
  2. Add a script that runs when the instance starts for the first time. Install the S3 Sync tools.
  3. Create a new AMI which includes the script that runs at startup and S3 Sync tools from step 2.
  4. Demonstrate how to launch an NFS server instance and an NFS client instance, using the single AMI created in step 3.
We assume that the reader has a basic understanding of EC2 and Solaris. We also build on the previous blog entry which described how to save and restore ZFS snapshots to and from S3.

Launch an instance of the Solaris AMI and login to the new instance.

bash# ec2-run-instances -k <my-key-pair> -t m1.small ami-eb7a9f82

bash# ec2-describe-instances | grep INSTANCE | cut -f2,3,4,6
i-196fb970  ami-eb7a9f82  ec2-75-101-225-153.compute-1.amazonaws.com  running

bash# ssh -i <my-key-pair-file> root@ec2-75-101-225-153.compute-1.amazonaws.com
Last login: Mon Apr 28 02:24:21 2008
Sun Microsystems Inc.   SunOS 5.11      snv_79  January 2008

# bash -o vi

Create the script to run at boot time with the following contents. We name this file:
/etc/init.d/ec2autorun

#!/bin/sh

ec2autorundir="/var/ec2"
ec2autorunfile="ec2autorun.input"
ec2autorunscript="ec2autorun.script"
ec2userdataurl="http://169.254.169.254/latest/user-data"

# we password protect the user data file since any user on the
# instance can retrieve it. We need to remember the password, as
# we will need it when creating the user data zip file. This password
# can be anything and is not the same as your AWS secret key.

ec2password="put your password here"

# we usually pass the user data as a ZIP archive file, but we
# can also simply pass a plain text script file.

zipfiletype="ZIP archive"

case "$1" in
'start')
        # create directory for autorun scripts and data
        if [ ! -d ${ec2autorundir} ]
        then
            /usr/bin/mkdir -p ${ec2autorundir}
        else
            # if the directory already exists we do not run the script again
            echo "`date`: ec2autorun script already ran, exiting." \\
              >> ${ec2autorundir}/${ec2autorunfile}.stdout
            exit 0
        fi
       
        cd ${ec2autorundir}

        # get the user data file passed to the instance when created
        /usr/sfw/bin/wget --quiet \
          --output-document=${ec2autorundir}/${ec2autorunfile} \
            ${ec2userdataurl}

        # test the file size, if zero then no user data was provided
        # when instance was started
        if [ ! -s ${ec2autorundir}/${ec2autorunfile} ]
        then
            echo "User data was not passed to instance." \\
              >> ${ec2autorunfile}.stdout
            exit 0
        fi

        # if the file is a zip file, unzip it and then run the
        # script extracted from the zip file, else run the file
        # assuming it is a script
        filetype=`file ${ec2autorundir}/${ec2autorunfile} | /usr/bin/cut -f2`
        if [ "${filetype}" = "${zipfiletype}" ]
        then
            unzip -P ${ec2password} ${ec2autorunfile} \
              >> ${ec2autorunfile}.stdout 2>>${ec2autorunfile}.stderr

            bash ./${ec2autorunscript} \
              >> ${ec2autorunscript}.stdout 2>>${ec2autorunscript}.stderr
        else
            bash ./${ec2autorunfile} \
              >> ${ec2autorunfile}.stdout 2>>${ec2autorunfile}.stderr
        fi

        # restrict the autorun directory so only root can access it
        chmod 700 ${ec2autorundir}

        ;;
*)
        echo "Usage: $0 { start }"
        ;;
esac
exit 0

Change the permissions on the startup script so that it is accessible only by the root user, and create a link in /etc/rc3.d so that the script runs at boot time.

bash# chmod 700 /etc/init.d/ec2autorun
bash# ln /etc/init.d/ec2autorun /etc/rc3.d/S10ec2autorun

Download and install the S3 Sync tools. Install the tools in your preferred location. In our example, we use /usr/local/aws.

bash# mkdir -p /usr/local/aws
bash# cd /usr/local/aws

bash# /usr/sfw/bin/wget \
  http://s3.amazonaws.com/ServEdge_pub/s3sync/s3sync.tar.gz

bash# gzcat s3sync.tar.gz | tar xf -
bash# cd /usr/local/aws/s3sync

bash# ln -s s3cmd.rb s3cmd
bash# export PATH=$PATH:/usr/local/aws/s3sync

bash# rm /usr/local/aws/s3sync.tar.gz
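
Optionally, you can verify the tools against your account before rebundling. Here is a quick, hedged check using the standard s3sync s3cmd.rb sub-commands, with your AWS keys in the environment (the same variables the launch scripts below use):

bash# export AWS_ACCESS_KEY_ID=<my-aws-access-key>
bash# export AWS_SECRET_ACCESS_KEY=<my-aws-secret-access-key>
bash# s3cmd listbuckets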

Further customize the instance as you desire and then build a new AMI using the instructions found in Getting Started Guide for OpenSolaris on Amazon EC2. See the section "Rebundling and uploading OpenSolaris images on Amazon EC2".

Launching a new NFS Server

Using the newly created AMI from the Getting Started section above, we are now ready to launch an instance to run as the NFS file server. In our example, we assume the following:
  1. The AMI id of the image we created above is: ami-93b652fa
  2. We have previously created and saved two ZFS filesystems that the NFS server will share; a sketch of how they might have been saved to S3 follows this list.
  3. You have a system with the EC2 API tools installed and can run the command: ec2-run-instances
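
As mentioned in item 2, here is a minimal sketch of how the two filesystems might have been snapshotted and uploaded to S3 in the first place; the snapshot name is illustrative, and the commands simply mirror the restore steps in the script below:

bash# zfs snapshot s3pool/home@backup
bash# zfs send s3pool/home@backup | bzip2 > /var/tmp/snapshots/s3-home.bz2
bash# s3cmd put <my-bucket-name-for-zfs-snapshots>:s3-home.bz2 /var/tmp/snapshots/s3-home.bz2

(and likewise for the share filesystem)
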
First, we create a script that will be passed to a new EC2 instance and then processed using the ec2autorun script that we created and installed within our new AMI as described in the Getting Started section above.

Create the script with the following contents. We name this file: ec2autorun.script

#!/usr/bin/bash

zpool="s3pool"
zpooldisk="c0d1p0"
zfssnapshots="home share"
snapshotsdir="/var/tmp/snapshots"
snapshotsbucket="<my-bucket-name-for-zfs-snapshots>"
s3toolspath="/usr/local/aws/s3sync"

# sync the system clock with a public NTP server before talking to S3
ntpdate 0.north-america.pool.ntp.org

# create the ZFS pool
/usr/sbin/zpool create ${zpool} ${zpooldisk}

# download the ZFS snapshots and restore the file systems
if ! [[ -d ${snapshotsdir} ]]
then
    /usr/bin/mkdir -p ${snapshotsdir}
fi

# setup the environment for S3
export PATH=$PATH:${s3toolspath}
export AWS_ACCESS_KEY_ID=<my-aws-access-key>
export AWS_SECRET_ACCESS_KEY=<my-aws-secret-access-key>

# download the saved ZFS file systems and restore
builtin cd ${snapshotsdir}
for i in ${zfssnapshots}
do
    s3cmd get ${snapshotsbucket}:s3-${i}.bz2 s3-${i}.bz2
done

# restore the snapshots and share for NFS
for i in ${zfssnapshots}
do
    bzcat s3-${i}.bz2 | zfs recv -F -d ${zpool}
    zfs set sharenfs=on ${zpool}/${i}
done

# start the NFS services for this instance
svcadm enable -r svc:/network/nfs/server:default


Create a zip file which includes the script created above, and any other files that you want to pass to the new instance. Since any user on the instance can read the user data passed to it, we encrypt the zip file with a password.

Note: make sure you use the exact same password that you specified in the
/etc/init.d/ec2autorun file created in the Getting Started section.

bash# zip -P "put your password here" ec2autorun.zip ec2autorun.script

Now we are ready to launch a new instance, passing the zip file as user data on the command line.

bash# ec2-run-instances -k <my-key-pair> -f ec2autorun.zip ami-93b652fa

RESERVATION     r-5609c43f      578281684738    default
INSTANCE        i-3f994856      ami-93b652fa    pending
                <my-key-pair>   0               m1.small
                2008-07-01T02:55:46+0000        us-east-1b
                aki-b57b9edc    ari-b47b9edd

The last thing we need to do is take note of the EC2 private host name for the new NFS server instance, since we will use it to launch our NFS clients. Once the NFS server startup has completed, we can get the private host name as follows. Note the instance id displayed when the instance was launched above.

bash# ec2-describe-instances | grep i-3f994856 | cut -f2,5
i-3f994856      ip-10-251-203-4.ec2.internal

Launching a new NFS Client

Once the NFS server is up and running we can launch any number of NFS clients. The script to launch the NFS client is much simpler since all we need to do is launch the instance and then mount the file systems from the NFS server. We use exactly the same steps that we used for the NFS server, just a different script to launch at boot time.

Create a script with the following contents. We name this file: ec2autorun.script
Note that in this script we include the EC2 private host name of the NFS server.

#!/usr/bin/bash

# NFS server private host name. This name is also referred to in
# EC2 as the Private DNS Name.
nfsserver="ip-10-251-203-4.ec2.internal"

# start the services for this instance
svcadm enable -r svc:/network/nfs/client:default

# mount the NFS file systems from the NFS server
mkdir -p /s3pool/home
mkdir -p /s3pool/share

mount -F nfs ${nfsserver}:/s3pool/home  /s3pool/home
mount -F nfs ${nfsserver}:/s3pool/share /s3pool/share

# add to vfstab file
echo "${nfsserver}:/s3pool/home - /s3pool/home nfs - yes rw,xattr"   >> /etc/vfstab
echo "${nfsserver}:/s3pool/share - /s3pool/share nfs - yes rw,xattr" >> /etc/vfstab

We zip this file and launch a new instance, same as above, except this time we launch with the NFS client version of the script.

bash# zip -P "put your password here" ec2autorun.zip ec2autorun.script
bash# ec2-run-instances -k <my-key-pair> -f ec2autorun.zip ami-93b652fa

Summary

As you can see from this simple example, Parameterized Launches in EC2 are a very flexible and powerful technique for launching custom instances from a single AMI. In combination with S3 there are many other uses, for example: patching systems, adding users and groups, and running specific services at launch time.
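
As one more illustration of the pattern, separate from the NFS example, an ec2autorun.script that adds a user and turns on an extra service at launch might look something like the following sketch; the group, user, and service names here are made up for illustration:

#!/usr/bin/bash

# create a group and an application user (illustrative names)
groupadd webapp
useradd -g webapp -m -d /export/home/webapp -s /usr/bin/bash webapp

# enable an additional service for this instance (illustrative FMRI)
svcadm enable svc:/network/http:apache22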

All output from the scripts is captured and can be reviewed for troubleshooting purposes. Log in to one of your new instances and review the following.

bash# cd /var/ec2
bash# ls

# the zip file that was passed on the ec2-run-instances
# command line
ec2autorun.input

# stderr from the /etc/init.d/ec2autorun script
ec2autorun.input.stderr

# stdout from the /etc/init.d/ec2autorun script
ec2autorun.input.stdout

# the script that was bundled in the zip file,
# called by /etc/init.d/ec2autorun
ec2autorun.script

# stderr from ec2autorun.script
ec2autorun.script.stderr

# stdout from ec2autorun.script
ec2autorun.script.stdout



Wednesday May 28, 2008

OpenSolaris FAQ for Amazon EC2 users

For OpenSolaris 2008.05 (ami-0c41a465):

Q:  What additional software is available after the installation of OpenSolaris 2008.05?
A:  The default installation will point toward the opensolaris.org package authority. There are a large number of packages available for installation in that repository. The "pkg" command will assist in retrieving software.
Refer to http://dlc.sun.com/osol/docs/content/IPS/ggcph.html for information on the new IPS (Image Packaging System) in OpenSolaris 2008.05.

In addition to the standard software packages, package groups have been made available so that a user who wants the complete software suite can install it with a single pkg command.
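
For example, refreshing the catalog, searching the repository, and installing a package might look like the following; the Apache package name here is only an illustration:

#  pkg refresh
#  pkg search -r apache
#  pkg install SUNWapch22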


Q:  Is there a package which will install a set of web tools (a SAMP stack) on OpenSolaris 2008.05?
A:  Yes, there is a web stack package available which includes Apache, MySQL, PHP/Perl/Python.  Installation is done with a single command:

#  pkg install amp-dev
The packages installed with this group are listed at:
http://dlc.sun.com/osol/docs/content/IPS/webstacktbl.html

Q:  What package groups are available for OpenSolaris 2008.05?
A:  There are many developer packages available.
amp-dev:  SAMP, a web stack with Apache, MySQL, PHP/Perl/Python.
ss-dev:  Sun Studio Express 5/08
gcc-dev:  GNU compilers, make, gdb and other GNU programs.
java-dev:  Glassfish, ant, Sun Studio, Netbeans

Installation can be done with:
#  pkg install <package name or package group name>

More details can be found at "Installing Developer Packages".

Q:  The time on the instance is set back to 1970.
A:  This is due to a bug in OpenSolaris 2008.05 (CR 6574993). There are no current workarounds, but setting up and running ntp helps. ntp can be configured to use an open ntp server. It is also possible to run ntpdate after boot to set the time properly. A list of public ntp servers can be found at: http://support.ntp.org/bin/view/Servers/NTPPoolServers

-bash-3.2# ntpdate 0.pool.ntp.org
Please note that setting the time after boot still leaves a number of system files and directories with the errant time.
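
One way to keep the clock in sync after boot, assuming the NTP packages are installed on the instance, is a rough sketch along these lines (any public ntp server will do):

-bash-3.2# echo "server 0.pool.ntp.org" > /etc/inet/ntp.conf
-bash-3.2# svcadm enable svc:/network/ntp:default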


Build 79:
Q:  The time is not set correctly; is there a way to fix it?

A:  After boot, run the command "rtc -z GMT" as root. This will update the kernel with information on the timezone of the hardware clock. To change the timezone, edit /etc/TIMEZONE with the proper TZ entry. See the timezone(4) manpage for pointers to valid TZ entries and the TIMEZONE(4) manpage for more information on the /etc/TIMEZONE file.
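
For example, after running rtc and editing /etc/TIMEZONE, the TZ line might look like this (US/Pacific is only an example timezone):

-bash-3.2# rtc -z GMT
-bash-3.2# grep '^TZ=' /etc/TIMEZONE
TZ=US/Pacific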

Generic:
Q:  Is there additional storage on the instances?
A:  Yes, depending on the model you launch there are up to four additional storage devices. A quick way to access the storage is to use ZFS. The following commands will create a ZFS filesystem on the instance.

   OpenSolaris 2008.05 32 Bit small:  zpool create storage c4d1

   SXCE Build 79 - 32 Bit small:  zpool create storage c0d1
   SXCE Build 79 - 64 Bit large:  zpool create storage c0d1 c0d2
   SXCE Build 79 - 64 Bit xlarge:  zpool create storage c0d1 c0d2 c0d3 c0d4

The above commands will allocate all of the storage from the instance's disks into a large ZFS pool accessible at /storage. See the zpool(1M) and zfs(1M) man pages for further information.
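
Once the pool exists, additional filesystems can be carved out of it as needed; a small illustrative example:

-bash-3.2# zpool list
-bash-3.2# zfs create storage/data
-bash-3.2# zfs list -r storage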

