Tuesday Jul 15, 2008

ALERT: Down time

ALERT: Amazon has just informed us that the Pinotage endpoint server will be down for scheduled maintenance on Wednesday starting at 2 AM PDT. The maintenance is expected to last 2 to 3 hours. Sorry for any inconvenience this may cause.

Tuesday Jul 08, 2008

Using Parameterized Launches to Customize Your AMIs - Solaris

by Sean O'Dell

Building on the excellent writeup, Using Parameterized Launches to Customize Your AMIs, in this blog entry we provide an example tailored for Solaris. In the very near future Sun will be adding the capability to re-bundle OpenSolaris 2008.05 AMIs. Once this functionality is available, this example will work for both OpenSolaris 2008.05 and SXCE.


It is possible to make "user data" available to an EC2 instance by specifying it when the instance is launched from the command line. From the Amazon Elastic Compute Cloud Developer Guide, we find the following options available for the ec2-run-instances command:

-d user_data
Data to make available to the instances. This data is read from the command line of the USER_DATA argument. If you want the data to be read from a file, see the -f option.
Example: -d "my user data"

-f user_data_file
Data to make available to these instances. The data is read from the file specified by FILE_NAME. To specify user data on the command line, use the -d option.
Example: -f data.zip

Passing data to the instance at launch time is very useful as a mechanism for customizing instances without having to create a new AMI image for each type of instance.

For example, using this technique we can use the same AMI to run a NFS file server and a NFS client. The only difference is that for the NFS file server we share filesystems and start NFS services, and for the NFS client we mount the NFS filesystems from the server. We will use this NFS scenario for our example.
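The one-image, two-roles idea can be sketched in a few lines of shell. This is not the article's actual script; it fakes the launch-time user data with a local file (the EC2 metadata service would normally supply it) just to show the dispatch pattern:

```shell
#!/bin/sh
# One image, two behaviors, selected by the user data passed at launch.
# /tmp/user-data.demo stands in for the real launch-time user data.
echo "role=nfs-server" > /tmp/user-data.demo

# extract the role and branch on it
role=`sed -n 's/^role=//p' /tmp/user-data.demo`
case "$role" in
nfs-server) action="share filesystems and start NFS services" ;;
nfs-client) action="mount filesystems from the NFS server" ;;
*)          action="no role supplied" ;;
esac
echo "$action"
```

Launching the same AMI with `role=nfs-client` in the user data would take the other branch, without rebundling a second image.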

Getting Started

The first step is to use one of the Sun provided Solaris SXCE Build 79 AMIs (or the OpenSolaris 2008.05 AMI once re-bundling is available) to create a custom AMI with the addition of a script that runs at startup. We also add the S3 Sync tools, which we use to interface with S3. You can use one of the Sun provided AMIs as a starting point, or if you have already bundled a new AMI, you can add to one of your own.

The following is a high level summary of what we will cover.
  1. Start a new instance, either from a Sun provided AMI or one of your own.
  2. Add a script that runs when the instance starts for the first time. Install the S3 Sync tools.
  3. Create a new AMI which includes the script that runs at startup and S3 Sync tools from step 2.
  4. Demonstrate how to launch a NFS server instance and a NFS client instance, using the single AMI created in step 3.
We assume that the reader has a basic understanding of EC2 and Solaris. We also build on the previous blog entry which described how to save and restore ZFS snapshots to and from S3.

Launch an instance of the Solaris AMI and login to the new instance.

bash# ec2-run-instances -k <my-key-pair> -t m1.small ami-eb7a9f82

bash# ec2-describe-instances | grep INSTANCE | cut -f2,3,4,6
i-196fb970  ami-eb7a9f82  ec2-75-101-225-153.compute-1.amazonaws.com  running

bash# ssh -i <my-key-pair-file> root@ec2-75-101-225-153.compute-1.amazonaws.com
Last login: Mon Apr 28 02:24:21 2008
Sun Microsystems Inc.   SunOS 5.11      snv_79  January 2008

# bash -o vi

Create the script to run at boot time with the following contents. We name this file: /etc/init.d/ec2autorun

#!/sbin/sh

# locations used by this script; the directory matches the /var/ec2
# location reviewed in the troubleshooting section below, and the
# file names are illustrative
ec2autorundir=/var/ec2
ec2autorunfile=ec2autorun.input
ec2autorunscript=ec2autorun.script

# we password protect the user data file since any user on the
# instance can retrieve it. We need to remember the password as we
# will use it when creating the user data zip file. This password
# can be anything and is not the same as your AWS secret key.

ec2password="put your password here"

# we usually pass the user data as a ZIP archive file, but we
# can also simply pass a plain text script file.

zipfiletype="ZIP archive"

case "$1" in
'start')
        # create directory for autorun scripts and data;
        # if the directory already exists we do not run the script again
        if [ ! -d ${ec2autorundir} ]; then
            /usr/bin/mkdir -p ${ec2autorundir}
        else
            echo "`date`: ec2autorun script already ran, exiting." \
              >> ${ec2autorundir}/${ec2autorunfile}.stdout
            exit 0
        fi
        cd ${ec2autorundir}

        # get the user data file passed to the instance when created,
        # from the EC2 instance metadata service
        /usr/sfw/bin/wget --quiet \
          --output-document=${ec2autorundir}/${ec2autorunfile} \
          http://169.254.169.254/latest/user-data

        # test the file size; if zero then no user data was provided
        # when the instance was started
        if [ ! -s ${ec2autorundir}/${ec2autorunfile} ]; then
            echo "User data was not passed to instance." \
              >> ${ec2autorunfile}.stdout
            exit 0
        fi

        # if the file is a zip file, unzip it and then run the
        # script extracted from the zip file, else run the file
        # itself, assuming it is a script
        filetype=`file ${ec2autorundir}/${ec2autorunfile} | /usr/bin/cut -f2`
        if [ "${filetype}" = "${zipfiletype}" ]; then
            unzip -P ${ec2password} ${ec2autorunfile} \
              >> ${ec2autorunfile}.stdout 2>>${ec2autorunfile}.stderr
            bash ./${ec2autorunscript} \
              >> ${ec2autorunscript}.stdout 2>>${ec2autorunscript}.stderr
        else
            bash ./${ec2autorunfile} \
              >> ${ec2autorunfile}.stdout 2>>${ec2autorunfile}.stderr
        fi

        # set the autorun directory to be readable only by root
        chmod 700 ${ec2autorundir}
        ;;
*)
        echo "Usage: $0 { start }"
        ;;
esac
exit 0
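The zip-vs-plaintext branch above keys on the output of file(1). A quick local sketch of the same test; note that type strings vary by OS (Solaris file(1) reports "ZIP archive", GNU file reports "Zip archive data", which is why the script pins the string it expects in a variable):

```shell
# create a plain-text payload, as if a bare script had been passed
# as user data instead of a zip file
echo "echo hello" > /tmp/payload.demo

# classify it the same way the init script does
filetype=`file /tmp/payload.demo`
case "$filetype" in
*"ZIP archive"*|*"Zip archive"*) branch="unzip, then run the extracted script" ;;
*)                               branch="run the file directly as a script" ;;
esac
echo "$branch"
```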

Change the permissions on the startup script to be readable only by the root user and create a link in /etc/rc3.d so that the script runs at boot time.

bash# chmod 700 /etc/init.d/ec2autorun
bash# ln /etc/init.d/ec2autorun /etc/rc3.d/S10ec2autorun

Download and install the S3 Sync tools. Install the tools in your preferred location. In our example, we use /usr/local/aws.

bash# mkdir -p /usr/local/aws
bash# cd /usr/local/aws

bash# /usr/sfw/bin/wget <URL-of-the-s3sync.tar.gz-archive>
bash# gzcat s3sync.tar.gz | tar xf -
bash# cd /usr/local/aws/s3sync

bash# ln -s s3cmd.rb s3cmd
bash# export PATH=$PATH:/usr/local/aws/s3sync

bash# rm /usr/local/aws/s3sync.tar.gz
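The symlink-plus-PATH arrangement above is easy to sanity check. This sketch recreates it with a throwaway directory and a stub script instead of the real /usr/local/aws/s3sync install:

```shell
# stand-in for the s3sync install directory
mkdir -p /tmp/aws/s3sync
printf '#!/bin/sh\necho s3cmd ok\n' > /tmp/aws/s3sync/s3cmd.rb
chmod +x /tmp/aws/s3sync/s3cmd.rb

# same relative symlink as in the article: s3cmd -> s3cmd.rb
ln -sf s3cmd.rb /tmp/aws/s3sync/s3cmd
PATH=$PATH:/tmp/aws/s3sync

# the symlink resolves relative to its own directory
out=`/tmp/aws/s3sync/s3cmd`
echo "$out"
```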

Further customize the instance as you desire and then build a new AMI using the instructions found in Getting Started Guide for OpenSolaris on Amazon EC2. See the section "Rebundling and uploading OpenSolaris images on Amazon EC2".

Launching a new NFS Server

We are now ready to use the AMI created in the Getting Started section above to launch an instance that will run as the NFS file server. In our example, we assume the following:
  1. The AMI id of the image we created above is: ami-93b652fa
  2. We have previously created and saved two ZFS filesystems that the NFS server will share.
  3. You have a system with the EC2 API tools installed and can run the command: ec2-run-instances
First, we create a script that will be passed to a new EC2 instance and then processed by the ec2autorun script that we created and installed in our new AMI, as described in the Getting Started section above.

Create the script with the following contents. We name this file: ec2autorun.script


# pool, disk, directory and bucket settings; the values match those
# used in the "ZFS snapshots to and from S3" entry below
zpool=s3pool
zpooldisk=c0d1p0
snapshotsdir=/var/tmp/snapshots
snapshotsbucket=<my-bucket-name-for-zfs-snapshots>
s3toolspath=/usr/local/aws/s3sync

zfssnapshots="home share"

# set the correct time
ntpdate 0.north-america.pool.ntp.org

# create the ZFS pool
/usr/sbin/zpool create ${zpool} ${zpooldisk}

# create a directory to hold the downloaded ZFS snapshots
if [[ ! -d ${snapshotsdir} ]]; then
    /usr/bin/mkdir -p ${snapshotsdir}
fi

# setup the environment for S3
export PATH=$PATH:${s3toolspath}
export AWS_ACCESS_KEY_ID=<my-aws-access-key>
export AWS_SECRET_ACCESS_KEY=<my-aws-secret-access-key>

# download the saved ZFS file systems
builtin cd ${snapshotsdir}
for i in ${zfssnapshots}; do
    s3cmd get ${snapshotsbucket}:s3-${i}.bz2 s3-${i}.bz2
done

# restore the snapshots and share for NFS
for i in ${zfssnapshots}; do
    bzcat s3-${i}.bz2 | zfs recv -F -d ${zpool}
    zfs set sharenfs=on ${zpool}/${i}
done

# start the NFS services for this instance
svcadm enable -r svc:/network/nfs/server:default

Create a zip file which includes the script created above, and any other files that you want to pass to the new instance. Since all users have access to the user data passed to an instance, we encrypt the zip file with a password.

Note: make sure you use the exact same password that you specified in the
/etc/init.d/ec2autorun file created in the Getting Started section.

bash# zip -P "put your password here" ec2autorun.zip ec2autorun.script

Now we are ready to launch a new instance, passing the zip file as user data on the command line.

bash# ec2-run-instances -k <my-key-pair> -f ec2autorun.zip ami-93b652fa

RESERVATION     r-5609c43f      578281684738    default
INSTANCE        i-3f994856      ami-93b652fa    pending
                <my-key-pair>   0               m1.small
                2008-07-01T02:55:46+0000        us-east-1b
                aki-b57b9edc    ari-b47b9edd

The last thing we need to do is note the EC2 private host name of the new NFS server instance, since we will use it to launch our NFS clients. Once the NFS server has finished starting up, we can get the private host name as follows, using the instance id displayed when the instance was launched above.

bash# ec2-describe-instances | grep i-3f994856 | cut -f2,5
i-3f994856      ip-10-251-203-4.ec2.internal
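The grep/cut pipeline works because ec2-describe-instances prints tab-separated fields. Simulating the pipeline with a canned line (fields 2 and 5 are the instance id and the private host name):

```shell
# a stand-in for one INSTANCE line of ec2-describe-instances output,
# built with explicit tabs between fields
line=`printf 'INSTANCE\ti-3f994856\tami-93b652fa\tec2-75-101-225-153.compute-1.amazonaws.com\tip-10-251-203-4.ec2.internal\trunning'`

# same selection as in the article: pick the line, keep fields 2 and 5
out=`printf '%s\n' "$line" | grep i-3f994856 | cut -f2,5`
echo "$out"
```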

Launching a new NFS Client

Once the NFS server is up and running we can launch any number of NFS clients. The script to launch the NFS client is much simpler since all we need to do is launch the instance and then mount the file systems from the NFS server. We use exactly the same steps that we used for the NFS server, just a different script to launch at boot time.

Create a script with the following contents. We name this file: ec2autorun.script
Note that in this script we include the EC2 private host name of the NFS server.


# NFS server private host name. This name is also referred to in
# EC2 as the Private DNS Name; use the value reported by
# ec2-describe-instances for your NFS server instance.
nfsserver=ip-10-251-203-4.ec2.internal

# start the services for this instance
svcadm enable -r svc:/network/nfs/client:default

# mount the NFS file systems from the NFS server
mkdir -p /s3pool/home
mkdir -p /s3pool/share

mount -F nfs ${nfsserver}:/s3pool/home  /s3pool/home
mount -F nfs ${nfsserver}:/s3pool/share /s3pool/share

# add to vfstab file
echo "${nfsserver}:/s3pool/home - /s3pool/home nfs - yes rw,xattr"   >> /etc/vfstab
echo "${nfsserver}:/s3pool/share - /s3pool/share nfs - yes rw,xattr" >> /etc/vfstab
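The vfstab columns, in order, are: device to mount, device to fsck, mount point, filesystem type, fsck pass, mount at boot, and mount options. Composing one of the appended lines (the host name is the example value from the NFS server step):

```shell
# example private host name of the NFS server
nfsserver=ip-10-251-203-4.ec2.internal

# fields: device-to-mount, device-to-fsck, mount-point, fstype,
#         fsck-pass, mount-at-boot, options
entry="${nfsserver}:/s3pool/home - /s3pool/home nfs - yes rw,xattr"
echo "$entry"
```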

We zip this file and launch a new instance, same as above, except this time we launch with the NFS client version of the script.

bash# zip -P "put your password here" ec2autorun.zip ec2autorun.script
bash# ec2-run-instances -k <my-key-pair> -f ec2autorun.zip ami-93b652fa


As you can see from this simple example, parameterized launching in EC2 is a very flexible and powerful technique for launching custom instances from a single AMI. In combination with S3 there are many other uses, for example patching systems, adding users and groups, and running specific services at launch time.

All output from the scripts is captured and can be reviewed for troubleshooting purposes. Log in to one of your new instances and review the following.

bash# cd /var/ec2
bash# ls

The directory contains:
  - the zip file that was passed on the ec2-run-instances command line
  - stdout and stderr from the /etc/init.d/ec2autorun script
  - ec2autorun.script, the script that was bundled in the zip file and
    called by /etc/init.d/ec2autorun
  - stdout and stderr from ec2autorun.script

Tuesday Jul 01, 2008

ZFS snapshots to and from S3

Saving and Restoring ZFS Snapshots to and from Amazon S3

by Sean O'Dell

We can use ZFS snapshots to save and restore filesystems from one Solaris EC2 instance to another. This functionality is very useful, for example, for saving user home directories, web server documents, MySQL databases, etc., terminating an EC2 instance, and then restoring these filesystems on a new EC2 instance created at a later date.

Amazon Simple Storage Service (S3) provides a simple web services interface that can be used to store and retrieve any amount of data, at any time, from anywhere on the web. It gives any developer access to the same highly scalable, reliable, fast, inexpensive data storage infrastructure that Amazon uses to run its own global network of web sites.

Getting Started

In our example, we use the second EC2 ephemeral storage device, c0d1p0 (c4d1p0 for OpenSolaris 2008.05, c3d0s0 for OpenSolaris 2008.11), to create a ZFS pool. We create filesystems within this pool for data that we want to restore for each new EC2 instance, and then save and restore the data using S3.

We use the S3 Sync command line tools listed in the references section to interface with S3. We assume that the reader has an AWS account and a basic understanding of EC2, S3 and Solaris ZFS.

In this section we describe the steps necessary to get started using ZFS within the EC2 environment. For our example, we use the SXCE Build 79 32-bit AMI:
ami-eb7a9f82. This example has been tested and works with any of the Sun provided Solaris AMIs, including OpenSolaris 2008.05.

Launch an instance of the Solaris AMI and login to the new instance.

bash# ec2-run-instances -k <my-key-pair> -t m1.small ami-eb7a9f82

bash# ec2-describe-instances | grep INSTANCE | cut -f2,3,4,6
i-196fb970  ami-eb7a9f82  ec2-75-101-225-153.compute-1.amazonaws.com  running

bash# ssh -i <my-key-pair-file> root@ec2-75-101-225-153.compute-1.amazonaws.com
Last login: Mon Apr 28 02:24:21 2008
Sun Microsystems Inc.   SunOS 5.11      snv_79  January 2008

# bash -o vi

Download and install the S3 Sync tools. Install the tools in your preferred location. In our example, we use /usr/local/aws.

bash# mkdir -p /usr/local/aws
bash# cd /usr/local/aws

bash# /usr/sfw/bin/wget <URL-of-the-s3sync.tar.gz-archive>
bash# gzcat s3sync.tar.gz | tar xf -
bash# cd /usr/local/aws/s3sync
bash# ln -s s3cmd.rb s3cmd
bash# export PATH=$PATH:/usr/local/aws/s3sync

Please note that there is an additional step required if you are using the OpenSolaris 2008.05 AMI: it does not come with Ruby installed, and the S3 Sync tools require it.

bash# which ruby
no ruby in /usr/sbin /usr/bin /usr/local/aws/s3sync

bash# pkg install SUNWruby18
bash# which ruby

Set the correct time, set your AWS environment variables, and create an S3 bucket to store your saved ZFS snapshots.

bash# ntpdate 0.north-america.pool.ntp.org
bash# export AWS_ACCESS_KEY_ID=<my-aws-access-key>
bash# export AWS_SECRET_ACCESS_KEY=<my-aws-secret-access-key>
bash# s3cmd createbucket <my-bucket-name-for-zfs-snapshots>

We are now ready to create our ZFS pool and a few file systems. While logged in to the instance created and set up in the steps above, do the following.

Note: the extra disk for the OpenSolaris 2008.05 image is: c4d1p0. From this point forward, we use the SXCE Build 79 32 bit image in our examples. If using OpenSolaris 2008.05, substitute c4d1p0 for c0d1p0.

bash# zpool create s3pool c0d1p0
bash# zpool list
NAME     SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
s3pool   149G   111K   149G     0%  ONLINE  -

bash# for i in home mysqldb share www; do
    zfs create s3pool/$i
  done

bash# zfs list -r s3pool
NAME             USED  AVAIL  REFER  MOUNTPOINT
s3pool           210K   147G    23K  /s3pool
s3pool/home       18K   147G    18K  /s3pool/home
s3pool/mysqldb    18K   147G    18K  /s3pool/mysqldb
s3pool/share      18K   147G    18K  /s3pool/share
s3pool/www        18K   147G    18K  /s3pool/www

bash# ls /s3pool
home     mysqldb  share    www

At this point we now have ZFS file systems created and mounted, ready to be populated with files.

Saving ZFS Snapshots to S3

On our source EC2 instance we have the following ZFS filesystems in pool s3pool.

bash# zfs list -r s3pool
NAME             USED  AVAIL  REFER  MOUNTPOINT
s3pool          2.11G  13.5G    23K  /s3pool
s3pool/home      117M  13.5G   117M  /s3pool/home
s3pool/mysqldb  20.8M  13.5G  20.8M  /s3pool/mysqldb
s3pool/share    7.90M  13.5G  7.90M  /s3pool/share
s3pool/www       609K  13.5G   609K  /s3pool/www

Create a snapshot for each of the filesystems.

bash# export snapshotdate=`date '+%Y%m%d-%H%M%S'`
bash# export poolname=s3pool
bash# for i in home mysqldb share www; do
    zfs snapshot -r ${poolname}/${i}@s3-${i}-$snapshotdate
  done

bash# zfs list -t snapshot -r s3pool
NAME                                        USED  AVAIL  REFER  MOUNTPOINT
s3pool/home@s3-home-20080629-223417            0      -   117M  -
s3pool/mysqldb@s3-mysqldb-20080629-223417      0      -  20.8M  -
s3pool/share@s3-share-20080629-223417          0      -  7.90M  -
s3pool/www@s3-www-20080629-223417              0      -   609K  -
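The snapshot names above are simply the filesystem name combined with a date stamp. A sketch of how the loop composes them, using a fixed date for reproducibility (the article uses `date '+%Y%m%d-%H%M%S'`):

```shell
poolname=s3pool
snapshotdate=20080629-223417   # normally: `date '+%Y%m%d-%H%M%S'`

# accumulate the snapshot names the loop would create
names=""
for i in home mysqldb share www; do
    names="$names ${poolname}/${i}@s3-${i}-${snapshotdate}"
done
echo "$names"
```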

Save each ZFS snapshot to a compressed stream file.

bash# mkdir /var/tmp/snapshots
bash# for i in home mysqldb share www; do
    zfs send -R ${poolname}/${i}@s3-${i}-$snapshotdate | bzip2 \
      > /var/tmp/snapshots/s3-${i}.bz2
  done

bash# cd /var/tmp/snapshots
bash# ls
s3-home.bz2     s3-mysqldb.bz2  s3-share.bz2    s3-www.bz2
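The save step is just "stream | compress > file", and restore (shown later) is "decompress file | restore stream". The same pipeline shape, minus ZFS, using a short string in place of a filesystem stream:

```shell
# "save": pipe a stream through bzip2 into a file
echo "filesystem contents" | bzip2 > /tmp/stream.demo.bz2

# "restore": decompress the file back into a stream
restored=`bzcat /tmp/stream.demo.bz2`
echo "$restored"
```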

Upload the ZFS snapshot streams to S3.

bash# cd /var/tmp/snapshots
bash# export s3bucketname=<my-bucket-name-for-zfs-snapshots>

bash# for i in `ls`; do
    s3cmd put ${s3bucketname}:$i $i
  done

bash# s3cmd list ${s3bucketname}

bash# rm /var/tmp/snapshots/*.bz2

Restoring ZFS Snapshots from S3

On our destination EC2 instance we can restore the ZFS Snapshots previously saved.

Create the new ZFS pool.

bash# zpool create s3pool c0d1p0

bash# zpool list
NAME     SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
s3pool   149G    94K   149G     0%  ONLINE  -

Download the ZFS Snapshots from S3.

bash# mkdir /var/tmp/snapshots
bash# cd /var/tmp/snapshots
bash# export s3bucketname=<my-bucket-name-for-zfs-snapshots>

bash# for i in `s3cmd list ${s3bucketname}`; do
    if [[ $i != "--------------------" ]]; then
        s3cmd get ${s3bucketname}:$i $i
    fi
  done

bash# ls
s3-home.bz2     s3-mysqldb.bz2  s3-share.bz2    s3-www.bz2

Restore the ZFS Snapshots.

bash# for i in `ls`; do
    bzcat $i | zfs recv -F -d s3pool
  done

bash# zfs list -r s3pool
NAME                                        USED  AVAIL  REFER  MOUNTPOINT
s3pool                                      146M   147G    23K  /s3pool
s3pool/home                                 117M   147G   117M  /s3pool/home
s3pool/home@s3-home-20080629-223417            0      -   117M  -
s3pool/mysqldb                             20.8M   147G  20.8M  /s3pool/mysqldb
s3pool/mysqldb@s3-mysqldb-20080629-223417      0      -  20.8M  -
s3pool/share                               7.90M   147G  7.90M  /s3pool/share
s3pool/share@s3-share-20080629-223417          0      -  7.90M  -
s3pool/www                                  609K   147G   609K  /s3pool/www
s3pool/www@s3-www-20080629-223417              0      -   609K  -

References

Solaris ZFS Administration Guide
S3 Sync - free and open source interfaces to the Amazon S3 system

Sunday Jun 29, 2008

How to get the Sun AMI/API tools for build 79 images

Some of the Solaris Build 79 based images may not have the AMI/API tools bundled.

If the Solaris Build 79 image you are using does not have the Sun AMI/API tools installed, you can follow the steps below to install the tools on your instance.
The archive of the current tools can be retrieved with the following command:

# curl -O http://s3.amazonaws.com/sun-osol-tools/sun-amiapi-tools-latest.cpio
Once the archive is downloaded, verify the checksum to make sure that you have the right file:
# md5sum sun-amiapi-tools-latest.cpio
1de19f491d32a2fe23b9a0515ec025dd  sun-amiapi-tools-latest.cpio

Then extract the archive as shown below. Full paths are used in the archive so the tools are always extracted in the same location.

# cpio -icvdum < sun-amiapi-tools-latest.cpio
Once the tools are installed, rebundling can be done as documented in the Sun Amazon EC2 Getting Started Guide.

Note that the guide references mounting disks at /mnt, but this is done automatically in certain AMIs.
Run the following command to see whether there is a preconfigured filesystem at /mnt.
# df -kl /mnt

Wednesday May 28, 2008

OpenSolaris FAQ for Amazon EC2 users

For OpenSolaris 2008.05 (ami-0c41a465):

Q:  What additional software is available after installing OpenSolaris 2008.05?
A:  The default installation points at the opensolaris.org package authority, which offers a large number of installable packages. The "pkg" command will assist in retrieving software.
Refer to http://dlc.sun.com/osol/docs/content/IPS/ggcph.html for information on the new IPS (Image Packaging System) in OpenSolaris 2008.05.

In addition to standard software packages, package groups have been made available so that a user who wants a complete software suite can install it with a single pkg command.

Q:  Is there a package which will install a set of web tools (a SAMP stack) on OpenSolaris 2008.05?
A:  Yes, there is a web stack package available which includes Apache, MySQL, PHP/Perl/Python.  Installation is done with a single command:

#  pkg install amp-dev
A list of packages installed with this group is listed at:

Q:  What package groups are available for OpenSolaris 2008.05?
A:  There are many developer packages available.
amp-dev:  SAMP, a web stack with Apache, MySQL, PHP/Perl/Python.
ss-dev:  Sun Studio Express 5/08
gcc-dev:  GNU compilers, make, gdb and other GNU programs.
java-dev:  Glassfish, ant, Sun Studio, Netbeans

Installation can be done with:
#  pkg install <package name or package group name>

More details can be found at "Installing Developer Packages".

Q:  The time on the instance is set to 1970.
A:  This is due to a bug in OpenSolaris 2008.05 (CR 6574993).  There is no complete workaround, but setting up and running ntp helps.  ntp can be
configured to use an open ntp server.  It is also possible to run ntpdate after boot to set the time properly.  A list of public ntp servers can be found at:  http://support.ntp.org/bin/view/Servers/NTPPoolServers

-bash-3.2# ntpdate 0.pool.ntp.org
Please note that setting the time after boot still leaves a number of system files and directories with the errant time.

Build 79:
Q:  The time is not set correctly, is there a way to fix it?

A:  After boot, run the command "rtc -z GMT" as root.  This updates the kernel with the timezone of the hardware clock.  To
change the timezone, edit /etc/TIMEZONE with the proper TZ entry.  See the timezone(4) manpage for pointers to valid TZ entries and the TIMEZONE(4)
manpage for more information on the /etc/TIMEZONE file.

Generic:
Q:  Is there additional storage on the instances?
A:  Yes, depending on the model you launch there are one to three additional storage devices.  A quick way to access the storage is to
use ZFS.  The following commands will create a ZFS filesystem on the instance.

   OpenSolaris 2008.05 32 Bit small:  zpool create storage c4d1

   SXCE Build 79 - 32 Bit small:  zpool create storage c0d1
   SXCE Build 79 - 64 Bit large:  zpool create storage c0d1 c0d2
   SXCE Build 79 - 64 Bit xlarge:  zpool create storage c0d1 c0d2 c0d3 c0d4

The above commands allocate all of the storage from the instance's disks into a large ZFS pool accessible at /storage.  See the zpool(1M) and
zfs(1M) man pages for further information.

Tuesday May 13, 2008

Meet the team

We thought you might want to know about the team working on making OpenSolaris integrate and run on Amazon EC2. This team is your first point of contact for information on EC2/OpenSolaris and for any technical assistance you may need while you develop and create your stacks on OpenSolaris for Amazon EC2. You can reach the team at ec2-solaris [at] sun [dot] com.

Rajesh Ramchandani

I am a Senior Market Development Manager in the Startups and Emerging Markets team at Sun, managing relationships with startups, developers and strategic partners. As part of the Sun Startup Essentials team, I work on ways to provide early stage startups with Sun technologies, products and services that help them get off the ground and grow their businesses with little or no investment and a shorter time to market. I manage the business relationship with Amazon Web Services and work with the team to bring the best operating system on the planet, OpenSolaris, to developers, startups and other EC2 users.

Sujeet Vasudevan

Sujeet Vasudevan is a Senior Engineering Manager in Global Market Development and Engineering. He has over 17 years of industry experience working with various ISVs, customers, SIs and startups. He works primarily with Oracle as a team leader, and also leads a team of engineers who work with many other ISVs across the spectrum. Sujeet is the lead engineering manager on the integration of OpenSolaris with EC2.

Dileep Kumar

Dileep Kumar is a Staff Engineer in the ISV Engineering group at Sun Microsystems, Inc. He has over ten years of experience in the computer industry and now works on OpenSolaris and IBM WebSphere products on Solaris. His areas of expertise include Solaris, Java and J2EE based system design and development, performance enhancement, and implementation. He holds an M.S. degree in Engineering Management from Santa Clara University. Read Dileep's professional blog at http://blogs.sun.com/dkumar

Alan Yoshida

Alan is one of the lead developers on the OpenSolaris on EC2 engineering team. Alan has been working on porting the AMI and API tools to OpenSolaris, and you will most often hear from Alan if you run into any OpenSolaris issues.

Sharlene Wong

Sharlene is the program manager for the OpenSolaris on EC2 beta program. If you have been approved for access to the OpenSolaris AMIs, you've seen emails from Sharlene. Sharlene will continue to be your point of communication during the beta program.

Divyen Patel

Divyen Patel is an engineer in the ISV Engineering group at Sun Microsystems. He graduated from San Jose State University with a major in Software Engineering. His areas of interest include Web 2.0 and related technologies. Read Divyen's weblog at http://blogs.sun.com/divyen

Prateek Parekh

Prateek Parekh is a developer in the ISV Engineering team at Sun Microsystems, and has worked on J2EE, J2ME, Java CAPS, Unix and Web 2.0 related technologies. He holds an MS degree in Software Engineering from San Jose State University. He maintains a blog at http://blogs.sun.com/prateek

GlassFish AMIs -

Announcing the availability of new AMIs based on the SXCE AMI and GlassFish. Those of you who have been provided access to the OpenSolaris AMIs have access to these AMIs as well.


1. OpenSolaris (SXCE) + GlassFish (a smaller-footprint SXCE, or "Just Enough OS")

    a. 32-bit AMI: ami-8142a7e8 aki-b57b9edc ari-b47b9edd / sun-osol/JeOS-79_32_1.0.img.manifest.xml

    b. 64-bit AMI: ami-314da858 aki-8e7a9fe7 ari-817a9fe8 / sun-osol/JeOS-79_64_1.0.img.manifest.xml

2. OpenSolaris (SXCE) + GlassFish + MySQL

    a. 32-bit AMI: ami-3742a75e aki-b57b9edc ari-b47b9edd / sun-osol/GFAS-MySQL-79_32_1.0.img.manifest.xml

 3. OpenSolaris (SXCE) + GlassFish + Liferay

    a. 32-bit AMI: ami-cb40a5a2 aki-b57b9edc ari-b47b9edd  / sun-osol/GF-LF-79_32_1.0.img.manifest.xml

    b. 64-bit AMI: ami-dd40a5b4 aki-8e7a9fe7 ari-817a9fe8 / sun-osol/GF-LF-79_64_1.0.img.manifest.xml

For more information and details on these AMIs including user names and passwords, please refer to Rudolf's detailed blog. As always, please feel free to contact us at ec2-solaris [at] sun [dot] com.

Thursday May 08, 2008

Update: Capacity limit on OpenSolaris 2008.05 AMI

We are close to reaching the capacity limit on the number of instances available on Amazon EC2 for running the OpenSolaris 2008.05 AMI. As we continue to integrate and test OpenSolaris 2008.05 with EC2, the number of instances currently allocated is limited. If you have been granted access to the OpenSolaris 2008.05 AMI, please note that EC2_URL must be set to:

EC2_URL=https://ec2-pinotage.amazonaws.com

Once the Amazon team has completed testing, the URL will change and will be communicated.

Please note that the image re-bundling tools are still under development and testing at this time as well.

Solaris Express Build 79 AMIs are available for deployment at full capacity. The AMI/API tools have been completely ported and tested and are available for use.

Thank you for your patience as Sun and Amazon continue to work together to make EC2 and OpenSolaris a platform that offers the best development experience.

 If you have any suggestions or comments or would like to help out with OpenSolaris 2008.05 re-bundling efforts, please feel free to contact us at ec2-solaris [AT] SUN [dot] COM.

Wednesday May 07, 2008

Launch of OpenSolaris on Amazon EC2

On May 5th, we announced the availability of OpenSolaris (2008.05) on Amazon EC2. The announcement was covered by GigaOm, and Amazon's official announcement is here. MySQL also announced Amazon EC2 as one of its supported platforms, which means that if you have already purchased a support contract from MySQL, you can now deploy MySQL on EC2 and get the same production-level support from MySQL. These are two huge milestones that demonstrate Sun's leadership in the cloud computing space.



So, what does this collaboration between Amazon Web Services and Sun offer? Choice, for developers and startups, in two ways:

1) OpenSolaris provides the ZFS filesystem, the first 128-bit filesystem, which offers end-to-end checksumming to maintain data integrity and rollback functionality to quickly return to the last known stable state in case of data corruption or instability in the filesystem. Secondly, OpenSolaris is the only OS which offers DTrace, an observability and tracing tool that helps developers and administrators debug and trace issues and bottlenecks all the way from the hardware level (the OS level in the case of EC2) up to their application code and the middleware in between. Most importantly, developers and EC2 users now have a choice of operating systems beyond the various variants of Linux.

2) Freedom to run MySQL in the cloud. MySQL is the most critical component of many web applications, and MySQL's commitment to supporting clouds such as EC2 opens the door for startups to leverage cloud infrastructure and further reduce the cost and capex investment of infrastructure build-out in the early stages of a company.

We are excited to be working with Amazon Web services and providing choice to developers, startups and students who wish to use EC2 cloud for their development and deployment use.

Welcome to OpenSolaris on Amazon EC2 Beta program!

Here you will find the latest information on the program and any late-breaking news. The OpenSolaris on EC2 team welcomes you, and we look forward to providing any technical assistance and information you need for the best possible OpenSolaris experience. Please feel free to leave comments here or contact us at ec2-solaris-support [AT] SUN [dot] COM.

