EBS FAQ and how-to's

Previously we posted a blog entry titled “OpenSolaris supports EBS - provides capability to create ZFS” that explains how to use Amazon's Elastic Block Storage (EBS) with OpenSolaris EC2 instances, combining EBS with OpenSolaris ZFS technology. While we tried to cover the necessary details, a few questions have come up several times. In this entry I will try to answer them; feel free to ask more questions so we can make this as clear as possible:

  • When an EBS device is attached to an OpenSolaris instance, how do I identify the drives from within the instance?
  • Can I use automated scripts to mount these EBS devices during instance startup?
  • Why is a detach/attach sometimes unsuccessful?

I will answer these questions in the following sections.

When a disk is attached to an OpenSolaris instance, it can be viewed in a number of ways; the simplest is the format(1M) command. The following is the output of the format command on a default OpenSolaris EC2 instance (without any EBS device attached):

root@domU-12-31-39-00-50-A7:~# format
Searching for disks...done

       0. c7d0 <DEFAULT cyl 1274 alt 0 hd 255 sec 63>
       1. c7d1 <DEFAULT cyl 19464 alt 0 hd 255 sec 63>
Specify disk (enter its number):

This tells us there are two default disks, where the controller is 7 and the disks are 0 (c7d0) and 1 (c7d1). It is important to note that this is an OpenSolaris 2009.06 AMI, and any AMI based on it should have the same controller number. For other OpenSolaris versions the controller number may differ; refer to the Getting Started Guide or the format command for this information. Any further disk attached through the EBS commands (ec2-attach-volume) will have the same controller ID and a new disk ID that depends on the argument given to the command. Passing -d (a unique number greater than 1) will result in an EBS device appearing under format within the EC2 instance as:

c7d<decimal value of the -d argument treated as a hexadecimal number>


$ ec2-attach-volume vol-63d6250a -d 3 -i i-cf65b5a7

will result in:

c7d3 <DEFAULT cyl 2048 alt 0 hd 128 sec 32>


Similarly, this command:

$ ec2-attach-volume vol-63d6250a -d 10 -i i-cf65b5a7

will result in:

c7d16 <DEFAULT cyl 2048 alt 0 hd 128 sec 32>
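This mapping can be sketched as a small bash function. It assumes the c7 controller of the 2009.06 AMI and treats the -d argument as hexadecimal; the function name is mine, not part of the EC2 tools:

```shell
# Predict the device name a -d argument will produce inside the instance.
# Assumption: the c7 controller of the OpenSolaris 2009.06 AMI, with the
# -d value interpreted as a hex number (bash/ksh base-N arithmetic).
ebs_device_name() {
    printf 'c7d%d\n' "$((16#$1))"
}

ebs_device_name 3    # c7d3
ebs_device_name 10   # c7d16  (hex 10 = decimal 16)
```

This also makes it easy to check where a high -d value such as 23 will land (c7d35) before attaching.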


The device number must also match on the other end: if you try to detach a volume using a device number it is not actually attached as, you will get the following error:

$ ec2-detach-volume vol-63d6250a -d 2 -i i-cf65b5a7

Client.InvalidAttachment.NotFound: The volume 'vol-63d6250a' is not attached to instance 'i-cf65b5a7' as device '2'.

Likewise, the number passed to ec2-attach-volume must be a unique, unused number.

Also, before detaching a volume holding a mounted ZFS or UFS file system, a few steps are needed for a clean detach.

For EBS volumes that are part of a ZFS pool:

1. Shut down all applications running on top of the ZFS pool.
2. Export the ZFS pool:
   $ zpool export pool_name
3. Detach the EBS volumes from the EC2 instance.
4. Clean up the device links:
   $ devfsadm -C -v

For a regular UFS file system created and mounted with the newfs(1M) and mount(1M) commands:

1. Unmount the mounted volume:
   $ umount /ebs-vol
2. Detach the EBS volume from the EC2 instance.
3. Clean up the device links:
   $ devfsadm -C -v
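The ZFS clean-detach sequence above can be sketched as a small script. The pool, volume, and instance names are placeholders, and with DRY_RUN=1 (the default here) each command is only printed, not executed:

```shell
#!/bin/bash
# Sketch of the clean-detach sequence for an EBS-backed ZFS pool.
# POOL, VOLUME, and INSTANCE are placeholder values.
POOL=mypool
VOLUME=vol-63d6250a
INSTANCE=i-cf65b5a7
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# Stop applications using the pool first, then:
run zpool export "$POOL"                        # close the devices cleanly
run ec2-detach-volume "$VOLUME" -i "$INSTANCE"
run devfsadm -C -v                              # remove stale device links
```

Running it once with DRY_RUN=1 lets you review the exact command sequence before repeating it with DRY_RUN=0 against a live instance.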


I was happy to begin using EBS with osol-ec2, but I was bitten by what is either a bug or a documentation error that your other readers should be aware of.

I created a zvol on ebs and used it to build zones which I bridged to my ec2 instance via ipnat. So far so good, so I moved on to automating the ec2 instance launch. My launch script waits for the instance to be indicated as 'running', then attaches my ebs volume and binds my elastic IP. But apparently my script is too fast.

It turns out that svc:/ec2/mount (/opt/ec2/sbin/ec2mount.sh) constructs /mnt as a zpool not just on c7d1p0, but striped across all c7d[1234]p0 that exist at the time it executes. So my EBS zpool that I carefully built with my zones is now a member of /mnt and will be destroyed when I terminate the instance.

Lesson A: at least until the AMI is updated, the practical range for EBS attachments is 5-23, not 2-23. DO NOT attach EBS volumes to the OpenSolaris AMI with -d/--device 2-4. For me it was a timing issue, but even if I had waited longer, a reboot of my instance would have rerun svc:/ec2/mount and wiped my zpool.

Lesson B: paranoia pays, and so do backups. :)

Request 1: would it be possible to put EBS volumes onto c8, or even to specify cXdY in the ec2-attach-volume command (if for some reason I need more than 24 luns)? Failing that, please either document the range that ec2/mount will eat if available, or change its behavior.

Request 2: someone (Sean, maybe?) had a guide describing how to customize an AMI to make use of user data. Can we have that in the stock AMIs too, please? If I could send a zip file in user data and have it extract and run an autorun.sh file, I could complete my automated launch (copying /etc/zones from my zpool and setting up the etherstub bridge), and never *need* a custom AMI (which I haven't figured out how to build yet).

Posted by David Champion on October 22, 2009 at 04:36 AM PDT #

