Thursday Jul 09, 2009

Encrypted ZFS using Amazon EBS and OpenSolaris 2009.06

Recently, I had the pleasure of exchanging e-mail with István Soós, who had contacted our OpenSolaris on EC2 team asking how he could use OpenSolaris along with Amazon's Elastic Compute Cloud (EC2) and Elastic Block Store (EBS) to create a Subversion-based source code control system. Sounds simple, right? Well, István threw us a curve ball. He wanted the revision control system to run on OpenSolaris and be stored on an encrypted, mirrored ZFS file system backed by EBS. Now, you have to admit, that is pretty cool!

This is the point in the story where István and I connected. After going over the requirements, it appeared as though the encrypted scratch space work that had been done for the Immutable Service Container project was a near fit, except that persistence was needed. So, I provided István with links to this work, which of course linked back to Darren's original article on ZFS encryption using LOFI. Just a day later, István replied that his environment was up and running! Talk about speed and agility in the Cloud Computing world!

I would definitely encourage you to check out all of the details on István's blog. I especially want to thank István for sharing this great article that I hope will encourage others to try new things and keep pushing the OpenSolaris envelope forward!

Take care!


Tuesday Jun 16, 2009

NEW: Encrypted ZFS Backups to the Cloud v0.3

Building upon the v0.4 release of the Cloud Safety Box tool, I am happy to announce the availability of v0.3 of the Encrypted ZFS Backups to the Cloud code. This new version uses the Cloud Safety Box project to handle compression, encryption and splitting of the ZFS backups before uploading the results to the Cloud. As a result, this project now officially depends upon the Cloud Safety Box project. The nice thing about this change is that it keeps the amount of redundant code between the two projects low while also cutting down on testing time.

From an end-user perspective, this change is mostly transparent. A few parameters were added or changed in the /etc/default/zfs-backup-to-s3 defaults file such as:

# ENC_PROVIDER defines the cryptographic services provider used for
# encryption operations.  Valid values are "solaris" and "openssl".
ENC_PROVIDER="solaris"

# MAX_FILE_SIZE specifies the maximum file size that can be sent
# to the Cloud storage provider without first splitting the file
# up into chunks (of MAX_FILE_SIZE or less).  This value is specified
# in Kbytes.  If this variable is 0 or not defined, then this service
# will _not_ attempt to split the file into chunks.
MAX_FILE_SIZE=40000000

# S3C_CRYPTO_CMD_NAME defines the fully qualified path to the
# s3-crypto.ksh program which is used to perform compression,
# encryption, and file splitting operations.
S3C_CRYPTO_CMD_NAME=""

# S3C_CLI_CMD_NAME defines the fully qualified path to the program
# used to perform actual upload operations to the Cloud storage
# provider.  This program is called (indirectly) by the 
# s3-crypto.ksh program defined by the S3C_CRYPTO_CMD_NAME variable
# above.
S3C_CLI_CMD_NAME=""

It should be noted that compression is always enabled. If this turns out to be a problem, please let me know and we can add a parameter to control the behavior. I would like to try and keep the number of knobs under control, so I figured we would go for simplicity with this release and add additional functionality as necessary.

Encryption is always enabled. In this release, you have the choice of the OpenSSL or Solaris cryptographic providers. Note that, just as with the Cloud Safety Box project, key labels are only supported by the Solaris cryptographic provider. The name of the algorithm to be used must match an algorithm name supported by whichever provider you have selected.

File splitting is enabled by default. This behavior can be changed by setting the MAX_FILE_SIZE parameter to 0 (off) or any positive integer value (representing a size in Kbytes).
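
For reference, a fully populated defaults file might look something like the sketch below. The program paths shown are placeholders only; point them at wherever you installed the Cloud Safety Box tools and whichever CLI you use to upload to your Cloud storage provider.

# Example /etc/default/zfs-backup-to-s3 settings (paths are placeholders)
ENC_PROVIDER="solaris"          # or "openssl"
MAX_FILE_SIZE=2000000           # split into ~2 Gbyte chunks; 0 disables splitting
S3C_CRYPTO_CMD_NAME="/opt/csb/bin/s3-crypto.ksh"
S3C_CLI_CMD_NAME="/opt/csb/bin/s3-upload"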

All of the other changes are basic implementation details and should not impact the installation, configuration or use of the tool. If you have not had a chance, I would encourage you to check out the ZFS Automatic Snapshot service as well as the latest version of this project so that you can begin storing compressed, encrypted ZFS backups into Amazon's Simple Storage Service (S3) or Sun's SunCloud Storage Service (when available).

As always, feedback and ideas are greatly appreciated! Come join the discussion at Project Kenai!

Take care!


Friday Jun 12, 2009

Encrypted Scratch Space in OpenSolaris 2009.06

Last week, I announced the availability of a set of scripts that could be used to enable encrypted swap in OpenSolaris 2009.06. Building upon this concept, today, I am happy to announce a new set of scripts that enables the creation of an encrypted file system (intended to be used as scratch space).

The method for creating these encrypted file systems is similar to the approach discussed by Darren in his posting on the topic of Encrypting ZFS Pools using LOFI. I had been working on a similar model for the Immutable Service Container project where I had wanted to be able to give each OpenSolaris zone that was created its own place to store sensitive information (such as key material) that would be effectively lost when the system was rebooted (without requiring a time-consuming disk scrubbing process).

The way these scripts work is quite simple. There is an SMF service, called isc-encrypted-scratch, that (if enabled) will automatically create encrypted scratch space for the global zone as well as any non-global zones on the system (by default). The creation of encrypted scratch space is configurable, allowing you to specify which zones (including the global zone) get one. You can also specify which ZFS file system is used as the parent of the scratch space hierarchy. Using SMF properties and standard SMF service configuration methods, you can likewise specify the size of the encrypted scratch space.

Once created, you will have access to a ZFS file system (based upon a ZFS pool which itself is based upon an encrypted LOFI which itself is based upon a ZFS zvol - crazy eh?) The file systems created for the encrypted scratch space are destroyed and re-created upon boot (or service restart). Just as with the encrypted swap scripts, the encrypted LOFIs use ephemeral keys in conjunction with the AES-256-CBC cipher.
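
To make that layering a bit more concrete, here is a rough, hand-run sketch of what the service method does for a single zone. The sizes, paths and key handling are simplified for illustration; the actual service generates its ephemeral key internally and manages the per-zone naming for you.

# Create the zvol that backs the scratch space (size is illustrative)
$ pfexec zfs create -p -V 100m rpool/export/scratch/global/scratch_file

# Attach an encrypted LOFI device over the zvol using a throwaway AES-256 key
$ pfexec dd if=/dev/urandom of=/tmp/scratch.key bs=32 count=1
$ pfexec lofiadm -a /dev/zvol/dsk/rpool/export/scratch/global/scratch_file \
     -c aes-256-cbc -k /tmp/scratch.key
/dev/lofi/1
$ pfexec rm /tmp/scratch.key

# Build a ZFS pool (and its file system) on top of the encrypted LOFI device
$ pfexec zpool create scratch-global /dev/lofi/1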

So, without further ado, let's get to the particulars. To enable encrypted scratch space in OpenSolaris 2009.06, you need only follow the steps below.

Note that the following instructions assume that privileged operations will be executed by someone with administrative access (directly or via Solaris role-based access control). For the examples below, no changes were made to the default RBAC configuration. The commands as written were executed as the user created during the installation process.
  • Add the Encrypted Scratch Space SMF service. First, you will need to download the archive containing the encrypted scratch space SMF service manifest and method files. Note that these files are user contributed and as such are not officially a part of the OpenSolaris release nor are they officially supported by Sun. If you are ok with these terms, you should now download the archive and install the files using the following commands:

    $ wget -qnd http://mediacast.sun.com/users/gbrunette/media/smf-encrypted-scratch-v0.1.tar.bz2
    
    $ bzip2 -d -c ./smf-encrypted-scratch-v0.1.tar.bz2 | tar xf -
    
    $ cd ./smf-encrypted-scratch
    
    $ pfexec ./install.sh
    
    $ svccfg import /var/svc/manifest/site/isc-enc-scratch.xml
    
  • Configure the Encrypted Scratch Space Service. Unlike the Encrypted Swap SMF Service, this service is not enabled automatically. This is to allow you the opportunity to adjust its configuration should you want to change any of the following properties:
    • config/scratch_root. This property defines the root ZFS file system to be used for the scratch space hierarchy. By default, it is set to rpool/export. Based upon this value, a collection of scratch files will be created under this location (each in its own directory tied to the name of the zone).
    • config/scratch_size. This property defines the size of the scratch space. This value is used during the initial creation of a ZFS volume (zvol) and accepts the same values as would be accepted by the zfs create -V command. The default size is 100 Mbytes. Note that today, each individual encrypted scratch space on a single system must be the same size.
    • config/zone_list. This property defines the zones for which encrypted scratch space will be created. By default, this is all zones including the global zone. Setting this value to a zone or list of zones will cause encrypted scratch spaces to be only created for those specified.

    For example, to configure this service to create 1 Gbyte encrypted scratch spaces, use the following commands:

    $ svccfg -s isc-encrypted-scratch setprop config/scratch_size = 1g
    $ svcadm refresh isc-encrypted-scratch
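
    Similarly, if you only want scratch space created for a subset of zones, you can set the config/zone_list property before enabling the service. The multi-valued astring form shown below is an assumption on my part; check the service manifest for the exact property type it expects:

    $ svccfg -s isc-encrypted-scratch setprop config/zone_list = astring: '("global" "test")'
    $ svcadm refresh isc-encrypted-scratch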
    

  • Enable the Encrypted Scratch Space Service. Once you have finished configuring the service, you can enable it using the standard SMF method:

    $ svcadm enable isc-encrypted-scratch
    

  • Verify the Encrypted Scratch Space Service. To verify that the service is operating correctly, you can use the following commands to verify that everything has been properly created. First, let's make sure the service is running:

    $ svcs isc-encrypted-scratch
    STATE          STIME    FMRI
    online         12:40:02 svc:/system/isc-encrypted-scratch:default
    

    Next, let's verify that all of the proper ZFS mount points and files have been created. Note that the scratch root in this case is the default (rpool/export) and under this location a new scratch file system has been created under which there is a file system for each zone on the system (global and test). For each zone, a 1 Gbyte scratch file has been created.

    $ zfs list -r rpool/export/scratch
    NAME                                       USED  AVAIL  REFER  MOUNTPOINT
    rpool/export/scratch                      2.00G  5.21G    19K  /export/scratch
    rpool/export/scratch/global               1.00G  5.21G    19K  /export/scratch/global
    rpool/export/scratch/global/scratch_file     1G  6.21G  1.15M  -
    rpool/export/scratch/test                 1.00G  5.21G    19K  /export/scratch/test
    rpool/export/scratch/test/scratch_file       1G  6.21G  1.15M  -
    

    Next, let's verify that the encrypted LOFIs have been created. The mapping of the device files back to the actual scratch file zvols is left as an exercise for the reader.

    $ lofiadm
    Block Device             File                           Options
    /dev/lofi/1              /devices/pseudo/zfs@0:1c,raw   Encrypted
    /dev/lofi/2              /devices/pseudo/zfs@0:2c,raw   Encrypted
    

    Next, let's verify that new zpools and ZFS file systems have been created from the encrypted LOFIs:

    $ zpool list
    NAME             SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
    rpool           11.9G  4.06G  7.88G    34%  ONLINE  -
    scratch-global  1016M    82K  1016M     0%  ONLINE  -
    scratch-test    1016M    82K  1016M     0%  ONLINE  -
    
    $ zpool status scratch-global scratch-test
      pool: scratch-global
     state: ONLINE
     scrub: none requested
    config:
    
            NAME           STATE     READ WRITE CKSUM
            scratch-global  ONLINE       0     0     0
              /dev/lofi/1  ONLINE       0     0     0
    
    errors: No known data errors
    
      pool: scratch-test
     state: ONLINE
     scrub: none requested
    config:
    
            NAME           STATE     READ WRITE CKSUM
            scratch-test   ONLINE       0     0     0
              /dev/lofi/2  ONLINE       0     0     0
    
    errors: No known data errors
    
    $ zfs list /scratch-*
    NAME             USED  AVAIL  REFER  MOUNTPOINT
    scratch-global    70K   984M    19K  /scratch-global
    scratch-test      70K   984M    19K  /scratch-test
    

  • (Optional) Add Encrypted Scratch Space to a Non-Global Zone. At this point, you have everything that you need to get started. In fact, for the global zone, there are no further steps, but you can now assign the scratch space to a non-global zone (if desired) using the standard zonecfg mechanisms. For example, you could do the following:

    $ pfexec zonecfg -z test
    zonecfg:test> add dataset
    zonecfg:test:dataset> set name=scratch-test
    zonecfg:test:dataset> end
    zonecfg:test> verify
    zonecfg:test> 
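
    After committing and exiting zonecfg, reboot the non-global zone so that it picks up the newly assigned dataset (a quick sketch, assuming the zone is named test as above):

    $ pfexec zoneadm -z test reboot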
    

  • (Optional) Verify Encrypted Scratch Space in a Non-Global Zone. Once booted, the new encrypted scratch space data set will be made available to the non-global zone:

    $ pfexec zlogin test
    [Connected to zone 'test' pts/2]
    Last login: Fri Jun 12 09:57:43 on pts/2
    
    root@test:~# zpool list scratch-test
    NAME           SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
    scratch-test  1016M  74.5K  1016M     0%  ONLINE  -
    
    root@test:~# df -k /scratch-test
    Filesystem            kbytes    used   avail capacity  Mounted on
    scratch-test         1007616      19 1007546     1%    /scratch-test
    root@test:~# 
    

    Upon a global zone reboot, each of the non-global zones will be shut down before the encrypted scratch space is destroyed. Note that upon a global zone reboot or service restart, the encrypted scratch space is destroyed and re-created, so its contents will not persist across global zone reboots. The encrypted scratch space will, however, persist across non-global zone reboots.

    There you have it! Enabling encrypted scratch space in OpenSolaris 2009.06 (for the global and non-global zones) is as easy as following these few simple steps. It is worth stating that this solution is just a temporary workaround. Once ZFS encryption is available, it should be used instead of this approach. In the meantime, however, if you are interested in enabling encrypted scratch space on your OpenSolaris 2009.06 systems, give this model a try and please be sure to send along your feedback!

    Take care!

    P.S. Some of you may be wondering why the SMF service and associated files are labeled with an ISC prefix. The answer is simple: they were developed and are being used as part of the Immutable Service Container project! Look for more information and materials from this project in the near future!


Wednesday Jun 10, 2009

NEW: Cloud Safety Box v0.4

Today, I am happy to announce the v0.4 release of the Cloud Safety Box project. About a month ago, I announced the initial public release and since that time it was even highlighted and demonstrated at Sun's CommunityOne event! Not too bad for a new project!

The version released today is a substantial redesign intended to improve the overall structure and efficiency of the tools while at the same time adding a few key features. The biggest visible changes include support for compression, splitting of large files into smaller chunks, and support for Solaris key labels. Let's dive into each of these briefly:

  • Compression. Compression is enabled automatically for the Cloud Safety Box (csb) tool and is configurable when using the s3-crypto.ksh utility. When compression is enabled, the input stream or file is compressed first (before encryption and splitting). By default, compression is performed using the bzip2 utility (with the command-line option -9). To enable compression with the s3-crypto.ksh utility, use the -C option as in the following example:
    $ s3-crypto.ksh -C -m put -b mybucket -l myfile -r myfile
    

    Of course, compression can be used along with encryption and file splitting. Decompression is handled on get operations and is the last step to be performed (after file re-assembly and decryption). Just as with compression, the bzip2 utility is used (with the command-line options -d -c). To enable decompression with the s3-crypto.ksh utility, use the -C option as in the following example:

    $ s3-crypto.ksh -C -m get -b mybucket -l myfile -r myfile
    

    The actual compression and decompression methods can be changed using the S3C_COMPRESS_CMD and S3C_DECOMPRESS_CMD environment variables respectively as in the following example:

    $ env S3C_COMPRESS_CMD="gzip -9" S3C_DECOMPRESS_CMD="gzip -d -c" \
       s3-crypto.ksh -C -m put -b mybucket -l myfile -r myfile
    

  • Splitting. It is well known that there are file size limits associated with Cloud Storage services. There are times, however, when you may have files that you would like to store that exceed those limits. This is where splitting comes into the picture. Splitting will take an input file and, based upon a size threshold, divide it into a number of smaller files. Splitting is done by default with the csb tool and can be optionally enabled in the s3-crypto.ksh tool. Splitting is accomplished using the GNU split(1) program and is enabled using the -S option. The maximum file size limit is, by default, set at 4 GB, but it can be adjusted using the -L command-line option (specified in Kbytes). Splitting at 2 GB is enabled in the following example:
    $ s3-crypto.ksh -S -L 2000000 -m put -b mybucket -l myfile -r myfile
    

    When splitting is enabled and triggered (when a file's size exceeds the limit), the files stored in the Cloud Storage service use the name as specified by the remote_file (-r) argument. In the above example, the split files will all begin with the name myfile. Each will have a suffix of a ~ followed by an identification string. For example, files stored in the Cloud may look like:

    myfile~aaaaa
    myfile~aaaab
    myfile~aaaac
    myfile~aaaad
    

    The csb and s3-crypto.ksh tools will use this naming convention to automatically reassemble files for get operations. Just as with splitting, reassembly is automatically performed for the csb tool and is enabled in the s3-crypto.ksh tool using the command-line option -S. When specifying a file that has been split, you do not need to include the suffix. The tools will discover that the file has been split and automatically reassemble it. Here is an example for reassembly:

    $ s3-crypto.ksh -S -m get -b mybucket -l myfile -r myfile
    

    The only downsides to splitting are the time it takes to split the files and the additional space that is needed to accommodate both the original file and the files created during the splitting process. This is unavoidable, however, as complete files must be available locally before they can be uploaded to the Cloud Storage provider.
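
    For the curious, the chunk naming and reassembly convention is essentially what you would get from GNU split(1) and cat(1) directly. The options below are only an illustration of the idea; the exact invocation used inside s3-crypto.ksh may differ:

    $ split -b 2000000k -a 5 myfile myfile~    # yields myfile~aaaaa, myfile~aaaab, ...
    $ cat myfile~* > myfile.restored           # lexical glob order reassembles the file
    $ digest -a md5 myfile myfile.restored     # confirm the two hashes match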

  • Key Labels. The last "big" feature added in this new version is support for symmetric keys stored in PKCS#11 tokens (when the Solaris cryptographic provider is used). By default, the Solaris cryptographic provider is not selected (for reasons of portability), but it can easily be enabled in the s3-crypto.ksh tool using the -p solaris command-line option. This setting enables the use of the Solaris encrypt(1) and decrypt(1) commands in place of their OpenSSL counterparts. Using the Solaris cryptographic provider allows you to take advantage of the Solaris Key Management Framework. Today, only the Sun Software PKCS#11 softtoken is supported, but I expect to remove this restriction in a future release.

    Using the pktool(1) command, you can create a key with a specific key label:

    $ pktool genkey keystore=pkcs11 label=my-new-key keytype=aes keylen=256
    Enter PIN for Sun Software PKCS#11 softtoken  : 
    

    The creation of this new key (with label my-new-key) can be verified:

    $ pktool list objtype=key
    Enter PIN for Sun Software PKCS#11 softtoken  : 
    Found 1 symmetric keys.
    Key #1 - AES:  my-new-key (256 bits)
    

    This key can be used with the s3-crypto.ksh tool when the Solaris cryptographic provider is selected and the key label is provided using the -K command-line option as in the following example:

    $ s3-crypto.ksh -c -p solaris -m put -b mybucket -K my-new-key -l myfile -r myfile
    Enter PIN for Sun Software PKCS#11 softtoken  : 
    

    The same approach is used to decrypt files when a get operation is specified.
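
    Under the hood, the Solaris provider path comes down to the encrypt(1) and decrypt(1) commands operating with a key label. Roughly speaking (the exact invocation used inside s3-crypto.ksh may differ):

    $ encrypt -a aes -K my-new-key -i myfile -o myfile.aes
    Enter PIN for Sun Software PKCS#11 softtoken  : 

    $ decrypt -a aes -K my-new-key -i myfile.aes -o myfile.clear
    Enter PIN for Sun Software PKCS#11 softtoken  : 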

As always, I am looking for feedback! Let me know if these tools are helpful and how they can be improved! You can find more information on this project at its home page at Project Kenai.

Take care!


Monday Jun 08, 2009

Encrypted Swap in OpenSolaris 2009.06

Back in December 2008, LOFI encryption support was added to Solaris Nevada (build 105). With the release of OpenSolaris 2009.06, this functionality is now available as part of a released product. What does this have to do with encrypted swap you may ask? To get your answer, you need only review the lofi(7d) crypto support architectural review case (PSARC/2007/001). Toward the bottom is a section titled "Encrypted Swap". This information gives us everything that we need to enable encrypted swap on OpenSolaris -- almost.

The problem is that the encrypted swap portion of this ARC case was never completed as it is expected that the ZFS encryption project will provide this functionality when it integrates. Unfortunately, ZFS encryption is not here today, so until it is - we can enable a workaround using LOFI encryption. There are some "issues" to consider when using LOFI encryption that Darren Moffat covers well in his post on this subject.
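
Before walking through the SMF-based setup, it may help to see roughly what the workaround amounts to when done by hand. In the sketch below, the key handling is simplified (the actual service generates and discards an ephemeral key on its own), and it assumes the zvol is not currently in use as a swap device:

# Attach an encrypted LOFI device over the swap zvol using a throwaway AES-256 key
$ pfexec dd if=/dev/urandom of=/tmp/swap.key bs=32 count=1
$ pfexec lofiadm -a /dev/zvol/dsk/rpool/swap -c aes-256-cbc -k /tmp/swap.key
/dev/lofi/1
$ pfexec rm /tmp/swap.key

# Use the encrypted block device as the swap device
$ pfexec swap -a /dev/lofi/1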

So, without further ado, let's get to the particulars. To enable encrypted swap in OpenSolaris 2009.06, you need only follow the steps below.

Note that the following instructions assume that privileged operations will be executed by someone with administrative access (directly or via Solaris role-based access control). For the examples below, no changes were made to the default RBAC configuration. The commands as written were executed as the user created during the installation process.
  • Prevent the system from automatically adding swap devices or files. This is actually a little trickier than it sounds, since the /sbin/swapadd program, called during the boot process, will attempt to use anything defined as swap that is not commented out. I would prefer not to comment out the entries, as it would then be harder to tell the difference between those we want to use for encrypted swap and those that were commented out for some other reason. To work around this issue, simply edit the /etc/vfstab file and define the swap device or file as something other than "swap". For the scripts discussed below, we will use the keyword "enc-swap". Here is an example from /etc/vfstab:

    $ grep enc-swap /etc/vfstab
    /dev/zvol/dsk/rpool/swap      -      -      enc-swap      -      no      -
    
    $ swap -l
    No swap devices configured
    

  • Remove the existing swap devices or files. It is likely that your system will have already added the swap devices or files to the system. To determine if this is the case, simply use the following command:

    $ swap -l
    swapfile                   dev    swaplo   blocks     free
    /dev/zvol/dsk/rpool/swap 182,2         8  1226744  1226744
    

    If there are devices or files already configured, remove them using the following command:

    $ pfexec swap -d /dev/zvol/dsk/rpool/swap
    
    $ swap -l
    No swap devices configured
    

    If swap is in use, you may need to reboot your system in order to remove the device at this point. Note that the previous step (where the file system type was changed to enc-swap) will ensure that the device or file is not used upon boot.

  • Add the encrypted swap SMF service. Here is where the magic lives. You will need to download the archive containing the encrypted swap SMF service manifest and method files. Note that these files are user contributed and as such are not officially a part of the OpenSolaris release nor are they officially supported by Sun. If you are ok with these terms, you should now download the archive and install the files using the following commands:

    $ wget -qnd http://mediacast.sun.com/users/gbrunette/media/smf-encrypted-swap-v0.1.tar.bz2
    
    $ bzip2 -d -c ./smf-encrypted-swap-v0.1.tar.bz2 | tar xf -
    
    $ cd ./smf-encrypted-swap
    
    $ pfexec ./install.sh
    
    $ svccfg import /var/svc/manifest/site/isc-enc-swap.xml
    

    The install.sh script is used to copy this service's SMF manifest and method scripts into the proper locations as well as set correct ownership and permissions of these files.

  • Verify the service is running and encrypted swap is configured. The last step is to verify that everything is working as expected. Use the following commands to verify the service was properly installed and enabled:

    $ svcs isc-encrypted-swap
    STATE          STIME    FMRI
    online         14:30:10 svc:/system/isc-encrypted-swap:default
    

    Use the following commands to verify that encrypted swap is in use:

    $ lofiadm
    Block Device             File                           Options
    /dev/lofi/1              /devices/pseudo/zfs@0:2c       Encrypted
    
    $ swap -l
    swapfile             dev    swaplo   blocks     free
    /dev/lofi/1         144,1         8  1226728  1226728
    

    The last two commands show that an encrypted block device was created at /dev/lofi/1 and that the device is currently in use as a swap device. It should be noted that no password, passphrase or other credential was given when the encryption was configured. This is because this service is configured to use an ephemeral key. The key is not stored on the system and is lost when the system is restarted. Upon each reboot, a new encrypted block device with a new ephemeral key will be used to configure encrypted swap.

Note that the examples above have shown the service with a single swap device, but the SMF service has been written to support multiple swap devices or files. For example, a secondary swap file could be created using the following steps:

$ pfexec zfs create -V 1G rpool/export/swapfile

$ pfexec vi /etc/vfstab
[add the new entry for rpool/export/swapfile as verified in the next step]

$ grep enc-swap /etc/vfstab
/dev/zvol/dsk/rpool/swap      -      -      enc-swap      -      no      -
/dev/zvol/dsk/rpool/export/swapfile      -      -      enc-swap      -      no      -

$ svcadm restart isc-encrypted-swap

$ lofiadm
Block Device             File                           Options
/dev/lofi/1              /devices/pseudo/zfs@0:2c       Encrypted
/dev/lofi/2              /devices/pseudo/zfs@0:3c       Encrypted

$ swap -l
swapfile             dev    swaplo   blocks     free
/dev/lofi/1         144,1         8  1226728  1226728
/dev/lofi/2         144,2         8  2097128  2097128

There you have it! Enabling encrypted swap in OpenSolaris 2009.06 is as easy as following these few simple steps. It is worth reiterating that this solution is just a temporary workaround. Once ZFS encryption is available, it should be used instead of this approach. In the meantime, however, if you are interested in enabling encrypted swap on your OpenSolaris 2009.06 systems, give this model a try and please be sure to send along your feedback!

Take care!

P.S. Some of you may be wondering why the SMF service and associated files are labeled with an ISC prefix. The answer is simple: they were developed and are being used as part of the Immutable Service Container project! Look for more information and materials from this project in the near future!


Friday Jun 05, 2009

Cloud Security from Sun's CommunityOne

As we come to the close of yet another week, I am reminded that this week was different. Unlike most weeks, I was actually off from work, recovering from surgery, and yet at the same time, several of my projects were living lives of their own at CommunityOne West and Java One. Since I could not be there in person to talk about this work, I figured the next best thing was to take a few moments to highlight them here and offer an open invitation to publicly discuss them on their project pages.

There were three Cloud Computing security projects that were discussed and demonstrated this week:

  • Security Hardened Virtual Machine Images.
    Summary: Sun and the Center for Internet Security have been working together for over six years to promote enterprise-class security best practices for the Solaris OS. Building upon their latest success, the Solaris 10 Security Benchmark, they have adapted its security guidance to the OpenSolaris platform and today are announcing the availability of a virtual machine image pre-configured with these settings.

    Key Points: Sun is the first commercial vendor to publish and make freely available a hardened virtual machine image - secured using industry accepted best practices. Images will be made available for both Amazon EC2 and Sun Cloud.

    More Information: Announcement.

  • Cloud Safety Box.
    Summary: Security is a key concern for customers everywhere, and the Cloud is no exception. Customers who are concerned about the confidentiality of their information should encrypt their data before sending it to the Cloud. This utility simplifies the process of encrypting files and storing them in the Cloud (as well as decrypting them after they have been retrieved).

    Key Points: The tools leverage strong, industry standard encryption (AES 256-bit) but are configurable to accommodate other algorithms and key sizes. The tools can leverage the cryptographic acceleration capabilities of systems configured with Sun's UltraSPARC T2 (Niagara 2) processor enabling ~7x speed improvement over software encryption. The tools support multiple client platforms and multiple cloud providers today including Sun Cloud and Amazon S3.

    More Information: Project Page

  • Encrypted ZFS Backups.
    Summary: Customers often encrypt their backups before sending them off-site for storage, so why should the Cloud be any different? This utility integrates with the OpenSolaris ZFS automatic snapshot service to automatically encrypt the content before storing it into the Cloud. This way, backup data is always stored in an encrypted form in the Cloud and the decryption keys never leave your organization. Recovery is as easy as downloading and decrypting the snapshots (using the Cloud Safety Box tool, for example) and reverting to those snapshots using standard ZFS methods.

    Key Points: The tool leverages strong, industry standard encryption (AES 256-bit) but is configurable to accommodate other algorithms and key sizes. The tool can leverage the cryptographic acceleration capabilities of systems configured with Sun's UltraSPARC T2 (Niagara 2) processor enabling ~7x speed improvement over software encryption. The tool supports multiple cloud providers today including Sun Cloud and Amazon S3.

    More Information: Project Page

Each of these projects was also highlighted during the Cloud Computing keynote delivered by Lew Tucker (VP/CTO, Cloud Computing), as shown in the replay starting at about the 2:18 mark of the video.

In addition, the Cloud Safety Box and ZFS Encrypted Backups projects were demonstrated at the Sun Cloud demonstration pods and were featured prominently on both the Sun Cloud Computing landing page and on Project Kenai.

If you have not already, please give these projects a look and send me feedback! Cloud Computing security is in its infancy in many ways, and these projects are just a start down a long and winding road. I remain as convinced as ever that Cloud Computing will have a role to play in raising the information security bar for everyone, but we still have work to do! As a teaser, I would say that this is just the beginning and we have quite a number of other tricks still up our sleeves! So stay tuned and send along your ideas and feedback!


Friday May 01, 2009

Cloud Safety Box

Yesterday, I wrote about the ZFS Encrypted Backup to S3 project that I started over at Project Kenai. This project integrates with the ZFS Automatic Snapshot service to provide a way for automatically storing encrypted ZFS snapshots into the Cloud.

So, what if you wanted to just store and retrieve individual files? Well, there is a tool to help fill this need as well! The Crypto Front End to S3 CLIs project offers a couple of tools that allow you to encrypt and upload files to the Cloud (and of course download and decrypt files as well). This project provides a very simple-to-use interface in the form of the Cloud Safety Box, a tool that leverages a number of pre-configured default settings to trade off flexibility for ease of use. For those wanting more control over the settings (including encryption provider, encryption algorithm, key type and other settings), simply use the s3-crypto.sh utility. A diagram is available showing how these tools work together.
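
To give a flavor of the lower-level utility, an upload and a download might look something like the following. The option syntax shown here follows the later s3-crypto.ksh releases described in the v0.4 announcement above; the options accepted by this early version may differ slightly:

$ s3-crypto.sh -m put -b mybucket -l myfile -r myfile
$ s3-crypto.sh -m get -b mybucket -l myfile.copy -r myfile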

Since these tools can be configured to use OpenSSL as their cryptography provider (and there are no further dependencies on OpenSolaris), you can actually use them on other operating systems (e.g., Mac OS X was successfully used during one of the tests).

It should be noted that the s3-crypto.sh utility can be used to download and decrypt a ZFS snapshot uploaded to the Cloud using the ZFS Encrypted Backup to S3 utility, so that with these two tools you have a way of storing and retrieving regular files as well as ZFS snapshots.

You can find all of the details, documentation and download instructions (as well as a Mercurial gate) at the Crypto Front End to S3 CLIs project page. So, please give it a try and let us know what you think!


Thursday Apr 30, 2009

Saving Encrypted ZFS Snapshots to the Cloud

Are you an OpenSolaris user? Do you use ZFS? Have you tried the ZFS Automatic Snapshot service? If so, you might be interested in a new tool that I just published over at Project Kenai that enables you to encrypt and store ZFS snapshots to either the Sun Cloud Storage Service (Employees Only at the moment) or Amazon's Simple Storage Service (S3).

You can find all of the details, documentation and download instructions (as well as a Mercurial gate) at the ZFS Encrypted Backup to S3 project page. So, please give it a try and let us know what you think!

