Wednesday Jun 10, 2009

NEW: Cloud Safety Box v0.4

Today, I am happy to announce the v0.4 release of the Cloud Safety Box project. About a month ago, I announced the initial public release, and since then the project has even been highlighted and demonstrated at Sun's CommunityOne event! Not too bad for a new project!

The version released today involved a substantial redesign to improve the overall structure and efficiency of the tools while adding a few key features. The biggest visible changes include support for compression, splitting of large files into smaller chunks, and support for Solaris key labels. Let's dive into each of these briefly:

  • Compression. Compression is enabled automatically for the Cloud Safety Box (csb) tool and is configurable when using the s3-crypto.ksh utility. When compression is enabled, the input stream or file is compressed first (before encryption and splitting). By default, compression is performed using the bzip2 utility (with the command-line option -9). To enable compression with the s3-crypto.ksh utility, use the -C option as in the following example:
    $ s3-crypto.ksh -C -m put -b mybucket -l myfile -r myfile
    

    Of course, compression can be used along with encryption and file splitting. Decompression is handled on get operations and is the last step to be performed (after file re-assembly and decryption). Just as with compression, the bzip2 utility is used (with the command-line options -d -c). To enable decompression with the s3-crypto.ksh utility, use the -C option as in the following example:

    $ s3-crypto.ksh -C -m get -b mybucket -l myfile -r myfile
    

    The actual compression and decompression methods can be changed using the S3C_COMPRESS_CMD and S3C_DECOMPRESS_CMD environment variables respectively as in the following example:

    $ env S3C_COMPRESS_CMD="gzip -9" S3C_DECOMPRESS_CMD="gzip -d -c" \
       s3-crypto.ksh -C -m put -b mybucket -l myfile -r myfile
    

  • Splitting. It is well known that there are file size limits associated with Cloud Storage services. There are times, however, when you may want to store files that exceed those limits. This is where splitting comes into the picture. Splitting takes an input file and, based upon a size threshold, divides it into a number of smaller files. Splitting is done by default with the csb tool and can be optionally enabled in the s3-crypto.ksh tool. Splitting is accomplished using the GNU split(1) program and is enabled using the -S option. The maximum file size limit is, by default, set at 4 GB, but it can be adjusted using the -L command-line option (specified in Kbytes). Splitting at 2 GB is enabled in the following example:
    $ s3-crypto.ksh -S -L 2000000 -m put -b mybucket -l myfile -r myfile
    

    When splitting is enabled and triggered (when a file's size exceeds the limit), the files stored in the Cloud Storage service use the name specified by the remote_file (-r) argument. In the above example, the split files will all begin with the name myfile. Each will have a suffix consisting of a ~ followed by an identification string. For example, files stored in the Cloud may look like:

    myfile~aaaaa
    myfile~aaaab
    myfile~aaaac
    myfile~aaaad
    

    The csb and s3-crypto.ksh tools will use this naming convention to automatically reassemble files for get operations. Just as with splitting, reassembly is automatically performed by the csb tool and is enabled in the s3-crypto.ksh tool using the command-line option -S. When specifying a file that has been split, you do not need to include the suffix. The tools will discover that the file has been split and automatically reassemble it. Here is an example of reassembly:

    $ s3-crypto.ksh -S -m get -b mybucket -l myfile -r myfile
    

    The only downsides to splitting are the time it takes to split the files and the additional space needed to accommodate both the original file and the files created during the splitting process. This is unavoidable, however, as complete files must be available locally before they can be uploaded to the Cloud Storage provider. A rough sketch of how compression, encryption, and splitting fit together is shown below.
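    In this conceptual sketch, the OpenSSL invocation, key file path, and chunk size are illustrative assumptions only; the actual s3-crypto.ksh implementation may differ in its details.

    # Put path: compress first, then encrypt, then split into ~-suffixed chunks
    $ bzip2 -9 < myfile \
        | openssl enc -aes-256-cbc -pass file:/path/to/keyfile \
        | split -b 2000000k -a 5 - myfile~

    # Get path: reassemble first, then decrypt, decompressing as the last step
    $ cat myfile~* \
        | openssl enc -d -aes-256-cbc -pass file:/path/to/keyfile \
        | bzip2 -d -c > myfile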

  • Key Labels. The last "big" feature added in this new version is support for symmetric keys stored in PKCS#11 tokens (when the Solaris cryptographic provider is used). By default, the Solaris cryptographic provider is not selected (for reasons of portability), but it can easily be enabled in the s3-crypto.ksh tool using the -p solaris command-line option. This setting enables the use of the Solaris encrypt(1) and decrypt(1) commands in place of their OpenSSL counterparts. Using the Solaris cryptographic provider allows you to take advantage of the Solaris Key Management Framework. Today, only the Sun Software PKCS#11 softtoken is supported, but I expect to remove this restriction in a future release.

    Using the pktool(1) command, you can create a key with a specific key label:

    $ pktool genkey keystore=pkcs11 label=my-new-key keytype=aes keylen=256
    Enter PIN for Sun Software PKCS#11 softtoken  :
    

    The creation of this new key (with label my-new-key) can be verified:

    $ pktool list objtype=key
    Enter PIN for Sun Software PKCS#11 softtoken  :
    Found 1 symmetric keys.
    Key #1 - AES:  my-new-key (256 bits)
    

    This key can be used with the s3-crypto.ksh tool when the Solaris cryptographic provider is selected and the key label is provided using the -K command-line option as in the following example:

    $ s3-crypto.ksh -c -p solaris -m put -b mybucket -K my-new-key -l myfile -r myfile
    Enter PIN for Sun Software PKCS#11 softtoken  : 
    

    The same approach is used to decrypt files when a get operation is specified.
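    For example, a get operation using the same key label might look like the following (this simply mirrors the put example above with -m get):

    $ s3-crypto.ksh -c -p solaris -m get -b mybucket -K my-new-key -l myfile -r myfile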

As always, I am looking for feedback! Let me know if these tools are helpful and how they can be improved! You can find more information on this project at its home page on Project Kenai.

Take care!


Friday Jun 05, 2009

Cloud Security from Sun's CommunityOne

As we come to the close of yet another week, I am reminded that this week was different. Unlike most weeks, I was actually off from work, recovering from surgery, and yet at the same time, several of my projects were living lives of their own at CommunityOne West and Java One. Since I could not be there in person to talk about this work, I figured the next best thing was to take a few moments to highlight them here and offer an open invitation to publicly discuss them on their project pages.

There were three Cloud Computing security projects that were discussed and demonstrated this week:

  • Security Hardened Virtual Machine Images.
    Summary: Sun and the Center for Internet Security have been working together for over six years to promote enterprise-class security best practices for the Solaris OS. Building upon their latest success, the Solaris 10 Security Benchmark, they have adapted its security guidance to the OpenSolaris platform and today are announcing the availability of a virtual machine image pre-configured with these settings.

    Key Points: Sun is the first commercial vendor to publish and make freely available a hardened virtual machine image - secured using industry accepted best practices. Images will be made available for both Amazon EC2 and Sun Cloud.

    More Information: Announcement.

  • Cloud Safety Box.
    Summary: Security is a key concern for customers everywhere, and the Cloud is no exception. Customers who are concerned about the confidentiality of their information should encrypt their data before sending it to the Cloud. This utility simplifies the process of encrypting files and storing them in the Cloud (as well as decrypting them after they have been retrieved).

    Key Points: The tools leverage strong, industry standard encryption (AES 256-bit) but are configurable to accommodate other algorithms and key sizes. The tools can leverage the cryptographic acceleration capabilities of systems configured with Sun's UltraSPARC T2 (Niagara 2) processor enabling ~7x speed improvement over software encryption. The tools support multiple client platforms and multiple cloud providers today including Sun Cloud and Amazon S3.

    More Information: Project Page

  • Encrypted ZFS Backups.
    Summary: Customers often encrypt their backups before sending them off-site for storage, so why should the Cloud be any different? This utility integrates with the OpenSolaris ZFS automatic snapshot service to automatically encrypt the content before storing it in the Cloud. This way, backup data is always stored in encrypted form in the Cloud, and the decryption keys never leave your organization. Recovery is as easy as downloading and decrypting the snapshots (using the Cloud Safety Box tool, for example) and reverting to them using standard ZFS methods.

    Key Points: The tool leverages strong, industry standard encryption (AES 256-bit) but is configurable to accommodate other algorithms and key sizes. The tool can leverage the cryptographic acceleration capabilities of systems configured with Sun's UltraSPARC T2 (Niagara 2) processor enabling ~7x speed improvement over software encryption. The tool supports multiple cloud providers today including Sun Cloud and Amazon S3.

    More Information: Project Page

Each of these projects was also highlighted during the Cloud Computing keynote delivered by Lew Tucker (VP/CTO, Cloud Computing), as shown in the replay starting about 2:18 into the video.

In addition, the Cloud Safety Box and ZFS Encrypted Backups projects were demonstrated at the Sun Cloud demonstration pods and were featured prominently on both the Sun Cloud Computing landing page and on Project Kenai.

If you have not already, please give these projects a look and send me feedback! Cloud Computing security is in its infancy in many ways, and these projects are just a start down a long and winding road. I remain as convinced as ever that Cloud Computing will have a role to play in raising the information security bar for everyone, but we still have work to do! As a teaser, I would say that this is just the beginning and we have quite a number of other tricks still up our sleeves! So stay tuned and send along your ideas and feedback!


Thursday Jun 04, 2009

Free Security Hardened Virtual Machine Image

Perhaps I am a bit sensitive to the topic of security, but I could not let a "first" go by without comment. Back in 1999 and 2000, Sun was _the_ first commercial operating system vendor to publish not only detailed security guidance but also a tool that allowed organizations to harden the security configuration of their systems in accordance with Sun's best practices and their own policies. That tool, known as the Solaris Security Toolkit, continued to be enhanced and to evolve for nearly a decade, supporting new versions of the Solaris OS and adding new capabilities such as auditing. Recently, it has taken its next step forward as an OpenSolaris project. Best of luck to Jason (the new project leader)!

But, this was not the _first_ that compelled me to write today. Yes, there has been another!

Working together for more than six years, Sun and the Center for Internet Security have consistently collaborated on best-in-class, supportable and complete security hardening guidance for the Solaris operating system. The latest version (previously discussed), developed for the Solaris 10 operating system, was completed with substantial contributions from Sun, CIS, the U.S. National Security Agency (NSA), as well as the U.S. Defense Information Systems Agency (DISA).

Building upon this solid foundation, this week, Sun and the Center for Internet Security are proud to announce a new _first_. We have collaborated to adapt the security recommendations published in the Solaris 10 Benchmark to the OpenSolaris operating system. This alone may be an interesting _first_, but we have gone farther. We have adapted the recommendations to meet the needs of virtual machine images running in Cloud Computing environments. All of our findings and recommendations are freely available and can be found at the Sun OpenSolaris AMI Hardening Wiki. But that is not all!

We have worked with Sun's OpenSolaris on EC2 team to develop the _first_ vendor-provided machine image that has been hardened based upon industry-accepted and vendor-supported security recommendations. As a further commitment to our "Secure by Default" strategy, we have made this AMI publicly available (AMI ID: ami-35ac4a5c) so that anyone can quickly and easily make use of it without having to apply the security hardening steps manually. Interested? Learn more about this AMI from the OpenSolaris on EC2 announcement. Of course, this will also be available for the Sun Cloud!
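Launching the hardened image works just like launching any other AMI. As a rough example, assuming the classic Amazon EC2 API command-line tools are installed and configured with your credentials (the keypair name and instance type below are placeholders), something like the following should start an instance:

    $ ec2-run-instances ami-35ac4a5c -k my-keypair -t m1.small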

Special thanks to Blake Frantz (CIS), Lew Tucker (Sun), Sujeet Vasudevan (Sun), and Divyen Patel (Sun) - without whom this new _first_ would not have been possible!


Accepted @ Cloud Computing Expo West

Back in March, I had the pleasure of presenting at Cloud Computing Expo (East) in New York City. The topic of my talk was "Enhanced Security Models for Machine Image Deployments". The abstract for this talk was:
Security is consistently rated as a leading customer concern impeding the full scale adoption of Cloud Computing. At the same time, security is viewed as complicated and painful. This session will help to ease the pain by focusing on the key issues to be considered along with specific best practices for deployment. This session will share a number of security models for IaaS machine images based upon these best practices to illustrate how we can improve upon the security state of the art in Cloud Computing today.
I had a lot of fun with this talk for a few reasons. First, it was a departure from my traditional speaking style, and I certainly enjoyed the challenge. Rather than inundate the audience with text, a single graphic or small set of graphics was the presentation model of the day, allowing me to weave together a story (reinforced by imagery) showing how each piece of the puzzle built upon and reinforced the overall security strategy I was discussing. The other reason it was a lot of fun was that it was a great opportunity to tie together security approaches of the past with new models such as Immutable Service Containers. We even talked about a few architectural models that could be built upon ISCs and that supported more autonomic security defenses. The reception from the audience was fantastic and spawned quite a few hallway discussions, allowing me to dive into specifics not possible during the time slot allocated to the talk.

Given such a positive experience, you can see why I submitted a paper for Cloud Computing Expo (West) when the opportunity arose. Rather than focus the talk simply on virtual machines, I decided to up-level my proposal and focus on things from an architectural perspective. Well, I am very happy to say that my proposal was accepted! So, I will be speaking at Cloud Computing Expo (West) in Santa Clara, CA on November 2nd through the 4th. I hope to see you there! As a teaser, the title of my talk will be "Cloud Security - It's Nothing New; It Changes Everything!". The abstract goes as follows:

This session cuts through the hype and sharpens our focus on what security means for cloud computing and other elastic, hyper-scale architectures. Is security different in the Cloud? If so, how and for what types of clouds? Is it time to challenge some of our existing ideas and practices? There has been much discussion about security being a barrier to Cloud adoption, but where do we really stand today? This session will examine Cloud security, including threats, new challenges, and opportunities. Concepts will be reinforced using use cases and architectural patterns showing why cloud computing has the potential to help enable security like never before! Come join the revolution. Cloud security is nothing new, and it changes everything!
Hope to see you there!


Friday May 01, 2009

Cloud Safety Box

Yesterday, I wrote about the ZFS Encrypted Backup to S3 project that I started over at Project Kenai. This project integrates with the ZFS Automatic Snapshot service to provide a way for automatically storing encrypted ZFS snapshots into the Cloud.

So, what if you wanted to just store and retrieve individual files? Well, there is a tool to help fill this need as well! The Crypto Front End to S3 CLIs project offers a couple of tools that allow you to encrypt and upload files to the Cloud (and of course download and decrypt files as well). This project provides a very simple-to-use interface in the form of the Cloud Safety Box, a tool that leverages a number of pre-configured default settings to trade off flexibility for ease of use. For those wanting more control (including the encryption provider, encryption algorithm, key type, and other settings), simply use the s3-crypto.sh utility. A diagram is available showing how these tools work together.

Since these tools can be configured to use OpenSSL as their cryptography provider (and there are no further dependencies on OpenSolaris), you can actually use them on other operating systems (e.g., Mac OS X was successfully used during one of the tests).

It should be noted that the s3-crypto.sh utility can be used to download and decrypt a ZFS snapshot uploaded to the Cloud using the ZFS Encrypted Backup to S3 utility, so with these two tools you have a way of storing and retrieving both regular files and ZFS snapshots.
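As a rough illustration, a restore might look something like the following. The bucket, file, and dataset names here are made up, and the command-line flags mirror the s3-crypto usage shown in the newer posts above; consult the project documentation for the exact syntax of this release.

    $ s3-crypto.sh -c -m get -b mybackups -l tank_home.zfs -r tank_home.zfs
    $ zfs receive tank/home_restored < tank_home.zfs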

You can find all of the details, documentation and download instructions (as well as a Mercurial gate) at the Crypto Front End to S3 CLIs project page. So, please give it a try and let us know what you think!


Thursday Apr 30, 2009

Saving Encrypted ZFS Snapshots to the Cloud

Are you an OpenSolaris user? Do you use ZFS? Have you tried the ZFS Automatic Snapshot service? If so, you might be interested in a new tool that I just published over at Project Kenai, which enables you to encrypt and store ZFS snapshots to either the Sun Cloud Storage Service (Employees Only at the moment) or Amazon's Simple Storage Service (S3).

You can find all of the details, documentation and download instructions (as well as a Mercurial gate) at the ZFS Encrypted Backup to S3 project page. So, please give it a try and let us know what you think!

