Thursday Jun 11, 2009

UPDATE: Free Security Hardened Virtual Machine Image

Just a few days ago, I announced the availability of a security hardened OpenSolaris AMI for Amazon EC2. Well, the OpenSolaris on EC2 team has taken the next step by making this image available to our colleagues using the EC2 European Region! This public AMI (AMI ID: ami-d7a189a3) is available today, and just as with the U.S. version, it does not require registration. Nothing stands between you and using it right now! Go give it a spin and let us know what you think! Check out the announcement for all of the details.

Technorati Tag:

Wednesday Jun 10, 2009

NEW: Cloud Safety Box v0.4

Today, I am happy to announce the v0.4 release of the Cloud Safety Box project. About a month ago, I announced the initial public release, and since then the project has even been highlighted and demonstrated at Sun's CommunityOne event! Not too bad for a new project!

The new version released today is a substantial redesign that improves the overall structure and efficiency of the tools while adding a few key features. The biggest visible changes include support for compression, splitting of large files into smaller chunks, and support for Solaris key labels. Let's dive into each of these briefly:

  • Compression. Compression is enabled automatically for the Cloud Safety Box (csb) tool and is configurable when using the s3-crypto.ksh utility. When compression is enabled, the input stream or file is compressed first (before encryption and splitting). By default, compression is performed using the bzip2 utility (with the command-line option -9). To enable compression with the s3-crypto.ksh utility, use the -C option as in the following example:
    $ s3-crypto.ksh -C -m put -b mybucket -l myfile -r myfile

    Of course, compression can be used along with encryption and file splitting. Decompression is handled on get operations and is the last step to be performed (after file reassembly and decryption). Just as with compression, the bzip2 utility is used (with the command-line options -d -c). To enable decompression with the s3-crypto.ksh utility, use the -C option as in the following example:

    $ s3-crypto.ksh -C -m get -b mybucket -l myfile -r myfile

    The actual compression and decompression methods can be changed using the S3C_COMPRESS_CMD and S3C_DECOMPRESS_CMD environment variables respectively as in the following example:

    $ env S3C_COMPRESS_CMD="gzip -9" S3C_DECOMPRESS_CMD="gzip -d -c" \
       s3-crypto.ksh -C -m put -b mybucket -l myfile -r myfile

  • Splitting. It is well known that there are file size limits associated with Cloud Storage services. There are times, however, when you may want to store files that exceed those limits. This is where splitting comes into the picture. Splitting takes an input file and, based upon a size threshold, divides it into a number of smaller files. Splitting is done by default with the csb tool and can be optionally enabled in the s3-crypto.ksh tool. Splitting is accomplished using the GNU split(1) program and is enabled using the -S option. The maximum file size limit is, by default, set at 4 GB, but it can be adjusted using the -L command-line option (specified in Kbytes). Splitting at 2 GB is enabled in the following example:
    $ s3-crypto.ksh -S -L 2000000 -m put -b mybucket -l myfile -r myfile

    When splitting is enabled and triggered (when a file's size exceeds the limit), the files stored in the Cloud Storage service use the name specified by the remote_file (-r) argument. In the above example, the split files will all begin with the name myfile. Each will have a suffix of a ~ followed by an identification string, so files stored in the Cloud may look like myfile~aa, myfile~ab, myfile~ac, and so on.
    The csb and s3-crypto.ksh tools will use this naming convention to automatically reassemble files for get operations. Just as with splitting, reassembly is automatically performed for the csb tool and is enabled in the s3-crypto.ksh tool using the command-line option -S. When specifying a file that has been split, you do not need to include the suffix. The tools will discover that the file has been split and automatically reassemble it. Here is an example for reassembly:

    $ s3-crypto.ksh -S -m get -b mybucket -l myfile -r myfile

    The only downsides to splitting are the time it takes to split the files and the additional space needed to accommodate both the original file and the files created during the splitting process. This is unavoidable, however, as complete files must be available locally before they can be uploaded to the Cloud Storage provider.

  • Key Labels. The last "big" feature added in this new version is support for symmetric keys stored in PKCS#11 tokens (when the Solaris cryptographic provider is used). By default, the Solaris cryptographic provider is not selected (for reasons of portability), but it can easily be enabled in the s3-crypto.ksh tool using the -p solaris command-line option. This setting enables the use of the Solaris encrypt(1) and decrypt(1) commands in place of their OpenSSL counterparts. Using the Solaris cryptographic provider allows you to take advantage of the Solaris Key Management Framework. Today, only the Sun Software PKCS#11 softtoken is supported, but I expect to remove this restriction in a future release.

    Using the pktool(1) command, you can create a key with a specific key label:

    $ pktool genkey keystore=pkcs11 label=my-new-key keytype=aes keylen=256
    Enter PIN for Sun Software PKCS#11 softtoken  :

    The creation of this new key (with label my-new-key) can be verified:

    $ pktool list objtype=key
    Enter PIN for Sun Software PKCS#11 softtoken  :
    Found 1 symmetric keys.
    Key #1 - AES:  my-new-key (256 bits)

    This key can be used with the s3-crypto.ksh tool when the Solaris cryptographic provider is selected and the key label is provided using the -K command-line option as in the following example:

    $ s3-crypto.ksh -c -p solaris -m put -b mybucket -K my-new-key -l myfile -r myfile
    Enter PIN for Sun Software PKCS#11 softtoken  : 

    The same approach is used to decrypt files when a get operation is specified, for example:

    $ s3-crypto.ksh -c -p solaris -m get -b mybucket -K my-new-key -l myfile -r myfile
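Taken together, the compression, encryption, and splitting features described above compose in a fixed order: on put, the tools compress, then encrypt, then split; on get, they reassemble, then decrypt, then decompress. That ordering can be sketched with standard utilities. This is an illustration of the ordering only, not the actual s3-crypto.ksh implementation: the cipher choice, passphrase file, paths, and chunk size are all hypothetical stand-ins.

```shell
# put side: compress -> encrypt -> split
printf 'some secret data' > /tmp/myfile
printf 'hypothetical-passphrase' > /tmp/keyfile

bzip2 -9 -c /tmp/myfile \
  | openssl enc -aes-256-cbc -pbkdf2 -pass file:/tmp/keyfile \
  > /tmp/myfile.enc
split -b 4096 /tmp/myfile.enc /tmp/myfile.enc~   # chunks: myfile.enc~aa, ~ab, ...

# get side: reassemble -> decrypt -> decompress
cat /tmp/myfile.enc~* \
  | openssl enc -d -aes-256-cbc -pbkdf2 -pass file:/tmp/keyfile \
  | bzip2 -d -c > /tmp/myfile.out

cmp /tmp/myfile /tmp/myfile.out && echo "roundtrip OK"
```

Note that the split suffixes shown (aa, ab, ...) are split(1)'s defaults; the csb and s3-crypto.ksh tools use their own ~-based naming convention as described above.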

As always, I am looking for feedback! Let me know if these tools are helpful and how they can be improved! You can find more information on this project at its home page at Project Kenai.

Take care!


Friday Jun 05, 2009

Cloud Security from Sun's CommunityOne

As we come to the close of yet another week, I am reminded that this week was different. Unlike most weeks, I was actually off from work, recovering from surgery, and yet at the same time several of my projects were living lives of their own at CommunityOne West and JavaOne. Since I could not be there in person to talk about this work, I figured the next best thing was to take a few moments to highlight the projects here and offer an open invitation to discuss them publicly on their project pages.

There were three Cloud Computing security projects that were discussed and demonstrated this week:

  • Security Hardened Virtual Machine Images.
    Summary: Sun and the Center for Internet Security have been working together for over six years to promote enterprise-class security best practices for the Solaris OS. Building upon their latest success, the Solaris 10 Security Benchmark, they have adapted its security guidance to the OpenSolaris platform and today are announcing the availability of a virtual machine image pre-configured with these settings.

    Key Points: Sun is the first commercial vendor to publish and make freely available a hardened virtual machine image - secured using industry accepted best practices. Images will be made available for both Amazon EC2 and Sun Cloud.

    More Information: Announcement.

  • Cloud Safety Box.
    Summary: Security is a key concern for customers everywhere, and the Cloud is no exception. Customers who are concerned about the confidentiality of their information should encrypt their data before sending it to the Cloud. This utility simplifies the process of encrypting files and storing them in the Cloud (as well as decrypting them after they have been retrieved).

    Key Points: The tools leverage strong, industry standard encryption (AES 256-bit) but are configurable to accommodate other algorithms and key sizes. The tools can leverage the cryptographic acceleration capabilities of systems configured with Sun's UltraSPARC T2 (Niagara 2) processor enabling ~7x speed improvement over software encryption. The tools support multiple client platforms and multiple cloud providers today including Sun Cloud and Amazon S3.

    More Information: Project Page

  • Encrypted ZFS Backups.
    Summary: Customers often encrypt their backups before sending them off-site for storage, so why should the Cloud be any different. This utility integrates with the OpenSolaris ZFS automatic snapshot service to automatically encrypt the content before storing it into the Cloud. This way, backup data is always stored in an encrypted form in the Cloud and the decryption keys never leave your organization. Recovery is as easy as downloading and decrypting the snapshots (using the Cloud Safety Box tool, for example) and reverting to those snapshots using standard ZFS methods.

    Key Points: The tool leverages strong, industry standard encryption (AES 256-bit) but is configurable to accommodate other algorithms and key sizes. The tool can leverage the cryptographic acceleration capabilities of systems configured with Sun's UltraSPARC T2 (Niagara 2) processor enabling ~7x speed improvement over software encryption. The tool supports multiple cloud providers today including Sun Cloud and Amazon S3.

    More Information: Project Page

Each of these projects was also highlighted during the Cloud Computing keynote delivered by Lew Tucker (VP/CTO, Cloud Computing), as shown in the replay starting at about the 2:18 mark of the video.

In addition, the Cloud Safety Box and ZFS Encrypted Backups projects were demonstrated at the Sun Cloud demonstration pods and were featured prominently on both the Sun Cloud Computing landing page and on Project Kenai.

If you have not already, please give these projects a look and send me feedback! Cloud Computing security is in its infancy in many ways, and these projects are just a start down a long and winding road. I remain as convinced as ever that Cloud Computing will have a role to play in raising the information security bar for everyone, but we still have work to do! As a teaser, I would say that this is just the beginning and we have quite a number of other tricks still up our sleeves! So stay tuned and send along your ideas and feedback!


Thursday Jun 04, 2009

Free Security Hardened Virtual Machine Image

Perhaps I am a bit sensitive to the topic of security, but I could not let a "first" go by without comment. Back in 1999 and 2000, Sun was _the_ first commercial operating system vendor to publish not only detailed security guidance but also a tool that allowed organizations to harden the security configuration of their systems in accordance with Sun's best practices and their own policies. That tool, known as the Solaris Security Toolkit, continued to be enhanced and evolve for nearly a decade supporting new versions of the Solaris OS and adding new capabilities such as auditing. Recently, it has taken its next step forward as an OpenSolaris project. Best of luck to Jason (the new project leader)!

But, this was not the _first_ that compelled me to write today. Yes, there has been another!

For more than six years, Sun and the Center for Internet Security have collaborated on best-in-class, supportable, and complete security hardening guidance for the Solaris operating system. The latest version (previously discussed), developed for the Solaris 10 operating system, was completed with substantial contributions from Sun, CIS, the U.S. National Security Agency (NSA), and the U.S. Defense Information Systems Agency (DISA).

Building upon this solid foundation, this week, Sun and the Center for Internet Security are proud to announce a new _first_. We have collaborated to adapt the security recommendations published in the Solaris 10 Benchmark to the OpenSolaris operating system. This alone may be an interesting _first_, but we have gone farther. We have adapted the recommendations to meet the needs of virtual machine images running in Cloud Computing environments. All of our findings and recommendations are freely available and can be found at the Sun OpenSolaris AMI Hardening Wiki. But that is not all!

We have worked with the Sun's OpenSolaris on EC2 team to develop the _first_ vendor-provided machine image that has been hardened based upon industry-accepted and vendor supported security recommendations. As a further commitment to our "Secure by Default" strategy, we have made this AMI publicly available (AMI ID: ami-35ac4a5c) so that anyone can quickly and easily make use of it without having to apply the security hardening steps manually. Interested? Learn more about this AMI from the OpenSolaris on EC2 announcement. Of course, this will also be available for the Sun Cloud too!

Special thanks to Blake Frantz (CIS), Lew Tucker (Sun), Sujeet Vasudevan (Sun), and Divyen Patel (Sun) - without whom this new _first_ would not have been possible!


Accepted @ Cloud Computing Expo West

Back in March, I had the pleasure of presenting at Cloud Computing Expo (East) in New York City. The topic of my talk was "Enhanced Security Models for Machine Image Deployments". The abstract for this talk was:
Security is consistently rated as a leading customer concern impeding the full scale adoption of Cloud Computing. At the same time, security is viewed as complicated and painful. This session will help to ease the pain by focusing on the key issues to be considered along with specific best practices for deployment. This session will share a number of security models for IaaS machine images based upon these best practices to illustrate how we can improve upon the security state of the art in Cloud Computing today.
I had a lot of fun with this talk for a few reasons. First, it was a departure from my traditional speaking style, and I certainly enjoyed the challenge. Rather than inundate the audience with text, a single graphic (or a small set of them) was the presentation model of the day, allowing me to weave together a story (reinforced by imagery) showing how each piece of the puzzle built upon and reinforced the overall security strategy I was discussing. The other reason it was a lot of fun was that it was a great opportunity to tie together security approaches of the past with new models such as Immutable Service Containers. We even talked about a few architectural models that could be built upon ISCs to support more autonomic security defenses. The reception from the audience was fantastic and spawned quite a few hallway discussions, allowing me to dive into specifics not possible during the time slot allocated to the talk.

Given such a positive experience, you can see why I submitted a paper for Cloud Computing Expo (West) when the opportunity arose. Rather than focus the talk simply on virtual machines, I decided to up-level my proposal and focus on things from an architectural perspective. Well, I am very happy to say that my proposal was accepted! So, I will be speaking at Cloud Computing Expo (West) in Santa Clara, CA, November 2nd through 4th. I hope to see you there! As a teaser, the title of my talk will be "Cloud Security - It's Nothing New; It Changes Everything!". The abstract goes as follows:

This session cuts through the hype and sharpens our focus on what security means for cloud computing and other elastic, hyper-scale architectures. Is security different in the Cloud? If so, how, and for what types of clouds? Is it time to challenge some of our existing ideas and practices? There has been much discussion about security being a barrier to Cloud adoption, but where do we really stand today? This session will examine Cloud security including threats, new challenges and opportunities. Concepts will be reinforced using use cases and architectural patterns showing why cloud computing has the potential to help enable security like never before! Come join the revolution. Cloud security is nothing new, and it changes everything!
Hope to see you there!


Friday May 01, 2009

Cloud Safety Box

Yesterday, I wrote about the ZFS Encrypted Backup to S3 project that I started over at Project Kenai. This project integrates with the ZFS Automatic Snapshot service to provide a way for automatically storing encrypted ZFS snapshots into the Cloud.

So, what if you wanted to just store and retrieve individual files? Well, there is a tool to help fill this need as well! The Crypto Front End to S3 CLIs project offers a couple of tools that allow you to encrypt and upload files to the Cloud (and, of course, download and decrypt files as well). This project provides a very simple-to-use interface in the form of the Cloud Safety Box, a tool that leverages a number of pre-configured default settings to trade off flexibility for ease of use. For those wanting more control over the settings (including encryption provider, encryption algorithm, key type and other settings), simply use the utility. A diagram is available showing how these tools work together.

Since these tools can be configured to use OpenSSL as their cryptography provider (and there are no further dependencies on OpenSolaris), you can actually use them on other operating systems (e.g., Mac OS X was used successfully during one of the tests).

It should be noted that the utility can be used to download and decrypt a ZFS snapshot uploaded to the Cloud using the ZFS Encrypted Backup to S3 utility, so with these two tools you have a way of storing and retrieving regular files as well as ZFS snapshots.

You can find all of the details, documentation and download instructions (as well as a Mercurial gate) at the Crypto Front End to S3 CLIs project page. So, please give it a try and let us know what you think!


Thursday Apr 30, 2009

Saving Encrypted ZFS Snapshots to the Cloud

Are you an OpenSolaris user? Do you use ZFS? Have you tried the ZFS Automatic Snapshot service? If so, you might be interested in a new tool that I just published over at Project Kenai that enables you to encrypt and store ZFS snapshots to either the Sun Cloud Storage Service (Employees Only at the moment) or Amazon's Simple Storage Service (S3).

You can find all of the details, documentation and download instructions (as well as a Mercurial gate) at the ZFS Encrypted Backup to S3 project page. So, please give it a try and let us know what you think!


Wednesday Jan 28, 2009

Amazon S3 Silent Data Corruption

While catching up on my reading, I came across an interesting article focused on the Amazon's Simple Storage Service (S3). The author points to a number of complaints where Amazon S3 customers had experienced silent data corruption. The author recommends calculating MD5 digital fingerprints of files before posting them to S3 and validating those fingerprints after later retrieving them from the service. More recently, Amazon has posted a best practices document for using S3 that includes:

Amazon S3’s REST PUT operation provides the ability to specify an MD5 checksum for the data being sent to S3. When the request arrives at S3, an MD5 checksum will be recalculated for the object data received and compared to the provided MD5 checksum. If there’s a mismatch, the PUT will be failed, preventing data that was corrupted on the wire from being written into S3. At that point, you can retry the PUT.

MD5 checksums are also returned in the response to REST GET requests and may be used client-side to ensure that the data returned by the GET wasn’t corrupted in transit. If you need to ensure that values returned by a GET request are byte-for-byte what was stored in the service, calculate the returned value’s MD5 checksum and compare it to the checksum returned along with the value by the service.

All in all - good advice, but it strikes me as unnecessarily "left as an exercise to the reader". Just as ZFS has revolutionized end-to-end data integrity within a single system, why can't we have similar protections at the Cloud level? While certainly it would help if Amazon was using ZFS on Amber Road as their storage back-end, even this would be insufficient...

Clearly, more is needed. For example, would it make sense to add an API layer that automates the calculation and validation of digital fingerprints? Most people don't think about silent data corruption, and honestly they shouldn't have to! Integrity checks like these should be automated just as they are in ZFS and TCP/IP! As we move into 2009, we need to offer easy-to-use, approachable solutions to these problems, because if the future is in the clouds, it will revolve around the data.
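Until then, the fingerprint-and-verify routine described above is easy to do by hand. A minimal sketch using OpenSSL's md5 command follows; the upload and download are simulated here with a local copy, and the file names are hypothetical:

```shell
# Compute an MD5 fingerprint before upload and verify it after download.
printf 'data destined for the cloud' > /tmp/object
before=$(openssl md5 -r /tmp/object | awk '{print $1}')

cp /tmp/object /tmp/object.downloaded   # stand-in for the PUT + GET round trip

after=$(openssl md5 -r /tmp/object.downloaded | awk '{print $1}')
if [ "$before" = "$after" ]; then
    echo "integrity OK"
else
    echo "silent corruption detected" >&2
fi
```

In practice, the cp line would be replaced by your S3 client's put and get operations, with the precomputed checksum supplied on the PUT as Amazon's best practices document describes.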


Security Recommendations for IaaS Providers


While the subject of intense interest in the industry today, Cloud Computing (Cloud) will not be able to realize its full potential without strong security assurances offered by Cloud providers.  The type of security controls and the level of assurance required will vary by Cloud layer (IaaS, PaaS, SaaS) and also by the degree to which the Cloud is shared (public, private, hybrid) and interconnected with other services.  The goal of this article is to highlight a few security recommendations that apply to IaaS Cloud architectures.


Before we begin, a few definitions are provided to ensure that everyone is working from the same page:

  • Cloud Resources. (Resources) Virtualized compute, storage, networking, infrastructure and application services.

  • Cloud Provider. (Provider) The owner of the Cloud Computing architecture and all of its Cloud Resources.  The Cloud Provider provisions Cloud Resources based upon requests (and optionally payment) from Cloud Customers.

  • Cloud Customer. (Customer) The entity who requests, purchases, rents and/or leverages Cloud Resources made available by a Cloud Provider. Using the allocated Cloud Resources, a Cloud Customer can optionally deploy and share content, functionality, applications and services that can be accessed and used by Cloud Consumers.

  • Cloud Consumer. (Consumer) The entity who accesses or makes use of content, functionality, applications and services being offered by a Cloud Customer.

Security Recommendations for Providers of Cloud Computing Services

To promote greater security as well as customer confidence in and adoption of Cloud Computing architectures and services, Cloud Providers are strongly encouraged to embrace and embody the following three recommendations:
  • Recommendation #1: All management and control interactions between Cloud Providers and Cloud Customers must take place over secure channels that utilize standards-based protocols and support authentication, authorization, confidentiality, integrity and accountability.  Without these basic protections, the provider and its customers are vulnerable to unwanted disclosure, impersonation, identity and/or service theft or misuse, and repudiation.  These protections must exist for Customers when they access the services offered by the Cloud Provider for the purposes of signup, provisioning, payment, monitoring, and other management and control related functions.

    • Note #1: Providers should move to offer strong mutual authentication mechanisms for their Customers in order to provide greater protection for Customer accounts. Given that management and control interactions have very real financial impact, it is critical that these mechanisms be protected by more than just a simple password.

    • Note #2: If a Provider must use password authentication to grant Customers access to account, control and management functions, then the Provider should take steps to ensure that the passwords chosen are strong and passed only over encrypted channels. Today, very few Providers enforce the strong password composition rules that are otherwise in effect throughout modern enterprises. Additional access monitoring to detect and prevent fraudulent access is also advised.

  • Recommendation #2: By default, Resources allocated by the Provider shall not interact with any other Resource not owned by the same Customer.  This includes (physical or virtual) compute, networking and storage resources, (virtual) applications and services, and other related objects.  The Customer to which a Resource is assigned is called its “owner”.  This recommendation is necessary to ensure that objects that exist in the Cloud are not inadvertently exposed to unauthorized consumers.

    • Note #3: The Provider may offer a mechanism to allow Customers to manage access to the Resources that they own thereby allowing other Customers or Cloud Consumers to access their content and services.  A default deny policy is recommended regardless of what other access configurations may be possible.

    • Note #4: If some aspect of a Resource is to be shared between multiple Customers, the Provider must implement security controls that ensure that the intended owner of the object is compartmentalized from the rest of the population.  The Provider must enforce sufficient protections preventing unauthorized access, manipulation or destruction of objects under their care.  That is, a Provider must be able to demonstrate that security protections are in place preventing Customer A from accessing, manipulating or destroying Resources associated with Customer B.  Ideally, these controls will be validated using a trusted third party who will act as an auditor.  The Provider should make available (sanitized) audit reports to Customers as requested.

  • Recommendation #3: Providers must implement controls preventing the accidental or malicious access, use, modification or destruction of Resources under their care.  As Providers have physical access to the underlying infrastructure, they can circumvent many common security protections.  Consequently, it is imperative that Providers implement controls that restrict their own employees' access to Resources.  Further, all authorized access must be audited and regularly reviewed in order to promote accountability and ensure that actions are taken in accordance with the stated security policy.
It should be stated that these recommendations should be implemented in a manner that does not significantly compromise a Customer's ability to easily, efficiently and reliably use the Cloud services offered by the Provider. In future articles, I would like to explore these areas in more detail as well as discuss some of the security burden that is placed squarely on IaaS Customers.




