Wednesday Dec 02, 2009

NEW: OpenSolaris VPC Gateway Tool v0.1

On August 26th, 2009, Amazon Web Services launched their new Virtual Private Cloud (VPC) service. According to Amazon, this service:
[...] is a secure and seamless bridge between a company’s existing IT infrastructure and the AWS cloud. Amazon VPC enables enterprises to connect their existing infrastructure to a set of isolated AWS compute resources via a Virtual Private Network (VPN) connection, and to extend their existing management capabilities such as security services, firewalls, and intrusion detection systems to include their AWS resources. Amazon VPC integrates today with Amazon EC2, and will integrate with other AWS services in the future.

Sounds pretty cool, right? Well, I thought so. Back then, this announcement piqued my interest and I wanted to dive in and give it a try. Unfortunately, the VPC documentation leans heavily toward configurations where a Cisco or Juniper device acts as the Customer Gateway to the VPC. That is certainly a problem as I do not have access to either kind of device. That got me to thinking...

Wouldn't it be cool if we could just use OpenSolaris as a VPC Customer Gateway?

Even more interesting would be if I could create and access a VPC from OpenSolaris running inside of VirtualBox on my MacBook Pro! That way, I could have an on-demand virtual data center in the Cloud that I could access from anywhere!

It was from this concept that I reached out to Dan McDonald and Dileep Kumar. Forming a virtual team, we applied our respective skills to this challenge. As things started to heat up, we pulled in Sebastien Roy and Sowmini Varadhan, who provided invaluable support and architectural guidance without which we would still be in troubleshooting hell. (Thank you, guys!)

So, where do things stand? (Drum roll, please!)

As it turns out... Yes, we were able to configure OpenSolaris (without any new development required!) to act as a Customer Gateway as part of an AWS VPC configuration. Our initial configuration used a dedicated system with an Internet routable, static IP address per the AWS VPC guidelines. So, question #1 is answered - yes, you can use OpenSolaris as a VPC Customer Gateway! W00t!
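For the curious, the heavy lifting on the OpenSolaris side boils down to standard IPsec/IKE configuration. Here is a rough sketch of one piece of such a setup - the IKE phase 1 rule for one of the two VPN tunnels in /etc/inet/ike/config. All addresses are illustrative; the real values (and the pre-shared key, which goes into /etc/inet/secret/ike.preshared) come from the configuration that AWS generates for your VPN connection:

# /etc/inet/ike/config fragment (sketch; all addresses illustrative)
{
    label "aws-vpc-tunnel-1"
    local_addr  203.0.113.10       # Internet-routable Customer Gateway IP
    remote_addr 198.51.100.1       # AWS VPN endpoint from the generated config
    p1_xform { auth_method preshared oakley_group 2
               auth_alg sha1 encr_alg aes }
    p2_pfs 2
}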

With this completed, I was still left wondering about my second question - getting this all to work from OpenSolaris running in VirtualBox on my laptop (or another non-dedicated system). As it turns out, this can be made to work as well, but since it is not supported by AWS at this time, it is not a configuration that I would recommend or support. That said, it is pretty cool to see it working (even if only in a "playground" sense).

Would you like to give this a try? Do you have VPC access but do not have a Cisco or Juniper device at your disposal? Well, fear not! Use OpenSolaris FTW!

Today, we are happy to announce the availability of the OpenSolaris VPC Gateway tool (version 0.1). As we stepped through getting everything to work, it became clear that nearly every aspect of the VPC creation and configuration process could be automated - so we automated it! The OpenSolaris VPC Gateway tool requires just a small bit of configuration, after which you can quickly and easily establish a basic VPC configuration (with one subnet and one instance). You can customize the tool to make things more complex, but that is left as an exercise for the reader.

The OpenSolaris VPC Gateway tool is publicly available from the Kenai repository complete with installation, configuration and usage documentation.

Note that this is still preview-quality software with all of the necessary caveats that go along with it, but I would encourage those interested in OpenSolaris, VPCs, and especially in both to give it a try and send us your feedback! Thanks in advance and take care!

P.S. Looking for a good default instance to create? Try an OpenSolaris 2009.06 Immutable Service Container!

Wednesday Nov 04, 2009

Update: Recent Cloud Security Happenings

I have to say that it has been a very busy couple of weeks, but there is a lot to show for everyone's effort. We have been able to publish quite a lot of new and updated content, and I figured that it might be a good time to shine a spotlight on some of the more interesting items. Without further ado...

Going forward, we are going to try and bring together all of the Cloud Computing security content on our brand new Sun.COM Cloud Security home page. Be sure to check it out regularly!

More is coming, so don't miss it!

Monday Nov 02, 2009

Immutable Service Containers @ Amazon EC2

Just in time for the OpenSolaris Developer Conference, we were able to publish new Immutable Service Containers images directly to the Amazon Web Services Elastic Compute Cloud (EC2) environment. Previously, I talked about creating ISCs using our security enhanced OpenSolaris 2009.06 AMIs. Today, I am happy to announce that we have taken the next logical step by making available AMIs that fully incorporate the ISC changes. If you want to try out this configuration, simply provision an Immutable Service Containers AMI on EC2. We have made AMIs available in both the U.S. (ami-48c32021) and European (ami-78567d0c) regions. As always, we would love to get your feedback on these images and what you would like to see next!
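If you have the EC2 API tools installed, getting started is a one-liner per region. A minimal sketch (the key pair name and instance type are illustrative):

# U.S. region
$ ec2-run-instances ami-48c32021 -k my-keypair -t m1.small

# European region
$ ec2-run-instances ami-78567d0c -k my-keypair -t m1.small --region eu-west-1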

Take care!

Friday Sep 11, 2009

Immutable Service Containers on Amazon EC2

Back in June, we released the very first security hardened virtual machine images for the Amazon Web Services Elastic Compute Cloud (EC2) environment. These original images were based upon the OpenSolaris 2008.11 release and were configured in accordance with the guidelines published by Sun and the Center for Internet Security. Since its initial release, we have provided an update to offer this image in the European Region. In August, we took another step forward with the release of a security-enhanced image based upon the OpenSolaris 2009.06 release. This image went beyond the simple hardening of its predecessor to add functionality such as encrypted swap, non-executable stacks, and auditing enabled by default. With such a strong foundation, it should come as no surprise that it would be used as a base for layered functionality. Just this month, for example, we announced the release of an image pre-configured with Drupal (v6.10) along with Apache (v2.2), MySQL (v5.0), and PHP (v5.2).

In parallel, the Immutable Service Containers project was announced back in June. This project was focused on the creation of secure execution environments for services. One of the key deliverables from this project has been the OpenSolaris ISC Construction Kit (Preview) that transforms an OpenSolaris 2009.06 system into an ISC configuration. Interestingly, several of the functional elements used today as part of the security-enhanced AMIs actually got their start as part of the ISC Construction Kit.

This brings us to today. For the first time, we have been able to create ISCs in the Cloud on Amazon EC2! Using the OpenSolaris ISC Construction Kit and the security-enhanced OpenSolaris 2009.06 AMI, we have deployed an ISC that exposes a representative service (in this case, a web server).

HELLO WORLD!

The nice thing about this is that the installation process was essentially the same as the one we used to create our pre-configured OVF image. There were two settings that needed to be adjusted in order for the ISC Construction Kit to work properly on EC2:

# Skip steps already completed in the base AMI (or not needed on EC2)
export ISC_SVCS_DOCK="fs network zone encrypted_scratch"
# EC2 (Xen) guests use the xnf driver rather than the default e1000g
export ISC_DOCK_NET_IF_NAME="xnf0"

These two parameters had to be set before running the iscadm.ksh command. The first parameter simply removes steps that have already been completed in the base AMI (or are not needed on EC2). The second parameter changes the network interface name from e1000g0 (the default) to xnf0, which is needed on EC2. That's all there was to it!
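By the way, if you want to confirm the interface name on your own instance before running the kit, dladm(1M) will tell you; the output should look something like this:

$ dladm show-link
LINK        CLASS    MTU    STATE    OVER
xnf0        phys     1500   up       --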

If you are interested in ISCs and how you can use them in your environment, I would love to hear from you! Also, just in case you missed it, I had the pleasure of joining Hal Stern to discuss ISCs on a recent Innovating@Sun podcast. Check it out and send us your feedback! Thanks in advance!

Take care!

Wednesday Sep 02, 2009

NEW: Security Enhanced OpenSolaris Drupal Stack on EC2

Over the last few months, I have written a number of postings about the security enhanced virtual machine images that we have made available on Amazon Web Services. The goal behind this work was to look at how we could improve baseline security in both virtualized and Cloud Computing environments by pre-integrating industry-accepted security recommendations. Organizations leveraging our work would have fewer security steps to undertake, as our images were configured to be compliant with the recommendations published by the Center for Internet Security as part of their Solaris Benchmark (adapted for OpenSolaris).

So with this goal in mind, we developed security-enhanced versions of the OpenSolaris 2008.11 and 2009.06 operating systems. The latter went beyond the Center for Internet Security recommendations by also adding support for encrypted swap (as well as enabling auditing and non-executable stacks by default - something that was not done for the 2008.11 version). The next logical step was to validate these images using representative applications and services to illustrate the practicality of having security capabilities pre-integrated into a golden image from which application-specific versions can be created.

Building upon the lessons learned in the development of the security-enhanced operating system images, I am very happy to announce today that we have taken that next step. Using the OpenSolaris 2008.11 image as our foundation, the OpenSolaris on EC2 team, with some guidance from Scott Mattoon (all-around Drupal guru!), has installed and pre-configured Drupal (v6.10) along with Apache (v2.2), MySQL (v5.0), and PHP (v5.2). You can read all of the details in the announcement.

There are two things that should be noted about this image. First, no security-relevant changes were necessary to successfully install, configure, and test Drupal on this security-enhanced image. While this should likely not come as a surprise, it is an important validation that, at least for some (many?) classes of applications, a security-tuned golden image can be used as a foundation. This is good news for organizations who are interested in having a common security baseline for their operating systems. The second thing to note is that MySQL was modified on this image to not listen on the network for connections. This means that the image is compliant with our original security objectives in that it exposes only required services (e.g., Apache, SSH) by default.
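You can verify this for yourself on a running instance. A quick look at the listening TCP endpoints (output abbreviated) should show only SSH and the web server:

$ netstat -an -f inet -P tcp | grep LISTEN
      *.22                 *.*      0      0  128000      0 LISTEN
      *.80                 *.*      0      0  128000      0 LISTEN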

As with all of the others, this is a publicly available AMI (AMI ID: ami-d9ee0eb0) so give it a try and let us know how we can improve it!

Take care!

Friday Aug 14, 2009

NEW: Security Enhanced OpenSolaris 2009.06 on Amazon EC2

It is with great pleasure that I can announce the availability of security enhanced OpenSolaris 2009.06 on Amazon EC2! This release builds upon the work previously completed for the hardened OpenSolaris 2008.11 images as well as recent advances from the Immutable Service Container project. The end result is an OpenSolaris 2009.06 virtual machine image that is hardened, leverages a non-executable stack and encrypted swap, and has auditing enabled and pre-configured to record administrative events, logins, logouts, and all command executions. Just as with the OpenSolaris 2008.11 images, the hardening configuration of these new images complies with the recommendations published by Sun, the Center for Internet Security, as well as the U.S. National Security Agency. The really cool thing is that they all have the exact same guidance! I wonder how that happened?
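If you would like to confirm a few of these settings on a running instance, here is a quick sketch (the audit flag output is trimmed for readability and may differ slightly on the actual image):

# non-executable stack is set in /etc/system
$ grep noexec_user_stack /etc/system
set noexec_user_stack=1
set noexec_user_stack_log=1

# auditing preselection flags
$ auditconfig -getflags
active user default audit flags = lo,as,ex(...)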

Want to give it a spin? Check out the release announcement for more details. The AMI identifier is ami-e56e8f8c! Please send us your feedback!

As always, this work would not have been possible without the extensive support of the OpenSolaris on EC2 team. You are the greatest! Thank you so much for all of your help and support in making these images a reality!

Thursday Jul 09, 2009

Encrypted ZFS using Amazon EBS and OpenSolaris 2009.06

Recently, I had the pleasure of exchanging e-mail with István Soós, who had contacted our OpenSolaris on EC2 team asking how he could use OpenSolaris along with Amazon's Elastic Compute Cloud (EC2) and Elastic Block Store (EBS) to create a Subversion-based source code control system. Sounds simple, right? Well, István threw us a curve ball. He wanted the revision control system to run on OpenSolaris and be stored on an encrypted, mirrored ZFS file system backed by EBS. Now, you have to admit, that is pretty cool!

This is the point in the story where István and I met. After going over the requirements, it appeared as though the encrypted scratch space work that had been done for the Immutable Service Container project was a near fit, except that persistence was needed. So, I provided István with links to this work, which of course linked to Darren's original article on ZFS encryption using lofi. Just a day later, István replied that his environment was up and running! Talk about speed and agility in the Cloud Computing world!
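To give you a flavor of the recipe, here is a minimal sketch of the lofi-based approach (file names, sizes, and mount points are illustrative; see István's post for the real details):

# back two lofi devices with files living on two separate EBS volumes
$ mkfile 10g /ebs0/vol0.img /ebs1/vol1.img
$ lofiadm -c aes-256-cbc -a /ebs0/vol0.img      # prompts for a passphrase
/dev/lofi/1
$ lofiadm -c aes-256-cbc -a /ebs1/vol1.img
/dev/lofi/2

# create a mirrored ZFS pool on top of the encrypted lofi devices
$ zpool create svnpool mirror /dev/lofi/1 /dev/lofi/2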

I would definitely encourage you to check out all of the details on István's blog. I especially want to thank István for sharing this great article that I hope will encourage others to try new things and keep pushing the OpenSolaris envelope forward!

Take care!

Tuesday Jun 16, 2009

NEW: Encrypted ZFS Backups to the Cloud v0.3

Building upon the v0.4 release of the Cloud Safety Box tool, I am happy to announce the availability of v0.3 of the Encrypted ZFS Backups to the Cloud code. This new version uses the Cloud Safety Box project to enable compression, encryption, and splitting of ZFS backups before uploading the results to the Cloud. Due to this change, this project now officially depends upon the Cloud Safety Box project. The nice thing about this change is that it helps to keep the amount of redundant code between the two projects low while also reducing testing time.

From an end-user perspective, this change is mostly transparent. A few parameters were added or changed in the /etc/default/zfs-backup-to-s3 defaults file such as:

# ENC_PROVIDER defines the cryptographic services provider used for
# encryption operations.  Valid values are "solaris" and "openssl".
ENC_PROVIDER="solaris"

# MAX_FILE_SIZE specifies the maximum file size that can be sent
# to the Cloud storage provider without first splitting the file
# up into chunks (of MAX_FILE_SIZE or less).  This value is specified
# in Kbytes.  If this variable is 0 or not defined, then this service
# will _not_ attempt to split the file into chunks.
MAX_FILE_SIZE=40000000

# S3C_CRYPTO_CMD_NAME defines the fully qualified path to the
# s3-crypto.ksh program which is used to perform compression,
# encryption, and file splitting operations.
S3C_CRYPTO_CMD_NAME=""

# S3C_CLI_CMD_NAME defines the fully qualified path to the program
# used to perform actual upload operations to the Cloud storage
# provider.  This program is called (indirectly) by the 
# s3-crypto.ksh program defined by the S3C_CRYPTO_CMD_NAME variable
# above.
S3C_CLI_CMD_NAME=""

It should be noted that compression is always enabled. If this turns out to be a problem, please let me know and we can add a parameter to control the behavior. I would like to try and keep the number of knobs under control, so I figured we would go for simplicity with this release and add additional functionality as necessary.

Encryption is also always enabled. In this release, you have the choice of the OpenSSL or Solaris cryptographic providers. Note that, just as with the Cloud Safety Box project, key labels are only supported by the Solaris cryptographic provider. The name of the algorithm to be used must match an algorithm name supported by whichever provider you have selected.

File splitting is enabled by default. This behavior can be changed by setting the MAX_FILE_SIZE parameter to 0 (off) or any positive integer value (representing a size in Kbytes).

All of the other changes are basic implementation details and should not impact the installation, configuration, or use of the tool. If you have not had a chance, I would encourage you to check out the ZFS Automatic Snapshot service as well as the latest version of this project so that you can begin storing compressed, encrypted ZFS backups in Amazon's Simple Storage Service (S3) or Sun's SunCloud Storage Service (when available).
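As a reminder, the automatic snapshot instances are just SMF services, so wiring up a schedule is a matter of svcadm; for example (instance name from the ZFS Automatic Snapshot project):

$ svcadm enable svc:/system/filesystem/zfs/auto-snapshot:daily
$ svcs auto-snapshot:daily
STATE          STIME    FMRI
online          9:15:04 svc:/system/filesystem/zfs/auto-snapshot:daily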

As always, feedback and ideas are greatly appreciated! Come join the discussion at Project Kenai!

Take care!

Thursday Jun 11, 2009

UPDATE: Free Security Hardened Virtual Machine Image

Just a few days ago, I announced the availability of a security hardened OpenSolaris AMI for Amazon EC2. Well, the OpenSolaris on EC2 team has taken the next step by making this image available to our colleagues using the EC2 European Region! This AMI (AMI ID: ami-d7a189a3) is publicly available today, and just as with the U.S. version, it does not require registration. There is nothing to get in the way of your using it today! Go give it a spin and let us know what you think! Check out the announcement for all of the details.

Wednesday Jun 10, 2009

NEW: Cloud Safety Box v0.4

Today, I am happy to announce the v0.4 release of the Cloud Safety Box project. About a month ago, I announced the initial public release, and since that time the project has even been highlighted and demonstrated at Sun's CommunityOne event! Not too bad for a new project!

The version released today is a substantial redesign that improves the overall structure and efficiency of the tools while also adding a few key features. The biggest visible changes include support for compression, splitting of large files into smaller chunks, and support for Solaris key labels. Let's dive into each of these briefly:

  • Compression. Compression is enabled automatically for the Cloud Safety Box (csb) tool and is configurable when using the s3-crypto.ksh utility. When compression is enabled, the input stream or file is compressed first (before encryption and splitting). By default, compression is performed using the bzip2 utility (with the command-line option -9). To enable compression with the s3-crypto.ksh utility, use the -C option as in the following example:
    $ s3-crypto.ksh -C -m put -b mybucket -l myfile -r myfile
    

    Of course, compression can be used along with encryption and file splitting. Decompression is handled on get operations and is the last step to be performed (after file re-assembly and decryption). Just as with compression, the bzip2 utility is used (with the command-line options -d -c). To enable decompression with the s3-crypto.ksh utility, use the -C option as in the following example:

    $ s3-crypto.ksh -C -m get -b mybucket -l myfile -r myfile
    

    The actual compression and decompression methods can be changed using the S3C_COMPRESS_CMD and S3C_DECOMPRESS_CMD environment variables respectively as in the following example:

    $ env S3C_COMPRESS_CMD="gzip -9" S3C_DECOMPRESS_CMD="gzip -d -c" \
       s3-crypto.ksh -C -m put -b mybucket -l myfile -r myfile
    

  • Splitting. It is well known that there are file size limits associated with Cloud Storage services. There are times, however, when you may have files that you would like to store that exceed those limits. This is where splitting comes into the picture. Splitting takes an input file and, based upon a size threshold, divides it up into a number of smaller files. Splitting is done by default with the csb tool and can be optionally enabled in the s3-crypto.ksh tool. Splitting is accomplished using the GNU split(1) program and is enabled using the -S option. The maximum file size limit is, by default, set at 4 GB, but it can be adjusted using the -L command-line option (specified in Kbytes). Splitting at 2 GB is enabled in the following example:
    $ s3-crypto.ksh -S -L 2000000 -m put -b mybucket -l myfile -r myfile
    

    When splitting is enabled and triggered (when a file's size exceeds the limit), the files stored in the Cloud Storage service use the name as specified by the remote_file (-r) argument. In the above example, the split files will all begin with the name myfile. Each will have a suffix of a ~ followed by an identification string. For example, files stored in the Cloud may look like:

    myfile~aaaaa
    myfile~aaaab
    myfile~aaaac
    myfile~aaaad
    

    The csb and s3-crypto.ksh tools will use this naming convention to automatically reassemble files for get operations. Just as with splitting, reassembly is automatically performed for the csb tool and is enabled in the s3-crypto.ksh tool using the command-line option -S. When specifying a file that has been split, you do not need to include the suffix. The tools will discover that the file has been split and automatically reassemble it. Here is an example for reassembly:

    $ s3-crypto.ksh -S -m get -b mybucket -l myfile -r myfile
    

    The only downsides to splitting are the time it takes to split the files and the additional space needed to accommodate both the original file and the files created during the splitting process. This is unavoidable, however, as complete files must be available locally before they can be uploaded to the Cloud Storage provider.

  • Key Labels. The last "big" feature added in this new version is support for symmetric keys stored in PKCS#11 tokens (when the Solaris cryptographic provider is used). By default, the Solaris cryptographic provider is not selected (for reasons of portability), but it can easily be enabled in the s3-crypto.ksh tool using the -p solaris command-line option. This setting enables the use of the Solaris encrypt(1) and decrypt(1) commands in place of their OpenSSL counterparts. Using the Solaris cryptographic provider allows you to take advantage of the Solaris Key Management Framework. Today, only the Sun Software PKCS#11 softtoken is supported, but I expect to remove this restriction in a future release.

    Using the pktool(1) command, you can create a key with a specific key label:

    $ pktool genkey keystore=pkcs11 label=my-new-key keytype=aes keylen=256
    Enter PIN for Sun Software PKCS#11 softtoken : 
    

    The creation of this new key (with label my-new-key) can be verified:

    $ pktool list objtype=key
    Enter PIN for Sun Software PKCS#11 softtoken : 
    Found 1 symmetric keys.
    Key #1 - AES:  my-new-key (256 bits)
    

    This key can be used with the s3-crypto.ksh tool when the Solaris cryptographic provider is selected and the key label is provided using the -K command-line option as in the following example:

    $ s3-crypto.ksh -c -p solaris -m put -b mybucket -K my-new-key -l myfile -r myfile
    Enter PIN for Sun Software PKCS#11 softtoken  : 
    

    The same approach is used to decrypt files when a get operation is specified.

As always, I am looking for feedback! Let me know if these tools are helpful and how they can be improved! You can find more information on the project home page at Project Kenai.

Take care!

Technorati Tag:

Thursday Jun 04, 2009

Free Security Hardened Virtual Machine Image

Perhaps I am a bit sensitive to the topic of security, but I could not let a "first" go by without comment. Back in 1999 and 2000, Sun was _the_ first commercial operating system vendor to publish not only detailed security guidance but also a tool that allowed organizations to harden the security configuration of their systems in accordance with Sun's best practices and their own policies. That tool, known as the Solaris Security Toolkit, continued to be enhanced and to evolve for nearly a decade, supporting new versions of the Solaris OS and adding new capabilities such as auditing. Recently, it has taken its next step forward as an OpenSolaris project. Best of luck to Jason (the new project leader)!

But, this was not the _first_ that compelled me to write today. Yes, there has been another!

Working together for more than six years, Sun and the Center for Internet Security have consistently collaborated on best-in-class, supportable and complete security hardening guidance for the Solaris operating system. The latest version (previously discussed), developed for the Solaris 10 operating system, was completed with substantial contributions from Sun, CIS, the U.S. National Security Agency (NSA), as well as the U.S. Defense Information Systems Agency (DISA).

Building upon this solid foundation, this week, Sun and the Center for Internet Security are proud to announce a new _first_. We have collaborated to adapt the security recommendations published in the Solaris 10 Benchmark to the OpenSolaris operating system. This alone may be an interesting _first_, but we have gone farther. We have adapted the recommendations to meet the needs of virtual machine images running in Cloud Computing environments. All of our findings and recommendations are freely available and can be found at the Sun OpenSolaris AMI Hardening Wiki. But that is not all!

We have worked with Sun's OpenSolaris on EC2 team to develop the _first_ vendor-provided machine image that has been hardened based upon industry-accepted and vendor-supported security recommendations. As a further commitment to our "Secure by Default" strategy, we have made this AMI publicly available (AMI ID: ami-35ac4a5c) so that anyone can quickly and easily make use of it without having to apply the security hardening steps manually. Interested? Learn more about this AMI from the OpenSolaris on EC2 announcement. Of course, this will also be available for the Sun Cloud too!

Special thanks to Blake Frantz (CIS), Lew Tucker (Sun), Sujeet Vasudevan (Sun), and Divyen Patel (Sun) - without whom this new _first_ would not have been possible!

Technorati Tag:

Friday May 01, 2009

Cloud Safety Box

Yesterday, I wrote about the ZFS Encrypted Backup to S3 project that I started over at Project Kenai. This project integrates with the ZFS Automatic Snapshot service to provide a way for automatically storing encrypted ZFS snapshots into the Cloud.

So, what if you wanted to just store and retrieve individual files? Well, there is a tool to help fill this need as well! The Crypto Front End to S3 CLIs project offers a couple of tools that allow you to encrypt and upload files to the Cloud (and, of course, download and decrypt them as well). This project provides a very simple interface in the form of the Cloud Safety Box, a tool that leverages a number of pre-configured default settings to trade flexibility for ease of use. For those wanting more control over the settings (including encryption provider, encryption algorithm, key type, and other settings), simply use the s3-crypto.sh utility. A diagram is available showing how these tools work together.
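To give a feel for just how simple the intent is, a hypothetical session might look like the following (the subcommand names here are illustrative, not the tool's documented interface; see the project page for the real usage):

$ csb put mybucket myfile      # encrypt and upload
$ csb get mybucket myfile      # download and decrypt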

Since these tools can be configured to use OpenSSL as their cryptography provider (and there are no further dependencies on OpenSolaris), you can actually use them on other operating systems (e.g., Mac OS X was successfully used during one of the tests).

It should be noted that the s3-crypto.sh utility can be used to download and decrypt a ZFS snapshot that was uploaded to the Cloud using the ZFS Encrypted Backup to S3 utility, so with these two tools you have a way of storing and retrieving regular files as well as ZFS snapshots.

You can find all of the details, documentation and download instructions (as well as a Mercurial gate) at the Crypto Front End to S3 CLIs project page. So, please give it a try and let us know what you think!

Thursday Apr 30, 2009

Saving Encrypted ZFS Snapshots to the Cloud

Are you an OpenSolaris user? Do you use ZFS? Have you tried the ZFS Automatic Snapshot service? If so, you might be interested in a new tool that I just published over at Project Kenai that enables you to encrypt and store ZFS snapshots to either the Sun Cloud Storage Service (Employees Only at the moment) or Amazon's Simple Storage Service (S3).

You can find all of the details, documentation and download instructions (as well as a Mercurial gate) at the ZFS Encrypted Backup to S3 project page. So, please give it a try and let us know what you think!

Wednesday Jan 28, 2009

Amazon S3 Silent Data Corruption

While catching up on my reading, I came across an interesting article focused on Amazon's Simple Storage Service (S3). The author points to a number of complaints where Amazon S3 customers had experienced silent data corruption. The author recommends calculating MD5 digital fingerprints of files before posting them to S3 and validating those fingerprints after later retrieving them from the service. More recently, Amazon has posted a best practices document for using S3 that includes:

Amazon S3’s REST PUT operation provides the ability to specify an MD5 checksum (http://en.wikipedia.org/wiki/Checksum) for the data being sent to S3. When the request arrives at S3, an MD5 checksum will be recalculated for the object data received and compared to the provided MD5 checksum. If there’s a mismatch, the PUT will be failed, preventing data that was corrupted on the wire from being written into S3. At that point, you can retry the PUT.

MD5 checksums are also returned in the response to REST GET requests and may be used client-side to ensure that the data returned by the GET wasn’t corrupted in transit. If you need to ensure that values returned by a GET request are byte-for-byte what was stored in the service, calculate the returned value’s MD5 checksum and compare it to the checksum returned along with the value by the service.

All in all, this is good advice, but it strikes me as unnecessarily "left as an exercise to the reader". Just as ZFS has revolutionized end-to-end data integrity within a single system, why can't we have similar protections at the Cloud level? While it would certainly help if Amazon were using ZFS on Amber Road as their storage back-end, even this would be insufficient...

Clearly, more is needed. For example, would it make sense to add an API layer that automates the calculation and validation of digital fingerprints? Most people don't think about silent data corruption, and honestly, they shouldn't have to! Integrity checks like these should be automated just as they are in ZFS and TCP/IP! As we move into 2009, we need to offer easy-to-use, approachable solutions to these problems, because if the future is in the clouds, it will revolve around the data.
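Until then, the client-side check is at least easy to script on OpenSolaris using digest(1); a minimal sketch (file names illustrative):

# before uploading, record the fingerprint
$ digest -a md5 myfile > myfile.md5

# after later downloading the object as myfile.dl, verify it matches
$ [ "$(digest -a md5 myfile.dl)" = "$(cat myfile.md5)" ] \
      && echo OK || echo CORRUPTED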
