Tuesday Jun 16, 2009

NEW: Encrypted ZFS Backups to the Cloud v0.3

Building upon the v0.4 release of the Cloud Safety Box tool, I am happy to announce the availability of v0.3 of the Encrypted ZFS Backups to the Cloud code. This new version uses the Cloud Safety Box project to enable compression, encryption and splitting of ZFS backups before uploading the results to the Cloud. As a result, this project now officially depends upon the Cloud Safety Box project. The nice thing about this change is that it keeps the amount of redundant code between the two projects low while also reducing testing time.

From an end-user perspective, this change is mostly transparent. A few parameters were added or changed in the /etc/default/zfs-backup-to-s3 defaults file, such as:

# ENC_PROVIDER defines the cryptographic services provider used for
# encryption operations.  Valid values are "solaris" and "openssl".
ENC_PROVIDER="solaris"

# MAX_FILE_SIZE specifies the maximum file size that can be sent
# to the Cloud storage provider without first splitting the file
# up into chunks (of MAX_FILE_SIZE or less).  This value is specified
# in Kbytes.  If this variable is 0 or not defined, then this service
# will _not_ attempt to split the file into chunks.
MAX_FILE_SIZE=40000000

# S3C_CRYPTO_CMD_NAME defines the fully qualified path to the
# s3-crypto.ksh program which is used to perform compression,
# encryption, and file splitting operations.
S3C_CRYPTO_CMD_NAME=""

# S3C_CLI_CMD_NAME defines the fully qualified path to the program
# used to perform actual upload operations to the Cloud storage
# provider.  This program is called (indirectly) by the 
# s3-crypto.ksh program defined by the S3C_CRYPTO_CMD_NAME variable
# above.
S3C_CLI_CMD_NAME=""

It should be noted that compression is always enabled. If this turns out to be a problem, please let me know and we can add a parameter to control the behavior. I would like to try to keep the number of knobs under control, so I figured we would go for simplicity in this release and add additional functionality as necessary.

Encryption is also always enabled. In this release you have the choice of the OpenSSL or Solaris cryptographic providers. Note that, just as with the Cloud Safety Box project, key labels are only supported for the Solaris cryptographic provider. The name of the algorithm to be used must match an algorithm name supported by whichever provider you have selected (for example, the Solaris encrypt(1) command uses names like aes, while OpenSSL expects names like aes-256-cbc).

File splitting is enabled by default. This behavior can be changed by setting the MAX_FILE_SIZE parameter to 0 (off) or any positive integer value (representing a size in Kbytes).
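For illustration, a completed defaults file might look like the following. The paths shown here are hypothetical and should be adjusted to match your installation:

ENC_PROVIDER="solaris"

# 5,000,000 Kbytes: split any backup larger than roughly 5 GB into chunks.
MAX_FILE_SIZE=5000000

S3C_CRYPTO_CMD_NAME="/opt/cloud-safety-box/s3-crypto.ksh"
S3C_CLI_CMD_NAME="/opt/cloud-safety-box/s3-cli"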

All of the other changes are basic implementation details and should not impact the installation, configuration or use of the tool. If you have not had a chance, I would encourage you to check out the ZFS Automatic Snapshot service as well as the latest version of this project so that you can begin storing compressed, encrypted ZFS backups in Amazon's Simple Storage Service (S3) or Sun's SunCloud Storage Service (when available).

As always, feedback and ideas are greatly appreciated! Come join the discussion at Project Kenai!

Take care!

Wednesday Jan 28, 2009

Amazon S3 Silent Data Corruption

While catching up on my reading, I came across an interesting article about Amazon's Simple Storage Service (S3). The author points to a number of complaints from Amazon S3 customers who had experienced silent data corruption. The author recommends calculating MD5 digital fingerprints of files before posting them to S3 and validating those fingerprints after later retrieving them from the service. More recently, Amazon has posted a best practices document for using S3 that includes:

Amazon S3’s REST PUT operation provides the ability to specify an MD5 checksum (http://en.wikipedia.org/wiki/Checksum) for the data being sent to S3. When the request arrives at S3, an MD5 checksum will be recalculated for the object data received and compared to the provided MD5 checksum. If there’s a mismatch, the PUT will be failed, preventing data that was corrupted on the wire from being written into S3. At that point, you can retry the PUT.

MD5 checksums are also returned in the response to REST GET requests and may be used client-side to ensure that the data returned by the GET wasn’t corrupted in transit. If you need to ensure that values returned by a GET request are byte-for-byte what was stored in the service, calculate the returned value’s MD5 checksum and compare it to the checksum returned along with the value by the service.

All in all - good advice, but it strikes me as unnecessarily "left as an exercise to the reader". Just as ZFS has revolutionized end-to-end data integrity within a single system, why can't we have similar protections at the Cloud level? While it would certainly help if Amazon were using ZFS on Amber Road as its storage back-end, even this would be insufficient...

Clearly, more is needed. For example, would it make sense to add an API layer that automates the calculation and validation of digital fingerprints? Most people don't think about silent data corruption, and honestly they shouldn't have to! Integrity checks like these should be automated, just as they are in ZFS and TCP/IP! As we move into 2009, we need to offer easy-to-use, approachable solutions to these problems, because if the future is in the clouds, it will revolve around the data.
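Until such a layer exists, the discipline is easy enough to script yourself. Here is a minimal ksh sketch of the manual approach described above, using the Solaris digest(1) command; the s3-put and s3-get commands are hypothetical placeholders for whatever S3 client you actually use, and the fingerprint bookkeeping around them is the point:

#!/bin/ksh
#
# Illustration only: "s3-put" and "s3-get" stand in for your S3 client.

FILE="$1"
BUCKET="$2"

# Record the MD5 fingerprint before the data leaves the system.
LOCAL_MD5=$(digest -a md5 "${FILE}") || exit 1

s3-put "${BUCKET}" "${FILE}" || exit 1

# Later, retrieve the object and recompute the fingerprint.
s3-get "${BUCKET}" "${FILE}" "${FILE}.restored" || exit 1
REMOTE_MD5=$(digest -a md5 "${FILE}.restored") || exit 1

# Any mismatch means the data was corrupted in transit or at rest.
if [ "${LOCAL_MD5}" != "${REMOTE_MD5}" ]; then
    print -u2 "ERROR: MD5 mismatch for ${FILE}"
    exit 1
fi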

Security Recommendations for IaaS Providers


Abstract

While the subject of intense interest in the industry today, Cloud Computing (Cloud) will not realize its full potential without strong security assurances offered by Cloud providers.  The type of security controls and the level of assurance required will vary by Cloud layer (IaaS, PaaS, SaaS) and also by the degree to which the Cloud is shared (public, private, hybrid) and interconnected with other services.  The goal of this article is to highlight a few security recommendations that apply to IaaS Cloud architectures.


Definitions

Before we begin, a few definitions are provided to ensure that everyone is on the same page:

  • Cloud Resources. (Resources) Virtualized compute, storage, networking, infrastructure and application services.

  • Cloud Provider. (Provider) The owner of the Cloud Computing architecture and all of its Cloud Resources.  The Cloud Provider provisions Cloud Resources based upon requests (and optionally payment) from Cloud Customers.

  • Cloud Customer. (Customer) The entity who requests, purchases, rents and/or leverages Cloud Resources made available by a Cloud Provider. Using the allocated Cloud Resources, a Cloud Customer can optionally deploy and share content, functionality, applications and services that can be accessed and used by Cloud Consumers.

  • Cloud Consumer. (Consumer) The entity who accesses or makes use of content, functionality, applications and services being offered by a Cloud Customer.

Security Recommendations for Providers of Cloud Computing Services

To promote greater security as well as customer confidence in and adoption of Cloud Computing architectures and services, Cloud Providers are strongly encouraged to embrace and embody the following three recommendations:
  • Recommendation #1: All management and control interactions between Cloud Providers and Cloud Customers must take place over secure channels that utilize standards-based protocols and support authentication, authorization, confidentiality, integrity and accountability.  Without these basic protections, the provider and its customers are vulnerable to unwanted disclosure, impersonation, identity and/or service theft or misuse, and repudiation.  These protections must exist for Customers when they access the services offered by the Cloud Provider for the purposes of signup, provisioning, payment, monitoring, and other management and control related functions.

    • Note #1: Providers should move to offer strong mutual authentication mechanisms for their Customers in order to provide greater protection for Customer accounts. Given that management and control interactions have very real financial impact, it is critical that these mechanisms be protected by more than just a simple password.

    • Note #2: If a Provider must use password authentication to grant Customers access to account, control and management functions, then the Provider should take steps to ensure that the passwords chosen are strong and passed only over encrypted channels. Today, very few Providers enforce the strong password composition rules that are otherwise in effect throughout modern enterprises. Additional access monitoring to detect and prevent fraudulent access is also advised.

  • Recommendation #2: By default, Resources allocated by the Provider shall not interact with any other Resource not owned by the same Customer.  This includes (physical or virtual) compute, networking and storage resources, (virtual) applications and services, and other related objects.  The Customer to which a Resource is assigned is called its “owner”.  This recommendation is necessary to ensure that objects that exist in the Cloud are not inadvertently exposed to unauthorized consumers.

    • Note #3: The Provider may offer a mechanism to allow Customers to manage access to the Resources that they own, thereby allowing other Customers or Cloud Consumers to access their content and services.  A default deny policy is recommended regardless of what other access configurations may be possible; a brief network-layer sketch of this posture follows this list.

    • Note #4: If some aspect of a Resource is to be shared between multiple Customers, the Provider must implement security controls that ensure that the intended owner of the object is compartmentalized from the rest of the population.  The Provider must enforce sufficient protections preventing unauthorized access, manipulation or destruction of objects under their care.  That is, a Provider must be able to demonstrate that security protections are in place preventing Customer A from accessing, manipulating or destroying Resources associated with Customer B.  Ideally, these controls will be validated using a trusted third party who will act as an auditor.  The Provider should make available (sanitized) audit reports to Customers as requested.

  • Recommendation #3: Providers must implement controls preventing the accidental or malicious access, use, modification or destruction of Resources under their care.  As Providers have physical access to the underlying infrastructure, they can circumvent many common security protections.  Consequently, it is imperative that Providers implement controls that restrict their own employees' access to Resources.  Further, all authorized access must be audited and regularly reviewed in order to promote accountability and to ensure that actions are taken in accordance with the Provider's stated security policy.
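As a concrete illustration of the default-deny posture called for in Note #3, the network-layer slice of such a policy might reduce to something like the following sketch, which loads a Solaris IP Filter ruleset from a script. This is purely illustrative (192.0.2.0/24 is a documentation address), and equivalent controls are needed at the storage and virtualization layers as well:

#!/bin/ksh
#
# Illustration only: establish a default-deny network policy, then
# explicitly pass just the traffic the Resource owner has authorized.
ipf -Fa -f - <<'EOF'
# Deny everything by default.
block in all
block out all

# Example exception: HTTPS from the Customer's own management network.
pass in quick proto tcp from 192.0.2.0/24 to any port = 443 keep state
EOF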

It should be stated that these recommendations should be implemented in a manner that does not significantly compromise a Customer's ability to easily, efficiently and reliably use the Cloud services offered by the Provider. In future articles, I would like to explore these areas in more detail as well as discuss some of the security burden that is placed squarely on IaaS Customers.
