Tuesday Jun 16, 2009

NEW: Encrypted ZFS Backups to the Cloud v0.3

Building upon the v0.4 release of the Cloud Safety Box tool, I am happy to announce the availability of v0.3 of the Encrypted ZFS Backups to the Cloud code. This new version uses the Cloud Safety Box project to enable compression, encryption, and splitting of ZFS backups before uploading the results to the Cloud. As a result, this project now officially depends upon the Cloud Safety Box project. The nice thing about this change is that it keeps the amount of redundant code between the two projects low while also reducing testing time.

From an end-user perspective, this change is mostly transparent. A few parameters were added or changed in the /etc/default/zfs-backup-to-s3 defaults file, such as:

# ENC_PROVIDER defines the cryptographic services provider used for
# encryption operations.  Valid values are "solaris" and "openssl".
ENC_PROVIDER="solaris"

# MAX_FILE_SIZE specifies the maximum file size that can be sent
# to the Cloud storage provider without first splitting the file
# up into chunks (of MAX_FILE_SIZE or less).  This value is specified
# in Kbytes.  If this variable is 0 or not defined, then this service
# will _not_ attempt to split the file into chunks.
MAX_FILE_SIZE=40000000

# S3C_CRYPTO_CMD_NAME defines the fully qualified path to the
# s3-crypto.ksh program which is used to perform compression,
# encryption, and file splitting operations.
S3C_CRYPTO_CMD_NAME=""

# S3C_CLI_CMD_NAME defines the fully qualified path to the program
# used to perform actual upload operations to the Cloud storage
# provider.  This program is called (indirectly) by the 
# s3-crypto.ksh program defined by the S3C_CRYPTO_CMD_NAME variable
# above.
S3C_CLI_CMD_NAME=""

It should be noted that compression is always enabled. If this turns out to be a problem, please let me know and we can add a parameter to control the behavior. I would like to try to keep the number of knobs under control, so I figured we would go for simplicity with this release and add additional functionality as necessary.

Encryption is always enabled. In this release, you have the choice of the OpenSSL or Solaris cryptographic providers. Note that, just as with the Cloud Safety Box project, key labels are only supported with the Solaris cryptographic provider. The name of the algorithm to be used must match an algorithm name supported by whichever provider you have selected.
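
If you are unsure which algorithm names a given provider accepts, you can ask the providers themselves. Here is a quick sketch (output abbreviated; it will vary by release):

$ encrypt -l
Algorithm       Keysize:  Min   Max (bits)
------------------------------------------
aes                       128   256
...

$ openssl list-cipher-commands
aes-128-cbc
aes-256-cbc
...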

File splitting is enabled by default. It can be disabled by setting the MAX_FILE_SIZE parameter to 0 (off), or tuned by setting it to any positive integer value (representing a size in Kbytes).
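
For example, the following defaults file settings would split each backup into chunks of at most 4 GB before upload (the values shown are illustrative only):

# Split backup files into chunks of at most 4 GB (4000000 Kbytes).
MAX_FILE_SIZE=4000000

# Alternatively, disable file splitting entirely.
#MAX_FILE_SIZE=0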

All of the other changes are basic implementation details and should not impact the installation, configuration, or use of the tool. If you have not had a chance, I would encourage you to check out the ZFS Automatic Snapshot service as well as the latest version of this project so that you can begin storing compressed, encrypted ZFS backups in Amazon's Simple Storage Service (S3) or Sun's SunCloud Storage Service (when available).
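
As an aside, if you have installed the tool as an SMF service, enabling it should be a one-liner. The service name below is an assumption on my part, so check the project documentation for the actual FMRI:

# svcadm enable zfs-backup-to-s3
$ svcs zfs-backup-to-s3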

As always, feedback and ideas are greatly appreciated! Come join the discussion at Project Kenai!

Take care!

Wednesday Jun 10, 2009

NEW: Cloud Safety Box v0.4

Today, I am happy to announce the v0.4 release of the Cloud Safety Box project. About a month ago, I announced the initial public release, and since that time it has even been highlighted and demonstrated at Sun's CommunityOne event! Not too bad for a new project!

The version released today is a substantial redesign that improves the overall structure and efficiency of the tools while adding a few key features. The biggest visible changes include support for compression, splitting of large files into smaller chunks, and support for Solaris key labels. Let's dive into each of these briefly:

  • Compression. Compression is enabled automatically for the Cloud Safety Box (csb) tool and is configurable when using the s3-crypto.ksh utility. When compression is enabled, the input stream or file is compressed first (before encryption and splitting). By default, compression is performed using the bzip2 utility (with the command-line option -9). To enable compression with the s3-crypto.ksh utility, use the -C option as in the following example:
    $ s3-crypto.ksh -C -m put -b mybucket -l myfile -r myfile
    

    Of course, compression can be used along with encryption and file splitting. Decompression is handled on get operations and is the last step to be performed (after file reassembly and decryption). Just as with compression, the bzip2 utility is used (with the command-line options -d -c). To enable decompression with the s3-crypto.ksh utility, use the -C option as in the following example:

    $ s3-crypto.ksh -C -m get -b mybucket -l myfile -r myfile
    

    The actual compression and decompression methods can be changed using the S3C_COMPRESS_CMD and S3C_DECOMPRESS_CMD environment variables respectively as in the following example:

    $ env S3C_COMPRESS_CMD="gzip -9" S3C_DECOMPRESS_CMD="gzip -d -c" \
       s3-crypto.ksh -C -m put -b mybucket -l myfile -r myfile
    

  • Splitting. It is well known that there are file size limits associated with Cloud Storage services. There are times, however, when you may want to store files that exceed those limits. This is where splitting comes into the picture. Splitting takes an input file and, based upon a size threshold, divides it into a number of smaller files. Splitting is done by default with the csb tool and can be optionally enabled in the s3-crypto.ksh tool. Splitting is accomplished using the GNU split(1) program and is enabled using the -S option. The maximum file size limit is, by default, set at 4 GB, but it can be adjusted using the -L command-line option (specified in Kbytes). Splitting at 2 GB is enabled in the following example:
    $ s3-crypto.ksh -S -L 2000000 -m put -b mybucket -l myfile -r myfile
    

    When splitting is enabled and triggered (that is, when a file's size exceeds the limit), the files stored in the Cloud Storage service use the name specified by the remote_file (-r) argument. In the above example, the split files will all begin with the name myfile. Each will have a suffix of a ~ followed by an identification string. For example, files stored in the Cloud may look like:

    myfile~aaaaa
    myfile~aaaab
    myfile~aaaac
    myfile~aaaad
    

    The csb and s3-crypto.ksh tools use this naming convention to reassemble files for get operations. Just as with splitting, reassembly is performed automatically by the csb tool and is enabled in the s3-crypto.ksh tool using the -S command-line option. When specifying a file that has been split, you do not need to include the suffix; the tools will detect that the file has been split and automatically reassemble it. Here is an example for reassembly:

    $ s3-crypto.ksh -S -m get -b mybucket -l myfile -r myfile
    

    The only downsides to splitting are the time it takes to split the files and the additional space needed to accommodate both the original file and the files created during the splitting process. This is unavoidable, however, as complete files must be available locally before they can be uploaded to the Cloud Storage provider.

  • Key Labels. The last "big" feature added in this new version is support for symmetric keys stored in PKCS#11 tokens (when the Solaris cryptographic provider is used). By default, the Solaris cryptographic provider is not selected (for reasons of portability), but it can easily be enabled in the s3-crypto.ksh tool using the -p solaris command-line option. This setting enables the use of the Solaris encrypt(1) and decrypt(1) commands in place of their OpenSSL counterparts. Using the Solaris cryptographic provider allows you to take advantage of the Solaris Key Management Framework. Today, only the Sun Software PKCS#11 softtoken is supported, but I expect to remove this restriction in a future release.

    Using the pktool(1) command, you can create a key with a specific key label:

    $ pktool genkey keystore=pkcs11 label=my-new-key keytype=aes keylen=256
    Enter PIN for Sun Software PKCS#11 softtoken : 
    

    The creation of this new key (with label my-new-key) can be verified:

    $ pktool list objtype=key
    Enter PIN for Sun Software PKCS#11 softtoken : 
    Found 1 symmetric keys.
    Key #1 - AES:  my-new-key (256 bits)
    

    This key can be used with the s3-crypto.ksh tool when the Solaris cryptographic provider is selected and the key label is provided using the -K command-line option as in the following example:

    $ s3-crypto.ksh -c -p solaris -m put -b mybucket -K my-new-key -l myfile -r myfile
    Enter PIN for Sun Software PKCS#11 softtoken : 
    

    The same approach is used to decrypt files when a get operation is specified.
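
    For example, the corresponding get operation mirrors the put example above, specifying the same key label:

    $ s3-crypto.ksh -c -p solaris -m get -b mybucket -K my-new-key -l myfile -r myfile
    Enter PIN for Sun Software PKCS#11 softtoken : 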

As always, I am looking for feedback! Let me know if these tools are helpful and how they can be improved! You can find out more information on this project at its home page at Project Kenai.

Take care!

Friday May 01, 2009

Cloud Safety Box

Yesterday, I wrote about the ZFS Encrypted Backup to S3 project that I started over at Project Kenai. This project integrates with the ZFS Automatic Snapshot service to provide a way for automatically storing encrypted ZFS snapshots into the Cloud.

So, what if you wanted to just store and retrieve individual files? Well, there is a tool to help fill this need as well! The Crypto Front End to S3 CLIs project offers a couple of tools that allow you to encrypt and upload files to the Cloud (and, of course, download and decrypt them as well). This project provides a very simple-to-use interface in the form of the Cloud Safety Box, a tool that leverages a number of pre-configured default settings, trading flexibility for ease of use. For those wanting more control over the settings (including the encryption provider, encryption algorithm, key type, and other settings), simply use the s3-crypto.ksh utility. A diagram is available showing how these tools work together.

Since these tools can be configured to use OpenSSL as their cryptographic provider (and there are no further dependencies on OpenSolaris), you can actually use them on other operating systems (e.g., Mac OS X was successfully used during one of the tests).

It should be noted that the s3-crypto.ksh utility can be used to download and decrypt a ZFS snapshot uploaded to the Cloud using the ZFS Encrypted Backup to S3 utility, so with these two tools you have a way of storing and retrieving regular files as well as ZFS snapshots.
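
For example, a snapshot stored by the backup tool could be retrieved and restored as follows. This is only a sketch: the object name rpool.snap and the target dataset tank/restore are hypothetical, and the options passed to s3-crypto.ksh must match those used when the backup was created (here, compression and splitting; add the appropriate encryption options for your configuration):

$ s3-crypto.ksh -C -S -m get -b mybucket -l rpool.snap -r rpool.snap
$ zfs receive tank/restore < rpool.snap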

You can find all of the details, documentation and download instructions (as well as a Mercurial gate) at the Crypto Front End to S3 CLIs project page. So, please give it a try and let us know what you think!

Thursday Apr 30, 2009

Saving Encrypted ZFS Snapshots to the Cloud

Are you an OpenSolaris user? Do you use ZFS? Have you tried the ZFS Automatic Snapshot service? If so, you might be interested in a new tool that I just published over at Project Kenai that enables you to encrypt and store ZFS snapshots to either the Sun Cloud Storage Service (Employees Only at the moment) or Amazon's Simple Storage Service (S3).

You can find all of the details, documentation and download instructions (as well as a Mercurial gate) at the ZFS Encrypted Backup to S3 project page. So, please give it a try and let us know what you think!

About

This area of cyberspace is dedicated to the goal of raising cybersecurity awareness. This blog will discuss cybersecurity risks, trends, news, and best practices with a focus on improving mission assurance.
