Friday Nov 19, 2010

ZFS encryption: what is on disk?

This article covers what is and isn't stored encrypted on disk for encrypted ZFS datasets, and how the actual encryption is done. It does require some understanding of Solaris and ZFS debugging tools.

The first important thing to understand is that ZFS does not provide "full disk" encryption: a disk containing data encrypted by ZFS is still recognisable as part of a ZFS pool.

This is in part because one of the requirements for adding encryption support to ZFS was that a given pool be able to contain a mix of encrypted and cleartext datasets, and that the encrypted datasets be able to use different algorithms/key lengths and different encryption keys.

We also require that the key material does not need to have been made available in order for pool wide operations and certain dataset operations (such as zfs destroy) to succeed.  One of the most important pool wide operations is scrub/resilver; we need to ensure that hotspare, disk replacement and self healing work even if the key material has never been made available to this running instance of the system. We must also be able to claim (but not necessarily replay) the log blocks (ZIL) on reboot after power loss or panic without requiring the key material (ZFS must remain consistent on disk at all times).

What this means is that even in a pool where all of the datasets are marked as being encrypted (e.g. zpool create -O encryption=on tank ...) there is some ZFS metadata that is always in the clear.

What is always in the clear even for encrypted datasets?
  • ZFS pool layout
  • Dataset names
  • Pool and dataset properties, including user defined properties
    • compression, encryption, share, etc.
  • Dataset level quotas (zfs set quota)
  • Dataset delegations (zfs allow)
  • The pool history (zpool history)
  • All dnode blocks
    • Needed to traverse the pool for resilver/scrub
  • Block pointer
    • The blkptr_t contains the MAC/AuthTag from AES-CCM or AES-GCM in the top 96 bits of the checksum field. The SHA256 checksum is truncated to provide this 96 bits of space.
      • The checksum for an encrypted block is always sha256-mac
    • The 96-bit IV for the block is in dva[2] of the blkptr_t (a rough sketch of this layout follows the list)
      • This means that an encrypted block can have a maximum of 2 copies, not 3
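
To make the relationship between the checksum field, the MAC and the IV a little more concrete, here is a rough Python sketch. It is purely illustrative: the helper name is hypothetical, and the exact word/bit placement inside the real blkptr_t is an implementation detail that may differ from this layout.

    import hashlib

    def encrypted_block_checksum_field(ciphertext, mac):
        """Illustrative only: a truncated SHA-256 of the ciphertext plus the
        96-bit CCM/GCM MAC packed into the 256-bit checksum field."""
        assert len(mac) == 12                    # 96-bit MAC/AuthTag
        digest = hashlib.sha256(ciphertext).digest()
        return digest[:20] + mac                 # 160 bits of SHA-256 + 96-bit MAC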

What is encrypted when a dataset is encrypted?

  • All file content written to a ZFS filesystem via the ZPL/VFS interfaces (i.e. POSIX interfaces)
    • open(2), write(2), mmap(2), etc.
  • All POSIX (and ZFS filesystem) metadata: ACLs, file and directory names, permissions, system and extended attributes on files and all file timestamps
    • ZPL metadata is normally contained in the bonusbuf area of a dnode_phys_t, but the dnode is in the clear on disk. For encrypted datasets the bonusbuf is always empty and the content that would normally have been there is pushed out to an encrypted "spill" block, called a System Attribute block.  Normally for ZPL filesystems spill blocks are only used for files with large ACLs.
  • System Attribute (spill) blocks (used for any purpose)
  • All data written to a ZVOL
  • User/group quota information for ZFS filesystems, both the policy and space accounting (zfs set userquota@ | groupquota@)
  • FUID mappings for UNIX <-> CIFS user identities
  • All of the above if it is in a ZIL (ZFS Intent Log) record.
    • Note that the actual ZIL blocks have block pointers and a record header that includes sizing information; these are in the clear.
  • Data encryption keys
    • These are stored in an on disk keychain referenced from the dsl_dir_phys_t. 

The on disk keychain

The keychain is a ZAP object whose entries are indexed by the transaction group they were created in. The entries are individually wrapped with the dataset's wrapping key, each with its own IV and an indicator of which wrapping key algorithm was used (at this time the wrapping key crypto algorithm always matches the encryption property).  Every encrypted dataset has at least one keychain entry.  Clones have their own keychain and do not reference their origin's, because a clone may have a different wrapping key and may have different keychain entries from its origin.
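
As a rough model of a keychain entry (illustrative only, not the on-disk ZAP format; this uses the Python cryptography package as a stand-in for the Solaris crypto framework, and the helper name is made up): each new data encryption key is wrapped with the dataset's wrapping key under its own IV and indexed by the txg it was created in.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESCCM

    def add_keychain_entry(keychain, txg, wrapping_key):
        """Wrap a freshly generated data encryption key and record it under
        the txg it was created in (keychain is a plain dict standing in for
        the ZAP object)."""
        data_key = os.urandom(16)                      # new AES-128 data key
        iv = os.urandom(12)                            # per-entry IV
        wrapped = AESCCM(wrapping_key, tag_length=12).encrypt(iv, data_key, None)
        keychain[txg] = (iv, wrapped)                  # txg -> wrapped key blob
        return data_key

Running 'zfs key -K <dataset>' corresponds to adding another entry like this, which is why the zdb output later in this article shows one keychain entry per rekey txg.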

Encrypting a block

Each ZFS on disk block (smallest size is 512 bytes, largest is 128k) is encrypted using AES in either CCM or GCM mode as indicated by the encryption property. Even though CCM and GCM provide the ability to have additional authenticated data that isn't encrypted, this isn't used because (with the exception of the ZIL blocks) all data in the block is encrypted.  A 96-bit IV per disk block is used, and both CCM and GCM are requested to provide a 96-bit MAC/AuthTag in addition to the ciphertext.  While we could ask for a larger MAC, space in the ZFS on disk blkptr_t is very tight and we need to leave some of it available for future features.  After encryption each block is also checksummed by the ZIO pipeline using SHA256 (fletcher is not available for encrypted datasets).
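
Here is a minimal sketch of encrypting one block in this style, using the Python cryptography package purely as an illustration (the real code uses the kernel crypto framework, and the variable names here are mine, not ZFS's):

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESCCM

    data_key = os.urandom(16)    # per-dataset data encryption key (aes-128-ccm)
    block = os.urandom(4096)     # one (possibly already compressed) ZFS block
    iv = os.urandom(12)          # 96-bit per-block IV (ZFS derives it, see below)

    cipher = AESCCM(data_key, tag_length=12)     # ask CCM for a 96-bit MAC
    out = cipher.encrypt(iv, block, None)        # no additional authenticated data
    ciphertext, mac = out[:-12], out[-12:]
    # The ciphertext is written as the block's data, the MAC goes into the
    # blkptr_t checksum field and the IV into dva[2]; the ZIO pipeline then
    # computes a SHA-256 over the ciphertext as the block checksum.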

IV generation for encrypted blocks

Every encrypted on disk block has its own IV (stored in dva[2] of the blkptr_t).  The IV is generated by taking the first 96 bits of a SHA256 hash of the contents of the zbookmark_t and the transaction the block was first written in.  We actually have all this information available both at read and write time, so we don't need to store the IV in the simplest case. However, snapshots, clones and deduplication, as well as some (non encryption related) future features, complicate this, so we do store the IV.
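
A sketch of that derivation (the field encoding and ordering here are assumptions for illustration; the real implementation hashes the actual zbookmark_t contents):

    import hashlib
    import struct

    def block_iv(objset, obj, level, blkid, birth_txg):
        """Illustrative: first 96 bits of a SHA-256 over the block's bookmark
        (objset, object, level, blkid) and the txg it was first written in."""
        buf = struct.pack(">5Q", objset, obj, level, blkid, birth_txg)
        return hashlib.sha256(buf).digest()[:12]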

If dedup=on for the dataset the per-block IVs are generated differently: they are the left-most 96 bits of an HMAC-SHA256 of the plaintext.  The key used for the HMAC-SHA256 is different from the one used by AES for the data encryption, but it is stored (wrapped) in the same keychain entry and, just like the data encryption key, a new one is generated when doing a 'zfs key -K <dataset>'.  Obviously we couldn't calculate this IV when doing a read, so it has to be stored.
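
Sketched the same way (again illustrative only), the dedup case keys the IV off the plaintext itself, so identical plaintext encrypted under the same keys produces identical ciphertext and can therefore dedup:

    import hashlib
    import hmac

    def dedup_block_iv(hmac_key, plaintext_block):
        """Left-most 96 bits of an HMAC-SHA256 over the plaintext; hmac_key is
        a separate (wrapped) key, not the AES data encryption key."""
        return hmac.new(hmac_key, plaintext_block, hashlib.sha256).digest()[:12]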

ZIL blocks

The ZIL log blocks are written in exactly the same way regardless of whether the ZIL is on the main pool disks or a separate intent log (slog) is being used.  The log blocks are encrypted differently from blocks going through the "normal" write path; this is because log blocks are formatted differently on disk anyway.  The log blocks are chained together and have a header (zil_chain_t) that indicates the size of the log block and the blkptr_t of the next block, as well as an embedded checksum that chains the blocks together.  For encrypted log blocks the MAC from AES CCM/GCM is also stored in this header (zil_chain_t).   It is log blocks rather than log records that are encrypted.  Within a given log block there may be multiple log records.  Some of these log records may contain pointers to blocks that were written directly (via dmu_sync); in order for us to claim the ZIL when the pool is imported, these embedded block pointers need to be readable even if the encryption keys are not available (which they won't be in most cases during the claim phase).  This means that we don't encrypt whole log blocks: the log record headers and any blkptr_t embedded in a log record are in the clear, and the rest of the log block content is encrypted.

How is the passphrase turned into a wrapping key (keysource=passphrase,prompt)?

When the dataset 'keysource' property indicates that a passphrase should be used we have to derive a wrapping key from it.  The wrapping key is derived from the passphrase provided and a per-dataset salt (which is stored as a hidden property of the dataset) by using PKCS#5 PBKDF2 with HMAC-SHA1 and 1000 iterations.  The wrapping key is not stored on disk.  The salt is randomly generated when the dataset is created (with keysource=passphrase,prompt) and changed each time 'zfs key -c' is run; even if the passphrase the user provides is the same, the salt and thus the actual wrapping key will be different.
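
In Python terms the derivation looks roughly like this (the 16-byte output length is an assumption matching an AES-128 wrapping key; the real code uses the Solaris crypto framework rather than this library call):

    import hashlib

    def passphrase_wrapping_key(passphrase, salt, keylen=16):
        """PKCS#5 PBKDF2 with HMAC-SHA1 and 1000 iterations, as described above."""
        return hashlib.pbkdf2_hmac("sha1", passphrase.encode("utf-8"),
                                   salt, 1000, dklen=keylen)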


Looking at the on disk structures

Using mdb macros and zdb we can actually look at some of this.  Remember that mdb and zdb are debugging tools only, use of mdb on a live kernel without understanding what you are doing can corrupt data.  The interfaces used below are not committed interfaces and are subject to change.

Firstly, using mdb on the live kernel (of an x86 machine) with a breakpoint placed on the zio_decrypt function, let's look at the block pointer using the mdb ::blkptr dcmd:

[2]> <rdi::print zio_t io_bp | ::blkptr
DVA[0]=<0:204200:20000:STD:1>
[L0 PLAIN_FILE_CONTENTS] SHA256_MAC OFF LE contiguous unique encrypted 1-copy
size=20000L/20000P birth=10L/10P fill=1
cksum=a585cb5cce997c21:c79825e93b16d5aa:28df6742e24913e:6b94fbd569cf3cd9

This blkptr_t is for the contents of a file; we can see that it is encrypted and that we only have one copy of it, so only one DVA entry. The checksum is SHA256_MAC, so the actual MAC value is 2e24913e6b94fbd569cf3cd9.  The ::blkptr dcmd doesn't show us the IV that is stored in DVA[2], but we can see it if we print the raw structure using ::print:

[2]> <rdi::print zio_t io_bp->blk_dva[2]
blk_dva[2] = {
    blk_dva[2].dva_word = [ 0x521926d500000000, 0x3b13ba46ab9f8a51 ]
}
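
As a quick sanity check of where the MAC lives, we can reassemble the four cksum words printed by ::blkptr above and look at their final 96 bits (a throwaway Python snippet; the byte ordering chosen here simply reproduces the quoted MAC value and says nothing about the in-memory zio_cksum_t layout):

    words = [0xa585cb5cce997c21, 0xc79825e93b16d5aa,
             0x028df6742e24913e, 0x6b94fbd569cf3cd9]
    cksum = b"".join(w.to_bytes(8, "big") for w in words)
    print(cksum[-12:].hex())     # -> 2e24913e6b94fbd569cf3cd9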

Now let's use zdb to look at some things (the output is trimmed slightly for the sake of this article):

# zdb -dd -e tank

Dataset mos [META], ID 0, cr_txg 4, 311K, 54 objects

    Object  lvl   iblk   dblk  dsize  lsize   %full  type
         0    1    16K    16K  96.0K    32K   84.38  DMU dnode
         1    1    16K     1K  1.50K     1K  100.00  object directory
         2    1    16K    512      0    512    0.00  DSL directory
         3    1    16K    512  1.50K    512  100.00  DSL props

...

        26    1    16K   128K  18.0K   128K  100.00  SPA history

...

        36    1    16K   128K      0   128K    0.00  bpobj
        37    1    512    512  3.00K     1K  100.00  DSL keychain
        38    1    16K     4K  12.0K     4K  100.00  SPA space map
...

This pool (tank) currently has 3 datasets, one of which is encrypted.  We can see from the above zdb output that the keychains are kept in the special "mos" dataset along with some other pool wide metadata.  Now let's look at those keychains in a bit more detail by asking zdb to be more verbose (again the output is trimmed to show relevant information only):

    # zdb -dddd -e tank
    ...
    Object  lvl   iblk   dblk  dsize  lsize   %full  type
        37    1    512    512  3.00K     1K  100.00  DSL keychain
        dnode flags: USED_BYTES 
        dnode maxblkid: 1
        Fat ZAP stats:
                Pointer table:
                        32 elements
                        zt_blk: 0
                        zt_numblks: 0
                        zt_shift: 5
                        zt_blks_copied: 0
                        zt_nextblk: 0
                ZAP entries: 2
                Leaf blocks: 1
                Total blocks: 2
                zap_block_type: 0x8000000000000001
                zap_magic: 0x2f52ab2ab
                zap_salt: 0x16c6fb
                Leafs with 2^n pointers:
                        5:      1 *
                Blocks with n*5 entries:
                        0:      1 *
                Blocks n/10 full:
                        9:      1 *
                Entries with n chunks:
                        9:      2 **
                Buckets with n entries:
                        0:     14 **************
                        1:      2 **
        Keychain entries by txg:
                txg 5 : wkeylen = 136
                txg 85 : wkeylen = 136

The above keychain object has two entries in it: the lowest numbered one (5) is from when the dataset was initially created, and the second one (85) is there because I had run 'zfs key -K tank/fs' on the dataset a little later.  Now let's use zdb to illustrate what I discussed in the previous article on assured delete: clones being able to have a different set of keychain entries from their origin.

To illustrate this I ran the following:

# zfs snapshot tank/fs@1
# zfs clone -K tank/fs@1 tank/fsc1
# zfs key -K tank/fs

First let's look at keychain object 37, which is for tank/fs, and then at the keychain object for the clone (I've trimmed the output a little more this time):

    Object  lvl   iblk   dblk  dsize  lsize   %full  type
        37    2    512    512  7.50K     2K  100.00  DSL keychain
      ...
        Keychain entries by txg:
                txg 5 : wkeylen = 136
                txg 85 : wkeylen = 136
                txg 174 : wkeylen = 136


     Object  lvl   iblk   dblk  dsize  lsize   %full  type
        101    1    512    512  4.50K  1.50K  100.00  DSL keychain
      ...
        Keychain entries by txg:
                txg 5 : wkeylen = 136
                txg 85 : wkeylen = 136
                txg 152 : wkeylen = 136

What we see above is that the original tank/fs dataset now has an additional entry from the 'zfs key -K tank/fs' that was run.  The keychain for the clone (object 101) also has three entries in it: it shares the entries for txg 5 and txg 85 with tank/fs (though they may be encrypted differently on disk depending on where the wrapping key is inherited from), and it has a unique entry created at txg 152.  We can see similar information by looking at the 'zpool history -il' output:

2010-11-19.05:58:25 [internal encryption key create txg:85] rekey succeeded dataset = 33 [user root on borg-nas]
2010-11-19.06:05:59 [internal encryption key create txg:152] rekey succeeded dataset = 96 from dataset = 77 [user root on borg-nas]
2010-11-19.06:06:40 [internal encryption key create txg:174] rekey succeeded dataset = 33 [user root on borg-nas]

What is decrypted in memory?

As already mentioned, the data encryption keys are stored wrapped (encrypted) on disk, but they are kept in memory in the clear along with the wrapping key (we need the wrapping key to stay around for 'zfs key -K' and for 'zfs create' where the keysource property is inherited).  They are stored only in non-swappable kernel memory (though remember you can swap on an encrypted ZVOL).  They are accessible to someone with all privileges who is able to use mdb on the live kernel or on a crash dump - but so is your plaintext data.  A suitable hardware keystore could be used so that key material is only ever inside its FIPS 140 boundary, but that support is not yet complete (note this is not a commitment from Oracle to provide this support in any future release of ZFS or Solaris) - there would be no on-disk change required to support it though.

Any data or metadata blocks that are encrypted on disk are held in the in-memory cache (ARC) in the clear; this is required because the in-memory ARC buffers are sometimes "loaned" using zero copy to other parts of the system, including other file systems such as NFS and CIFS.  If this is too much of a risk for you, you can force the system to always go back to disk and decrypt blocks only when needed with 'zfs set primarycache=metadata <dataset>', but note that you will not benefit from the caching and this will carry a significant performance penalty.

The L2ARC is not currently available for use by encrypted datasets (note this is not a commitment from Oracle to provide this support in any future release of ZFS or Solaris); it is equivalent to having done 'zfs set secondarycache=none <dataset>'. The DDT for deduplication is not encrypted data and is pool-wide metadata (stored in the MOS), so it can still be stored in the L2ARC.

All of the above article content could have been discovered by reading the zfs(1M) man page and using mdb, DTrace and zdb while experimenting on a live system, which is actually how I wrote the article.  There is a lot more you can examine about the on-disk and in-memory state of Solaris, not just ZFS, by using mdb and DTrace - neither of which you can hide from, since the kernel modules contain CTF data with full structure definitions.  Note though that unless the interfaces/structures are documented in the Solaris DDI, or other official documentation from Oracle, you are looking at implementation details that are subject to change - often even in an update/patch.

Tuesday Nov 16, 2010

Choosing a value for the ZFS encryption property

The 'on' value for the ZFS encryption property maps to 'aes-128-ccm' because it is the fastest of the six algorithm/mode combinations currently provided and is believed to provide sufficient security for many deployments.  Depending on the filesystem/ZVOL workload you may not notice (or care if you do notice) the difference between the AES key lengths and modes.  However, note that at this time I believe the collective wisdom in the cryptography community is to recommend AES-128 over AES-256. [Note that this is not a statement of Oracle's endorsement or verification of that research.]

Both CCM and GCM are provided so that if one turns out to have flaws (modes of an encryption algorithm sometimes do have flaws independent of the base algorithm), hopefully the other will still be safe to use.

On systems without hardware/CPU support for Galois multiplication (unlike, for example, Intel Westmere or SPARC T3) GCM will be slower, because the Galois field multiplication has to happen in software without any hardware/CPU assist.  However, depending on your workload you might not even notice the difference between CCM and GCM.

One reason you may want to select aes-128-gcm rather than aes-128-ccm is that GCM is one of the modes for AES in NSA Suite B but CCM is not.

ZFS encryption was designed and implemented to be extensible to new algorithm/mode combinations for data encryption and key wrapping.

Are there symmetric algorithms, for data encryption, other than AES that are of interest?

The wrapping key algorithm currently matches the data encryption key algorithm; is there interest in providing different wrapping key algorithms and configuration properties for selecting which one? For example, doing key wrapping with an RSA keypair/certificate?

[Note this is not a commitment from Oracle to implementing/providing any suggested additions in any release of any product but if there are others of interest we would like to know so they can be considered.]

Monday Nov 15, 2010

Having my secured cake and Cloning it too (aka Encryption + Dedup with ZFS)

The main goal of encryption is to make the (presumably sensitive) cleartext data indistinguishable from random data.  Good file system encryption usually aims to have the same plaintext encrypt to different ciphertext, at least when written at a different "location", even if the same key is used.  One way to achieve that is to derive the initialisation vector (IV) somehow from where the blocks of the files are stored on disk.  In this respect the encryption support in ZFS is no different: by default we derive the IV from a combination of which dataset / object the block is for and also when (its transaction) it was written.  This means that the same block of plaintext data written to a different file in the same filesystem will get a different IV and thus different ciphertext.  Since ZFS is copy-on-write and we use the transaction identifier, it also means that if we "overwrite" the same block of a file at a later time it still ends up having a different IV and thus will be different ciphertext.  Each encrypted dataset in ZFS has a different set of data encryption keys (see my earlier post on assured delete for more details on that), so there we change both the IV and the encryption key and have a really high level of confidence of getting different ciphertext when the same data is written to different datasets.

The goal of deduplication in storage is to coalesce matching disk blocks into a smaller number of copies (ideally 1, but in ZFS that number depends on the value of the copies property on the dataset and the pool wide dedupditto property so it could be more than 1).  Given the above description of how we do encryption it would seem that encryption and deduplication are fundamentally at odds with each other - and usually that is true.

When we write a block to disk in ZFS it goes through the ZIO pipeline and in doing so a number of transforms are optionally applied to the data:  compress -> encryption -> checksum -> dedup -> raid.

The deduplication step uses the checksums of the blocks to find suitable matches. This means it is acting on the already compressed and encrypted data.  Also in ZFS deduplication matches are searched for in all datasets in the pool with dedup=on.
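
A conceptual sketch of the write path (not real ZFS code; the arguments stand in for the pipeline stages) shows why this matters: the dedup table is keyed by the checksum of the data as it will sit on disk, i.e. after compression and encryption.

    def zio_write(plaintext, compress, encrypt, sha256, ddt, allocate):
        """Illustrative only: dedup matches on the checksum of the ciphertext."""
        data = compress(plaintext)
        data, mac, iv = encrypt(data)          # skipped for cleartext datasets
        cksum = sha256(data)                   # checksum over the encrypted data
        if cksum in ddt:                       # a dedup hit needs the ciphertext,
            return ddt[cksum]                  # and hence key and IV, to match
        bp = allocate(data, cksum, mac, iv)    # otherwise write a new block
        ddt[cksum] = bp
        return bp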

So we have very little chance of getting any deduplication hits with encrypted datasets because of how the IV is generated and the fact that each dataset has its own set of encryption keys.  In fact not getting hits with deduplication is actually a good test that we are using different keys and IVs and thus getting different ciphertext for the same plaintext.

So encryption=on + dedup=on is pointless, right?

Not so with ZFS. I wasn't happy about giving up on deduplication for encrypted datasets, so we found a solution; it has some restrictions, but I think they are reasonable and realistic ones.

Within what I'll call a "clone family", i.e. all datasets that are clones of the same original dataset or clones of those clones, we share data encryption keys in the default case, because the datasets share data (again, see my earlier post on assured delete for info on the data encryption keys). So I found a method of generating the IV such that within the "clone family" we will get dedup hits for the same plaintext.  For this to work you must not run 'zfs key -K' on any of the clones and you must not pass '-K' to 'zfs clone' when you create your clones.  Note that this does not apply to child datasets, only to snapshots/clones; by that I mean nothing breaks, you just won't get deduplication matches.
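
A toy Python illustration of why this works (it assumes an IV derived from an HMAC over the plaintext with a key shared inside the clone family, matching the dedup IV scheme described in the "what is on disk" article above; the function and key names are mine): with shared keys the same plaintext block produces the same ciphertext, while an unrelated encrypted dataset with its own keys does not.

    import os
    import hashlib, hmac
    from cryptography.hazmat.primitives.ciphers.aead import AESCCM

    def encrypt_block(data_key, hmac_key, plaintext):
        iv = hmac.new(hmac_key, plaintext, hashlib.sha256).digest()[:12]
        return AESCCM(data_key, tag_length=12).encrypt(iv, plaintext, None)

    block = b"same guest OS block" * 100
    family_keys = (os.urandom(16), os.urandom(32))   # shared within a clone family
    other_keys = (os.urandom(16), os.urandom(32))    # an unrelated encrypted dataset

    assert encrypt_block(*family_keys, block) == encrypt_block(*family_keys, block)
    assert encrypt_block(*family_keys, block) != encrypt_block(*other_keys, block)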

So no, it isn't pointless, and what's more, for some configurations it will actually work really well.  A common use case that does work well is a set of virtualisation images (maybe filesystems for local Zones, or ZVOLs shared over iSCSI for OVM or similar) that are all derived from the same original master by using ZFS clones and that all get patched/updated with pretty much the same set of patches/updates.  This is a case where clones+dedup work well in the unencrypted case, and one which, as shown above, can still work well even when encryption is enabled.

The usual deployment caveats with ZFS deduplication still apply, i.e. it is block based and it works best when you have lots of available DRAM and/or L2ARC for caching the DDT.  ZFS encryption doesn't add any additional requirements to this.

So we can happily do this type of thing, and have it "work as expected":

$ zfs create -o compression=on -o encryption=on -o dedup=on tank/builds
... 
$ zfs create tank/builds/master
... 
$ zfs snapshot tank/builds/master@1
$ zfs clone tank/builds/master@1 tank/builds/project-one
...
$ zfs clone tank/builds/master@1 tank/builds/project-two 

General documentation for ZFS support of encryption is in the Oracle Solaris ZFS Administration Guide in the Encrypting ZFS File Systems section.

Assured delete with ZFS dataset encryption

Need to be assured that your data is inaccessible after a certain point in time ?

Many government agency and private sector security policies allow you to achieve that if the data is encrypted and you can show, with an acceptable level of confidence, that the encryption keys are no longer accessible.  The alternative is overwriting all the disk blocks that contained the data; that is time consuming, very expensive in IOPS, and in a copy-on-write filesystem like ZFS actually very difficult to achieve.  So often this is only done on full disks as they come out of production use for recycling/repurposing, but this isn't ideal in a complex RAID layout.

In some situations (compliance or privacy are common reasons) it is desirable to have an assured delete of a subset of the data on a disk (or whole storage pool). Having the encryption policy / key management at the ZFS dataset (file system / ZVOL) level allows us to provide assured delete via key destruction at a much smaller granularity than full disks; it also means that, unlike full disk encryption, we can do this on a subset of the data while the disk drives remain live in the system.

If the subset of data matches a ZFS file system (or ZVOL) boundary we can provide this assured delete via key destruction; remember ZFS filesystems are relatively very cheap.

Lets start with a simple case of a single encrypted file system:

$ zfs create -o encryption=on -o keysource=raw,file:///media/keys/g projects/glasgow
$ zfs create -o encryption=on -o keysource=raw,file:///media/keys/e projects/edinburgh

After some time we decide we want to make projects/glasgow completely inaccessible.  The simplest way is to destroy the wrapping key, in this case the file /media/keys/g, and then destroy the projects/glasgow dataset.  The data on disk will still be there until ZFS starts using those blocks again, but since we have destroyed /media/keys/g (which I'm assuming here is on some separate file system) we have a high level of assurance that the encrypted data can't be recovered, even by reading "below" ZFS and looking at the disk blocks directly.

I'd recommend a tiny additional step just to make sure that the last version of the data encryption keys (which are stored wrapped on disk in the ZFS pool) are not encrypted by anything the user/admin knows:

$ zfs key -c -o keysource=raw,file:///dev/random projects/glasgow
$ zfs key -u projects/glasgow
$ zfs destroy projects/glasgow

While re-wrapping the keys with a key the user/admin doesn't know doesn't provide a huge amount of additional security/assurance, it makes the administrative intent much clearer and at least allows the user to assert that they did not know the wrapping key at the point the dataset was destroyed.

If we have clones the situation is slightly more complex, since clones share their data encryption key with their origin: because they share data written before the clone was branched off, the clone needs to be able to read both the shared and the unique data as if it were its own.

We can make sure that the unique data in a clone uses a different data encryption key than the origin does from the point the clone was taken:

... time passes data is placed in projects/glasgow
$ zfs snapshot projects/glasgow@1
$ zfs clone -K projects/glasgow@1 projects/mungo

By passing '-K' to 'zfs clone' we ensure that any unique data in projects/mungo uses a different data encryption key from projects/glasgow; this means we can use the same operations as above to provide assured delete for the unique data in projects/mungo even though it is a clone.

Additionally we could also run 'zfs key -K projects/glasgow' and have any new data written to projects/glasgow after the projects/mungo clone was taken use a different data encryption key as well.  Note however that this is not atomic, so I would recommend making projects/glasgow read-only before taking the snapshot even though normally this isn't necessary; the full sequence then becomes:

$ zfs set readonly=on projects/glasgow
$ zfs snapshot projects/glasgow@1
$ zfs clone -K projects/glasgow@1 projects/mungo
$ zfs set readonly=off projects/mungo
$ zfs key -K projects/glasgow
$ zfs set readonly=off projects/glasgow

If you don't have projects/glasgow marked as read-only then there is a risk that data could be written to projects/glasgow after the snapshot is taken and before we get to the 'zfs key -K'.  This may be more than is necessary in some cases, but it is the safest method.

General documentation for ZFS support of encryption is in the Oracle Solaris ZFS Administration Guide in the Encrypting ZFS File Systems section.

Introducing ZFS Crypto in Oracle Solaris 11 Express

Today Oracle Solaris 11 Express was released and is available for download; this release includes on-disk encryption support for ZFS.

Using ZFS encryption support can be as easy as this:

# zfs create -o encryption=on tank/darren
Enter passphrase for 'tank/darren':
Enter again:
#

If you don't wish to use a passphrase then you can use the new keysource property to specify that the wrapping key is stored in a file instead.  See the zfs(1M) man page or the ZFS documentation set for more details, but here are a few simple examples:

# zfs create -o encryption=on -o keysource=raw,file:///media/stick/mykey tank/darren

# zfs create -o encryption=aes-256-ccm -o keysource=passphrase,prompt tank/tony

If encryption is enabled and keysource is not specified we default to keysource=passphrase,prompt.  I plan to have other keysource locations available at some point in the future, for example retrieving the wrapping key from an https:// location or from a PKCS#11 accessible hardware keystore or key management system.

There are multiple different keys used in ZFS. The one the user/admin manages (or that is derived from the entered passphrase) is a wrapping key; that means it is used to encrypt other keys and is not itself used for data encryption.  The data encryption keys are randomly generated at the time the dataset is created (using the kernel interfaces for /dev/random), or when a user/admin explicitly requests a new data encryption key for newly written data in that ZVOL or file system (e.g. zfs key -K tank/darren).

Back to our simple example of 'tank/darren', let's do a little more. If we now create a filesystem below 'tank/darren', e.g. 'tank/darren/music', we won't be prompted for an additional passphrase/key since the encryption and keysource properties are inherited by the child dataset.  Note that encryption must be set at create time and cannot be changed on existing datasets.  Inheriting the keysource means that the child dataset inherits the same wrapping key, but we generate new data encryption keys for that dataset.

If you create a clone of an encrypted file system then the clone is always encrypted as well, but the wrapping key for a clone need not be the same as the origin, for example:

# zfs snapshot tank/darren@1
# zfs clone tank/darren@1 tank/darren/sub

# zfs clone tank/darren@1 tank/tony
Enter passphrase for 'tank/tony':
Enter again:

In the first clone above the 'tank/darren/sub' dataset inherits all the encryption properties and wrapping key from 'tank/darren'.  In the second case there was no encrypted dataset at 'tank' to inherit the keysource property from (and thus the wrapping key) so we take the default keysource value of passphrase,prompt.

We can also change the wrapping key at any time using the 'zfs key -c <dataset>' command.  Note that in the passphrase case this does not prompt for the existing passphrase; this is intentional, as the filesystem will already be mounted (and maybe shared) and the files accessible.  It also gives us the ability, via ZFS allow delegations, to distinguish between users who can load/unload the keys (and thus have the filesystem mount) via the 'key' delegation and users who can change the wrapping keys via the 'keychange' delegation.

At the time the wrapping key is changed you can also choose to use a different style of wrapping key, say switching from a prompted for passphrase to a key in a file.

The easiest way to create a wrapping key is to use the existing Solaris pktool(1) command, e.g.:

$ pktool genkey keystore=file keytype=aes keylen=128 outkey=/media/stick/mykey

ZFS uses the Oracle Solaris Cryptographic Services APIs; as such it automatically benefits from the hardware acceleration of AES available on the SPARC T-series processors and on Intel processors supporting the AES instructions.

General documentation for ZFS support of encryption is in the Oracle Solaris ZFS Administration Guide in the Encrypting ZFS File Systems section.

For more about Oracle Solaris 11 Express see the articles and white papers site. Particularly the security and storage software articles.

Updated 2010-01-17 to change links for documentation to new location.
