Assured delete with ZFS dataset encryption

Need to be assured that your data is inaccessible after a certain point in time?

Many government agency and private sector security policies consider that achieved if the data is encrypted and you can show, with an acceptable level of confidence, that the encryption keys are no longer accessible.  The alternative is overwriting all the disk blocks that contained the data, which is time consuming, very expensive in IOPS, and in a copy-on-write filesystem like ZFS actually very difficult to achieve.  As a result it is often only done on full disks as they come out of production use for recycling or repurposing, which isn't ideal in a complex RAID layout.

In some situations (compliance or privacy are common reasons) it is desirable to have an assured delete of a subset of the data on a disk (or whole storage pool). Having the encryption policy and key management at the ZFS dataset (file system / ZVOL) level allows us to provide assured delete via key destruction at a much smaller granularity than full disks. It also means that, unlike full disk encryption, we can do this on a subset of the data while the disk drives remain live in the system.

If the subset of data matches a ZFS file system (or ZVOL) boundary we can provide this assured delete via key destruction; remember, ZFS file systems are very cheap.

Let's start with a simple case of a single encrypted file system:

$ zfs create -o encryption=on -o keysource=raw,file:///media/keys/g projects/glasgow
$ zfs create -o encryption=on -o keysource=raw,file:///media/keys/e projects/edinburgh
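
The raw key files need to exist before the 'zfs create' commands are run. As a rough sketch (the default encryption=on policy is aes-128-ccm, which takes a 16 byte key, and /media/keys is assumed to be a removable or otherwise separate key store), they could be generated with:

$ dd if=/dev/random of=/media/keys/g bs=16 count=1
$ dd if=/dev/random of=/media/keys/e bs=16 count=1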

After some time we decide we want to make projects/glasgow completely inaccessible.  The simplest way is to destroy the wrapping key, which in this case is the file /media/keys/g, and then destroy the projects/glasgow dataset.  The data on disk will still be there until ZFS starts reusing those blocks, but since we have destroyed /media/keys/g (which I'm assuming here is on some separate file system) we have a high level of assurance that the encrypted data can't be recovered even by reading "below" ZFS at the disk blocks directly.
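
In its most basic form (a sketch, assuming /media/keys really is a separate removable file system) that is just:

$ rm /media/keys/g
$ zfs destroy projects/glasgow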

I'd recommend a tiny additional step just to make sure that the last version of the data encryption keys (which are stored wrapped on disk in the ZFS pool) is not encrypted by anything the user/admin knows:

$ zfs key -c -o keysource=raw,file:///dev/random projects/glasgow
$ zfs key -u projects/glasgow
$ zfs destroy projects/glasgow

While re-wrapping the keys with a key the user/admin doesn't know doesn't add a huge amount of security/assurance, it makes the administrative intent much clearer and at least allows the user to assert that they did not know the wrapping key at the point the dataset was destroyed.

If we have clones the situation is slightly more complex, since clones share their data encryption key with their origin: because they share data written before the clone was branched off, the clone needs to be able to read both the shared and its unique data as if they were its own.

We can make sure that the unique data in a clone uses a different data encryption key than the origin does from the point the clone was taken:

... time passes and data is placed in projects/glasgow
$ zfs snapshot projects/glasgow@1
$ zfs clone -K projects/glasgow@1 projects/mungo

By passing '-K' to 'zfs clone' we ensure that any unique data in projects/mungo uses a different data encryption key from projects/glasgow. This means we can use the same operations as above to provide assured delete for the unique data in projects/mungo even though it is a clone.
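
For example, the same re-wrap, unload and destroy sequence from above applied to the clone (again using a randomly generated wrapping key the admin never sees) is:

$ zfs key -c -o keysource=raw,file:///dev/random projects/mungo
$ zfs key -u projects/mungo
$ zfs destroy projects/mungo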

Additionally we could run 'zfs key -K projects/glasgow' so that any new data written to projects/glasgow after the projects/mungo clone was taken uses a different data encryption key as well.  Note however that this is not atomic, so I would recommend making projects/glasgow read-only before taking the snapshot, even though normally this isn't necessary. The full sequence then becomes:

$ zfs set readonly=on projects/glasgow
$ zfs snapshot projects/glasgow@1
$ zfs clone -K projects/glasgow@1 projects/mungo
$ zfs set readonly=off projects/mungo
$ zfs key -K projects/glasgow
$ zfs set readonly=off projects/glasgow

If you don't have projects/glasgow marked as read-only there is a risk that data could be written to projects/glasgow after the snapshot is taken and before we get to the 'zfs key -K'.  This may be more than is necessary in some cases but it is the safest method.

General documentation for ZFS support of encryption is in the Oracle Solaris ZFS Administration Guide in the Encrypting ZFS File Systems section.

Comments:

This assumes that no copy has been made or can be made of the key, in this case file:///media/keys/g on USB drive. Correct?

Posted by Dan Anderson on November 15, 2010 at 05:18 PM GMT #

Perhaps you gave copies of the key to your co-workers before they left for vacation. They have it on their USB flash drives. Then the boss comes in and says bad news, Wally left his key in a bar and now some Engadget reporter has it. You can re-key all of the online datasets with zfs key -K tank/foo

Key management is always fun. I wonder if zfs key accepts stdin; if so, then we can use our existing key-management scripts, which use symmetric encryption (the user uses a password of their choosing) to wrap the key. That way you're probably safe. Even if Wally is on your team.

Actually my middle name is Wally.

Posted by Boyd Waters on November 16, 2010 at 01:09 AM GMT #

Boyd, yes you can use stdin: set keysource=raw,prompt and then redirect in from stdin.
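
For example (just a sketch; projects/aberdeen and the key file path are made-up names), something like:

$ zfs create -o encryption=on -o keysource=raw,prompt projects/aberdeen < /media/keys/a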

Dan, yes we are making that assumption, which is why I recommend taking the additional step before deletion of rewrapping the keys using a randomly generated key that the user/admin can't know.
Ideally we would be able to actually overwrite the keychain on disk but currently we can't do that.

Posted by Darren Moffat on November 16, 2010 at 01:55 AM GMT #

Darren,

Does 'zfs key -c' delete all the old copies of the key?

Can a crash dump be used to retrieve the keys?

[I am not a cryptography expert.]

Posted by Manoj Joseph on November 16, 2010 at 06:35 PM GMT #

Manoj, if the system panics and a crash dump is generated then yes, the unwrapped keys (and the wrapping key) for any datasets where they have been provided (via zfs key -l or zfs mount) will still be in the crash dump. These are in kmem_alloc'd memory so they don't appear in swap, but they do appear in crash dumps. Note that your dump ZVOL *can* be encrypted. I'm investigating the possibility of providing a tool so that the keys can be removed from the dump if desired.
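
As a quick check (a sketch that assumes the usual rpool/dump dump device), you can see what the dump device is and whether that ZVOL is encrypted with:

$ dumpadm
$ zfs get encryption rpool/dump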

For 'zfs key -c' the old copies of the keys are in blocks that are no longer referenced and are on the free list available for allocation; they are not currently explicitly overwritten on disk.

Posted by Darren Moffat on November 17, 2010 at 02:02 AM GMT #

Re: For 'zfs key -c' the old copies of the keys are no longer attached blocks and are on the free list available for allocation, they are not currently explicitly overwritten on disk.

Does it mean (along with ZFS COW) that even if the admin/user re-wrapped the data encryption keys with a random key, the data encryption keys currently may still be located on the underlying disk media "below ZFS" wrapped with the old and possibly compromised key - which would allow reading the destroyed dataset until its blocks are overwritten at some random point in the future?

Also, how does encryption play with "zfs send | zfs recv" (i.e. for backups, remote replication, etc.)?

Thanks,
//Jim

Posted by Jim Klimov on November 21, 2010 at 03:17 AM GMT #

Jim, yes it does mean that in theory but you would have to find the old blocks first. I have plans to address this in the future (no commitment as to when at this time).

As for "zfs send | zfs recv" see the "Encrypting ZFS File Systems" section of the Oracle Solaris documentation: http://docs.sun.com/app/docs/doc/821-1448/gkkih?l=en&a=view

Posted by Darren Moffat on November 22, 2010 at 05:54 AM GMT #

Thank you for recent replies.

A small FanFic for your enjoyment:

I can imagine customers wanting a truly dependable delete, as in overwriting the actual blocks of cipher keys.

One thriller-like scenario could be the capture of computer systems by force, be that the good guys (financial police) or the bad guys (raiders, mafia, etc.) That is a dilemma in itself, though.

During the capture the IT or a Big Boss could trigger the deletion of a dataset (he's more likely to use pyro charges though - but it's out of ZFS scope), but quite soon the hardware is in intruders' hands (i.e. available for bit-by-bit replication for further study). I believe recent changes can be tracked by TxG history which is around for say 200 transactions to be possibly rolled back (as in ZPOOL auto-repair on mount). So finding the recently deleted datasets' blocks would pose a moderate problem.

Posted by Jim Klimov on November 22, 2010 at 07:00 AM GMT #
