Tuesday Oct 30, 2012

New ZFS Encryption features in Solaris 11.1

Solaris 11.1 brings a few small but significant improvements to ZFS dataset encryption.  There is a new read-only property 'keychangedate' that shows the date and time of the last wrapping key change (basically the last time 'zfs key -c' was run on the dataset).  This is similar to the 'rekeydate' property, which shows the last time we added a new data encryption key.

$ zfs get creation,keychangedate,rekeydate rpool/export/home/bob
NAME                   PROPERTY       VALUE                  SOURCE
rpool/export/home/bob  creation       Mon Mar 21 11:05 2011  -
rpool/export/home/bob  keychangedate  Fri Oct 26 11:50 2012  local
rpool/export/home/bob  rekeydate      Tue Oct 30  9:53 2012  local

The above example shows that we have both changed the wrapping key and added new data encryption keys since the filesystem was initially created.  If we haven't changed the wrapping key then 'keychangedate' will be the same as the creation date.  For filesystems that were created prior to Solaris 11.1 we don't have this data, so it will be displayed as '-' instead.

Another change I made was to relax the restriction that the size of the wrapping key had to match the size of the data encryption key (i.e. the size given in the encryption property).  In Solaris 11 Express and Solaris 11, if you set encryption=aes-256-ccm we required that the wrapping key be 256 bits in length.  This restriction was unnecessary and made it impossible to select encryption property values with key lengths of 128 and 192 when the wrapping key was stored in the Oracle Key Manager, because the Oracle Key Manager currently stores AES 256 bit keys only.  With Solaris 11.1 this restriction has been removed.
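
For example, a 256 bit raw wrapping key can now wrap a dataset that uses aes-128-ccm for its data encryption.  A quick sketch (the key file path and dataset name here are just illustrative):

# pktool genkey keystore=file keytype=aes keylen=256 outkey=/media/keys/wk256
# zfs create -o encryption=aes-128-ccm -o keysource=raw,file:///media/keys/wk256 tank/project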

There is still one case where the wrapping key size and data encryption key size will always match, and that is when the keysource property sets the format to 'passphrase'.  Since this is a key generated internally to libzfs, and to preserve compatibility on upgrade from older releases, the code will always generate a wrapping key (using PKCS#5 PBKDF2 as before) that matches the key length given in the encryption property.

The pam_zfs_key module has been updated so that it allows you to specify encryption=off.

There were also some bugs fixed, including no longer attempting to load keys for datasets that are delegated to zones, and some other fixes to error paths to ensure that we can support Zones on Shared Storage where all the datasets in the ZFS pool are encrypted, which I discussed in my previous blog entry.

If there are features you would like to see for ZFS encryption please let me know (direct email or comments on this blog are fine, or if you have a support contract you can have your support rep log an enhancement request).


Monday Oct 29, 2012

Solaris 11.1: Encrypted Immutable Zones on (ZFS) Shared Storage

Solaris 11 brought both ZFS encryption and the Immutable Zones feature, and I've talked about the combination in the past.  Solaris 11.1 adds a fully supported method of storing zones in their own ZFS pool on shared storage, so let's update things a little and put all three parts together.

When using an iSCSI (or other supported shared storage) target for a zone we can either let the Zones framework set up the ZFS pool or we can do it manually beforehand and tell the Zones framework to use the one we made earlier.  To enable encryption we have to take the second path, so that we can set up the pool with encryption before we start to install the zone on it.

We start by configuring the zone and specifying a rootzpool resource:

# zonecfg -z eizoss
Use 'create' to begin configuring a new zone.
zonecfg:eizoss> create
create: Using system default template 'SYSdefault'
zonecfg:eizoss> set zonepath=/zones/eizoss
zonecfg:eizoss> set file-mac-profile=fixed-configuration
zonecfg:eizoss> add rootzpool
zonecfg:eizoss:rootzpool> add storage \
  iscsi://my7120.example.com/luname.naa.600144f09acaacd20000508e64a70001
zonecfg:eizoss:rootzpool> end
zonecfg:eizoss> verify
zonecfg:eizoss> commit
zonecfg:eizoss> 

Now let's create the pool and specify encryption:

# suriadm map \
   iscsi://my7120.example.com/luname.naa.600144f09acaacd20000508e64a70001
PROPERTY	VALUE
mapped-dev	/dev/dsk/c10t600144F09ACAACD20000508E64A70001d0
# echo "zfscrypto" > /zones/p
# zpool create -O encryption=on -O keysource=passphrase,file:///zones/p eizoss \
   /dev/dsk/c10t600144F09ACAACD20000508E64A70001d0
# zpool export eizoss

Note that the keysource used above is just for this example; realistically you should probably use an Oracle Key Manager or some other better key storage, but that isn't the purpose of this example.  It does, however, need to be one of file://, https:// or pkcs11:, and not prompt, for the key location.  Also note that we exported the newly created pool.  The name we used here doesn't actually matter because it will get set properly on import anyway.  So let's go ahead and do our install:

# zoneadm -z eizoss install -x force-zpool-import
Configured zone storage resource(s) from:
    iscsi://my7120.example.com/luname.naa.600144f09acaacd20000508e64a70001
Imported zone zpool: eizoss_rpool
Progress being logged to /var/log/zones/zoneadm.20121029T115231Z.eizoss.install
    Image: Preparing at /zones/eizoss/root.

 AI Manifest: /tmp/manifest.xml.ujaq54
  SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml
    Zonename: eizoss
Installation: Starting ...

              Creating IPS image
Startup linked: 1/1 done
              Installing packages from:
                  solaris
                      origin:  http://pkg.oracle.com/solaris/release/
              Please review the licenses for the following packages post-install:
                consolidation/osnet/osnet-incorporation  (automatically accepted,
                                                          not displayed)
              Package licenses may be viewed using the command:
                pkg info --license <pkg_fmri>
DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
Completed                            187/187   33575/33575  227.0/227.0  384k/s

PHASE                                          ITEMS
Installing new actions                   47449/47449
Updating package state database                 Done 
Updating image state                            Done 
Creating fast lookup database                   Done 
Installation: Succeeded

         Note: Man pages can be obtained by installing pkg:/system/manual

 done.

        Done: Installation completed in 929.606 seconds.


  Next Steps: Boot the zone, then log into the zone console (zlogin -C)
              to complete the configuration process.

Log saved in non-global zone as /zones/eizoss/root/var/log/zones/zoneadm.20121029T115231Z.eizoss.install

That was really all we had to do.  When the install is done, boot the zone up as normal.

The zone administrator has no direct access to the ZFS wrapping keys used for the encrypted pool the zone is stored on.  Due to how inheritance works in ZFS, he can still create new encrypted datasets that use those wrapping keys (without them ever being inside a process in the zone), or he can create encrypted datasets inside the zone that use keys of his own choosing.  The output below shows the two cases:

rpool is inheriting the key material from the global zone (note that we can see the value of the keysource property, but we don't use it inside the zone, nor does that path need to be, or is it, accessible inside the zone), whereas rpool/export/home/bob has keysource set locally.

# zfs get encryption,keysource rpool rpool/export/home/bob
NAME                   PROPERTY    VALUE                       SOURCE
rpool                  encryption  on                          inherited from $globalzone
rpool                  keysource   passphrase,file:///zones/p  inherited from $globalzone
rpool/export/home/bob  encryption  on                          local
rpool/export/home/bob  keysource   passphrase,prompt           local
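
As a quick illustration of the difference, a dataset created inside the zone without specifying keysource silently inherits the global zone's wrapping key, while one created with keysource=passphrase,prompt asks the zone administrator for a passphrase of his own (the dataset names here are just examples):

# zfs create rpool/export/home/alice
# zfs create -o keysource=passphrase,prompt rpool/export/home/bob
Enter passphrase for 'rpool/export/home/bob':
Enter again: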

Thursday Sep 13, 2012

To encryption=on or encryption=off a simple ZFS Crypto demo

I've been asked twice this week how I would demonstrate that ZFS encryption really is encrypting the data on disk.  It needs to be really simple, and the target isn't forensics or cryptanalysis, just a quick before-and-after demo.

I usually do this small demo using a pool based on files so I can run strings(1) on the "disks" that make up the pool.  The demo will work with real disks too, but it will take a lot longer (how much longer depends on the size of your disks).  The file hamlet.txt is this one from gutenberg.org.

# mkfile 64m /tmp/pool1_file
# zpool create clear_pool /tmp/pool1_file
# cp hamlet.txt /clear_pool
# grep -i hamlet /clear_pool/hamlet.txt | wc -l

Note the number of times hamlet appears

# zpool export clear_pool
# strings /tmp/pool1_file | grep -i hamlet | wc -l

Note the number of times hamlet appears on disk - it is 2 more, because the file is called hamlet.txt, file names are in the clear as well, and we keep at least two copies of metadata.
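
If you want to see the file name leaking on its own, you can also grep the pool file for the name itself (illustrative; the exact count depends on how many copies of the metadata the pool keeps):

# strings /tmp/pool1_file | grep 'hamlet.txt' | wc -l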

Now let's encrypt the file systems in the pool.
Note: you MUST use a new pool file, don't reuse the one from above.

# mkfile 64m /tmp/pool2_file
# zpool create -O encryption=on enc_pool /tmp/pool2_file
Enter passphrase for 'enc_pool': 
Enter again: 
# cp hamlet.txt /enc_pool
# grep -i hamlet /enc_pool/hamlet.txt | wc -l

Note the number of times hamlet appears is the same as before

# zpool export enc_pool
# strings /tmp/pool2_file | grep -i hamlet | wc -l

Note the word hamlet doesn't appear at all!

As I said above this isn't intended as "proof" that ZFS does encryption properly, just a quick demo.

Wednesday Nov 09, 2011

User home directory encryption with ZFS

ZFS encryption has a very flexible key management capability, including the option to delegate key management to individual users.  We can use this together with a PAM module I wrote to provide per user encrypted home directories.  My laptop and workstation at Oracle are configured like this:

First let's set up console login for encrypted home directories:

    root@ltz:~# cat >> /etc/pam.conf<<_EOM
    login auth     required pam_zfs_key.so.1 create
    other password required pam_zfs_key.so.1
    _EOM

The first line ensures that when bob logs in on the console his home directory is created as an encrypted ZFS file system if it doesn't already exist; the second ensures that the passphrase for it stays in sync with his login password.

Now let's create a new user 'bob' who looks after his own encryption key for his home directory.  Note that we do not specify '-m' to useradd, so that pam_zfs_key will create the home directory when the user logs in.

root@ltz:~# useradd bob
root@ltz:~# passwd bob
New Password: 
Re-enter new Password: 
passwd: password successfully changed for bob
root@ltz:~# passwd -f bob
passwd: password information changed for bob

We have now created the user bob with an expired password.  Let's log in as bob and see what happens:

    ltz console login: bob
    Password: 
    Choose a new password.
    New Password: 
    Re-enter new Password: 
    login: password successfully changed for bob
    Creating home directory with encryption=on.
    Your login password will be used as the wrapping key.
    Last login: Tue Oct 18 12:55:59 on console
    Oracle Corporation      SunOS 5.11      11.0    November 2011
    -bash-4.1$ /usr/sbin/zfs get encryption,keysource rpool/export/home/bob
    NAME                   PROPERTY    VALUE              SOURCE
    rpool/export/home/bob  encryption  on                 local
    rpool/export/home/bob  keysource   passphrase,prompt  local

Note that bob had to first change the expired password.  After we provided a new login password, a new ZFS file system for bob's home directory was created.  The new login password that bob chose is also the passphrase for this ZFS encrypted home directory.  This means that at no time did the administrator ever know the passphrase for bob's home directory.  After the machine reboots, bob's home directory won't be mounted again until bob logs in.  If we want bob's home directory to be unmounted and the key removed from the kernel when bob logs out (even if the system isn't rebooting), then we can add the 'force' option to the pam_zfs_key.so.1 module line in /etc/pam.conf.
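
For example, mirroring the console login line from above (illustrative only; adjust whichever pam_zfs_key.so.1 line your configuration uses):

    login auth     required pam_zfs_key.so.1 create force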

If users login with GDM or ssh then there is a little more configuration needed in /etc/pam.conf to enable pam_zfs_key for those services as well.

root@ltz:~# cat >> /etc/pam.conf<<_EOM
gdm     auth requisite          pam_authtok_get.so.1
gdm     auth required           pam_unix_cred.so.1
gdm     auth required           pam_unix_auth.so.1
gdm     auth required           pam_zfs_key.so.1 create
gdm     auth required           pam_unix_auth.so.1
_EOM

root@ltz:~# cat >> /etc/pam.conf<<_EOM
sshd-kbdint     auth requisite          pam_authtok_get.so.1
sshd-kbdint     auth required           pam_unix_cred.so.1
sshd-kbdint     auth required           pam_unix_auth.so.1
sshd-kbdint     auth required           pam_zfs_key.so.1 create
sshd-kbdint     auth required           pam_unix_auth.so.1
_EOM

Note that this only works when we are logging in to SSH with a password, not when we are doing pubkey authentication, because in that case the encryption passphrase for the home directory hasn't been supplied.  However, pubkey and gssapi authentication will work for later logins after the home directory is mounted, since the ZFS passphrase was supplied during that first ssh or gdm login.

Tuesday May 10, 2011

Encrypting /var/tmp & swap in Solaris 11 Express

As some readers might remember from previous posts, it isn't possible in Solaris 11 Express to boot from an encrypted ZFS dataset.  However, it is possible to have encrypted swap space and thus (by default) an encrypted /tmp.  That still leaves /var/tmp unencrypted.

First let's look at swap space encryption.  That is as simple as putting the word "encrypted" into the mount options field of /etc/vfstab for the swap ZVOL.  If swap is a ZVOL then ZFS encryption will be used; if swap is a raw disk slice or file then lofi will be interposed over the device/file using a randomly generated key.  That is a fully supported solution in Solaris 11 Express, implemented by the swapadd command.
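
A minimal sketch of the /etc/vfstab entry for a swap ZVOL with the "encrypted" mount option (rpool/swap is the usual default name; your pool layout may differ):

/dev/zvol/dsk/rpool/swap   -   -   swap   -   no   encrypted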

For encrypting /var/tmp we need to go beyond the provided services, and the following (unsupported) method takes its lead from what I did for swapadd and applies it to /var/tmp.  Note, however, that this assumes nothing in /var/tmp should be preserved over a reboot, and it won't even be readable from another boot environment, so if you use this don't put stuff into /var/tmp that you want to get access to after a reboot.

This takes advantage of the fact that in SMF we can place dependencies onto other services without modifying them.  So while the following makes some basic assumptions about the Solaris ZFS dataset layout, it doesn't require modifying any existing binaries or configuration files.

We create a new service, svc:/site/system/filesystem/tmp:default, which creates an encrypted dataset for /var/tmp using the manifest and method script that follow:


<?xml version='1.0'?>
<!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
<!--

 CDDL HEADER START

 The contents of this file are subject to the terms of the
 Common Development and Distribution License (the "License").
 You may not use this file except in compliance with the License.

 You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
 or http://www.opensolaris.org/os/licensing.
 See the License for the specific language governing permissions
 and limitations under the License.

 When distributing Covered Code, include this CDDL HEADER in each
 file and include the License file at usr/src/OPENSOLARIS.LICENSE.
 If applicable, add the following below this CDDL HEADER, with the
 fields enclosed by brackets "[]" replaced with your own identifying
 information: Portions Copyright [yyyy] [name of copyright owner]

 CDDL HEADER END

 Copyright (c) 2011, Oracle and/or its affiliates. All rights reserved.

-->


<service_bundle type='manifest' name='darrenm:etmp'>

<service
        name='site/system/filesystem/tmp'
        type='service'
        version='1'>

        <create_default_instance enabled='true' />

        <single_instance />

        <dependency
                name='cryptosvc'
                grouping='require_all'
                type='service'
                restart_on='none'>
                <service_fmri value='svc:/system/cryptosvc' />
        </dependency>

        <dependent
                name='var-tmp'
                grouping='optional_all'
                restart_on='none'>
                <service_fmri value='svc:/system/filesystem/minimal' />
        </dependent>


        <exec_method
                type='method'
                name='start'
                exec='/lib/svc/method/site-etmp'
                timeout_seconds='30' />

        <exec_method
                type='method'
                name='stop'
                exec=':true'
                timeout_seconds='1' />

        <property_group name='startd' type='framework'>
                <propval name='duration' type='astring' value='transient' />
        </property_group>

</service>

</service_bundle>


#!/usr/sbin/sh
#
# CDDL HEADER START
#
# The contents of this file are subject to the terms of the
# Common Development and Distribution License (the "License").
# You may not use this file except in compliance with the License.
#
# You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
# or http://www.opensolaris.org/os/licensing.
# See the License for the specific language governing permissions
# and limitations under the License.
#
# When distributing Covered Code, include this CDDL HEADER in each
# file and include the License file at usr/src/OPENSOLARIS.LICENSE.
# If applicable, add the following below this CDDL HEADER, with the
# fields enclosed by brackets "[]" replaced with your own identifying
# information: Portions Copyright [yyyy] [name of copyright owner]
#
# CDDL HEADER END
#
# Copyright (c) 2011, Oracle and/or its affiliates. All rights reserved.
#
# /lib/svc/method/site-etmp

# Recreate /var/tmp as an encrypted dataset keyed with a throwaway random key;
# any previous contents are intentionally discarded at each boot.
zfs destroy rpool/tmp > /dev/null 2>&1
zfs create -o mountpoint=/var/tmp -o encryption=on -o keysource=raw,file:///dev/random rpool/tmp
chmod 1777 /var/tmp

exit 0
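
To hook this into SMF, copy the method script into place and import the manifest.  The file names below are just the ones I would pick (hypothetical); adjust to taste:

# cp site-etmp /lib/svc/method/site-etmp
# chmod 755 /lib/svc/method/site-etmp
# cp etmp.xml /var/svc/manifest/site/etmp.xml
# svccfg import /var/svc/manifest/site/etmp.xml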

On reboot what will happen is that a new dataset for /var/tmp will be created.  It would be possible to have a more sophisticated method script that doesn't use a hardcoded dataset name of rpool/tmp, but this seems sufficient for now.  It will look something like this:

darrenm-pc:pts/1$ cd /var/tmp
darrenm-pc:pts/1$ df -hl .
Filesystem             Size   Used  Available Capacity  Mounted on
rpool/tmp              113G    79K        77G     1%    /var/tmp
darrenm-pc:pts/1$ ls -ld .
drwxrwxrwt   5 root     root           5 May 10 14:39 ./
darrenm-pc:pts/1$ zfs get encryption,keysource,keystatus rpool/tmp
NAME       PROPERTY    VALUE                   SOURCE
rpool/tmp  encryption  on                      local
rpool/tmp  keysource   raw,file:///dev/random  local
rpool/tmp  keystatus   available               -


Friday Nov 19, 2010

ZFS encryption what is on disk ?

This article is about what is and isn't stored encrypted on disk for ZFS datasets that are encrypted and how we do the actual encryption. It does require some understanding of Solaris and ZFS debugging tools.

The first important thing to understand is that ZFS is not providing "full disk" encryption; you will be able to tell that a disk holding data encrypted by ZFS is part of a ZFS pool.

This is in part because one of the requirements for adding encryption support to ZFS was that a given ZFS pool be able to contain a mix of encrypted and cleartext datasets and those that are encrypted be able to use different algorithms/keylengths and different encryption keys.

We also require that the key material does not need to have been made available in order for pool wide operations and certain dataset operations (such as zfs destroy) to succeed.  One of the most important pool wide operations is scrub/resilver; we need to ensure that hot spare, disk replacement and self healing work even if the key material has never been made available to this running instance of the system.  We must also be able to claim (but not necessarily replay) the log blocks (ZIL) on reboot after power loss or panic without requiring the key material (ZFS must remain consistent on disk at all times).

What this means is that even in a pool where all of the datasets are marked as being encrypted (eg zpool create -O encryption=on tank ...) there is some ZFS metadata that is always in the clear.

What is always in the clear even for encrypted datasets?
  • ZFS pool layout
  • Dataset names
  • Pool and dataset properties, including user defined properties
    • compression, encryption, share, etc.
  • Dataset level quotas (zfs set quota)
  • Dataset delegations (zfs allow)
  • The pool history (zpool history)
  • All dnode blocks
    • Needed to traverse the pool for resilver/scrub
  • Block pointer
    • The blkptr_t contains the MAC/AuthTag from AES-CCM or AES-GCM in the top 96 bits of the checksum field. The SHA256 checksum is truncated to provide this 96 bits of space.
      • The checksum for an encrypted block is always sha256-mac
    • The 96bit IV for the block is in dva[2] of the blkptr_t
      • This means that an encrypted block can have a maximum of 2 copies not 3

What is encrypted when a dataset is encrypted?

  • All file content written to a ZFS filesystem via the the ZPL/VFS interfaces (ie POSIX interfaces)
    • open(2), write(2), mmap(2), etc.
  • All POSIX (and ZFS filesystem) metadata: ACLs, file and directory names, permissions, system and extended attributes on files and all file timestamps
    • ZPL metadata is normally contained in the bonusbuf area of a dnode_phys_t, but the dnode is in the clear on disk. For encrypted datasets the bonusbuf is always empty and the content that would normally have been there is pushed out to an encrypted "spill" block, called a System Attribute block.  Normally for ZPL filesystems spill blocks are only used for files with large ACLs.
  • System Attribute (spill) blocks (used for any purpose)
  • All data written to a ZVOL
  • User/group quota information for ZFS filesystems, both the policy and space accounting (zfs set userquota@ | groupquota@)
  • FUID mappings for UNIX <-> CIFS user identities
  • All of the above if it is in a ZIL (ZFS Intent Log) record.
    • Note that the actual ZIL blocks have block pointers and a record header that includes the sizing information that is in the clear.
  • Data encryption keys
    • These are stored in an on disk keychain referenced from the dsl_dir_phys_t. 

The ondisk keychain

The keychain entries are ZAP objects that are indexed by the transaction they were created in. The entries are individually wrapped by the dataset's wrapping key each with their own IV and an indicator of what wrapping key algorithm was used (at this time the wrapping key crypto algorithm always matches the encryption property).  Every encrypted dataset has at least one keychain entry.  Clones have their own keychain and do not reference the one of their origin, because the clone may have a different wrapping key and the clone may have different keychain entries to its origin.

Encrypting a block

Each ZFS on disk block (smallest size is 512 bytes, largest is 128k) is encrypted using AES in either CCM or GCM mode as indicated by the encryption property. Even though CCM and GCM provide the ability to have additional authenticated data that isn't encrypted, this isn't used because (with the exception of the ZIL blocks) all data in the block is encrypted.  A 96 bit IV per disk block is used, and both CCM and GCM are requested to provide a 96 bit MAC/AuthTag in addition to the ciphertext.  While we could use a larger MAC, space in the ZFS on disk blkptr_t is very tight and we need to leave some of it available for future features.  After encryption each block is also checksummed by the ZIO pipeline using SHA256 (fletcher is not available for encrypted datasets).

IV generation for encrypted blocks

Every encrypted on disk block has its own IV, (stored in dva[2] of the blkptr_t).  The IV is generated by taking the first 96 bits of a SHA256 hash of the contents of the zbookmark_t and the transaction the block was first written in.  We actually have all this information available both at read and write time so we don't need to store the IV in the simplest case. However snapshots, clones and deduplication as well as some (non encryption related) future features complicate this so we do store the IV.

If dedup=on for the dataset, the per block IVs are generated differently.  They are generated by taking an HMAC-SHA256 of the plaintext and using the left most 96 bits of that as the IV.  The key used for the HMAC-SHA256 is different to the one used by AES for the data encryption, but it is stored (wrapped) in the same keychain entry, and just like the data encryption key a new one is generated when doing a 'zfs key -K <dataset>'.  Obviously we couldn't calculate this IV when doing a read, so it has to be stored.

ZIL blocks

The ZIL log blocks are written in exactly the same way regardless of whether the ZIL is in the main pool disks or a separate intent log (slog) is being used.  The ZIL blocks are encrypted in a different way to blocks going through the "normal" write path; this is because log blocks are formatted on disk differently anyway.  The log blocks are chained together and have a header (zil_chain_t) that indicates what size the log block is and the blkptr_t of the next block, as well as an embedded checksum that chains the blocks together.  For encrypted log blocks the MAC from AES CCM/GCM is also stored in this header (zil_chain_t).  It is log blocks rather than log records that are encrypted.  Within a given log block there may be multiple log records.  Some of these log records may contain pointers to blocks that were written directly (via dmu_sync); in order for us to claim the ZIL when the pool is imported, these embedded block pointers need to be readable even if the encryption keys are not available (which they won't be in most cases during the claim phase).  This means that we don't encrypt whole log blocks: the log record headers and any blkptr_t embedded in a log record are in the clear, and the rest of the log block content is encrypted.

How is the passphrase turned into a wrapping key (keysource=passphrase,prompt)?

When the dataset 'keysource' property indicates that a passphrase should be used we have to derive a wrapping key from it.  The wrapping key is derived from the passphrase provided and a per dataset salt (which is stored as a hidden property of the dataset) by using PKCS#5 PBKDF2 with HMAC-SHA1 and 1000 iterations.  The wrapping key is not stored on disk.  The salt is randomly generated when the dataset is created (with keysource=passphrase,prompt) and changed each time 'zfs key -c' is run; even if the passphrase the user provides is the same, the salt and thus the actual wrapping key will be different.


Looking at the on disk structures

Using mdb macros and zdb we can actually look at some of this.  Remember that mdb and zdb are debugging tools only; use of mdb on a live kernel without understanding what you are doing can corrupt data.  The interfaces used below are not committed interfaces and are subject to change.

Firstly, using mdb on the live kernel (of an x86 machine) I've placed a breakpoint on the zio_decrypt function; let's look at the block pointer using the mdb blkptr dcmd:

[2]> <rdi::print zio_t io_bp | ::blkptr
DVA[0]=<0:204200:20000:STD:1>
[L0 PLAIN_FILE_CONTENTS] SHA256_MAC OFF LE contiguous unique encrypted 1-copy
size=20000L/20000P birth=10L/10P fill=1
cksum=a585cb5cce997c21:c79825e93b16d5aa:28df6742e24913e:6b94fbd569cf3cd9

This blkptr_t is for the contents of a file; we can see that it is encrypted and that we only have one copy of it - so only one DVA entry. The checksum is SHA256_MAC, so the actual MAC value is 2e24913e6b94fbd569cf3cd9.  The blkptr macro doesn't show us the IV that is stored in DVA[2], but we can see that if we print the raw structure using ::print:

[2]> <rdi::print zio_t io_bp->blk_dva[2]
blk_dva[2] = {
    blk_dva[2].dva_word = [ 0x521926d500000000, 0x3b13ba46ab9f8a51 ]
}

Now let's use zdb to look at some things (the output is trimmed slightly for the sake of this article):

# zdb -dd -e tank

Dataset mos [META], ID 0, cr_txg 4, 311K, 54 objects

    Object  lvl   iblk   dblk  dsize  lsize   %full  type
         0    1    16K    16K  96.0K    32K   84.38  DMU dnode
         1    1    16K     1K  1.50K     1K  100.00  object directory
         2    1    16K    512      0    512    0.00  DSL directory
         3    1    16K    512  1.50K    512  100.00  DSL props

...

        26    1    16K   128K  18.0K   128K  100.00  SPA history

...

        36    1    16K   128K      0   128K    0.00  bpobj
        37    1    512    512  3.00K     1K  100.00  DSL keychain
        38    1    16K     4K  12.0K     4K  100.00  SPA space map
...

This pool (tank) currently has 3 datasets, one of which is encrypted.  We can see from the above zdb output that the keychains are kept in the special "mos" dataset along with some other pool wide metadata.  Now let's look at those keychains in a bit more detail by asking zdb to be more verbose (again the output is trimmed to show relevant information only):

    # zdb -dddd -e tank
    ...
    Object  lvl   iblk   dblk  dsize  lsize   %full  type
        37    1    512    512  3.00K     1K  100.00  DSL keychain
        dnode flags: USED_BYTES 
        dnode maxblkid: 1
        Fat ZAP stats:
                Pointer table:
                        32 elements
                        zt_blk: 0
                        zt_numblks: 0
                        zt_shift: 5
                        zt_blks_copied: 0
                        zt_nextblk: 0
                ZAP entries: 2
                Leaf blocks: 1
                Total blocks: 2
                zap_block_type: 0x8000000000000001
                zap_magic: 0x2f52ab2ab
                zap_salt: 0x16c6fb
                Leafs with 2^n pointers:
                        5:      1 *
                Blocks with n*5 entries:
                        0:      1 *
                Blocks n/10 full:
                        9:      1 *
                Entries with n chunks:
                        9:      2 **
                Buckets with n entries:
                        0:     14 **************
                        1:      2 **
        Keychain entries by txg:
                txg 5 : wkeylen = 136
                txg 85 : wkeylen = 136

The above keychain object shows it has two entries in it: the lowest numbered one (5) is from when the dataset was initially created, and the second one (85) is because I had run 'zfs key -K tank/fs' on the dataset a little later.  Now let's illustrate with zdb what I discussed in the previous article on assured delete, about clones being able to have a different set of entries in their keychain to their origin.

To illustrate this I ran the following:

# zfs snapshot tank/fs@1
# zfs clone -K tank/fs@1 tank/fsc1
# zfs key -K tank/fs

First let's look at keychain object 37, which is for tank/fs, and then at the keychain object for the clone (I've trimmed the output a little more this time):

    Object  lvl   iblk   dblk  dsize  lsize   %full  type
        37    2    512    512  7.50K     2K  100.00  DSL keychain
      ...
        Keychain entries by txg:
                txg 5 : wkeylen = 136
                txg 85 : wkeylen = 136
                txg 174 : wkeylen = 136


     Object  lvl   iblk   dblk  dsize  lsize   %full  type
        101    1    512    512  4.50K  1.50K  100.00  DSL keychain
      ...
        Keychain entries by txg:
                txg 5 : wkeylen = 136
                txg 85 : wkeylen = 136
                txg 152 : wkeylen = 136

What we see above is that the original tank/fs dataset now has an additional entry from the 'zfs key -K tank/fs' that was run.  The keychain for the clone (object 101) also has three entries in it: it shares the entries for txg 5 and txg 85 with tank/fs (though they may be encrypted differently on disk depending on where the wrapping key is inherited from) and it has a unique entry created at txg 152.  We can see similar information by looking at the 'zpool history -il' output:

2010-11-19.05:58:25 [internal encryption key create txg:85] rekey succeeded dataset = 33 [user root on borg-nas]
2010-11-19.06:05:59 [internal encryption key create txg:152] rekey succeeded dataset = 96 from dataset = 77 [user root on borg-nas]
2010-11-19.06:06:40 [internal encryption key create txg:174] rekey succeeded dataset = 33 [user root on borg-nas]

What is decrypted in memory?

As already mentioned, the data encryption keys are stored wrapped (encrypted) on disk, but they are stored in memory in the clear, along with the wrapping key (we need the wrapping key to stay around for 'zfs key -K' and for 'zfs create' where the keysource property is inherited).  They are stored only in non swappable kernel memory (though remember you can swap on an encrypted ZVOL).  They are accessible to someone with all privilege who is able to use mdb on the live kernel or on a crash dump - but so is your plaintext data.  A suitable hardware keystore could be used so that key material is only ever inside its FIPS 140 boundary, but that support is not yet complete (note this is not a commitment from Oracle to provide this support in any future release of ZFS or Solaris) - there would be no on disk change required to support it though.

Any data or metadata blocks that are encrypted on disk are in the in-memory cache (ARC) in the clear; this is required because the in memory ARC buffers are sometimes "loaned" using zero copy to other parts of the system - including other file systems such as NFS and CIFS.  If this is too much of a risk for you then you can force the system to always go back to disk and decrypt blocks only when needed, but note that you will not benefit from the caching and this will have a significant performance penalty: zfs set primarycache=metadata <dataset>.

The L2ARC is not currently available for use by encrypted datasets (note this is not a commitment from Oracle to provide this support in any future release of ZFS or Solaris); it is equivalent to having done 'zfs set secondarycache=none <dataset>'. The DDT for deduplication is not encrypted data and is pool wide metadata (stored in the MOS), so it is still able to be stored in the L2ARC.

All of the above article content could have been discovered by reading the zfs(1M) man page and using mdb, DTrace and zdb while experimenting on a live system, which is actually how I wrote the article.  There is a lot more you can examine about the on disk and in memory state of Solaris, not just ZFS, by using mdb and DTrace - neither of which you can hide from, since the kernel modules have CTF data in them with full structure definitions.  Note though that unless the interfaces/structures are documented in the Solaris DDI, or other official documentation from Oracle, you are looking at implementation details that are subject to change - often even in an update/patch.

Tuesday Nov 16, 2010

Choosing a value for the ZFS encryption property

The 'on' value for the ZFS encryption property maps to 'aes-128-ccm', because it is the fastest of the 6 available encryption values currently provided and is believed to provide sufficient security for many deployments.  Depending on the filesystem/zvol workload you may not be able to notice (or care if you do notice) the difference between the AES key lengths and modes.  However, note that at this time I believe the collective wisdom in the cryptography community appears to be to recommend AES128 over AES256. [Note that this is not a statement of Oracle's endorsement or verification of that research.]

Both CCM and GCM are provided so that if one turns out to have flaws - and modes of an encryption algorithm sometimes do have flaws independent of the base algorithm - hopefully the other will still be available for use safely.

On systems without hardware/cpu support for Galois field multiplication (that is, on processors other than Intel Westmere or SPARC T3, for example) GCM will be slower because the Galois field multiplication has to happen in software without any hardware/cpu assist.  However, depending on your workload you might not even notice the difference between CCM and GCM.

One reason you may want to select aes-128-gcm rather than aes-128-ccm is that GCM is one of the modes for AES in NSA Suite B but CCM is not.
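
For example, to select GCM explicitly rather than taking the default mapping of 'on' to aes-128-ccm (the dataset name is just illustrative):

# zfs create -o encryption=aes-128-gcm tank/suiteb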

ZFS encryption was designed and implemented to be extensible to new algorithm/mode combinations for data encryption and key wrapping.

Are there symmetric algorithms, for data encryption, other than AES that are of interest?

The wrapping key algorithm currently matches the data encryption key algorithm, is there interest in providing different wrapping key algorithms and configuration properties for selecting which one ? For example doing key wrapping with an RSA keypair/certificate ?  

[Note this is not a commitment from Oracle to implementing/providing any suggested additions in any release of any product but if there are others of interest we would like to know so they can be considered.]

Monday Nov 15, 2010

Having my secured cake and Cloning it too (aka Encryption + Dedup with ZFS)

The main goal of encryption is to make the (presumably sensitive) cleartext data indistinguishable from random data.  Good file system encryption usually aims to have the same plaintext encrypt to different ciphertext, at least when written at a different "location", even if the same key is used.  One way to achieve that is to derive the initialisation vector (IV) somehow from where the blocks of the files are stored on disk.  In this respect the encryption support in ZFS is no different; by default we derive the IV from a combination of what dataset / object the block is for and also when (in which transaction) it was written.  This means that the same block of plaintext data written to a different file in the same filesystem will get a different IV and thus different ciphertext.  Since ZFS is copy-on-write and we use the transaction identifier, it also means that if we "overwrite" the same block of a file at a later time it still ends up having a different IV and thus will be different ciphertext.  Each encrypted dataset in ZFS has a different set of data encryption keys (see my earlier post on assured delete for more details on that), so there we change both the IV and the encryption key and have a really high level of confidence of getting different ciphertext when the same data is written to different datasets.

The goal of deduplication in storage is to coalesce matching disk blocks into a smaller number of copies (ideally 1, but in ZFS that number depends on the value of the copies property on the dataset and the pool wide dedupditto property, so it could be more than 1).  Given the above description of how we do encryption, it would seem that encryption and deduplication are fundamentally at odds with each other - and usually that is true.

When we write a block to disk in ZFS it goes through the ZIO pipeline and in doing so a number of transforms are optionally applied to the data:  compress -> encryption -> checksum -> dedup -> raid.

The deduplication step uses the checksums of the blocks to find suitable matches. This means it is acting on the already compressed and encrypted data.  Also in ZFS deduplication matches are searched for in all datasets in the pool with dedup=on.

So we have very little chance of getting any deduplication hits with encrypted datasets because of how the IV is generated and the fact that each dataset has its own set of encryption keys.  In fact not getting hits with deduplication is actually a good test that we are using different keys and IVs and thus getting different ciphertext for the same plaintext.

So encryption=on + dedup=on is pointless, right ?

Not so with ZFS.  I wasn't happy about giving up on deduplication for encrypted datasets, so we found a solution; it has some restrictions, but I think they are reasonable and realistic ones.

Within what I'll call a "clone family", ie all datasets that are clones of the same original dataset or are clones of those clones, we share data encryption keys in the default case, because they share data (again see my earlier post on assured delete for info on the data encryption keys).  So I found a method of generating the IV such that within the "clone family" we will get dedup hits for the same plaintext.  For this to work you must not run 'zfs key -K' on any of the clones and you must not pass '-K' to 'zfs clone' when you create your clones.  Note that this dedup benefit applies only to the snapshots/clones, not to ordinary child datasets, and by that I mean nothing breaks, you just won't get deduplication matches.

So no, it isn't pointless, and what's more, for some configurations it will actually work really well.  A common use case that does work well is a set of virtualisation images (maybe filesystems for local Zones, or ZVOLs shared over iSCSI for OVM or similar) where they are all derived from the same original master by using zfs clones and all get patched/updated with pretty much the same set of patches/updates.  This is a case where clones+dedup work well in the unencrypted case, and one which, as shown above, can still work well even when encryption is enabled.

The usual deployment caveats with ZFS deduplication still apply, ie it is block based and it works best when you have lots of available DRAM and/or L2ARC for caching the DDT.  ZFS Encryption doesn't add any additional requirements to this. 

So we can happily do this type of thing, and have it "work as expected":

$ zfs create -o compression=on -o encryption=on -o dedup=on tank/builds
... 
$ zfs create tank/builds/master
... 
$ zfs clone tank/builds/master@1 tank/builds/project-one
...
$ zfs clone tank/builds/master@1 tank/builds/project-two 

General documentation for ZFS support of encryption is in the Oracle Solaris ZFS Administration Guide in the Encrypting ZFS File Systems section.

Assured delete with ZFS dataset encryption

Need to be assured that your data is inaccessible after a certain point in time ?

Many government agency and private sector security policies allow you to achieve that if the data is encrypted and you can show with an acceptable level of confidence that the encryption keys are no longer accessible.  The alternative is overwriting all the disk blocks that contained the data; that is time consuming, very expensive in IOPS, and in a copy-on-write filesystem like ZFS actually very difficult to achieve.  So often this is only done on full disks as they come out of production use for recycling/repurposing, but this isn't ideal in a complex RAID layout.

In some situations (compliance or privacy are common reasons) it is desirable to have an assured delete of a subset of the data on a disk (or whole storage pool). Having the encryption policy / key management at the ZFS dataset (file system / ZVOL) level allows us to provide assured delete via key destruction at a much smaller granularity than full disks; it also means that, unlike full disk encryption, we can do this on a subset of the data while the disk drives remain live in the system.

If the subset of data matches a ZFS file system (or ZVOL) boundary we can provide this assured delete via key destruction; remember ZFS filesystems are relatively very cheap.

Let's start with the simple case of a single encrypted file system:

$ zfs create -o encryption=on -o keysource=raw,file:///media/keys/g projects/glasgow
$ zfs create -o encryption=on -o keysource=raw,file:///media/keys/e projects/edinburgh

After some time we decide we want to make projects/glasgow completely inaccessible.  The simplest way is to destroy the wrapping key, in this case the file /media/keys/g, and then destroy the projects/glasgow dataset.  The data on disk will still be there until ZFS starts using those blocks again, but since we have destroyed /media/keys/g (which I'm assuming here is on some separate file system) we have a high level of assurance that the encrypted data can't be recovered, even by reading "below" ZFS and looking at the disk blocks directly.

I'd recommend a tiny additional step just to make sure that the last version of the data encryption keys (which are stored wrapped on disk in the ZFS pool) are not encrypted by anything the user/admin knows:

$ zfs key -c -o keysource=raw,file:///dev/random projects/glasgow
$ zfs key -u projects/glasgow
$ zfs destroy projects/glasgow

While re-wrapping the keys with a key the user/admin doesn't know doesn't provide a huge amount of additional security/assurance, it makes the administrative intent much clearer and at least allows the user to assert that they did not know the wrapping key at the point the dataset was destroyed.

If we have clones the situation is slightly more complex, since clones share their data encryption key with their origin - because they share data written before the clone was branched off, the clone needs to be able to read both the shared and the unique data as if they were its own.

We can make sure that the unique data in a clone uses a different data encryption key than the origin does from the point the clone was taken:

... time passes data is placed in projects/glasgow
$ zfs snapshot projects/glasgow@1
$ zfs clone -K projects/glasgow@1 projects/mungo

By passing '-K' to 'zfs clone' we ensure that any unique data in projects/mungo is using a different dataset encryption key from projects/glasgow, this means we can use the same operations as above to provide assured delete for the unique data in projects/mungo even though it is a clone.

Additionally we could also do 'zfs key -K projects/glasgow' and have any new data written to projects/glasgow after the projects/mungo clone was taken use a different data encryption key as well.  Note however that this is not atomic, so I would recommend making projects/glasgow read-only before taking the snapshot even though normally this isn't necessary; the full sequence then becomes:

$ zfs set readonly=on projects/glasgow
$ zfs snapshot projects/glasgow@1
$ zfs clone -K projects/glasgow@1 projects/mungo
$ zfs set readonly=off projects/mungo
$ zfs key -K projects/glasgow
$ zfs set readonly=off projects/glasgow

If you don't have projects/glasgow marked as read-only then there is a risk that data could be written to projects/glasgow after the snapshot is taken and before we get to the 'zfs key -K'.  This may be more than is necessary in some cases but it is the safest method.

General documentation for ZFS support of encryption is in the Oracle Solaris ZFS Administration Guide in the Encrypting ZFS File Systems section.


Introducing ZFS Crypto in Oracle Solaris 11 Express

Today Oracle Solaris 11 Express was released and is available for download; this release includes on disk encryption support for ZFS.

Using ZFS encryption support can be as easy as this:

# zfs create -o encryption=on tank/darren
Enter passphrase for 'tank/darren':
Enter again:
#

If you don't wish to use a passphrase then you can use the new keysource property to specify the wrapping key is stored in a file instead.  See the zfs(1M) man page or ZFS documentation set for more details, but here are a few simple examples:

# zfs create -o encryption=on -o keysource=raw,file:///media/stick/mykey tank/darren

# zfs create -o encryption=aes-256-ccm -o keysource=passphrase,prompt tank/tony

If encryption is enabled and keysource is not specified we default to keysource=passphrase,prompt.  I plan to have other keysource locations available at some point in the future, for example retrieving the wrapping key from an https:// location or from a PKCS#11 accessible hardware keystore or key management system.

There are multiple different keys used in ZFS.  The one the user/admin manages (or that is derived from the entered passphrase) is a wrapping key; this means it is used to encrypt other keys and is not used for data encryption.  The data encryption keys are randomly generated at the time the dataset is created (using the kernel interfaces for /dev/random), or when a user/admin explicitly requests a new data encryption key for newly written data in that ZVOL or file system (eg zfs key -K tank/darren).

Back to our simple example of 'tank/darren', let's do a little more. If we now create a filesystem below 'tank/darren', eg 'tank/darren/music', we won't be prompted for an additional passphrase/key since the encryption and keysource properties are inherited by the child dataset.  Note that encryption must be set at create time and it can not be changed on existing datasets.  Inheriting the keysource means that the child dataset inherits the same wrapping key, but we generate new data encryption keys for that dataset.
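
For example, creating the child dataset produces no prompt, and zfs get should show both properties as inherited from 'tank/darren':

# zfs create tank/darren/music
# zfs get encryption,keysource tank/darren/music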

If you create a clone of an encrypted file system then the clone is always encrypted as well, but the wrapping key for a clone need not be the same as the origin, for example:

# zfs snapshot tank/darren@1
# zfs clone tank/darren@1 tank/darren/sub

# zfs clone tank/darren@1 tank/tony
Enter passphrase for 'tank/tony':
Enter again:

In the first clone above the 'tank/darren/sub' dataset inherits all the encryption properties and wrapping key from 'tank/darren'.  In the second case there was no encrypted dataset at 'tank' to inherit the keysource property from (and thus the wrapping key) so we take the default keysource value of passphrase,prompt.

We can also change the wrapping key at any time using the 'zfs key -c <dataset>' command.  Note that in the case of a passphrase this does not prompt for the existing passphrase; this is intentional, as the filesystem will already be mounted (and maybe shared) and the files accessible.  It also gives us the ability, via ZFS allow delegations, to distinguish between the users who can load/unload the keys (and thus have the filesystem mount) via the 'key' delegation and those users who can change the wrapping keys with the 'keychange' delegation.
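
A minimal sketch of splitting those delegations between two (hypothetical) users:

# zfs allow -u bob key tank/darren
# zfs allow -u alice keychange tank/darren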

At the time the wrapping key is changed you can also choose to use a different style of wrapping key, say switching from a prompted for passphrase to a key in a file.
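
For example, switching 'tank/darren' from the prompted passphrase to a raw key in a file at key change time (using the key file generated with pktool below):

# zfs key -c -o keysource=raw,file:///media/stick/mykey tank/darren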

The easiest way to create the wrapping keys is to use the existing Solaris pktool(1) command, eg:

$ pktool genkey keystore=file keytype=aes keylen=128 outkey=/media/stick/mykey

ZFS uses the Oracle Solaris Cryptographic Services APIs, as such it automatically benefits from the hardware acceleration of AES available on the SPARC T series processors and on Intel processors supporting AES instructions.

General documentation for ZFS support of encryption is in the Oracle Solaris ZFS Administration Guide in the Encrypting ZFS File Systems section.

For more about Oracle Solaris 11 Express see the articles and white papers site. Particularly the security and storage software articles.

Updated 2010-01-17 to change links for documentation to new location.


Wednesday Dec 16, 2009

Can we improve ZFS dedup performance via SHA256 ?

Today I integrated the fix for 6344836 ZFS should not have a private copy of SHA256.  The original intent of this was to avoid having a duplicate implementation of the SHA256 source code in the ONNV source tree.  The hope was that for some platforms there would be an improvement in the performance of SHA256 over the private ZFS copy, and that would have some impact on ZFS performance.  Until deduplication support arrived in ZFS, SHA256 wasn't heavily used by default, since the default data checksum is fletcher, not SHA256.  However, I had been running a variant of this fix in the ZFS crypto project gate for almost 2 years now, since when encryption is enabled on a ZFS dataset we force the use of sha256 as the checksum for the data/metadata of the dataset.

As part of approving my RTI, Jeff Bonwick rightly wanted to know that the change wouldn't regress the performance of the deduplication support he had just integrated.  So he asked for a ZFS micro test based on deduplication and also for some micro benchmark numbers comparing the time to do a SHA256 digest over the various ZFS block sizes, 1k through 128k (in powers of two).  The micro benchmark uses an all zeros block (malloc followed by bzero) of the appropriate size and averages the time to do SHA2Init, SHA2Update, SHA2Final (or the equivalent) and put the result into a zio_cksum_t (a big endian layout of 4 unsigned 64 bit ints); the results are averages in nanoseconds over a run of 10,000 iterations for each block size.

The micro benchmark was run in userland using 64 bit processes but the SHA256 code used is identical to that used by the zfs kernel module and the misc/sha2 kernel module from the cryptographic framework.

Note these are micro benchmarks and may not be indicative of real world performance.  I selected two modern machines from Sun's hardware portfolio: an X4140 (the head unit of a Sun Unified Storage 7310) and an UltraSPARC T2 based T5120.  Note that the fix I integrated only uses a software implementation of SHA256 on the T5120 (UltraSPARC T2) and is not (yet) using the on CPU hardware implementation of SHA256.  The reason for this is to do with boot time availability of the Solaris Cryptographic Framework and the need to have ZFS as the root filesystem.  I know how to fix this but it was a slightly more detailed fix and one I didn't think appropriate to introduce in build 131 of OpenSolaris - which means it needs to wait until post 134.

All very nice, but as I said that is a micro benchmark; what does this actually look like at the ZFS level?  Again, this is still a benchmark and is not necessarily indicative of any possible real world performance improvements; the goal here is simply to show whether or not any improvement in the implementation of calculating a SHA256 checksum for ZFS will be noticed for dedup.  This test was suggested by Jeff as a quick way to determine if there is any impact on dedup from changing the SHA256 implementation - it uses data that will obviously dedup.  Note here the actual numbers aren't important, because the goal wasn't to show how fast ZFS can do IO but to show the relative difference between the before and after implementations of SHA256.  As such I didn't build a pool config with the purpose of doing the best possible IO; I just used a single disk pool using one of the available internal drives in each of my machines (again an X4140 and a T5120).

# zpool create mypool -O dedup=on c1t1d0 
# rm -f 100g; sync; ptime sh -c 'mkfile 100g 100g; sync' 

X4140 running build 129 with private ZFS copy of SHA256

real     3:34.386051924
user        0.341710806
sys      1:02.587268898

X4140 running build with misc/sha256

real     2:25.386560294
user        0.317230220
sys        56.600231785

T5120 running build 129 with private ZFS copy of SHA256

real     8:40.703912346
user        2.704046212
sys      4:06.518025697

T5120 running build with misc/sha256

real     5:40.593874259
user        2.704308565
sys      3:59.648897024

So for both the X4140 and the T5120 there is a noticeable decrease in the real time taken to run the test. In each case I ran the test 6 times and picked the lowest time result; the variance was actually very small anyway (usually under a second).  Given that mkfile produces blocks of all zeros, and compression was not on, there would be massive amounts of dedup hits going on.  How much?

# zpool get dedupratio mypool
NAME    PROPERTY    VALUE  SOURCE
mypool  dedupratio  819200.00x  -

Big hits for dedup so lots of exercising of SHA256 in this "test" case.

Again, these are tests primarily done to show no regression in performance from switching to the common copy of the SHA256 code, but they have shown that there is actually a significant improvement.  Whether or not you will see that improvement with your real world data on your real systems depends on many factors - not least of which is whether your data is dedupable and whether or not SHA256 was anywhere near being the critical factor in your observed performance.

My RTI advocate was happy I'd done the due diligence and approved my RTI so this is integrated into ONNV during build 131.

Happy deduplicating.

Monday Jun 01, 2009

Encrypting ZFS pools using lofi crypto

I'm running OpenSolaris 2009.06 on my laptop; soon I'll be running my own development bits of ZFS Crypto, but I can't do that yet because OpenSolaris 2009.06 is based on build 111 and the ZFS crypto source is already at build 116.  Once the /dev repository catches up to 116 (or later) I can put ZFS Crypto onto my laptop and run it "real world" rather than just pounding on it with the test script.

In the meantime I had a need to store some confidential and also some highly personal data on my laptop - not something I generally do, as the laptop is mostly used for remote access and the sources I have on it are all open source.

I also really wanted to take advantage of the ZFS auto-snapshot service for this data.  So I really needed ZFS Crypto but I was also constrained to 2009.06.  I could have gone back and used an older build of ZFS crypto that was in sync with 111, but that would have taken me a few days to backport my current sources - not something I'd consider useful work given that the only person who would benefit, and only for a short period of time, was me.

It was also only a relatively small amount of data and performance of access wasn't critical.  I should also mention that storing it on a USB mass storage device of any kind was a complete non-starter given where I was going with this laptop at the time - let's just say one can't take USB disks there.

I wrote previously about the crypto capability we have in the lofi(7D) driver - this is "block" device based crypto and knows nothing about filesystems.  As I mentioned before it has a number of weaknesses but by running ZFS on top of it some of those are mitigated for me in the situation I outlined above.  Particularly since I'd end up with "ZFS  - lofi - ZFS".

What I decided to do was use lofi as a "shim" and create a ZFS pool on a lofi device which was backed by a ZVOL.  As I mentioned above, the reason for using ZFS as the end filesystem was to take advantage of the auto-snapshot feature; using ZFS below lofi means that I can expand the size of the encrypted area by adjusting the size of the ZVOL if necessary.

darrenm$ export PVOL=rpool/export/home/darrenm/pvol
darrenm$ pfexec zfs create -V 1g $PVOL
darrenm$ pktool genkey keystore=pkcs11 label=$PVOL keylen=256 keytype=aes
Enter PIN for Sun Software PKCS#11 softtoken: 
darrenm$ pfexec lofiadm -a /dev/zvol/rdsk/$PVOL -T :::$PVOL -c aes-256-cbc
darrenm$ pfexec zpool create -O canmount=off -O checksum=sha256 -O mountpoint=/export/home/darrenm darrenm /dev/lofi/1
darrenm$ pfexec zfs allow darrenm create,destroy,mount darrenm
darrenm$ zfs create -o canmount=off darrenm/Documents
darrenm$ zfs create darrenm/Documents/Private
That sets things up the first time. This won't be automatically available after a reboot; in fact the "darrenm" pool will be known about but will be in a FAULTED state since its devices won't be present.  To make it available again after reboot do this:
darrenm$ pfexec lofiadm -a /dev/zvol/rdsk/$PVOL -T :::$PVOL -c aes-256-cbc
darrenm$ pfexec zpool import -d /dev/lofi darrenm
darrenm$ zfs mount -a

A few notes on the choice of naming.  I deliberately named the pool after my username and set the mountpoint of the top level dataset to match where my home dir is mounted (NOT the automounted /home/ entry but the actual underlying one), and set that as canmount=off so that my default home dir is still unencrypted but I can have subdirs below it that are actually ZFS datasets mounted encrypted from the encrypted pool.
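
A quick way to sanity check that layout is to list the datasets with their canmount and mountpoint settings (the names here obviously match my setup above):

darrenm$ zfs list -r -o name,canmount,mountpoint darrenm

The top level dataset and darrenm/Documents should show canmount off, with only darrenm/Documents/Private actually mounting under my home directory.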

The lofi encryption key is stored (encrypted) in my keystore managed by pkcs11_softtoken.  (As it happens my laptop has a TPM so I could have stored it in there - except for the fact that the TPM driver stack isn't fully operational until after build 111.)  I chose to name the key using the name of the ZVOL just to keep things simple and to remind me what it is for; the label is arbitrary but using the ZVOL name will help when this is scripted.

I'm planning on polishing this off a bit and making some scripts available, but first I want to try out some more complex use cases including having the "guest" pool be mirrored.  I also want to get the appropriate ZFS delegation and RBAC details correct.  For now though the above should be enough to show what is possible here until the "real" ZFS encrypted dataset support appears.  This isn't designed to be a replacement for the ZFS crypto project, but it is, for me at least, a usable workaround using what we have in 2009.06.
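
Until those scripts appear, here is a rough sketch of what a re-attach helper could look like, assuming the same $PVOL and pool names used above (everything here is illustrative only; the one thing worth capturing is the lofi device name that lofiadm prints when it creates the mapping):

#!/bin/ksh
# Rough sketch: re-attach the encrypted ZVOL-backed pool after a reboot.
PVOL=rpool/export/home/darrenm/pvol
POOL=darrenm
# Map the ZVOL through lofi, using the AES key labelled with the ZVOL name
# from the PKCS#11 softtoken; lofiadm prints the lofi device it creates.
LOFIDEV=$(pfexec lofiadm -a /dev/zvol/rdsk/$PVOL -T :::$PVOL -c aes-256-cbc)
echo "Attached $PVOL as $LOFIDEV"
# Import the pool by scanning the lofi device directory, then mount everything.
pfexec zpool import -d /dev/lofi $POOL
zfs mount -a

Making the guest pool mirrored should just be a matter of a second ZVOL and lofi mapping plus a 'zpool attach', which is one of the more complex cases mentioned above that I haven't tried yet.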

Saturday Nov 01, 2008

ZFS Crypto update

It has been a little while since I gave an update on the status of the ZFS Crypto project. A lot has happened recently and I've been really "heads down" writing code.

We had believed we were development complete and had even started code review ready for integration. All our new encryption tests passed and I only had one or two small regression test issues to reconfirm/resolve. However a set of changes (and very good and important ones for performance I might add) were integrated into the onnv_100 build that caused the ZFS Crypto project some serious remerge work. It took me just over 2 weeks to get almost back to where we were. In doing that I discovered that the code I had written for the ZFS Intent Log (the ZIL) encryption/decryption wasn't going to work.

The ZIL encryption was "safe" from a crypto view point because all the data written to the ZIL was encrypted. However, because of how the ZIL is claimed and replayed after a system failure (the only time the ZIL is actually read, since it is mostly write only), the claim happened before the pool had any encryption keys available to it. This resulted in the ZFS code just thinking that all the datasets had no ZIL needing to be claimed, so ZFS stayed consistent on disk but anything that was in the ZIL that hadn't been committed in a transaction was lost - so it was equivalent to running without a ZIL. OOPS!

So I had to redesign how the ZIL encryption/decryption works. The ZIL is dealt with in two main stages: first there is the zil_claim, which happens during pool import (before the pool starts doing any new transactions and long before userland is up and running). The second stage happens much much later, when the filesystems (datasets really, because the ZIL applies to ZVOLs too) are mounted - this is good because we have crypto keys available by then.

This meant I needed to really learn how the ZIL gets written to disk. The ZIL has 0 or more log blocks, with each log block having 1 or more log records in it. There is a different type of log record (each of a different size) for the different types of sync operations that can come through the ZIL. Each log record has a common part at the start of it that says how big it is - this is good from a crypto view. So I left the common part in the clear and I encrypt the rest of the log record. This is needed so that the claim can "walk" all the log records in all the log blocks, sanity check the ZIL and do the claim IO. So far this is similar to what was done for the DNODES. Unfortunately it wasn't quite that simple (it never is really!).

All the log records other than the TX_WRITE type fell into the nice simple case described above. The TX_WRITE records are "funky" and, from a crypto view point, a real pain in how they are structured. Like all the other log records there is a common section at the start that says what type it is and how big the log record is. What is different about the TX_WRITE log records though is that they have a blkptr_t embedded in them. That blkptr_t needs to be in the clear because it may point off to where the data really is (especially if it is a "big" write). The blkptr_t is at the end of the log record. So no problem, just leave the common part at the start and the blkptr_t at the end in the clear, right? Well that only deals with some of the cases. There is another TX_WRITE case, where the log record has the actual write data tagged on after the blkptr inside the log record, and this is of variable size. Even in this case there is a blkptr_t embedded inside the log record. The problem is that the blkptr_t is now "inside" the area we want to encrypt. So I ended up having to have a clear text log common area, encrypted TX_WRITE content, clear text blkptr_t and maybe encrypted data. Good job the OpenSolaris Cryptographic Framework supports passing in a uio_t for scatter-gather!

So with all that done, it worked, right? Sadly no, there was still more to do. It turns out I still had two other special ZIL cases to resolve. Not with the log records though. Remember I said there are multiple log records in a log block? Well, at the end of the log block there is a special trailer record that says how big the sum of all the log records is (a good sanity checker!), but this also has an embedded checksum for the log block in it. This is quite unlike how checksums are normally done in ZFS; normally the checksum is in the blkptr_t, not with the data. The reason it is done this way for the ZIL is write performance, and the risk is acceptable because the ZIL is very very rarely ever read, and only then in a recovery situation. For ZFS Crypto we not only encrypt the data but we have a cryptographically strong keyed authentication as well (the AES CCM MAC) that is stored in 16 bytes of the blkptr_t checksum field. The problem with the ZIL log blocks is that we don't have a blkptr_t for them that we can put that 16 byte MAC into, because the checksum for the log block is inside the log block in the trailer record and the blkptr checksum for the ZIL is used for record sequencing. So were there any reserved or padding fields? Luckily yes, but there were only 8 bytes available, not 16. That is enough though, since I could just change the params for CCM mode to output an 8 byte MAC instead of a 16 byte one. A little less security but still plenty sufficient - and especially so since we expect to never need to read the ZIL back - it is still cryptographically sound and strong enough for the size of the data blocks we are encrypting (less than 4k at a time). So with that resolved (this took a lot of time in DTrace and MDB to work out!) I could decrypt the ZIL records after a (simulated) failure.

Still one last problem to go though: the records weren't decrypting properly. I verified, using DTrace and kmdb, that the data was correct and the CCM MAC was correct. So what was wrong, why wouldn't they decrypt? That only left the key and the nonce. Verifying the key was correct was easy, and it was. So what was wrong with the nonce?

We don't actually store the nonce on disk for ZFS Crypto but instead we calculate it based on other stuff that is stored on disk. The nonce for a normal block is made up from the block birth transaction (txg: a monotonically increasing unsigned 64 bit integer), the object, the level, and the blkid. Those are manipulated (via a truncated SHA256 hash) into 12 bytes and used as the nonce. For a ZIL write the txg is always 0 because it isn't being written in a txg (that is the whole point of it!), but the blkid is actually the ZIL record sequence number, which has the same properties as the txg. The problem is that when we replay the ZIL (not when we claim it though) we do have a txg number. This meant we had a different nonce on the decrypt to the encrypt. The solution? Remove the txg from the nonce for ZIL records - no big loss there since on a ZIL write it is 0 anyway and the blkid (the ZIL sequence number) has the security properties we want to keep AES CCM safe.

With all that done I managed to get the ZIL test suite to pass. I have a couple of minor follow-on issues to resolve so that zdb doesn't get its knickers in a twist (SEGV really) when it tries to display encrypted log blocks (which it can't decrypt since it is running in userland and without the keys available).


That turned out to be more than I expected to write up. So where are we with the schedule? I expect us to start code review again shortly. I've just completed the resync to build 102.

Friday Sep 05, 2008

ZFS Crypto Codereview starts today

Prelim codereview for the OpenSolaris ZFS Crypto project starts today (Friday 5th September 2008 at 1200 US/Pacific) and is scheduled to end on Friday 3rd October 2008 at 2359 US/Pacific. Comments received after this time will still be considered, but unless they are serious in nature (data corruption, security issue, regression from existing ZFS) they may have to wait until post onnv-gate integration to be addressed; however every comment will be looked at and assessed on its own merit.

For the rest of the pointers to the review materials and how to send comments see the project codereview page.

Thursday Aug 14, 2008

Making files on ZFS Immutable (even by root!)

First let's look at the normal POSIX file permissions and show who we are and what privileges our shell is running with:

# ls -l /tank/fs/hamlet.txt 
-rw-rw-rw-   1 root     root      211179 Aug 14 13:00 /tank/fs/hamlet.txt

# pcred $$
100618: e/r/suid=0  e/r/sgid=0
        groups: 0 1 2 3 4 5 6 7 8 9 12

# ppriv $$
100618: -zsh
flags = 
        E: all
        I: all
        P: all
        L: all

So we are running as root, have all privileges in our process and are passing them all on to our children. We also own the file (and it is on a local ZFS filesystem, not over NFS), and it is writable by us and our group - by everyone in fact. So let's try to modify it:

# echo "SCRIBBLE" > /tank/fs/hamlet.txt 
zsh: not owner: /tank/fs/hamlet.txt

That didn't work, so let's try to delete it, but first check the permissions of the containing directory:

# ls -ld /tank/fs
drwxr-xr-x   2 root     root           3 Aug 14 13:00 /tank/fs

# rm /tank/fs/hamlet.txt
rm: /tank/fs/hamlet.txt: override protection 666 (yes/no)? y
rm: /tank/fs/hamlet.txt not removed: Not owner

That is very strange, so what is going on here?

Before I started this I made the file immutable. That means that regardless of what privileges(5) the process has and what POSIX permissions or NFSv4/ZFS ACL the file has, we can't delete it or change it, nor can we even change the POSIX permissions or the ACL. So how did we do that? With our good old friend chmod:

# chmod S+ci /tank/fs/hamlet.txt
Or more verbosely:
# chmod S+v immutable /tank/fs/hamlet.txt

See chmod(1) for more details. For those of you running OpenSolaris 2008.05 releases, you need to change the default PATH to have /usr/bin in front of /usr/gnu/bin, or use the full path /usr/bin/chmod. This is because these extensions are only part of the OpenSolaris chmod command, not the GNU version. The same applies to my previous posting on the extended output from ls.
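
To check which attributes are actually set, the extended ls output mentioned above will show them, and the flag can be cleared again (by a suitably privileged process) with chmod; roughly like this, using the full paths to the OpenSolaris versions of the commands:

# /usr/bin/ls -/ c /tank/fs/hamlet.txt
# /usr/bin/chmod S-ci /tank/fs/hamlet.txt

The compact attribute display from ls should include an 'i' while the file is immutable; once it is cleared, the earlier scribble and rm attempts behave as you would normally expect for root.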
