News, tips, partners, and perspectives for the Oracle Solaris operating system

Immutable Zones on Encrypted ZFS

Darren Moffat
Senior Software Architect

Rather than just discussing the new Immutable Zones feature of Solaris 11, I'm going to show how it can be combined with ZFS file system encryption as part of a defense in depth deployment.

Let's assume that, as part of our security threat model, we need to protect
data written to disk, but we also want to protect the system from
malicious or accidental tampering with system binaries and configuration
during runtime.

Deploying our application in a Solaris Zone allows
us to provide both of those protections, even against a user who has gained root
access inside the zone.  We will use two new features of Solaris 11 to
do this: firstly, ZFS encryption to protect the data
written to disk, and secondly the new 'file-mac-profile' mandatory write
access feature that gives us Immutable Zones.

Normally we can let
'zoneadm install' create the ZFS file system for the zone, but it
is also perfectly happy using a file system that already exists. We can use that to our advantage to enable encryption for the zone. So let's
first set up our encrypted dataset and put a zone on it.  Note that in this case the encryption
keys are stored outside the zone and aren't managed by, or visible to, the zone users (even root).

# pktool genkey keystore=file keytype=aes keylen=128 outkey=/zones/key
# zfs create -o encryption=on -o keysource=raw,file:///zones/key rpool/zones/ltz
# zonecfg -z ltz 'create ; set zonepath=/zones/ltz'
# zoneadm -z ltz install
/zones/ltz must not be group readable.
/zones/ltz must not be group executable.
/zones/ltz must not be world readable.
/zones/ltz must not be world executable.
changing zonepath permissions to 0700.
Progress being logged to /var/log/zones/zoneadm.20111018T123039Z.ltz.install
Image: Preparing at /zones/ltz/root.
Install Log: /system/volatile/install.4194/install_log
AI Manifest: /tmp/manifest.xml.14a4hi
SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml
Zonename: ltz
Installation: Starting ...
Creating IPS image
Installing packages from:
origin: http://ipkg.us.oracle.com/solaris11/dev/
Completed 167/167 32062/32062 175.8/175.8
Install Phase 44311/44311
Package State Update Phase 167/167
Image State Update Phase 2/2
Installation: Succeeded
Note: Man pages can be obtained by installing pkg:/system/manual
Done: Installation completed in 230.518 seconds.
Next Steps: Boot the zone, then log into the zone console (zlogin -C)
to complete the configuration process.
Log saved in non-global zone as /zones/ltz/root/var/log/zones/zoneadm.20111018T123039Z.ltz.install

Note from the first four lines that zoneadm ensured the zonepath had the correct secure permissions, but was otherwise perfectly happy with our pre-created encrypted ZFS dataset.
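Before booting, it's worth sanity-checking from the global zone that encryption really did carry over to everything zoneadm created under the zonepath dataset. A quick check (a sketch, assuming the dataset names used above):

```shell
# Run in the global zone: list the encryption state for the zone's
# dataset and every dataset zoneadm created beneath it.
zfs get -r encryption,keysource rpool/zones/ltz
```

Every dataset should report encryption on, inherited from rpool/zones/ltz.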

So at this point we just boot the zone, connect to the console, and finish off the system configuration (since I didn't supply a manifest, the system will be waiting to be told its name and network config).  Once that is done we can log in and have a look at what local ZFS file systems we have:

# zlogin ltz
# zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
rpool 423M 447G 33K /rpool
rpool/ROOT 423M 447G 33K legacy
rpool/ROOT/solaris 423M 447G 358M /
rpool/ROOT/solaris/var 61.5M 447G 59.6M /var
rpool/export 68K 447G 35K /export
rpool/export/home 33K 447G 33K /export/home

Notice that we have separate datasets for / and /var, as well as /export and /export/home. These will all be encrypted, because rpool inside this zone is really the ZFS dataset that is underneath /zones/ltz in the global zone. So let's check:

# zfs get encryption,keysource rpool/ROOT/solaris
NAME               PROPERTY    VALUE                  SOURCE
rpool/ROOT/solaris encryption on inherited from $globalzone
rpool/ROOT/solaris keysource raw,file:///zones/key inherited from $globalzone

Notice the property source: we inherited these settings from the global zone. That has dealt with our on-disk protection without the admin users inside the zone needing to do anything additional.

It is also worth pointing out a third new feature here: the ZFS dataset names inside the zone look just like they do in the global zone, e.g. rpool/ROOT/solaris. This is very different from Solaris 10, where the zone saw parts of the ZFS namespace that were only applicable in the global zone.  This namespace virtualisation provides additional security and makes p2v and v2v transitions of zones much easier. Not only have we virtualised the dataset names, but we have also hidden any global zone paths when they appear in the property source; we only know that this was set by a global zone admin.
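To see the virtualisation in action, compare the two views side by side (a sketch; the exact layout of datasets beneath the zonepath dataset may differ on your system):

```shell
# Global zone view: the zone's datasets sit below the zonepath dataset
zfs list -r -o name rpool/zones/ltz

# Zone view: the same datasets appear under a virtualised 'rpool',
# with no trace of the global zone's /zones/ltz path
zlogin ltz zfs list -o name
```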

Now let's move on to the Immutable Zones part. In the zone configuration, which is held only in the global zone, we specify which 'file-mac-profile' we want:

# zonecfg -z ltz 'set file-mac-profile=fixed-configuration'
# zoneadm -z ltz reboot
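You can confirm the setting took effect from the global zone; zonecfg's 'info' subcommand will print an individual property:

```shell
# Run in the global zone after the reboot
zonecfg -z ltz info file-mac-profile
```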

Now let's 'do some damage':

root@ltz:~# touch /etc/foo
touch: cannot create /etc/foo: Read-only file system
root@ltz:~# touch /var/tmp/foo
root@ltz:~# touch /tmp/foo
root@ltz:~# pkg install emacs
pkg install: Could not complete the operation on /var/pkg/lock: read-only filesystem.
root@ltz:~# rm /usr/bin/vi
rm: /usr/bin/vi not removed: Read-only file system
root@ltz:~# useradd alice
UX: useradd: ERROR: Cannot update system - login cannot be created.

So we can't create users, remove binaries, or add new files outside of /var/tmp and /tmp. Maybe we can disable SMF services permanently:

root@ltz:~# svcadm disable ssh
root@ltz:~# svcs ssh
disabled 13:21:37 svc:/network/ssh:default

Finally, 'some damage'! But let's reboot...

root@ltz:~# svcs ssh
online 13:23:19 svc:/network/ssh:default

The service restarted on reboot, even though we asked for it to be disabled permanently.  What happened here is that it was disabled in the running SMF, but because the change couldn't be persisted back to the on-disk SMF repository, its permanent state didn't change and it came back online after a reboot.
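In other words, inside an immutable zone a persistent disable degrades to behaving like the explicitly temporary form (a sketch for comparison; on a normal writable system the two differ):

```shell
# Temporary: applies to the running instance only, never persisted,
# so the service returns at the next boot on any system
svcadm disable -t ssh

# Persistent on a writable system; in a read-only zone the repository
# write cannot happen, so this too only lasts until the next boot
svcadm disable ssh
```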

That's nice, but we still need to maintain the zone and ensure it gets updated with security fixes, so how do we write to it?

It is only possible to transition the zone into a read-write state from the global zone.  We do this by passing '-w' or '-W' to the zoneadm boot command.  Note that this argument is interpreted by zoneadmd in the global zone and is not interpreted inside the zone at all; there is thus no way for a privileged user inside the zone to request a read-write reboot by passing '-w' as an argument to reboot(1M).

# zoneadm -z ltz boot -w
# zlogin -C ltz
[NOTICE: Read-only zone rebooting read-write]

In this case we will get a login prompt, and we can log in and do everything to the zone we normally could; it is just as if 'file-mac-profile' hadn't been set. When packages are added or updated for a zone with a file-mac-profile, it automatically reboots read-write transiently (this can be forced manually with '-W'); this is so any package self-assembly, such as config file upgrades, can be done. We would see this on the console:

[NOTICE: This read-only system transiently booted read/write]
[NOTICE: Now that self assembly has been completed, the system is rebooting]
[NOTICE: Zone rebooting]

At this point the zone is back to being protected just like it was above.
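Putting it together, a typical patching pass driven from the global zone might look like this (a sketch, not the only possible workflow):

```shell
zoneadm -z ltz halt        # stop the zone if it is running
zoneadm -z ltz boot -w     # boot writable for maintenance
zlogin ltz pkg update      # apply package and security updates
zlogin ltz reboot          # the zone comes back read-only
```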

These are two very simple to use new features in Solaris 11 that can be used separately or together to protect the zone's environment, both on disk and at runtime.


Comments ( 5 )
  • Jim Laurent Friday, November 11, 2011

    Shouldn't this line:

    zfs create -o encryption=on -o keysource=raw,file:///zones/key

    have /zones/ltz in it?

  • Darren Moffat Monday, November 14, 2011

    Yes Jim there was a missing dataset name - copy and paste error! Thanks for catching that - I've updated the article.

  • Dave Walker Sunday, November 24, 2013

    I'll be building some zones with encryption over the course of next week, so this posting will be handy :-).

    One of the things I'll also need to do, is copy the zones across multiple machines, for resilience; I'm looking to use ZFS send / receive for doing this. With encryption=on, I'm assuming this means I'm going to need to sync the keystores containing the zone FS keys, too (rather than expecting the ZFS send of an encrypted FS to happen en clair - or am I wrong, here?). Any particular preferences / recommendations for syncing the keystores if it's required, please?

  • Darren J Moffat Monday, November 25, 2013

    At this time ZFS send|recv is in the clear; this is because the ZFS send stream is read through the ZFS ARC at the DMU layer, where the data has already been decrypted and decompressed. So you should send the data across a suitably secured transport (e.g. SSH or IPsec).

    The only key requirement is thus that a key exists at the same named location; it doesn't have to have the same value. Basically, the keysource property needs to find something on the destination, so that when the first 'zfs recv' runs to create the dataset it will find a key and the dataset will be encrypted.
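    The workflow described here might look something like the following sketch (the hostname 'backup-host' and snapshot name '@repl' are made up for illustration):

```shell
# On the receiving host: ensure a key exists at the same keysource
# location before the stream arrives (the value need not match)
ssh backup-host pktool genkey keystore=file keytype=aes keylen=128 \
    outkey=/zones/key

# Snapshot the zone's dataset tree and send it over SSH; the stream
# itself is in the clear, so SSH provides the transport protection
zfs snapshot -r rpool/zones/ltz@repl
zfs send -R rpool/zones/ltz@repl | ssh backup-host zfs recv -d rpool/zones
```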

  • Steffen Moser Saturday, July 6, 2019
    Hi Darren,

    thank you very much for your detailed explanation! I've created zones on encrypted ZFS datasets in Solaris 11.3. The raw keys are stored in a PKCS11 key store.
    After upgrading to 11.4 SRU 8 (which went fine), I noticed that various tools like "zoneadm", "pkg update" and so on have problems with asking for (and not remembering) the PIN for the PKCS11 key store. I've documented my observations here:


    Even doing a ZFS receive causes problems in 11.4 when the sent stream contains multiple file systems (e.g. when doing a send/receive for a whole zone). No problem occurs when the raw keys are stored in a file (e.g. on a USB stick), so it strongly seems to be related to the PKCS11 key store.

    Do you have any ideas about this? Should we file a bug against 11.4 SRU 8, or do I have a misconception about encrypted ZFS myself?
