Wednesday Dec 02, 2009

SMF service notification "applet"

A very quick and dirty "applet" that notifies you of SMF services that are in the maintenance state.

Put the following in your path somewhere and then add it to the list of things GNOME starts for you at login: System->Preferences->Startup Applications


#!/bin/sh
while true ; do
	message="$(svcs -H -o fmri,state -s state | \
		awk '$2 ~ /maintenance/ {printf("%s\nState: %s", $1, $2)}')"

	# Only raise a notification when a service is actually broken.
	[ -n "$message" ] && \
		notify-send -i remove -u critical "$message" > /dev/null 2>&1
	sleep 1m
done
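The awk filter at the heart of the loop can be exercised without a live SMF instance by feeding it canned "svcs -H -o fmri,state" output (the FMRIs below are made-up examples, not real services):

```shell
# Simulated `svcs -H -o fmri,state -s state` output -- illustrative
# FMRIs only.  The filter keeps just the services in maintenance.
printf '%s\n' \
    'svc:/network/ssh:default online' \
    'svc:/system/filesystem/local:default maintenance' |
awk '$2 ~ /maintenance/ {printf("%s\nState: %s\n", $1, $2)}'
# → svc:/system/filesystem/local:default
# → State: maintenance
```

Only the maintenance line survives, which is what keeps notify-send quiet while everything is healthy.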

Monday Aug 24, 2009

File notification in OpenSolaris - and a bit of shell internals

A reasonably frequently asked question from people porting apps to OpenSolaris is about file notification events.  Linux has inotify for doing this.  OpenSolaris has a generic event port system.  The port_create(3C) and port_associate(3C) man pages describe this.  However they don't have what I'd consider the simple example that is likely to be of interest to people familiar with the Linux inotify system.  So when the question came up again I decided to sketch out the simple pathname based solution.

I hit three interesting problems in implementing what you will see below is a very small amount of code.

1) It wasn't immediately obvious to me that you had to call port_associate(3C) again after port_get(3C).

2) If I had read the man page properly I would have noticed the difference between passing NULL for the port_get(3C) timeout versus a zeroed timespec_t - they effectively mean opposite things.

3) I kept getting an event when I ran pstack(1) on my program.  This was very strange, but thanks to truss I worked out what was going on.  I'll leave the answer until after the sample code...

#include <stdio.h>
#include <port.h>
#include <fcntl.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <poll.h>
#include <unistd.h>
#include <strings.h>
#include <errno.h>

int
main(int argc, char **argv)
{
        int myport = port_create();
        file_obj_t mysf = { 0 };
        struct stat sbuf;
        port_event_t pe;
        int ret;

        for (;;) {
                /*
                 * File events are one-shot: after every port_get(3C)
                 * wakeup we must refresh the file times and call
                 * port_associate(3C) again.
                 */
                if (stat("/tmp/test", &sbuf) != 0) {
                        perror("stat");
                        return (1);
                }
                mysf.fo_atime = sbuf.st_atim;
                mysf.fo_mtime = sbuf.st_mtim;
                mysf.fo_ctime = sbuf.st_ctim;
                mysf.fo_name = "/tmp/test";
                port_associate(myport, PORT_SOURCE_FILE, (uintptr_t)&mysf,
                    FILE_MODIFIED, NULL);

                printf("Waiting for events...\n");

                errno = 0;
                /*
                 * A NULL timeout blocks forever; a zeroed timespec_t
                 * would return immediately with ETIME.
                 */
                ret = port_get(myport, &pe, NULL);
                if (ret != 0) {
                        switch (errno) {
                        case EINTR:
                                continue;
                        case ETIME:
                                printf("Timer expired\n");
                                return (1);
                        default:
                                perror("port_get");
                                return (1);
                        }
                }
                printf("woke\n");
        }
        /* NOTREACHED */
}


The issue with my original version was that the directory I used was "/tmp".  This is where we learn a little about shell internals.  I didn't just run pstack on my process; what I actually did was:

$ pstack `pgrep file_event`

When I stopped and thought about it for a bit it was obvious that the event was a shell-caused side effect.  Even then I couldn't work out what it was until I resorted to using truss from another terminal:

$ truss -f -topen -p 25172

8493:	open("/tmp/.stdout2NaOLq", O_RDWR|O_CREAT|O_EXCL, 0600) = 5
8493:	open("/tmp/.stderr3NaOLq", O_RDWR|O_CREAT|O_EXCL, 0600) = 5
...

and there it is.  The shell has taken the output of the "backticked" command and put it in a temporary file in /tmp.  Which is why I've changed the sample code above to use a subdir of /tmp.

Monday Aug 10, 2009

Sending a Break to Solaris hosted in Virtualbox

I've recently started using VirtualBox instead of physical machines for some of my basic functional testing.  When doing some types of kernel development it is often necessary to force the system into kmdb.

The F1-A keystroke does this on Solaris x86 systems by default, however that isn't going to work with VirtualBox because that keystroke will be grabbed by some very low level kernel routines in the host and never reach the guest.

So we need an alternate way of getting a break to the guest Solaris from the host.

I was sure someone else must have worked this out before.  I didn't get the full answer from a quick google search but I did find all the parts.

The CLI for VirtualBox can send an NMI (Non Maskable Interrupt) to any running guest. Solaris can be configured to drop into kmdb or force a panic when receiving an NMI.

In the guest put this into /etc/system and reboot:

set pcplusmp:apic_kmdb_on_nmi=1

Or to set it interactively do:

# echo apic_kmdb_on_nmi/W1 | mdb -kw

# mdb -K

Then with the VirtualBox CLI we can send an NMI to our guest:

$ VBoxManage debugvm ZFS_Crypto_Test injectnmi

Nice easy solution.  Though I do now wonder why we don't have some default action for when an NMI is received - but then not everyone cares about getting a dump or getting into kmdb!

Updated 2013-10-8: at some point this changed from controlvm to debugvm

Tuesday Jul 28, 2009

(Secured) Remote Audit Trail in OpenSolaris

Phase 1 (the sending side) of the Secured (by GSS-API for data integrity and data confidentiality) Remote Audit Trail project has integrated. For more info see PSARC/2009/208 and the project page.

Huge congratulations to Jan Friedel and the whole audit team, and many thanks to the code reviewers. I'm looking forward to integration of the receiving side as well.

Monday Jul 06, 2009

TVOSUG First Meeting tonight (Mon 6th July)

The first meeting of the Thames Valley OpenSolaris User Group TVOSUG will be tonight (Monday 6th July 2009) at 8:30pm in the Monk's Retreat in central Reading.

Wednesday Jun 24, 2009

printf should not SEGV when passed NULL for %s format

Author: Darren Moffat 
Repository: /export/onnv-gate
Total changesets: 1

Changeset: acbef346fd18

PSARC/2008/403 libc printf behaviour for NULL string
6724478 libc printf should not SEGV when passed NULL for %s format



Finally got this in.

Monday Jun 01, 2009

Encrypting ZFS pools using lofi crypto

I'm running OpenSolaris 2009.06 on my laptop.  Soon I'll be running my own development bits of ZFS Crypto, but I can't do that yet because OpenSolaris 2009.06 is based on build 111 and the ZFS crypto source is already at build 116.  Once the /dev repository catches up to 116 (or later) I can put ZFS Crypto onto my laptop and run it "real world" rather than just pounding on it with the test script.

In the mean time I had a need to store some confidential and also some highly personal data on my laptop - not something I generally do, as it is mostly used for remote access and the sources I have on it are all open source.

I also really wanted to take advantage of the ZFS auto-snapshot service for this data.  So I really needed ZFS Crypto but I was also constrained to 2009.06.   I could have gone back and used an older build of ZFS crypto that was in sync with 111 but that would have taken me a few days to backport my current sources - not something that I'd consider useful work given the only person that would benefit from this was me and for a short period of time.

It was also only a relatively small amount of data and performance of access wasn't critical.  I should also mention that storing it on a USB mass storage device of any kind was a complete non-starter given where I was going with this laptop at the time - let's just say one can't take USB disks there.

I wrote previously about the crypto capability we have in the lofi(7D) driver - this is "block" device based crypto and knows nothing about filesystems.  As I mentioned before it has a number of weaknesses but by running ZFS on top of it some of those are mitigated for me in the situation I outlined above.  Particularly since I'd end up with "ZFS  - lofi - ZFS".

What I decided to do was use lofi as a "shim" and create a ZFS pool on a lofi device which was backed by a ZVOL.    As I mentioned above the reason for using ZFS as the end filesystem was to take advantage of the auto-snapshot feature, using ZFS below lofi means that I can expand the size of the encrypted area by adjusting the size of the ZVOL if necessary.

darrenm$ export PVOL=rpool/export/home/darrenm/pvol
darrenm$ pfexec zfs create -V 1g $PVOL
darrenm$ pktool genkey keystore=pkcs11 label=$PVOL keylen=256 keytype=aes
Enter PIN for Sun Software PKCS#11 softtoken: 
darrenm$ pfexec lofiadm -a /dev/zvol/rdsk/$PVOL -T :::$PVOL -c aes-256-cbc
darrenm$ pfexec zpool create -O canmount=off -O checksum=sha256 \
	-O mountpoint=/export/home/darrenm darrenm /dev/lofi/1
darrenm$ pfexec zfs allow darrenm create,destroy,mount darrenm
darrenm$ zfs create -o canmount=off darrenm/Documents
darrenm$ zfs create darrenm/Documents/Private

That sets things up the first time. This won't be automatically available after a reboot; in fact the "darrenm" pool will be known about but will be in a FAULTED state since its devices won't be present.  To make it available again after reboot do this:

darrenm$ pfexec lofiadm -a /dev/zvol/rdsk/$PVOL -T :::$PVOL -c aes-256-cbc
darrenm$ pfexec zpool import -d /dev/lofi darrenm
darrenm$ zfs mount -a
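Since I'm planning on making scripts available later, here is a minimal sketch of that re-attach sequence as a script, assuming the PVOL and pool names used in this post; it only does anything on an (Open)Solaris system where lofiadm is present:

```shell
#!/bin/sh
# Sketch only: re-attach the encrypted lofi device and re-import the
# pool after a reboot.  PVOL and the pool name "darrenm" are the
# examples from this post -- adjust both for your own setup.
PVOL=rpool/export/home/darrenm/pvol

if ! command -v lofiadm >/dev/null 2>&1; then
	echo "lofiadm not found; nothing to do on this system"
	exit 0
fi

pfexec lofiadm -a /dev/zvol/rdsk/$PVOL -T :::$PVOL -c aes-256-cbc &&
pfexec zpool import -d /dev/lofi darrenm &&
zfs mount -a
```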

A few notes on the choice of naming.  I deliberately named the pool after my username and set the mountpoint of the top level dataset to match where my home dir is mounted (NOT the automounted /home/ entry but the actual underlying one), and set that as canmount=off so that my default home dir is still unencrypted but I can have subdirs below it that are actually ZFS datasets mounted encrypted from the encrypted pool.

The lofi encryption key is stored (encrypted) in my keystore managed by pkcs11_softtoken.  (As it happens my laptop has a TPM so I could have stored it in there - except for the fact that the TPM driver stack isn't fully operational until after build 111.)  I chose to name the key using the name of the ZVOL just to keep things simple and to remind me what it is for; the label is arbitrary but using the ZVOL name will help when this is scripted.

I'm planning on polishing this off a bit and making some scripts available, but first I want to try out some more complex use cases including having the "guest" pool be mirrored.  I also want to get the appropriate ZFS delegation and RBAC details correct.  For now though the above should be enough to show what is possible here until the "real" ZFS encrypted dataset support appears.  This isn't designed to be a replacement for the ZFS crypto project but, for me at least, a usable workaround using what we have in 2009.06.

Wednesday Apr 08, 2009

Running Privileged Applications in the OpenSolaris GNOME Desktop

gksu(1) says:

     This manual page documents briefly gksu and gksudo

     gksu is a frontend to su and gksudo is a frontend  to  sudo.
     Their primary purpose is to run graphical commands that need
     root without the need to run  an  X  terminal  emulator  and
     using su directly.

Unfortunately the man page doesn't currently (I'm logging a bug) say anything about the changes made for integration with OpenSolaris RBAC (LSARC/2006/348).

If the user has an RBAC profile entry to run the command, given as the argument to gksu, then gksu will execute the command using pfexec(1).

If the user doesn't have a direct RBAC profile entry for the command but the user can assume a role that does have an RBAC profile entry then gksu prompts for the role password and uses the role. In this case gksu is doing the exec using embedded_su(1M) (which is really a hardlink to /bin/su).

Failing all that, gksu prompts for the root password; which will fail if root is a role that the user can't assume.

Menu entries for things like packagemanager, updatemanager and printmanager on OpenSolaris already use gksu(1).

gksu by default does a full X keyboard and mouse grab while prompting for the password.  It also passes on the user's X magic cookie so GUI programs work well.

gksu can also use sudo as the back end instead of OpenSolaris RBAC or embedded_su; that is standard gksu behaviour though, not something specific to OpenSolaris.

Wednesday Dec 17, 2008

OpenSolaris "disk" encryption in snv_105

lofi(7D) encryption

The encryption part of the OpenSolaris lofi compression & encryption project integrated into snv_105. I initially started this as a proof of concept several years ago but for a long time it never became a high enough priority. Casper Dik made a working version of it that was "distributed" internally for quite a few years as part of frkit. Now Dina has finished it off and got it integrated.

Finishing it off took much longer than we originally projected due to interactions with the compression code that was added to lofi and some very hard to track down bugs where lofi is used by xVM (the Xen based hypervisor) - particularly the interactions with dom0 and domU lofi use.

So what can you do with it ? It is similar to what has been available for many many years on Linux using the cryptoloop system. It isn't perfect but it is better than the nothing we had before.

Creating an encrypted UFS filesystem with lofi

   # mkfile 128m /export/lofi-backing-file
   # lofiadm -a /export/lofi-backing-file -c aes-256-cbc
   Enter passphrase: 
   Re-enter passphrase:   
   # newfs /dev/rlofi/1
   newfs: construct a new file system /dev/rlofi/1: (y/n)? y
   /dev/rlofi/1:   262036 sectors in 436 cylinders of 1 tracks, 601 sectors
        127.9MB in 28 cyl groups (16 c/g, 4.70MB/g, 2240 i/g)
   super-block backups (for fsck -F ufs -o b=#) at:
   32, 9648, 19264, 28880, 38496, 48112, 57728, 67344, 76960, 86576,
   173120, 182736, 192352, 201968, 211584, 221200, 230816, 240432, 250048, 259664
   # mount /dev/lofi/1 /mnt

Nice and simple. We can also store the key in a file, key generation can be done with pktool(1). Or we can store it in any PKCS#11 accessible keystore:

   # pktool genkey keystore=pkcs11 keytype=aes keylen=256 label=mylofikey
   Enter PIN for Sun Software PKCS#11 softtoken :
   # lofiadm -a /export/lofi-backing-file -c aes-256-cbc -T :::mylofikey 
   Enter PIN for Sun Software PKCS#11 softtoken : 

Issues with the lofi encryption

  • For lofi, compression and encryption are mutually exclusive; compression is read-only lofi anyway. If you need both, wait for the integration of encryption support in ZFS.
  • No integrity check. Currently the lofi encryption uses CBC mode because we needed a non-expanding cipher. Once the OpenSolaris crypto framework has support for XTS (or a similar) mode we will likely update the lofi crypto to use that instead.
  • Lofi performance isn't great - this isn't a crypto issue, lofi performance in general just isn't great and adding crypto into the mix doesn't help much.
  • No way to detect the wrong key. We have a reserved area where we could add meta-data to determine if the correct key and algorithm params have been supplied but this hasn't been implemented yet.

I still think this is better than nothing even if we are delivering it much later than we had hoped. Ultimately ZFS encryption is the solution for OpenSolaris encrypted filesystems and volumes.

Saturday Nov 01, 2008

ZFS Crypto update

It has been a little while since I gave an update on the status of the ZFS Crypto project. A lot has happened recently and I've been really "heads down" writing code.

We had believed we were development complete and had even started code review ready for integration. All our new encryption tests passed and I only had one or two small regression test issues to reconfirm/resolve. However a set of changes (and very good and important ones for performance I might add) were integrated into the onnv_100 build that caused the ZFS Crypto project some serious remerge work. It took me just over 2 weeks to get almost back to where we were. In doing that I discovered that the code I had written for the ZFS Intent Log (the ZIL) encryption/decryption wasn't going to work.

The ZIL encryption was "safe" from a crypto viewpoint because all the data written to the ZIL was encrypted. However, because of how the ZIL is claimed and replayed after a system failure (the only time the ZIL is actually read, since it is mostly write only), the claim happened before the pool had any encryption keys available to it. This resulted in the ZFS code just thinking that all the datasets had no ZIL needing claimed, so ZFS stayed consistent on disk but anything that was in the ZIL that hadn't been committed in a transaction was lost - so it was equivalent to running without a ZIL. OOPS!

So I had to redesign how the ZIL encryption/decryption works. The ZIL is dealt with in two main stages, first there is the zil_claim which happens during pool import (before the pool starts doing any new transactions and long before userland is up and running). The second stage happens much much later when the filesystems (datasets really because the ZIL applies to ZVOLs too) are mounted - this is good because we have crypto keys available by then.

This meant I needed to really learn how the ZIL gets written to disk. The ZIL has 0 or more log blocks, with each log block having 1 or more log records in it. There is a different type of log record (each of a different size) for the different types of sync operations that can come through the ZIL. Each log record has a common part at the start of it that says how big it is - this is good from a crypto view. So I left the common part in the clear and I encrypt the rest of the log record. This is needed so that the claim can "walk" all the log records in all the log blocks, sanity check the ZIL and do the claim IO. So far this is similar to what was done for the DNODES. Unfortunately it wasn't quite that simple (never is really!).

All the log records other than the TX_WRITE type fell into the nice simple case described above. The TX_WRITE records are "funky" and from a crypto viewpoint a real pain in how they are structured. Like all the other log records there is a common section at the start that says what type it is and how big the log record is. What is different about the TX_WRITE log records though is that they have a blkptr_t embedded in them. That blkptr_t needs to be in the clear because it may point off to where the data really is (especially if it is a "big" write). The blkptr_t is at the end of the log record. So no problem, just leave the common part at the start and the blkptr_t at the end in the clear, right ? Well that only deals with some of the cases. There is another TX_WRITE case, where the log record has the actual write data tagged on after the blkptr inside the log record, and this is of variable size. Even in this case there is a blkptr_t embedded inside the log record. Problem is that blkptr_t is now "inside" the area we want to encrypt. So I ended up having to have a clear text log common area, encrypted TX_WRITE content, clear text blkptr_t and maybe encrypted data. Good job the OpenSolaris Cryptographic Framework supports passing in a uio_t for scatter gather!

So with all that done, it worked, right ? Sadly no, there was still more to do. Turns out I still had two other special ZIL cases I had to resolve. Not with the log records though. Remember I said there are multiple log records in a log block ? Well at the end of the log block there is a special trailer record that says how big the sum of all the log records is (a good sanity checker!) but this also has an embedded checksum for the log block in it. This is quite unlike how checksums are normally done in ZFS; normally the checksum is in the blkptr_t, not with the data. The reason it is done this way for the ZIL is for write performance, and the risk is acceptable because the ZIL is very very rarely ever read and only then in a recovery situation. For ZFS Crypto we not only encrypt the data but we have a cryptographically strong keyed authentication as well (the AES-CCM MAC) that is stored in 16 bytes of the blkptr_t checksum field. Problem with the ZIL log blocks is that we don't have a blkptr_t for them we can put that 16 byte MAC into, because the checksum for the log block is inside the log block in the trailer record; the blkptr checksum for the ZIL is used for record sequencing. So were there any reserved or padding fields ? Luckily yes, but there were only 8 bytes available, not 16. That is enough though, since I could just change the params for CCM mode to output an 8 byte MAC instead of a 16 byte one. A little less security but still plenty sufficient - and especially so since we expect to never need to read the ZIL back - it is still cryptographically sound and strong enough for the size of the data blocks we are encrypting (less than 4k at a time). So with that resolved (this took a lot of time in DTrace and MDB to work out!) I could decrypt the ZIL records after a (simulated) failure.

Still one last problem to go though: the records weren't decrypting properly. I verified, using DTrace and kmdb, that the data was correct and the CCM MAC was correct. So what was wrong; why wouldn't they decrypt ? That only left the key and the nonce. Verifying the key was correct was easy, and it was. So what was wrong with the nonce ?

We don't actually store the nonce on disk for ZFS Crypto but instead we calculate it based on other stuff that is stored on disk. The nonce for a normal block is made up from: the block birth transaction (txg: a monotonically increasing unsigned 64 bit integer), the object, the level, and the blkid. Those are manipulated (via a truncated SHA256 hash) into 12 bytes and used as the nonce. For a ZIL write the txg is always 0 because it isn't being written in a txg (that is the whole point of it!) but the blkid is actually the ZIL record sequence number, which has the same properties as the txg. The problem is that when we replay the ZIL (not when we claim it though) we do have a txg number. This meant we had a different nonce on the decrypt to the encrypt. The solution ? Remove the txg from the nonce for ZIL records - no big loss there since on a ZIL write it is 0 anyway and the blkid (the ZIL sequence number) has the security properties we want to keep AES CCM safe.
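Just to illustrate the truncation step, here is a sketch in shell; the field encoding and separator are made up and this is not the actual ZFS on-disk derivation (which works on the binary fields, not a text string):

```shell
# Illustration only: hash the block identity fields and keep the first
# 12 bytes (24 hex characters) as the 96-bit nonce.  For ZIL records
# the txg is omitted, as described above.
object=42 level=0 blkid=7
printf '%s/%s/%s' "$object" "$level" "$blkid" |
sha256sum | cut -c1-24
```

On (Open)Solaris, digest -a sha256 would stand in for the GNU sha256sum used here.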

With all that done I managed to get the ZIL test suite to pass. I have a couple of minor follow-on issues to resolve so that zdb doesn't get its knickers in a twist (SEGV really) when it tries to display encrypted log blocks (which it can't decrypt since it is running in userland and without the keys available).

That turned out to be more than I expected to write up. So where are we with the schedule ? I expect us to start codereview again shortly. I've just completed the resync to build 102.

Friday Sep 05, 2008

ZFS Crypto Codereview starts today

Prelim codereview for the OpenSolaris ZFS Crypto project starts today (Friday 5th September 2008 at 1200 US/Pacific) and is scheduled to end on Friday 3rd October 2008 at 2359 US/Pacific. Comments received after this time will still be considered but unless they are serious in nature (data corruption, security issue, regression from existing ZFS) they may have to wait until post onnv-gate integration to be addressed; however every comment will be looked at and assessed on its own merit.

For the rest of the pointers to the review materials and how to send comments see the project codereview page.

Using the Mercurial Forest extension for OpenSolaris onnv-gate

Some background

This is really only relevant to those people doing development on the OpenSolaris onnv-gate that are inside Sun, but since there is nothing private about it I'm posting it publicly so google can find it.

With the transition from Teamware to Mercurial the onnv-gate source code is now held in two separate repositories. The main repository, holding the overwhelming majority of the source code, which is opensourced under various licenses but mostly CDDL, is called "onnv-gate"; the source code is in the mercurial repository in a subdirectory called usr/src. The much smaller closed source is in a separate repository that is nested inside the "onnv-gate" as the usr/closed subdirectory.

The usr/src part can be built either by using the downloadable (and redistributable) binaries which make up a proto area for the closed source bits, or the usr/closed part can be built from source along with usr/src. If building both from source then the usr/src and usr/closed mercurial repositories must be in sync otherwise there will be a high risk of either build failures or buggy binaries.

The Mercurial forest extension provides a way to make sure that nested repositories such as onnv and onnv-closed are always pulled/pushed at the same time.

Initial gate setup

$ hg clone ssh:// myrepo
$ cd myrepo/usr
$ hg clone ssh://

Syncing with onnv-gate

$ cd path/to/myrepo
$ hg fpull

You may find this a little slow as it has to traverse the whole workspace looking for nested repositories so it can build the definition of the forest. This can be sped up using the "fsnap" command to create a snapshot file. Or you can create one by hand; it may look similar to this:

root = .
revision = tip
path.default = ssh://

root = usr/closed
revision = tip
path.default = ssh://

To use the snapshot file we run fpull like this:

$ hg fpull --snapfile /path/to/mysnapfile

ONNV-gate push rules

The rule for pushing to the onnv-gate when a single set of fixes needs to span the usr/src and usr/closed trees is that usr/closed must be pushed first. This basically means that you should NOT use the forest extension for pushing. However the number of people this impacts is very small; most developers just need an up to date usr/closed for building.

For this reason I'm not showing how to push with the forest extension here.

Friday Aug 15, 2008

OpenSolaris ON development: Picking ws(1) vs bldenv(1).

To be able to build the ON source tree for OpenSolaris you first need to setup some environment variables. There are two tools provided in the SUNWonbld package (usually installed in /opt/onbld/bin) that do this: ws(1) and bldenv(1). How do you pick which one to use ?

I used to use ws(1) a lot because it works well with partial Teamware workspaces; I changed a few years ago (see below for why). The reason it works well in that case is because it sets some additional environment variables to point include and library search paths into the workspace parent. ws(1) doesn't look at your nightly environment file; it "works stuff out" based on your repository (Teamware or Mercurial).

For Mercurial that isn't likely to work since for most cases your parent is not available over NFS/local (or even if it theoretically is it could be slow). Also Mercurial doesn't support partial workspaces so you may as well have run at least 'dmake setup' in $SRC anyway. ws knows about this and won't try and set up those environment variables if your default hg path is ssh:// http:// or https:// [ note we don't use http/https for ON not even on  ].

So how does this differ from bldenv(1) ?

bldenv(1) sets up the environment almost identically to nightly(1); it does so because it uses the same environment file. The main thing to know about bldenv(1) is that by default it sets up for a non-debug build; pass -d if you want a debug build. Like nightly(1), bldenv(1) supports MULTI_PROTO - both the debug and non-debug proto areas existing at the same time. This is really useful because you can run nightly builds (full or incremental) but then build components locally as well and they will be built the same way, and you can then interactively run makebfu(1) to get a new set of archives.

My personal recommendation is to use bldenv(1) but only after you have run at least one full build first.

Thursday Aug 14, 2008

Making files on ZFS Immutable (even by root!)

First lets look at the normal POSIX file permissions and show who we are and what privileges our shell is running with:

# ls -l /tank/fs/hamlet.txt 
-rw-rw-rw-   1 root     root      211179 Aug 14 13:00 /tank/fs/hamlet.txt

# pcred $$
100618: e/r/suid=0  e/r/sgid=0
        groups: 0 1 2 3 4 5 6 7 8 9 12

# ppriv $$
100618: -zsh
flags = 
        E: all
        I: all
        P: all
        L: all

So we are running as root and have all privileges in our process and are passing all on to our children. We also own the file (and it is on a local ZFS filesystem not over NFS), and it is writable by us and our group, everyone in fact. So lets try and modify it:

# echo "SCRIBBLE" > /tank/fs/hamlet.txt 
zsh: not owner: /tank/fs/hamlet.txt

That didn't work lets try and delete it, but first check the permissions of the containing directory:

# ls -ld /tank/fs
drwxr-xr-x   2 root     root           3 Aug 14 13:00 /tank/fs

# rm /tank/fs/hamlet.txt
rm: /tank/fs/hamlet.txt: override protection 666 (yes/no)? y
rm: /tank/fs/hamlet.txt not removed: Not owner

That is very strange, so what is going on here ?

Before I started this I made the file immutable. That means that regardless of what privileges(5) the process has and what POSIX permissions or NFSv4/ZFS ACL the file has, we can't delete it or change it, nor can we even change the POSIX permissions or the ACL. So how did we do that ? With our good old friend chmod:

# chmod S+ci /tank/fs/hamlet.txt
Or more verbosely:
# chmod S+v immutable /tank/fs/hamlet.txt

See chmod(1) for more details. For those of you running OpenSolaris 2008.05 releases then you need to change the default PATH to have /usr/bin in front of /usr/gnu/bin or use the full path to /usr/bin/chmod. This is because these extensions are only part of the OpenSolaris chmod command not the GNU version. The same applies to my previous posting on the extended output from ls.

Heaps of info available on files via good old ls(1) [ But not encryption status ]

In "compact" form:

ls -V@ -/c -% all /tank/fs/hamlet.txt
-rw-r--r--+  1 root     root      211179 Aug 14 12:20 /tank/fs/hamlet.txt
                timestamp: atime         Aug 14 12:37:37 2008 
                timestamp: ctime         Aug 14 12:32:58 2008 
                timestamp: mtime         Aug 14 12:20:08 2008 
                timestamp: crtime        Aug 14 12:19:41 2008 

In verbose form:

ls -v@ -/v -% all /tank/fs/hamlet.txt
-rw-r--r--+  1 root     root      211179 Aug 14 12:20 /tank/fs/hamlet.txt
                timestamp: atime         Aug 14 12:21:12 2008 
                timestamp: ctime         Aug 14 12:32:58 2008 
                timestamp: mtime         Aug 14 12:20:08 2008 
                timestamp: crtime        Aug 14 12:19:41 2008 

One interesting thing it doesn't tell me about this file is that all of that information is encrypted on disk. For that I have to use zfs(1):

# zfs get encryption tank/fs
tank/fs  encryption  on           local

Or a little more verbosely:

# zfs list -r -o name,encryption,keyscope,keystatus,mounted tank 
tank             off      pool    undefined      yes
tank/fs           on      pool    available      yes

I wonder if it is worth having the verbose ls(1) output indicate that the file was encrypted on "disk" by the filesystem.

What would people do with that info if they had it ? Any ideas let me know.


Darren Moffat-Oracle

