Thursday Sep 12, 2013

Solaris Random Number Generation

The following was originally written to assist some of our new hires in learning about how our random number generators work and also to provide context for some questions that were asked as part of the ongoing (at the time of writing) FIPS 140-2 evaluation of the Solaris 11 Cryptographic Framework.

Updated February 2016 to cover changes in Solaris 11.3

1. Consumer Interfaces

The Solaris random number generation (RNG) system uses both hardware and software mechanisms for entropy collection. It has consumer interfaces for applications; it can generate high-quality random numbers suitable for long-term asymmetric keys and pseudo-random numbers for session keys or other cryptographic uses, such as nonces.

1.1 Interface to user space

The random(7D) device driver provides the /dev/random and /dev/urandom devices to user space, but it doesn't implement any of the random number generation or extraction itself.

There is a single kernel module (random) for implementing both the /dev/random and /dev/urandom devices. The two primary entry points are rnd_read() and rnd_write() for servicing read(2) and write(2) system calls respectively.

rnd_read() calls either kcf_rnd_get_bytes() or kcf_rnd_get_pseudo_bytes() depending on whether the device node is an instance of /dev/random or /dev/urandom respectively. In FIPS mode, if /dev/random has been opened for blocking reads (neither O_NONBLOCK nor O_NDELAY set), rnd_read() will call fips_random_get_bytes(). There is a cap on the maximum number of bytes that can be transferred in a single read: MAXRETBYTES_RANDOM (1040) and MAXRETBYTES_URANDOM (128 * 1040) respectively.

rnd_write() uses random_add_entropy() and random_add_pseudo_entropy(). Both pass 0 as the estimate of the amount of entropy that came from userspace, so we don't trust userspace to estimate the value of the entropy being provided.

1.2 Interface in kernel space

The kcf module provides an API for randomness for in-kernel KCF consumers. It implements the functions mentioned above that are called to service the read(2)/write(2) calls and also provides the interfaces for kernel consumers to access the random and urandom pools.

If no providers are configured, no randomness can be returned and a message is logged informing the administrator of the misconfiguration.

2. /dev/random

We periodically collect random bits from providers which are registered with the Kernel Cryptographic Framework (KCF) as capable of random number generation. The random bits are maintained in a cache that is used to service requests for high-quality random numbers (/dev/random). If the cache has sufficient random bytes available, the request is serviced from the cache. Otherwise we pick a provider and call its SPI routine. If we do not get enough random bytes from the provider call, we fill in the remainder of the request by continuously replenishing the cache and using that until the full requested size is met.
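That servicing loop can be sketched in Python (a toy model, not the kernel C code; service_request and the provider callback are hypothetical names for illustration):

```python
def service_request(nbytes, cache, provider):
    """Toy model: serve from the cached random bytes when possible,
    otherwise keep replenishing the cache from a provider's SPI
    routine until the full request is met."""
    out = bytearray()
    while len(out) < nbytes:
        if not cache:
            # cache exhausted: one call to a provider's SPI routine
            cache.extend(provider())
        take = min(nbytes - len(out), len(cache))
        out += bytes(cache[:take])
        del cache[:take]
    return bytes(out)
```

The real code does this under the rndpool_lock mutex and blocks when the pool runs too low, but the cache-first, provider-second shape is the same.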

The maximum request size that will be serviced for a single read(2) system call on /dev/random is 1040 bytes.

2.1 Initialization

kcf_rnd_init() is where we set up the locks and get everything started. It is called by the _init() routine in the kcf module, which itself is called very early in system boot - before the root filesystem is mounted and most modules are loaded.

For /dev/random and random_get_bytes() a static array of 1024 bytes is setup by kcf_rnd_init().

We start by placing the value of gethrtime(), the high resolution time since boot, and drv_getparm(), the current time of day, into the pool as the initial seed values (both of these are 64 bit integers). We set the number of random bytes available in the pool to 0.

2.2 Adding randomness to the rndpool

The rndc_addbytes() function adds new random bytes to the pool (aka cache). It holds the rndpool_lock mutex while it XORs the bytes into the rndpool. The starting point is the global rindex variable, which is updated as each byte is added. It also increases rnbyte_cnt.

If the rndpool becomes full before the passed-in bytes are all used, we continue to mix the remaining bytes into the pool/cache but do not increase rnbyte_cnt; we also move the global findex on to match rindex as we do so.
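A toy model of that mixing logic (Python, using the variable names from the text; the real pool lives in the kernel and is mutex protected):

```python
RNDPOOL_SIZE = 1024  # size of the static pool set up by kcf_rnd_init()

class RndPool:
    """Toy circular pool: bytes are XORed in at rindex, rnbyte_cnt
    counts bytes credited as available, findex is the extraction point."""
    def __init__(self):
        self.pool = bytearray(RNDPOOL_SIZE)
        self.findex = 0      # front: next byte to extract
        self.rindex = 0      # rear: next byte to mix into
        self.rnbyte_cnt = 0  # bytes currently credited as available

    def add_bytes(self, data):
        for b in data:
            self.pool[self.rindex] ^= b
            self.rindex = (self.rindex + 1) % RNDPOOL_SIZE
            if self.rnbyte_cnt < RNDPOOL_SIZE:
                self.rnbyte_cnt += 1
            else:
                # pool full: keep mixing but credit no more entropy,
                # moving findex on to match rindex
                self.findex = self.rindex
```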

2.3 Scheduled generation

kcf_rnd_schedule_timeout() ensures that we periodically add bytes to the rndpool. The timeout is itself randomly generated by reading (but not consuming) the first 32 bits of rndpool to derive a new timeout of between 2 and 5.544480 seconds. When the timeout expires, the KCF rnd_handler() function [from kcf_random.c] is called.
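The interval [2s, 5.544480s] spans 3,544,480 microseconds, so one plausible derivation looks like this (the exact kernel mapping is an assumption here; only the interval comes from the text):

```python
import struct

def next_timeout_us(rndpool):
    """Sketch: read, without consuming, the first 32 bits of the pool
    and map them onto 2.000000 - 5.544480 seconds (in microseconds)."""
    val = struct.unpack_from(">I", rndpool, 0)[0]   # peek, don't consume
    span_us = 5_544_480 - 2_000_000                 # 3,544,480 us
    return 2_000_000 + (val % (span_us + 1))
```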

If we have readers blocked for entropy, or the count of available bytes is less than the pool size, we start an asynchronous task that calls rngprov_getbytes() to gather more entropy from the available providers.

If there is at least the minimum (20 bytes) of entropy available, we wake up the threads stopped in a poll(2)/select(2) of /dev/random. If there are any threads waiting on entropy we wake those up too. The waiting and wake up are performed by cv_wait_sig() and cv_broadcast(), which means the random pool lock will be held when cv_broadcast() wakes up a thread.

Finally we schedule the next time out.

2.4 External caller Seeding

The random_add_entropy() call is able to provide entropy from an external (to KCF or its providers) source of randomness. It takes a buffer and a size, as well as an estimated amount of entropy in the buffer. There are no callers in Solaris that provide a non-zero value for the estimate of entropy to random_add_entropy(). The only caller of random_add_entropy() is actually the write(2) entry point for /dev/random.

Seeding is performed by calling the first available software entropy provider plugged into KCF and calling its KCF_SEED_RANDOM entropy function. The term "software" here really means CPU instruction set driven (or pure software) rather than device driver driven. For example, the n2rng provider is device driver driven ("hardware"), but the architecture-based Intel RDRAND is regarded as "software". The terminology is for legacy reasons from the early years of the Solaris cryptographic framework. This does however mean we never attempt to seed the hardware RNG on SPARC S2 or S3 core based systems (T2 through M6 inclusive) but we will attempt to do so on Intel CPUs with RDRAND.

2.5 Extraction for /dev/random

We treat rndpool as a circular buffer with findex and rindex tracking the front and back respectively. Both start at position 0 during initialization.

To extract randomness from the pool we use kcf_rnd_get_bytes(). It calls rnd_get_bytes() with the rndpool_lock held; the lock is released by rnd_get_bytes() in both the success and failure cases. If the number of bytes requested from rnd_get_bytes() is less than or equal to the number of available bytes (rnbyte_cnt) in the cache, then we call rndc_getbytes() immediately, i.e. we use the randomness from the pool. Otherwise we release the rndpool_lock and call rngprov_getbytes() with the number of bytes we want. If that still wasn't enough, we loop, picking up as many bytes as we can by successive calls. If at any time rnbyte_cnt in the pool drops below 20 bytes, we wait on the read condition variable (rndpool_read_cv) and try again when we are woken up.

2.6 KCF Random Providers

KCF has the concept of "hardware" and "software" providers. The terminology is a legacy one from before hardware support for cryptographic algorithms and random number generation was available as unprivileged CPU instructions.

It really now maps to "hardware" being a provider that has a specific device driver, such as n2rng, and "software" meaning CPU instructions or some other pure software mechanism. It doesn't mean that there is no hardware involved: on Intel CPUs with the RDRAND instruction the calls are in the intelrd provider, but it is regarded as a "software" provider.

2.6.1 swrand: Random Number Provider

All Solaris installs have a KCF random provider called "swrand". This provider periodically collects unpredictable input and processes it into a pool of entropy. It implements its own extraction and generation algorithms.

It uses a pool called srndpool of 1000 bytes and a list of "raw entropy" chunks. These chunks consist of some collected bytes and an estimate of bits of entropy in those bytes.

The swrand provider has two different raw entropy sources:

  • By reading blocks of physical memory and detecting if changes occurred in the blocks read.

    Physical memory is divided into blocks of fixed size. A block of memory is chosen from the possible blocks and a one byte "checksum" of the block is computed and compared against the previous "checksum" computed for that block. If the single-byte checksum has not changed, no "raw entropy" is added to the pool from this source. If a change is detected then a hash of the block is computed and added to the raw entropy list with 15 bits of entropy credited to it.

  • By measuring the time the above computation takes (both the changing and non-changing cases) and computing the differences in the measured time.

    This method measures the amount of time it takes to read, checksum and (in the "change detected" case) hash a physical memory block (as described above). The time measured can vary depending on system load, scheduling and other factors. Differences between consecutive measurements are computed to come up with an entropy estimate. The first, second, and third order delta is calculated to determine the minimum delta value. The number of bits present in this minimum delta value is the entropy estimate.
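The delta-based estimate can be sketched as follows (toy Python; the exact window the kernel takes the deltas over is an assumption):

```python
def entropy_estimate(times):
    """Sketch: first-, second- and third-order differences of the
    timing measurements; the bit length of the smallest magnitude
    is credited as the entropy estimate."""
    d1 = [b - a for a, b in zip(times, times[1:])]   # first order
    d2 = [b - a for a, b in zip(d1, d1[1:])]         # second order
    d3 = [b - a for a, b in zip(d2, d2[1:])]         # third order
    min_delta = min(abs(d) for d in (d1[-1], d2[-1], d3[-1]))
    return min_delta.bit_length()
```

Note how highly regular timings (constant differences between measurements) yield an estimate of zero bits, which is the point of taking higher-order deltas.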

When the estimated entropy collected exceeds 320 bits, the collected raw bytes are conditioned (hashed) into 20 bytes (160 bits) of full-entropy output in srndpool. This is where the FIPS 140-2 mandated continuous RNG test takes place: each newly generated 20-byte block is compared to the previously generated 20 bytes, and if the previous and new blocks are equal then, depending on FIPS mode, either a system panic is induced or a warning is placed on the syslog.
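A sketch of the conditioning step and the continuous test (Python stand-in; in the kernel the failure action is a panic or a syslog warning, modeled here as an exception):

```python
import hashlib

def condition_and_check(raw_bytes, prev_block):
    """Condition collected raw entropy into a 20-byte (160-bit) block
    and apply the continuous RNG test against the previous block."""
    block = hashlib.sha1(raw_bytes).digest()        # 20 bytes
    if block == prev_block:
        # FIPS mode: system panic; otherwise: syslog warning
        raise RuntimeError("continuous RNG test: repeated block")
    return block
```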

2.6.1.1 Initialization of swrand

Since physical memory can change size, swrand registers with the Solaris DR subsystem so that it can update its cache of the number of blocks of physical memory when it either grows or shrinks.

On initial attach the fips_rng_post() function is run.

During initialization the swrand provider adds entropy from the high resolution time since boot and the current time of day (note that due to the module load system and how KCF providers register these values will always be different from the values that the KCF rndpool is initialized with). It also adds in the initial state of physical memory, the number of blocks and sources described above.

Only after all of the above does the swrand provider register with the cryptographic framework.

2.6.1.2 swrand entropy generation

swrand_get_entropy() is where all the real work happens when the KCF random pool calls into swrand. This function can be called in either blocking or non-blocking mode. The only difference is that the latter returns EAGAIN if there is insufficient entropy to generate the randomness, while the former blocks indefinitely.

A global uint32_t entropy_bits is used to track how much entropy is available.

When a request is made to swrand_get_entropy() we loop until we have the requested amount of randomness, first checking whether the amount of remaining entropy in srndpool is below 20 bytes; if it is, then we block waiting for more entropy (or return EAGAIN in non-blocking mode).

Then we determine how many bytes of entropy to extract: the minimum of the total requested and 20 bytes. We then update the output buffer and continue the loop until we have generated the requested amount.

2.6.1.3 Adding to the swrand pool

The swrand_seed_entropy() function is used to mix entropy from an external source via the KCF random_add_entropy() call. It XORs the input into srndpool without touching the pointers, so although it changes subsequent output from the pool, no new entropy is credited by this call.

2.6.2 n2rng random provider

This applies only to SPARC processors with either an S2 core (T2, T3, T3+) or an S3 core (T4, T5, M5, M6); both CPU families use the same n2rng driver and the same on-chip system for the RNG.

The n2rng driver provides the interface between the hyper-privileged access to the RNG registers on the CPU and KCF.

The driver performs attach-time diagnostics on the hardware to ensure it continues operating as expected. It determines whether it is operating in FIPS 140-2 mode via its driver.conf(4) file before its attach routine has completed. The full hardware health check is performed in conjunction with the hypervisor, and only when running in the control domain. The FIPS 140 checks are always run regardless of the hypervisor domain type. If the FIPS 140 POST checks fail, the driver ensures it is deregistered from KCF.

If the driver is suspended and resumed it reconfigures and re-registers with KCF. This would happen on a suspend/resume cycle or during live migration.

2.6.3 intelrd random provider

This applies only to x86 processors that support the RDRAND instruction.

The intelrd driver uses the RDRAND (or RDSEED if that is also supported by the processor) instruction to provide entropy for KCF.

The driver produces the entropy in 20-byte (160-bit) chunks using the SHA-1 algorithm to condition 1023 (when using RDRAND) or 5 (when using RDSEED) 64-bit values returned by the respective instructions, into 160 full-entropy bits. The FIPS 140-2 continuous RNG test is done on these 160-bit chunks.

3. FIPS Internals

3.0 FIPS approved DRBG

In FIPS mode, the cryptographic framework employs a deterministic random bit generator (DRBG) described in NIST SP 800-90A. We are using the Hash_DRBG with SHA512 as the underlying hashing algorithm.

In userland, the libucrypto library uses a DRBG for servicing random_get_bytes() and another one for servicing random_get_pseudo_bytes(). The former is initialized with prediction resistance requested, which means it is reseeded after each request. The 256-bit entropy for the initialization and reseed is obtained from the getentropy(2) system call.

In the kernel, the random_get_bytes() function uses the DRBG. It is initialized with prediction resistance requested, and the entropy for the initialization and reseed (both 256 bits) comes from kcf_rnd_get_bytes() called with KCF_RND_BLOCK, i.e. blocking until enough entropy is collected from the system.

320 bits are collected and those are fed into the FIPS 186-2 Appendix 3.3 algorithm as X_SEED (two 160-bit values are computed) then the first 256 bits of the resulting 320 bits are used as the seed for the DRBG.
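Under the stated flow, the derivation looks roughly like this (toy Python; plain SHA-1 stands in for the standard's G function, and the initial XKEY value here is an assumption):

```python
import hashlib

TWO160 = 1 << 160

def derive_drbg_seed(entropy40):
    """Sketch: 40 collected bytes (320 bits) are split into two X_SEED
    values and run through the FIPS 186-2 generator to produce two
    160-bit values; the first 256 bits of that output seed the DRBG."""
    xkey = 0                                   # assumption: fresh state
    out = b""
    for i in (0, 20):
        xseed = int.from_bytes(entropy40[i:i + 20], "big")
        val = (xkey + xseed) % TWO160
        x = int.from_bytes(
            hashlib.sha1(val.to_bytes(20, "big")).digest(), "big")
        xkey = (1 + xkey + x) % TWO160         # irreversible key update
        out += x.to_bytes(20, "big")
    return out[:32]                            # first 256 of 320 bits
```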

3.1 FIPS 186-2: fips_random_inner()

This function is completely internal to Solaris and can't be used outside of the cryptographic framework.

fips_random_inner(uint32_t *key, uint32_t *x_j, uint32_t *XSEED_j)
It computes a new random value, which is stored in x_j, and updates XKEY; XSEED_j is additional input. In principle we should protect XKEY, perhaps by placing it in non-paged memory, but we always clobber XKEY with fresh entropy just before we use it, and step 3d irreversibly updates it just after we use it. The only risk is that if an attacker captured the state while the entropy generator was broken, the attacker could predict future values. There are two cases:

  1. The attacker gets root access to a live system. But there is no defense against that that we can place in here since they already have full control.
  2. The attacker gets access to a crash dump. But by then no values are being generated.

Note that XSEED_j is overwritten with sensitive stuff, and must be zeroed by the caller. We use two separate symbols (XVAL and XSEED_j) to make each step match the notation in FIPS 186-2.

All parameters (key, x_j, XSEED_j) are the size of a SHA-1 digest, 20 bytes.

The HASH function used is SHA1.
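One step of the algorithm can be sketched like this (Python; note that FIPS 186-2 specifies the SHA-1 G function operating on the raw 160-bit block, for which plain hashlib.sha1 is only a stand-in):

```python
import hashlib

TWO160 = 1 << 160   # all values are 160 bits, the SHA-1 digest size

def fips_186_2_step(xkey, xseed_j):
    """Sketch of fips_random_inner(): x_j = HASH((XKEY + XSEED_j) mod
    2^160), then step 3d irreversibly updates XKEY.  Returns
    (x_j, new_xkey) as 160-bit integers."""
    xval = (xkey + xseed_j) % TWO160
    x_j = int.from_bytes(
        hashlib.sha1(xval.to_bytes(20, "big")).digest(), "big")
    new_xkey = (1 + xkey + x_j) % TWO160       # step 3d
    return x_j, new_xkey
```

The update in step 3d is what makes the state irreversible: recovering the old XKEY from the new one would require inverting the hash.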

The implementation of this function is verified during POST by fips_rng_post() calling it with a known seed. The POST call is performed before the swrand module registers with KCF or during initialization of any of the libraries in the FIPS 140 boundary (before their symbols are available to be called by other libraries or applications).

4.0 /dev/urandom

This is a software-based generator algorithm that uses the random bits in the cache as a seed. We create one pseudo-random generator (for /dev/urandom) per possible CPU on the system, and use it, kmem-magazine-style, to avoid cache line contention.

4.1 Initialization of /dev/urandom

kcf_rnd_init() calls rnd_alloc_magazines(), which sets up the empty magazines for the pseudo-random number pool (/dev/urandom). A separate magazine per CPU is configured, up to the maximum number of possible (not just available) CPUs on the system; this is important because we can add more CPUs after initial boot.

The magazine initialization discards the first 20 bytes so that rnd_get_bytes() can use them for the comparison that ensures the next block always differs from the previous one. It then places the next 20 bytes into rm_key and the next 20 bytes after that into rm_seed. It does this for each of the max_ncpus magazines. Only after this is complete does kcf_rnd_init() return back to kcf_init(). Each of the per-CPU magazines has its own state, which includes an HMAC key, seed and previous value; each also has its own rekey timers and limits.
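That setup can be sketched as (toy Python; the dict stands in for the per-CPU magazine structure, and the field names follow the text):

```python
def init_magazines(max_ncpus, pool_read):
    """Sketch: per possible CPU, discard 20 bytes (they prime the
    previous-block value for the continuous test), then take 20 bytes
    for rm_key and 20 for rm_seed."""
    magazines = []
    for _ in range(max_ncpus):
        magazines.append({
            "rm_previous": pool_read(20),  # discarded output, kept only
                                           # for the next-block comparison
            "rm_key": pool_read(20),
            "rm_seed": pool_read(20),
        })
    return magazines
```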

The magazines are only used for the pseudo-random number pool (i.e. servicing random_get_pseudo_bytes() and /dev/urandom).

4.2 /dev/urandom generator

At a high level this uses the FIPS 186-2 algorithm using a key extracted from the random pool to generate a maximum of 1310720 output blocks before rekeying. Each CPU (this is CPU thread not socket or core) has its own magazine.

4.3 Reading from /dev/urandom

The maximum request size that will be serviced for a single read(2) system call on /dev/urandom is 133120 bytes.

Reads all come in via the kcf_rnd_get_pseudo_bytes() function.

If the requested size is considered to be large, greater than 2560 bytes, then instead of reading from the pool we tail call the generator directly by using rnd_generate_pseudo_bytes().

If the CPU's magazine has sufficient available randomness already, we use that; otherwise we call the rnd_generate_pseudo_bytes() function directly. rnd_generate_pseudo_bytes() is always called with the CPU magazine mutex already locked, and it is released when the function returns. We loop through the following until the requested number of bytes has been built up or an unrecoverable error occurs: rm_seed is reinitialized by XORing the current 64-bit high-resolution time from gethrtime() into the prior value of rm_seed; the fips_random_inner() call is then made using the current value of rm_key and this new seed.

The value returned from fips_random_inner() is then checked against our previous return value to ensure it is a different 160-bit block. If that check fails, the system panics when in FIPS 140-2 mode, or the call returns EIO if FIPS mode is not enabled. Before returning from the whole function, the local state is zeroed out and the per-magazine lock released.
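Putting the loop body and the continuous check together (toy Python; time.monotonic_ns stands in for gethrtime() and the dict for the magazine state):

```python
import hashlib
import time

TWO160 = 1 << 160

def generate_block(mag, fips_mode=False):
    """Sketch of one generation step: refresh rm_seed with the
    high-resolution time, run one FIPS 186-2 step with rm_key, and
    reject a repeated 160-bit block (panic in FIPS mode, EIO else)."""
    mag["rm_seed"] ^= time.monotonic_ns()          # stand-in for gethrtime()
    val = (mag["rm_key"] + mag["rm_seed"]) % TWO160
    block = int.from_bytes(
        hashlib.sha1(val.to_bytes(20, "big")).digest(), "big")
    if block == mag["rm_previous"]:
        raise OSError("panic" if fips_mode else "EIO: repeated block")
    mag["rm_previous"] = block
    mag["rm_key"] = (1 + mag["rm_key"] + block) % TWO160
    return block.to_bytes(20, "big")
```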

5.0 Randomness for key generation

For asymmetric key generation inside the kernel a special random_get_nzero_bytes() API is provided. It differs from random_get_bytes() in two ways. First, it calls the random_get_bytes_fips140() function, which only returns once all FIPS 140-2 initialization has been completed; the random_get_bytes() function needs to be available slightly earlier because some very early kernel functions need it (particularly setup of the VM system, and ZFS if it needs to do any writes as part of mounting the root filesystem). Secondly, it ensures that no bytes in the output have the value 0: those are replaced with freshly extracted additional random bytes, continuing until the entire requested length is made up of non-zero bytes.
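The non-zero filtering can be sketched as (Python; os.urandom stands in here for random_get_bytes_fips140()):

```python
import os

def get_nzero_bytes(nbytes, source=os.urandom):
    """Sketch: strip zero bytes from the generator output and keep
    drawing fresh bytes until the whole buffer is non-zero."""
    out = bytearray()
    while len(out) < nbytes:
        fresh = source(nbytes - len(out))
        out += bytes(b for b in fresh if b != 0)
    return bytes(out)
```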

A corresponding random_get_nzero_pseudo_bytes() is also available for cases where we don't want 0 bytes in other random sequences, such as session keys, nonces and cookies.

The above two functions ensure that even though most of the random pool is available early in boot we can't use it for key generation until the full FIPS 140-2 POST and integrity check has completed, e.g. on the swrand provider.

6.0 Userspace random numbers

Applications that need random numbers may read directly from /dev/random and /dev/urandom, or may use a function implementing the FIPS 186-2 RNG requirements.

Starting with Solaris 11.3 the getrandom(2) system call is available for application use. For applications or libraries that build their own randomness subsystem but want entropy input they should call getentropy(2) instead of getrandom(2).

The cryptographic framework libraries in userspace provide the following internal functions:

  • pkcs11_get_random()
  • pkcs11_get_urandom()
  • pkcs11_get_nzero_random()
  • pkcs11_get_nzero_urandom()

The above functions are available from the libcryptoutil.so library but are Private to Solaris and MUST not be used by any 3rd party code - see the attributes(5) man page for the Solaris interface taxonomy. Similar to the kernel space there are pkcs11_get_nzero_random() and pkcs11_get_nzero_urandom() variants that ensure none of the bytes are zero. The pkcs11_ prefix is because these are Private functions mostly used for the implementation of the PKCS#11 API.

I hope this is useful and/or interesting insight into how Solaris generates randomness.

Update 2013-09-12: I was asked how this applies to Illumos. To the best of my knowledge [I have not read the Illumos source; the following is based on what I remember of the old OpenSolaris source] most of what I said above should apply to Illumos as well. The main exceptions are that fips_random_inner(), the POST and some of the continuous checks don't exist, and neither does the Intel RDRAND support. The source for the n2rng driver, random(7D), kcf and swrand was available as part of OpenSolaris. Note that Illumos may have changed some of this, so please verify for yourself.

Tuesday Jun 11, 2013

Monitoring per Zone Filesystem activity

Solaris 11 added a new fsstat(1M) monitoring command that provides the ability to view filesystem activity at the VFS layer (ie filesystem independent).  This command was available in Solaris 11 Express and the OpenSolaris releases as well.

Here is a simple example of looking at 5 one second intervals of writes to all ZFS filesystems

$ fsstat zfs 1 5
 new  name   name  attr  attr lookup rddir  read read  write write
 file remov  chng   get   set    ops   ops   ops bytes   ops bytes
3.79M 3.39M 1.77M 1003M 1.24M  6.16G 49.9M  425M 2.01T  160M  467G zfs
    0     0     0    55     0    416     6     0     0 42.7K 21.3M zfs
    0     0     0     4     0     18     0     0     0 44.0K 22.0M zfs
    0     0     0     5     0     28     0     0     0 44.0K 22.0M zfs
    0     0     0     4     0     18     0     0     0 40.3K 20.1M zfs
    0     0     0   260     0  1.08K     0     0     0 36.7K 18.4M zfs

In Solaris 11.1 support was added for per-zone and aggregated information, so now we can very quickly determine which zone is contributing to the operations, for example:

$ fsstat -Z zfs 1 5
 new  name   name  attr  attr lookup rddir  read read  write write
 file remov  chng   get   set    ops   ops   ops bytes   ops bytes
3.79M 3.39M 1.77M  998M 1.24M  6.13G 49.4M  424M 2.01T  159M  467G zfs:global
   62    55    39 4.85M   250  33.2M  575K  157K  138M 8.66K 11.6M zfs:s10u9
  114    91    33  115K   362   429K 3.80K  133K  180M  116K 87.2M zfs:ltz
    0     0     0 1.06K     0  3.44K    14    71 12.9K     6 1.43K zfs:global
    0     0     0     2     0      8     0     1    1K     0     0 zfs:s10u9
    0     0     0     0     0      0     0     0     0 41.7K 20.9M zfs:ltz
    0     0     0     4     0     18     0     0     0     0     0 zfs:global
    0     0     0    51     0    398     6     0     0     0     0 zfs:s10u9
    0     0     0     0     0      0     0     0     0 42.6K 21.3M zfs:ltz
    0     0     0     4     0     18     0     0     0     0     0 zfs:global
    0     0     0     0     0      0     0     0     0     0     0 zfs:s10u9
    0     0     0     0     0      0     0     0     0 44.0K 22.0M zfs:ltz
    0     0     0     5     0     28     0     0     0     0     0 zfs:global
    0     0     0     0     0      0     0     0     0     0     0 zfs:s10u9
    0     0     0     0     0      0     0     0     0 43.9K 22.0M zfs:ltz
    0     0     0     4     0     18     0     0     0     0     0 zfs:global
    0     0     0     0     0      0     0     0     0     0     0 zfs:s10u9
    0     0     0     0     0      0     0     0     0 38.9K 19.5M zfs:ltz
 

Thursday Apr 04, 2013

mdb ::if

The Solaris mdb(1) live and post-mortem debugger gained a really powerful new dcmd called ::if in the Solaris 11.1 release.

As a very quick example of how powerful it can be, here is a short one-liner to find all the processes that are running with their real uid differing from their effective uid:

> ::ptree | ::if proc_t p_cred->cr_ruid <> p_cred->cr_uid | ::print proc_t p_user.u_comm

Or a similar one to find out all the priv aware processes, this time showing some output:

> ::ptree | ::if proc_t p_cred->cr_priv.crpriv_flags & 0x0002 | ::print proc_t p_user.u_comm
p_user.u_comm = [ "su" ]
p_user.u_comm = [ "nfs4cbd" ]
p_user.u_comm = [ "lockd" ]
p_user.u_comm = [ "nfsmapid" ]
p_user.u_comm = [ "statd" ]
p_user.u_comm = [ "nfs4cbd" ]
...

The new ::if is very powerful and can do much more advanced things, like substring comparison, than my simple examples, but I chose examples that are useful to me and relevant to security.

Wednesday Feb 20, 2013

Generating a crypt_sha256 hash from the CLI

When doing a completely hands-off Solaris installation the System Configuration profile needs to contain the hash for the root password, and optionally for the initial non-root user.

Unfortunately Solaris doesn't currently provide a simple to use command for generating these hashes, but with a very simple bit of Python you can easily create them:

#!/usr/bin/python

import crypt, getpass, os, binascii

if __name__ == "__main__":
    cleartext = getpass.getpass()
    # '$5$' selects the crypt_sha256 algorithm; the salt is 8 random
    # bytes, base64 encoded (rstrip removes the trailing newline)
    salt = '$5$' + binascii.b2a_base64(os.urandom(8)).rstrip() + '$'

    print crypt.crypt(cleartext, salt)


Tuesday Feb 19, 2013

Linux YAMA Security equivalents in Solaris

The Linux YAMA Loadable Security Module (LSM) provides a small number of protections over and above standard DAC (Discretionary Access Controls).  These can be roughly mapped over to Solaris as follows:

YAMA_HARDLINKS: 

This protects against creation of hardlinks to files that a user does not have access to.  For some strange reason POSIX still requires this behaviour.

Closest Solaris equivalent is removing the file_link_any basic privilege from a process/service/user, the description of file_link_any is:

    Allows a process to create hardlinks to files owned by a uid different from the process' effective uid.

YAMA_PTRACE:

This YAMA protection is designed to prevent processes running as the same uid from attaching to each other and tracing each other using PTRACE.

For mapping this to Solaris I'd recommend removal of two of the proc basic privileges; this will actually exceed the protection that YAMA_PTRACE gives:

proc_session
    Allows a process to send signals or trace processes outside its session.
proc_info
    Allows a process to examine the status of processes other than those it can send signals to.  Processes which cannot be examined cannot be seen in /proc and appear not to exist.

YAMA_SYMLINKS:

The description of the Linux YAMA LSM that I looked at has one more protection, YAMA_SYMLINKS; there is no Solaris equivalent to this one that I can find.  It is intended to protect against race conditions on symlinks in world-writable directories (eg /tmp).  This is a nice protection, but we don't have an equivalent of it in Solaris at this time, though I think it could be implemented as another basic privilege.

Reminder on Solaris Basic Privileges

As a reminder, basic privileges in Solaris are those which processes normally have because they were not normally considered to be security violations in the UNIX process model.  A basic privilege can be removed from an SMF service in its method_credential section, or from a user's login session (usermod -K defaultpriv=basic,!file_link_any <username>).  So there is no need to patch/rebuild/update the Solaris kernel to be able to take advantage of these.  In fact you can even change a running process using ppriv(1).

Monday Feb 11, 2013

Serial Console with VirtualBox on Solaris host

First make sure you have nc(1) available; it is in the pkg:/network/netcat package.

Then configure COM1 serial port in the VM settings as a pipe.  Tell VirtualBox the name you want for the pipe and get it to create it.

You can also set up the serial port from the CLI using the VBoxManage command, here my VM is called "Solaris 11.1 Text Only".

$ VBoxManage modifyvm "Solaris 11.1 Text Only" --uart1 0x3F8 4 --uartmode1 server /tmp/solaris-11.1-console.pipe

 

Start up the VM and in a terminal window run nc and the ttya output of the VM will appear in the terminal window.

$ nc -U /tmp/solaris-11.1-console.pipe
SunOS Release 5.11 Version 11.1 64-bit
Copyright (c) 1983, 2012, Oracle and/or its affiliates. All rights reserved.


Thursday Jan 24, 2013

Compliance reporting with SCAP

In Solaris 11.1 we added the early stages of our (security) Compliance framework.  We have (like some other OS vendors) selected to use the SCAP (Security Content Automation Protocol) standard from NIST.  There are a number of different parts to SCAP but for Compliance reporting one of the important parts is the OVAL (Open Vulnerability Assessment Language) standard.  This is what allows us to write a checkable security policy and verify it against running systems.

The Solaris 11.1 repository includes the OpenSCAP tool that allows us to generate reports written in the OVAL language (as well as other things but I'm only focusing on OVAL for now).

OVAL is expressed in XML with a number of generic and OS/application specific schema.  Over time we expect to deliver various sample security policies with Solaris to help customers with Compliance reporting in various industries (eg, PCI-DSS, DISA-STIG, HIPAA).

The XML in the OVAL language is passed to the OpenSCAP tool for evaluation; it produces either a simple text report of which checks passed and which failed, or an XML results file and an optional HTML rendered report.

Let's look at a simple example of a policy written in OVAL.  This contains just one check: that we have configured the FTP server on Solaris to display a banner.  We do this in Solaris 11 by updating /etc/proftpd.conf to add the "DisplayConnect /etc/issue" line - which is not there by default.   So on a default Solaris 11.1 system we should get a "fail" from this policy.

The OVAL for this check was generated by a tool called "Enhanced SCAP Editor (eSCAPe)" which is not included in Solaris.  It could well have been hand edited in your text editor of choice. In a later blog posting I'll attempt to explain more of the OVAL language and give some more examples, including some Solaris specific ones but for now here is the raw XML:

<?xml version="1.0" encoding="UTF-8"?>
<oval_definitions xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xmlns:oval="http://oval.mitre.org/XMLSchema/oval-common-5" 
xmlns:oval-def="http://oval.mitre.org/XMLSchema/oval-definitions-5" 
xmlns:independent-def="http://oval.mitre.org/XMLSchema/oval-definitions-5#independent" 
xsi:schemaLocation="http://oval.mitre.org/XMLSchema/oval-definitions-5 
   oval-definitions-schema.xsd http://oval.mitre.org/XMLSchema/oval-definitions-5#independent 
   independent-definitions-schema.xsd http://oval.mitre.org/XMLSchema/oval-common-5 oval-common-schema.xsd">

  <generator>
    <oval:product_name>Enhanced SCAP Editor</oval:product_name>
    <oval:product_version>0.0.11</oval:product_version>
    <oval:schema_version>5.8</oval:schema_version>
    <oval:timestamp>2012-10-11T10:33:25</oval:timestamp>
  </generator>
  <!--generated.oval.base.identifier=com.oracle.solaris11-->
  <definitions>
    <definition id="oval:com.oracle.solaris11:def:840" version="1" class="compliance">
      <metadata>
        <title>Enable a Warning Banner for the FTP Service</title>
        <affected family="unix">
          <platform>Oracle Solaris 11</platform>
        </affected>
        <description>/etc/proftpd.conf contains "DisplayConnect /etc/issue"</description>
      </metadata>
      <criteria operator="AND" negate="false" comment="Single test">
        <criterion comment="/etc/proftpd.conf contains &quot;DisplayConnect /etc/issue&quot;" 
          test_ref="oval:com.oracle.solaris11:tst:8400" negate="false"/>
      </criteria>
    </definition>
  </definitions>
  <tests>
    <textfilecontent54_test 
        xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#independent" 
        id="oval:com.oracle.solaris11:tst:8400" version="1" check="all" 
        comment="/etc/proftpd.conf contains &quot;DisplayConnect /etc/issue&quot;" 
        check_existence="all_exist">
      <object object_ref="oval:com.oracle.solaris11:obj:8400"/>
    </textfilecontent54_test>
  </tests>
  <objects>
    <textfilecontent54_object 
        xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#independent" 
        id="oval:com.oracle.solaris11:obj:8400" version="1" 
        comment="/etc/proftpd.conf contains &quot;DisplayConnect /etc/issue&quot;">
      <path datatype="string" operation="equals">/etc</path>
      <filename datatype="string" operation="equals">proftpd.conf</filename>
      <pattern datatype="string" 
         operation="pattern match">^DisplayConnect\s/etc/issue\s$</pattern>
      <instance datatype="int" operation="greater than or equal">1</instance>
    </textfilecontent54_object>
  </objects>
</oval_definitions>
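
Outside of OpenSCAP, the essence of that textfilecontent54 object can be approximated with grep(1).  This is a rough sketch only: the temp file stands in for /etc/proftpd.conf, and the character class approximates the OVAL \s pattern.

```shell
# Stand-in for /etc/proftpd.conf so the sketch is self-contained.
conf=$(mktemp)
printf 'ServerName "ProFTPD"\nDisplayConnect /etc/issue\n' > "$conf"

# Approximate the OVAL pattern '^DisplayConnect\s/etc/issue\s$' with grep -E.
if grep -Eq '^DisplayConnect[[:space:]]+/etc/issue' "$conf"; then
    result=pass
else
    result=fail
fi
echo "$result"
rm -f "$conf"
```

The real OVAL evaluation does more than this (existence checks, instance counting, result aggregation), but the pattern match at its core is the same idea.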

We can evaluate this policy on a given host by using OpenSCAP like this:

$ oscap oval eval ftp-banner.xml 
Definition oval:com.oracle.solaris11:def:840: false
Evaluation done.

As you can see we got the expected failure of the test, but that output isn't very useful; let's instead generate some HTML output:

$ oscap oval eval --results results.xml --report report.html ftp-banner.xml
Definition oval:com.oracle.solaris11:def:840: false
Evaluation done.
OVAL Results are exported correctly.

Now we have a report.html file which looks a bit like this:

OVAL Results Generator Information
  Schema Version: 5.8   Product: cpe:/a:open-scap:oscap   Date/Time: 2013-01-24 14:18:55

OVAL Definition Generator Information
  Schema Version: 5.8   Product: Enhanced SCAP Editor 0.0.11   Date/Time: 2012-10-11 10:33:25

System Information
  Host Name: braveheart
  Operating System: SunOS
  Operating System Version: 11.1
  Architecture: i86pc
  Interface: net0, IP Address 192.168.1.1, MAC Address aa:bb:cc:dd:ee:ff

OVAL System Characteristics Generator Information
  Schema Version: 5.8   Product: cpe:/a:open-scap:oscap   Date/Time: 2013-01-24 14:18:55

OVAL Definition Results (each result is one of: True, False, Error, Unknown, Not Applicable, Not Evaluated)
  OVAL ID                            Result  Class       Title
  oval:com.oracle.solaris11:def:841  true    compliance  Enable a Warning Banner for the SSH Service
  oval:com.oracle.solaris11:def:840  false   compliance  Enable a Warning Banner for the FTP Service

As you probably noticed right away, the report doesn't match the OVAL I gave above because the report is actually from a very slightly larger OVAL file which checks the banner exists for both SSH and FTP.  I did this partly to cut down on the amount of raw XML above, but also so the report would show both a success and a failure case.


Tuesday Jan 08, 2013

Solaris 11.1 FIPS 140-2 Evaluation status

Solaris 11.1 has just recently been granted CAVP certificates by NIST; this is the first formal step in getting a FIPS 140-2 certification.  It will be some months yet before we expect to see the results of the CMVP (FIPS 140-2) evaluation.  There are multiple certificates for each algorithm to cover the cases of SPARC and x86, with and without hardware acceleration from the CPU.

References:
AES 2310, 2309, 2308
3DES 1457, 1456, 1455
DSA 727, 726
RSA 1193, 1192, 1191
ECDSA 375, 375, 374
SHS (SHA1, 224, 256, 384, 512) 1994, 1993, 1992
RNG 1153, 1152, 1151, 1150
HMAC 1424, 1423, 1422

Monday Dec 03, 2012

Using Solaris pkg to list all setuid or setgid programs

$ pkg contents -a mode=4??? -a mode=2??? -t file -o pkg.name,path,mode

We can also add a package name on the end to restrict it to just that single package, e.g.:

$ pkg contents -a mode=4??? -a mode=2??? -t file -o pkg.name,path,mode core-os

PKG.NAME       PATH                   MODE
system/core-os usr/bin/amd64/newtask  4555
system/core-os usr/bin/amd64/uptime   4555
system/core-os usr/bin/at             4755
system/core-os usr/bin/atq            4755
system/core-os usr/bin/atrm           4755
system/core-os usr/bin/crontab        4555
system/core-os usr/bin/mail           2511
system/core-os usr/bin/mailx          2511
system/core-os usr/bin/newgrp         4755
system/core-os usr/bin/pfedit         4755
system/core-os usr/bin/su             4555
system/core-os usr/bin/tip            4511
system/core-os usr/bin/write          2555
system/core-os usr/lib/utmp_update    4555
system/core-os usr/sbin/amd64/prtconf 2555
system/core-os usr/sbin/amd64/swap    2555
system/core-os usr/sbin/amd64/sysdef  2555
system/core-os usr/sbin/amd64/whodo   4555
system/core-os usr/sbin/prtdiag       2755
system/core-os usr/sbin/quota         4555
system/core-os usr/sbin/wall          2555
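
pkg(1) answers the question from the package metadata; the same class of audit on a live file system can be sketched with find(1).  This runs over a scratch directory rather than / purely to keep the example fast and self-contained.

```shell
# Make a scratch directory with one setuid file and one ordinary file.
dir=$(mktemp -d)
touch "$dir/suid_prog" "$dir/plain_prog"
chmod 4755 "$dir/suid_prog"    # setuid bit set, like usr/bin/su above
chmod 0755 "$dir/plain_prog"   # no special bits

# -perm -4000 matches setuid files, -perm -2000 matches setgid files.
found=$(find "$dir" -type f \( -perm -4000 -o -perm -2000 \))
echo "$found"
rm -rf "$dir"
```

Only suid_prog is printed.  Note that find reports what is on disk right now, while pkg contents reports what the packaging system says should be there; comparing the two is a quick integrity check.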


Tuesday Oct 30, 2012

New ZFS Encryption features in Solaris 11.1

Solaris 11.1 brings a few small but significant improvements to ZFS dataset encryption.  There is a new readonly property 'keychangedate' that shows the date and time of the last wrapping key change (basically the last time 'zfs key -c' was run on the dataset); this is similar to the 'rekeydate' property that shows the last time we added a new data encryption key.

$ zfs get creation,keychangedate,rekeydate rpool/export/home/bob
NAME                   PROPERTY       VALUE                  SOURCE
rpool/export/home/bob  creation       Mon Mar 21 11:05 2011  -
rpool/export/home/bob  keychangedate  Fri Oct 26 11:50 2012  local
rpool/export/home/bob  rekeydate      Tue Oct 30  9:53 2012  local

The above example shows that we have both changed the wrapping key and added new data encryption keys since the filesystem was initially created.  If we haven't changed a wrapping key then it will be the same as the creation date.  It should be obvious, but for filesystems that were created prior to Solaris 11.1 we don't have this data, so it will be displayed as '-' instead.

Another change that I made was to relax the restriction that the size of the wrapping key needed to match the size of the data encryption key (i.e. the size given in the encryption property).  In Solaris 11 Express and Solaris 11, if you set encryption=aes-256-ccm we required that the wrapping key be 256 bits in length.  This restriction was unnecessary and made it impossible to select encryption property values with key lengths 128 and 192 when the wrapping key was stored in the Oracle Key Manager, because the Oracle Key Manager currently stores only AES 256 bit keys.  With Solaris 11.1 this restriction has been removed.

There is still one case where the wrapping key size and data encryption key size will always match, and that is where the keysource property sets the format to 'passphrase'.  Since this key is generated internally by libzfs, and to preserve compatibility on upgrade from older releases, the code will always generate a wrapping key (using PKCS#5 PBKDF2 as before) that matches the key length of the encryption property.
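
To illustrate what passphrase-derived key material looks like, here is a rough openssl(1) sketch of PBKDF2 producing a 256-bit key, as you would want for an aes-256-ccm dataset.  The salt, iteration count, and passphrase are arbitrary demo values, not what libzfs actually uses.

```shell
# Derive a 256-bit key from a passphrase with PKCS#5 PBKDF2.
# Salt and iteration count here are arbitrary demo values.
openssl enc -aes-256-cbc -pbkdf2 -iter 1000 -S 0102030405060708 \
    -pass pass:zfscrypto -P
# -P prints the derived salt=, key= (64 hex digits = 256 bits) and iv = lines
# without encrypting anything.
```

The point is only that PBKDF2 turns a passphrase of any length into a key of exactly the requested size, which is why the passphrase format can always match the encryption property's key length.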

The pam_zfs_key module has been updated so that it allows you to specify encryption=off.

There were also some bugs fixed, including not attempting to load keys for datasets that are delegated to zones, and some other fixes to error paths to ensure that we can support Zones on Shared Storage where all the datasets in the ZFS pool are encrypted, which I discussed in my previous blog entry.

If there are features you would like to see for ZFS encryption please let me know (direct email or comments on this blog are fine, or if you have a support contract having your support rep log an enhancement request).



Monday Oct 29, 2012

Solaris 11.1: Encrypted Immutable Zones on (ZFS) Shared Storage

Solaris 11 brought both ZFS encryption and the Immutable Zones feature, and I've talked about the combination in the past.  Solaris 11.1 adds a fully supported method of storing zones in their own ZFS pool on shared storage, so let's update things a little and put all three parts together.

When using iSCSI (or another supported shared storage target) for a zone we can either let the Zones framework set up the ZFS pool or we can do it manually beforehand and tell the Zones framework to use the one we made earlier.  To enable encryption we have to take the second path, so that we can set up the pool with encryption before we start to install the zones on it.

We start by configuring the zone and specifying a rootzpool resource:

# zonecfg -z eizoss
Use 'create' to begin configuring a new zone.
zonecfg:eizoss> create
create: Using system default template 'SYSdefault'
zonecfg:eizoss> set zonepath=/zones/eizoss
zonecfg:eizoss> set file-mac-profile=fixed-configuration
zonecfg:eizoss> add rootzpool
zonecfg:eizoss:rootzpool> add storage \
  iscsi://my7120.example.com/luname.naa.600144f09acaacd20000508e64a70001
zonecfg:eizoss:rootzpool> end
zonecfg:eizoss> verify
zonecfg:eizoss> commit
zonecfg:eizoss> 

Now let's create the pool and specify encryption:

# suriadm map \
   iscsi://my7120.example.com/luname.naa.600144f09acaacd20000508e64a70001
PROPERTY	VALUE
mapped-dev	/dev/dsk/c10t600144F09ACAACD20000508E64A70001d0
# echo "zfscrypto" > /zones/p
# zpool create -O encryption=on -O keysource=passphrase,file:///zones/p eizoss \
   /dev/dsk/c10t600144F09ACAACD20000508E64A70001d0
# zpool export eizoss

Note that the keysource shown above is just for this example; realistically you should probably use an Oracle Key Manager or some other better key storage, but that isn't the purpose of this example.  Note however that the key location does need to be one of file://, https://, or pkcs11:, and not prompt.  Also note that we exported the newly created pool.  The name we used here doesn't actually matter because it will get set properly on import anyway. So let's go ahead and do our install:

# zoneadm -z eizoss install -x force-zpool-import
Configured zone storage resource(s) from:
    iscsi://my7120.example.com/luname.naa.600144f09acaacd20000508e64a70001
Imported zone zpool: eizoss_rpool
Progress being logged to /var/log/zones/zoneadm.20121029T115231Z.eizoss.install
    Image: Preparing at /zones/eizoss/root.

 AI Manifest: /tmp/manifest.xml.ujaq54
  SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml
    Zonename: eizoss
Installation: Starting ...

              Creating IPS image
Startup linked: 1/1 done
              Installing packages from:
                  solaris
                      origin:  http://pkg.oracle.com/solaris/release/
              Please review the licenses for the following packages post-install:
                consolidation/osnet/osnet-incorporation  (automatically accepted,
                                                          not displayed)
              Package licenses may be viewed using the command:
                pkg info --license <pkg_fmri>
DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
Completed                            187/187   33575/33575  227.0/227.0  384k/s

PHASE                                          ITEMS
Installing new actions                   47449/47449
Updating package state database                 Done 
Updating image state                            Done 
Creating fast lookup database                   Done 
Installation: Succeeded

         Note: Man pages can be obtained by installing pkg:/system/manual

 done.

        Done: Installation completed in 929.606 seconds.


  Next Steps: Boot the zone, then log into the zone console (zlogin -C)

              to complete the configuration process.

Log saved in non-global zone as /zones/eizoss/root/var/log/zones/zoneadm.20121029T115231Z.eizoss.install

That was really all we had to do; when the install is done, boot it up as normal.

The zone administrator has no direct access to the ZFS wrapping keys used for the encrypted pool the zone is stored on.  Due to how inheritance works in ZFS, he can still create new encrypted datasets that use those wrapping keys (without them ever being inside a process in the zone), or he can create encrypted datasets inside the zone that use keys of his own choosing; the output below shows the two cases:

rpool is inheriting the key material from the global zone (note that we can see the value of the keysource property, but we don't use it inside the zone, nor does that path need to be, or is it, accessible inside the zone), whereas rpool/export/home/bob has set keysource locally.

# zfs get encryption,keysource rpool rpool/export/home/bob
NAME                   PROPERTY    VALUE                       SOURCE
rpool                  encryption  on                          inherited from $globalzone
rpool                  keysource   passphrase,file:///zones/p  inherited from $globalzone
rpool/export/home/bob  encryption  on                          local
rpool/export/home/bob  keysource   passphrase,prompt           local

Thursday Sep 13, 2012

To encryption=on or encryption=off a simple ZFS Crypto demo

I've just been asked twice this week how I would demonstrate that ZFS encryption really is encrypting the data on disk.  It needs to be really simple; the target isn't forensics or cryptanalysis, just a quick demo to show the before and after.

I usually do this small demo using a pool based on files so I can run strings(1) on the "disks" that make up the pool. The demo will work with real disks too, but it will take a lot longer (how much longer depends on the size of your disks).  The file hamlet.txt is this one from gutenberg.org.

# mkfile 64m /tmp/pool1_file
# zpool create clear_pool /tmp/pool1_file
# cp hamlet.txt /clear_pool
# grep -i hamlet /clear_pool/hamlet.txt | wc -l

Note the number of times hamlet appears

# zpool export clear_pool
# strings /tmp/pool1_file | grep -i hamlet | wc -l

Note the number of times hamlet appears on disk - it is two more, because the file is called hamlet.txt and file names are in the clear as well, and we keep at least two copies of metadata.

Now let's encrypt the file systems in the pool.
Note: you MUST use a new pool file; don't reuse the one from above.

# mkfile 64m /tmp/pool2_file
# zpool create -O encryption=on enc_pool /tmp/pool2_file
Enter passphrase for 'enc_pool': 
Enter again: 
# cp hamlet.txt /enc_pool
# grep -i hamlet /enc_pool/hamlet.txt | wc -l

Note the number of times hamlet appears is the same as before

# zpool export enc_pool
# strings /tmp/pool2_file | grep -i hamlet | wc -l

Note the word hamlet doesn't appear at all!

As I said above, this isn't intended as "proof" that ZFS does encryption properly, just a quick demo.
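
The same before/after idea can be shown without ZFS at all, by encrypting a single file with openssl(1).  Again this is just a demo, not proof; the passphrase and file contents are made up for the example.

```shell
# A file containing a recognisable word.
d=$(mktemp -d)
printf 'HAMLET, Prince of Denmark\n' > "$d/clear.bin"

# Encrypt it with a demo passphrase (-pbkdf2 for the key derivation).
openssl enc -aes-256-cbc -pbkdf2 -pass pass:zfscrypto \
    -in "$d/clear.bin" -out "$d/enc.bin"

# Count case-insensitive matches in each file (-a treats binary as text).
c_clear=$(grep -aci hamlet "$d/clear.bin")
c_enc=$(grep -aci hamlet "$d/enc.bin")
echo "clear: $c_clear  encrypted: $c_enc"
```

The word is visible in the clear file and (barring an astronomically unlikely ciphertext collision) absent from the encrypted one, which is exactly what the strings(1) runs against the pool files show.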

Wednesday Jul 04, 2012

Delegation of Solaris Zone Administration

In Solaris 11 'Zone Delegation' is a built in feature. The Zones system now uses fine grained RBAC authorisations to allow delegation of management of distinct zones, rather than all zones, which is what the 'Zone Management' RBAC profile did in Solaris 10.

The data for this can be stored with the Zone or you could also create RBAC profiles (that can even be stored in NIS or LDAP) for granting access to specific lists of Zones to administrators.

For example, let's say we have zones named zoneA through zoneF and we have three admins: alice, bob, and carl.  We want to grant a subset of the zone management to each of them.

We could do that either by adding the admin resource to the appropriate zones via zonecfg(1M), or we could do something like this with RBAC data directly.

First let's look at an example of storing the data with the zone.

# zonecfg -z zoneA
zonecfg:zoneA> add admin
zonecfg:zoneA> set user=alice
zonecfg:zoneA> set auths=manage
zonecfg:zoneA> end
zonecfg:zoneA> commit
zonecfg:zoneA> exit

Now lets look at the alternate method of storing this directly in the RBAC database, but we will show all our admins and zones for this example:

# usermod -P +'Zone Management' -A +solaris.zone.manage/zoneA alice

# usermod -A +solaris.zone.login/zoneB alice


# usermod -P +'Zone Management' -A +solaris.zone.manage/zoneB bob
# usermod -A +solaris.zone.manage/zoneC bob


# usermod -P +'Zone Management' -A +solaris.zone.manage/zoneC carl
# usermod -A +solaris.zone.manage/zoneD carl
# usermod -A +solaris.zone.manage/zoneE carl
# usermod -A +solaris.zone.manage/zoneF carl

In the above, alice can only manage zoneA, bob can manage zoneB and zoneC, and carl can manage zoneC through zoneF.  The user alice can also log in on the console to zoneB, but she can't do the operations that require the solaris.zone.manage authorisation on it.
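
A hypothetical sketch of how those per-zone authorization strings compose: the framework checks whether an admin's authorisation list contains "solaris.zone.manage/<zonename>".  This is modeled below as a small shell function, purely illustrative, not the actual RBAC implementation.

```shell
# bob's authorisations as granted above (illustrative list).
auths_bob="solaris.zone.manage/zoneB solaris.zone.manage/zoneC"

# can_manage AUTHLIST ZONE: succeed if the list grants manage on ZONE.
can_manage() {
    case " $1 " in
        *" solaris.zone.manage/$2 "*) return 0 ;;
        *) return 1 ;;
    esac
}

can_manage "$auths_bob" zoneB && echo "bob may manage zoneB"
can_manage "$auths_bob" zoneA || echo "bob may not manage zoneA"
```

The zone name after the / is what makes the delegation fine grained; granting the bare solaris.zone.manage authorisation (as the Solaris 10 style profile did) covers every zone.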

Or if you have a large number of zones and/or admins, or you just want to provide a layer of abstraction, you can collect the authorisation lists into an RBAC profile and grant that to the admins.  For example, let's create RBAC profiles for the things that alice and carl can do.

# profiles -p 'Zone Group 1'
profiles:Zone Group 1> set desc="Zone Group 1"
profiles:Zone Group 1> add profile="Zone Management"
profiles:Zone Group 1> add auths=solaris.zone.manage/zoneA
profiles:Zone Group 1> add auths=solaris.zone.login/zoneB
profiles:Zone Group 1> commit
profiles:Zone Group 1> exit
# profiles -p 'Zone Group 3'
profiles:Zone Group 3> set desc="Zone Group 3"
profiles:Zone Group 3> add profile="Zone Management"
profiles:Zone Group 3> add auths=solaris.zone.manage/zoneD
profiles:Zone Group 3> add auths=solaris.zone.manage/zoneE
profiles:Zone Group 3> add auths=solaris.zone.manage/zoneF
profiles:Zone Group 3> commit
profiles:Zone Group 3> exit


Now instead of granting carl and alice the 'Zone Management' profile and the authorisations directly, we can just give them the appropriate profile.

# usermod -P +'Zone Group 3' carl

# usermod -P +'Zone Group 1' alice


If we wanted to store the profile data and the profiles granted to the users in LDAP, we would just add '-S ldap' to the profiles and usermod commands.

For a documentation overview see the description of the "admin" resource in zonecfg(1M), profiles(1) and usermod(1M).

Tuesday May 01, 2012

Podcast: Immutable Zones in Oracle Solaris 11

In this episode of the "Oracle Solaris: In a Class By Itself" podcast series, the focus is a bit more technical. I was interviewed by host Charlie Boyle, Senior Director of Solaris Product Marketing. We talked about a new feature in Oracle Solaris 11: immutable zones. Those are read-only root zones for highly secure deployment scenarios.

See also my previous blog post on Encrypted Immutable Zones.

Wednesday Feb 29, 2012

Solaris 11 has the security solution Linus wants for Desktop Linux

Recently Linus Torvalds was venting (his words!) about the frustrating requirement to keep giving his root password for common desktop tasks such as connecting to a wifi network or configuring printers.

Well I'm very pleased to say that the Solaris 11 desktop doesn't have this problem thanks to our RBAC system and how it is used including how it is tightly integrated into the desktop.

One of the new RBAC features in Solaris 11 is location context RBAC profiles; by default we grant the user on the system console (i.e. the one at the physical keyboard/screen of the laptop or workstation) the "Console User" profile, which on a default install has the necessary authorisations and execution profiles to do things like joining a wireless network, changing CPU power management, and using removable media.   The user created at initial install time also has the much more powerful "System Administrator" profile granted to them, so they can do even more without being required to give a password for root (they also have access to the root role and the ability to use sudo).

Authorisations in Solaris RBAC (which date back in mainstream Solaris to Solaris 8, and even further, 17+ years, in Trusted Solaris) are checked by privileged programs, and the whole point is so you don't have to reauthenticate.  SMF is a very heavy user of RBAC authorisations.  In the case of things like joining a wireless network, it is privileged daemons that check the authorisations of the clients connecting to them (usually over a door).

In addition to that, GNOME in Solaris 11 has been explicitly integrated with Solaris RBAC: any GNOME menu entry that needs to run with elevated privilege will be executed via Solaris RBAC mechanisms.  The panel works out the least intrusive way to get the program running for you.  For example, if I select "Wireshark" from the GNOME panel menu it just starts; I don't get prompted for any root password, but it starts with the necessary privileges, because GNOME on Solaris 11 knows that I have the "Network Management" RBAC profile, which allows running /usr/sbin/wireshark with the net_rawaccess privilege.   If I didn't have "Network Management" directly but had an RBAC role that did, then GNOME would use gksu to assume the role (which might be root), in which case I would have been prompted for the role password.  If you are using roleauth=user that password is yours, and if you are using pam_tty_tickets you won't keep getting prompted.

GNOME can even go further and not present menu entries at all to users who haven't been granted any RBAC profile that allows running those programs; this is useful on a large multi-user system like a Sun Ray deployment.

If you want to do it the "old way" and use the CLI and/or give a root password for every "mundane" little thing, you can still do that too if you really want to.

So maybe Linus could try Solaris 11 desktop ;-)

About

Darren Moffat-Oracle
