Tuesday Mar 18, 2014

NIAP Recommends no further Common Criteria Evaluation of Operating Systems & DBMS

NIAP has posted its public position statements on future Common Criteria evaluation profiles for General Purpose Operating Systems (eg Solaris, Linux, Windows, MacOS, AIX, ...) and DBMS (eg Oracle Database, MySQL, DB2, ...).

They are recommending against further development of Common Criteria profiles for, and against evaluation of, GPOS and DBMS products, citing complexity and cost.

Note that this is neither a personal statement nor an Oracle statement on the applicability of CC evaluation to GPOS or DBMS systems.

For the status of Oracle products in evaluation please visit this page.

Monday Dec 16, 2013

Kernel Cryptographic Framework FIPS 140-2 Validation

After many years of preparation work by the Solaris Crypto Team, and almost a year (short by about two weeks) after we submitted our documentation to NIST for validation, the Kernel Crypto Framework has completed its first FIPS 140-2 validation.  This applies to Solaris 11.1 SRU 5 when running on SPARC T4 and T5 (the M5 and M6 use the same crypto core and we expect to vendor affirm those), M3000 class (Fujitsu SPARC64) and on Intel (with and without AES-NI).

Many thanks to all those at Oracle who have helped make this happen and to our consultants and validation lab.


Thursday Sep 12, 2013

Solaris Random Number Generation

The following was originally written to assist some of our new hires in learning about how our random number generators work and also to provide context for some questions that were asked as part of the ongoing (at the time of writing) FIPS 140-2 evaluation of the Solaris 11 Cryptographic Framework.

1. Consumer Interfaces

The Solaris random number generation (RNG) system uses both hardware and software mechanisms for entropy collection. It has consumer interfaces for applications; it can generate high-quality random numbers suitable for long term
asymmetric keys as well as pseudo-random numbers for session keys or other cryptographic uses, such as nonces.

1.1 Interface to user space

The random(7D) device driver provides the /dev/random and /dev/urandom devices to user space, but it doesn't implement any of the random number generation or extraction itself.

There is a single kernel module (random) implementing both the /dev/random and /dev/urandom devices; the two primary entry points are rnd_read() and rnd_write(), which service the read(2) and write(2) system calls respectively.

rnd_read() calls either kcf_rnd_get_bytes() or kcf_rnd_get_pseudo_bytes() depending on whether the device node is an instance of /dev/random or /dev/urandom respectively.  There is a cap on the maximum number of bytes that can be transferred in a single read: MAXRETBYTES_RANDOM (1040) and MAXRETBYTES_URANDOM (128 * 1040) respectively.

rnd_write() uses random_add_entropy() and random_add_pseudo_entropy(); both pass 0 as the estimate of the amount of entropy that came from userspace, so we don't trust userspace to estimate the value of the entropy being provided.  Also, only a user with uid root or all privileges can open /dev/random or /dev/urandom for write and thus call rnd_write().

1.2 Interface in kernel space

The kcf module provides an API for randomness for in-kernel KCF consumers. It implements the functions mentioned above that are called to service the read(2)/write(2) calls and also provides the interfaces for kernel consumers to access the random and urandom pools.

If no providers are configured no randomness can be returned and a message is logged informing the administrator of the misconfiguration.

2. /dev/random

We periodically collect random bits from providers which are registered with the Kernel Cryptographic Framework (KCF) as capable of random number generation. The random bits are maintained in a cache which is used to service requests for high quality random numbers (/dev/random). If the cache has sufficient random bytes available the request is serviced from the cache.  Otherwise we pick a provider and call its SPI routine.  If we do not get enough random bytes from the provider call we fill in the remainder of the request by continuously replenishing the cache and using that until the full requested size is met.

The maximum request size that will be serviced for a single read(2) system call on /dev/random is 1040 bytes.

2.1 Initialisation

kcf_rnd_init() is where we set up the locks and get everything started. It is called by the _init() routine in the kcf module, which itself is called very early in system boot - before the root filesystem is mounted and most modules are loaded.

For /dev/random and random_get_bytes() a static array of 1024 bytes is set up by kcf_rnd_init().

We start by placing the value of gethrtime(), the high-resolution time since boot, and drv_getparm(), the current time of day, as the initial seed values into the pool (both of these are 64 bit integers).  We set the number of random bytes available in the pool to 0.

2.2 Adding randomness to the rndpool

The rndc_addbytes() function adds new random bytes to the pool (aka cache). It holds the rndpool_lock mutex while it XORs the bytes into the rndpool.  The starting point is the global rindex variable, which is updated as each byte is added.  It also increases rnbyte_cnt.

If the rndpool becomes full before the passed-in bytes are all used we continue to XOR the bytes into the pool/cache but do not increase rnbyte_cnt; we also advance the global findex to match rindex as we do so.
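
As a rough illustration, here is a user-space sketch of that circular add. The 1024 byte pool size comes from section 2.1 and the variable names mirror the kernel ones, but the rndpool_lock handling is omitted:

/*
 * Illustrative sketch only; the real rndc_addbytes() also takes the
 * rndpool_lock mutex around this work.
 */
#include <stddef.h>
#include <stdint.h>

#define RNDPOOLSIZE 1024

static uint8_t rndpool[RNDPOOLSIZE];
static size_t  rindex;      /* where the next byte is XORed in (back)   */
static size_t  findex;      /* where the next byte is taken out (front) */
static size_t  rnbyte_cnt;  /* bytes of credited entropy in the pool    */

static void
rndc_addbytes_sketch(const uint8_t *ptr, size_t len)
{
    while (len-- > 0) {
        rndpool[rindex] ^= *ptr++;          /* XOR, never overwrite      */
        rindex = (rindex + 1) % RNDPOOLSIZE;
        if (rnbyte_cnt < RNDPOOLSIZE)
            rnbyte_cnt++;                   /* credit the new entropy    */
        else
            findex = rindex;                /* full: findex chases rindex */
    }
}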

2.3 Scheduled mixing

kcf_rnd_schedule_timeout() ensures that we perform mixing of the rndpool.  The timeout is itself randomly generated by reading (but not consuming) the first 32 bits of rndpool to derive a new timeout of between 2 and 5.544480 seconds.  When the timeout expires the KCF rnd_handler() function [ from kcf_random.c ] is called.
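
Only the 2 to 5.544480 second bounds come from the description above; the arithmetic in this sketch is my own illustration of deriving such a timeout from 32 bits of the pool:

#include <stdint.h>

#define MIN_TIMEOUT_US 2000000   /* 2 seconds        */
#define MAX_TIMEOUT_US 5544480   /* 5.544480 seconds */

/* pool_bits: the first 32 bits of rndpool, read but not consumed */
static uint64_t
next_mix_timeout_us(uint32_t pool_bits)
{
    return (MIN_TIMEOUT_US +
        pool_bits % (MAX_TIMEOUT_US - MIN_TIMEOUT_US + 1));
}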

If we have readers blocked for entropy, or the count of available bytes is less than the pool size, we start an asynchronous task to call rngprov_getbytes() to gather more entropy from the available providers.

If there is at least the minimum (20 bytes) of entropy available we wake up the threads blocked in a poll(2)/select(2) of /dev/random. If there are any threads waiting on entropy we wake those up too.  The waiting and wake-up are performed by cv_wait_sig() and cv_broadcast(); the random pool lock is held when cv_broadcast() wakes up a thread.

Finally it schedules the next timeout.

2.4 External caller Seeding

The random_add_entropy() call is able to provide entropy from an external (to KCF or its providers) source of randomness.  It takes a buffer and a size as well as an estimated amount of entropy in the buffer. There are no callers in Solaris that provide a non-zero value for the estimate
of entropy to random_add_entropy().  The only caller of random_add_entropy() is actually the write(2) entry point for /dev/random.

Seeding is performed by taking the first available software entropy provider plugged into KCF and calling its KCF_SEED_RANDOM entropy function. The term "software" here really means "not device driver driven" rather than "no hardware involved": for example the n2rng provider
is device driver driven and so counts as "hardware", but the architecture based Intel RDRAND is regarded as "software".  The terminology is a legacy of the early years of the Solaris cryptographic framework.  This does however mean we never attempt to seed the hardware RNG on SPARC S2 or S3 core based systems (T2 through M6 inclusive) but we will attempt to do so on Intel CPUs with RDRAND.

2.5 Extraction for /dev/random

We treat rndpool as a circular buffer with findex and rindex tracking the front and back respectively; both start at position 0 during initialisation.

To extract randomness from the pool we use kcf_rnd_get_bytes(); this is a non-blocking call and it will return EAGAIN if there is insufficient randomness available (ie rnbyte_cnt is less than the request size) and 0 on success.

It calls rnd_get_bytes() with the rndpool_lock held; the lock will be released by rnd_get_bytes() in both the success and failure cases.  If the number of bytes requested of rnd_get_bytes() is less than or
equal to the number of available bytes (rnbyte_cnt) then we call rndc_getbytes() immediately, ie we use the randomness from the pool.  Otherwise we release the rndpool_lock and call rngprov_getbytes() with the number of bytes we want.  If that still wasn't enough we loop, picking up as many bytes as we can by successive calls; if at any time the rnbyte_cnt in the pool is less than 20 bytes we wait on the read condition variable (rndpool_read_cv) and try again when we are woken up.

rngprov_getbytes() finds the first available provider that is plugged into KCF and calls its KCF_OP_RANDOM_GENERATE function.  This function is also used by the KCF timer for scheduled mixing (see the earlier discussion).  It cycles through each available provider until either there are no more available or the requested number of bytes has been gathered.  It returns to the caller the number of bytes it retrieved from all of the providers combined.

If no providers were available then rngprov_getbytes() returns an error and logs an error to the system log for the administrator.  A default configuration of Solaris (and the one required by the FIPS 140-2 security target) has at least the 'swrand' provider.  A Solaris instance running on SPARC S2 or S3 cores (T2 through M6 inclusive) will also have the n2rng provider configured and available.

2.6 KCF Random Providers

KCF has the concept of "hardware" and "software" providers.  The terminology is a legacy one from before hardware support for cryptographic algorithms and random number generation was available as unprivileged CPU instructions.

It really now maps to "hardware" being a provider that has a specific device driver, such as n2rng, and "software" meaning CPU instructions or some other pure software mechanism.  It doesn't mean that there is no "hardware" involved: on Intel CPUs the RDRAND instruction calls live in the swrand provider, yet swrand is regarded as a "software" provider.

2.6.1 swrand: Random Number Provider

All Solaris installs have a KCF random provider called "swrand". This provider periodically collects unpredictable input and processes it into a pool of entropy; it implements its own mixing (distinct from that at the KCF level), extraction and generation algorithms.


It uses a pool called srndpool of 256 bytes and a leftover buffer of 20 bytes.

The swrand provider has two primary entropy sources:

1. By reading blocks of physical memory and detecting if changes occurred in the blocks read.

Physical memory is divided into blocks of fixed size.  A block of memory is chosen from the possible blocks and hashed to produce a digest.  This digest is then mixed into the pool.  A single bit from
the digest is used as a parity bit or "checksum" and compared against the previous "checksum" computed for the block.  If the single-bit checksum has not changed, no entropy is credited to the pool.  If there is a change, then the assumption is that at least one bit in the block has changed.  The number of possible locations within the memory block where the bit change could have occurred is used as a measure of entropy.

For example, if a block size of 4096 bytes is used, about log2(4096*8) = 15 bits worth of entropy is available.  Because the single-bit checksum will miss half of the changes, the amount of entropy credited to the pool is doubled when a change is detected.  With a 4096 byte block size, a block change will add a total of 30 bits of entropy to the pool.

2. By measuring the time it takes to load and hash a block of memory and computing the differences in the measured time.

This method measures the amount of time it takes to read and hash a physical memory block (as described above).  The time measured can vary depending on system load, scheduling and other factors.  Differences between consecutive measurements are computed to come up with an entropy estimate.  The first, second, and third order deltas are calculated to determine the minimum delta value.  The number of bits present in this minimum delta value is the entropy estimate (see the sketch after this list).

3. Additionally, on x86 systems that have the RDRAND instruction, we take entropy from it but assume only 10% entropic density.  If the RDRAND instruction is not available, or the call to use it fails (CF=0), then just the above two entropy sources are used.
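
Below is a self-contained sketch of the delta-based estimate from source 2, as promised above. Exactly how swrand combines the deltas may differ; this just shows the idea of crediting the bit-length of the minimum delta:

/* Sketch of a first/second/third order delta entropy estimate. */
#include <stdint.h>

static uint64_t
u64_abs_diff(uint64_t a, uint64_t b)
{
    return (a > b ? a - b : b - a);
}

static int
bits_in(uint64_t v)              /* position of the highest set bit */
{
    int n = 0;
    while (v != 0) {
        n++;
        v >>= 1;
    }
    return (n);
}

/* t[] holds four consecutive read-and-hash timing measurements */
static int
timing_entropy_estimate(const uint64_t t[4])
{
    uint64_t d1[3], d2[2], d3, min;
    int i;

    for (i = 0; i < 3; i++)      /* first order deltas  */
        d1[i] = u64_abs_diff(t[i + 1], t[i]);
    for (i = 0; i < 2; i++)      /* second order deltas */
        d2[i] = u64_abs_diff(d1[i + 1], d1[i]);
    d3 = u64_abs_diff(d2[1], d2[0]);   /* third order delta */

    min = d1[0];
    for (i = 1; i < 3; i++) if (d1[i] < min) min = d1[i];
    for (i = 0; i < 2; i++) if (d2[i] < min) min = d2[i];
    if (d3 < min) min = d3;

    return (bits_in(min));       /* entropy credit in bits */
}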

2.6.1.1 Initialisation of swrand

Since physical memory can change size, swrand registers with the Solaris DR (dynamic reconfiguration) subsystem so that it can update its cache of the number of blocks of physical memory when it either grows or shrinks.

On initial attach the fips_rng_post() function is run.

During initialisation the swrand provider adds entropy from the high resolution time since boot and the current time of day (note that due to the module load system and how KCF providers register, these values will always differ from the values that the KCF rndpool is initialised with). It also adds in the initial state of physical memory, the number of blocks, and the sources described above.

The first 20 bytes from this process are used as the XKEY and are also saved as the initial value of previous_bytes for use with the FIPS 186-2 continuous test.

Only after all of the above does the swrand provider register with the cryptographic framework for both random number generation and seeding of the swrand generator.

2.6.1.2 swrand entropy generation

swrand_get_entropy() is where all the real work happens when the KCF random pool calls into swrand.  This function can be called in either blocking or non-blocking mode. The only difference is that the latter will return EAGAIN if there is insufficient entropy to generate the randomness; the former blocks indefinitely.

A global uint32_t entropy_bits is used to track how much entropy is available.

When a request is made to swrand_get_entropy() we loop until we have the requested amount of randomness, first checking whether the entropy remaining in srndpool is below 20 bytes; if it is then we block waiting for more entropy (or return EAGAIN in non-blocking mode).

We then determine how many bytes of entropy to extract: the minimum of the total requested and 20 bytes.  The entropy extracted from the srndpool is then hashed using SHA1 and fed back into the pool starting at the previous extraction point.  We ensure that we don't feed the same entropy back into the srndpool at the same position; if we do then the system will force a panic when in FIPS 140 mode, or log a warning and return EIO when not in FIPS 140 mode.

The FIPS 186-2 Appendix 3 fips_random_inner() function is then run on that same SHA1 digest and the resulting output is checked so that each 20 byte block meets the continuous RNG test - if that fails we panic or warn as above.


We then update the output buffer and continue the loop until we have generated the requested amount.  Before swrand_get_entropy() returns it zeros out the used SHA1 digest and any temporary area and releases the srndpool mutex lock.
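
Putting that together, here is a rough user-space sketch of one extraction step, using OpenSSL's SHA1() purely for illustration (build with -lcrypto). The locking, entropy accounting and FIPS 186-2 post-processing are omitted, and the exact feed-back operation is an assumption:

#include <string.h>
#include <openssl/sha.h>

#define POOLSIZE  256
#define HASHSIZE  SHA_DIGEST_LENGTH   /* 20 bytes */

static unsigned char srndpool[POOLSIZE];

/* Extract one 20 byte chunk starting at *pos and feed the hash back. */
static int
swrand_extract_sketch(size_t *pos, unsigned char out[HASHSIZE])
{
    unsigned char digest[HASHSIZE];
    size_t i, start = *pos;
    int identical = 1;

    /* hash the pool to produce the extracted entropy */
    SHA1(srndpool, POOLSIZE, digest);

    /* never feed an identical 20 byte block back at the same spot;
     * the kernel panics (FIPS 140 mode) or returns EIO if it would */
    for (i = 0; i < HASHSIZE; i++)
        if (srndpool[(start + i) % POOLSIZE] != digest[i])
            identical = 0;
    if (identical)
        return (-1);

    /* feed the digest back in at the previous extraction point */
    for (i = 0; i < HASHSIZE; i++)
        srndpool[(start + i) % POOLSIZE] = digest[i];
    *pos = (start + HASHSIZE) % POOLSIZE;

    memcpy(out, digest, HASHSIZE);
    return (0);
}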

2.6.1.3 Adding to the swrand pool

The swrand_seed_random() function is used to request adding entropy from an external source via the KCF random_add_entropy() call. If it is called from KCF (ie something external to swrand itself) synchronously then the entropy estimate is always 0.  When called asynchronously we delay adding in the entropy until the next mixing time.

The internal swrand_add_entropy() call deals with updating the srndpool; it does so by adding and then mixing the bytes while holding the srndpool mutex lock.  Thus the pool is always mixed before returning.

2.6.1.4 Mixing the swrand pool

The swrand provider uses the same timeout mechanism described above for the KCF rndpool to add new entropy to the srndpool from the sources described above.

The swrand_mix_pool() function is called as a result of the timeout or an explicit request to add more entropy.

To mix the pool we first add in any deferred bytes, then slide along the pool in 64 bit chunks, hashing with SHA1 the data from the start of the pool up to the current position along with the position itself, then XOR the resulting hash back into the current chunk and move along.
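
A sketch of that sliding mix, again using OpenSSL's SHA1 as a stand-in for the kernel implementation; exactly what is hashed alongside the pool prefix and the position is an assumption here:

#include <stddef.h>
#include <openssl/sha.h>

#define POOLSIZE 256

static void
mix_pool_sketch(unsigned char pool[POOLSIZE])
{
    unsigned char digest[SHA_DIGEST_LENGTH];
    SHA_CTX ctx;
    size_t off, i;

    /* slide along the pool in 64 bit (8 byte) chunks */
    for (off = 0; off + 8 <= POOLSIZE; off += 8) {
        /* hash from the start of the pool up to and including this
         * chunk, together with the current position ... */
        SHA1_Init(&ctx);
        SHA1_Update(&ctx, pool, off + 8);
        SHA1_Update(&ctx, &off, sizeof (off));
        SHA1_Final(digest, &ctx);

        /* ... and XOR the resulting hash back into the chunk */
        for (i = 0; i < 8; i++)
            pool[off + i] ^= digest[i];
    }
}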

2.6.2 n2rng random provider

This applies only to SPARC processors with either an S2 core (T2, T3, T3+) or an S3 core (T4, T5, M5, M6); both CPU families use the same n2rng driver and the same on-chip system for the RNG.

The n2rng driver provides the interface between the hyper-privileged access to the RNG registers on the CPU and KCF.

The driver performs attach-time diagnostics on the hardware to ensure it continues operating as expected.  It determines whether it is operating
in FIPS 140-2 mode via its driver.conf(5) file before its attach routine has completed. The full hardware health check runs in conjunction with the hypervisor, and only when running in the control domain. The FIPS 140 checks are always run regardless of the hypervisor domain type.  If the FIPS 140 POST checks fail the driver ensures it is deregistered from KCF.

If the driver is suspended and resumed it reconfigures and re-registers with KCF.  This would happen on a suspend/resume cycle or during live migration or system reconfiguration.

External seeding of n2rng is not possible from outside of the driver, and it does not provide the seed_random operation to KCF.

The algorithm used by n2rng is very similar to that of swrand: it loops collecting entropy and building up the requested number of bytes, checking that each block of entropy is different from the previous one, applying the fips_random_inner() function and then checking that the resulting processed bytes differ from the previous set.

The significant difference from swrand in how random data requests from KCF callers are serviced is the entropy collection function, n2rng_getentropy().

n2rng_getentropy() returns the requested number of bytes of entropy by using hypervisor calls to hv_rng_data_read(), with error checking to ensure we can retry on certain errors but eventually give up after a period of time or number of failed attempts at reading from the hypervisor.  The function hv_rng_data_read() is a short fragment of assembler code that reads a 64 bit value from the hypervisor RNG register (HV_RNG_DATA_READ 0x134) and is only called by n2rng_getentropy() and the diagnostic routine called at driver attach and resume time.

3.0 FIPS 186-2: fips_random_inner()

We will discuss this function here because it is common to swrand, n2rng and the /dev/urandom implementation, as well as being used in the userspace function fips_get_random().

It is a function completely internal to Solaris that can't be used outside of the cryptographic framework.

    fips_random_inner(uint32_t *key, uint32_t *x_j, uint32_t *XSEED_j)

It computes a new random value, which is stored in x_j, and updates XKEY.  XSEED_j is additional input.  In principle we should protect XKEY, perhaps by placing it in non-paged memory, but we always clobber XKEY with fresh entropy just before we use it, and step 3d irreversibly updates it just after we use it.  The only risk is that if an attacker captured the state while the entropy generator was broken, the attacker could predict future values. There are two cases:

  1. The attacker gets root access to a live system.  But there is no defense against that we can place in here, since they already have full control.
  2. The attacker gets access to a crash dump.  But by then no values are being generated.


Note that XSEED_j is overwritten with sensitive stuff, and must be zeroed by the caller.  We use two separate symbols (XVAL and XSEED_j) to make each step match the notation in FIPS 186-2.

All parameters (key, x_j, XSEED_j) are the size of a SHA-1 digest, 20 bytes.

The HASH function used is SHA1.
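
As a self-contained sketch of the update step (with OpenSSL's SHA1() standing in for the FIPS 186-2 G() function, which really uses the SHA-1 compression function with XVAL as the message block), the 160 bit arithmetic looks like this:

#include <string.h>
#include <stdint.h>
#include <openssl/sha.h>

#define SHA1BYTES 20

/* out += in, as 160 bit big-endian integers (mod 2^160) */
static void
add160(uint8_t out[SHA1BYTES], const uint8_t in[SHA1BYTES])
{
    int i, carry = 0;
    for (i = SHA1BYTES - 1; i >= 0; i--) {
        int s = out[i] + in[i] + carry;
        out[i] = (uint8_t)s;
        carry = s >> 8;
    }
}

void
fips_random_inner_sketch(uint8_t XKEY[SHA1BYTES],
    uint8_t x_j[SHA1BYTES], const uint8_t XSEED_j[SHA1BYTES])
{
    static const uint8_t one[SHA1BYTES] = { [SHA1BYTES - 1] = 1 };
    uint8_t XVAL[SHA1BYTES];

    /* step 3b: XVAL = (XKEY + XSEED_j) mod 2^160 */
    memcpy(XVAL, XKEY, SHA1BYTES);
    add160(XVAL, XSEED_j);

    /* step 3c: x_j = G(t, XVAL) -- SHA1 as a stand-in for G */
    SHA1(XVAL, SHA1BYTES, x_j);

    /* step 3d: XKEY = (1 + XKEY + x_j) mod 2^160 */
    add160(XKEY, x_j);
    add160(XKEY, one);
}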

The implementation of this function is verified during POST by fips_rng_post() calling it with a known seed.  The POST call is performed before the swrand module registers with KCF, or during initialisation of any of the libraries in the FIPS 140 boundary (before their symbols are available to be called by other libraries or applications).

4.0 /dev/urandom

This is a software-based generator algorithm that uses the random bits in the cache as a seed. We create one pseudo-random generator (for /dev/urandom) per possible CPU on the system, and use it, kmem-magazine-style, to avoid cache line contention.

4.1 Initialisation of /dev/urandom

kcf_rnd_init() calls rnd_alloc_magazines(), which sets up the empty magazines for the pseudo-random number pool (/dev/urandom). A separate magazine per CPU is configured, up to the maximum number of possible (not just currently available) CPUs on the system; this is important because we can add more
CPUs after initial boot.

The magazine initialisation discards the first 20 bytes so that the rnd_get_bytes() function has a previous value to compare against, ensuring the next block always differs from the previous one.  It then places the next 20 bytes into rm_key and the next 20 bytes again into rm_seed.  It does this for each of the max_ncpus magazines.  Only after this is complete does kcf_rnd_init() return to kcf_init().  Each of the per-CPU magazines has its own state, which includes an HMAC key, seed and previous value; each also has its own rekey timers and limits.

The magazines are only used for the pseudo-random number pool (ie servicing random_get_pseudo_bytes() and /dev/urandom), not for random_get_bytes() or /dev/random.

Note that this usage is preemption-safe; a thread entering a critical section remembers which generator it locked and unlocks the same one; should it be preempted and wind up running on a different CPU, there will be a brief period of increased contention before it exits the critical section but nothing will melt.
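
In outline the pattern looks like this user-space sketch with pthreads; the struct layout and helper names are assumptions, and current_cpu_id() is a stub for the kernel's CPU id lookup:

#include <pthread.h>
#include <stdint.h>

#define MAX_NCPUS 64   /* max_ncpus: possible CPUs, not just present */

typedef struct rndmag {
    pthread_mutex_t rm_lock;
    uint8_t         rm_key[20];      /* per-magazine HMAC key       */
    uint8_t         rm_seed[20];     /* per-magazine seed           */
    uint8_t         rm_previous[20]; /* for the continuous test     */
} rndmag_t;

static rndmag_t magazines[MAX_NCPUS];

static void
init_magazines(void)
{
    int i;
    for (i = 0; i < MAX_NCPUS; i++)
        pthread_mutex_init(&magazines[i].rm_lock, NULL);
}

static int
current_cpu_id(void)
{
    return (0);   /* stub; the kernel uses the current CPU's id */
}

void
with_my_magazine(void (*fn)(rndmag_t *))
{
    /* remember which magazine we locked ... */
    rndmag_t *rm = &magazines[current_cpu_id() % MAX_NCPUS];

    pthread_mutex_lock(&rm->rm_lock);
    fn(rm);
    /* ... and unlock that same one, even if we were preempted and
     * migrated to another CPU meanwhile; contention is brief */
    pthread_mutex_unlock(&rm->rm_lock);
}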

4.2 /dev/urandom generator

At a high level this uses the FIPS 186-2 algorithm with a key extracted from the random pool to generate a maximum of 1310720 output blocks before rekeying.  Each CPU (a CPU here is a hardware thread, not a socket or core) has its own magazine.

4.3 Reading from /dev/urandom

The maximum request size that will be serviced for a single read(2) system call on /dev/urandom is 133120 bytes.

Reads all come in via the kcf_rnd_get_pseudo_bytes() function.

If the requested size is considered to be large - greater than 2560 bytes - then instead of reading from the pool we tail-call the generator directly via rnd_generate_pseudo_bytes().

If the CPU's magazine has sufficient available randomness already we use that, otherwise we call the rnd_generate_pseudo_bytes() function directly.

rnd_generate_pseudo_bytes() is always called with the CPU magazine mutex already locked, and the mutex is released when it returns.

We loop through the following until the requested number of bytes has been built up or an unrecoverable error occurs.

rm_seed is reinitialised by XORing the current 64 bit high-resolution time from gethrtime() into the prior value of rm_seed.  The fips_random_inner() call is then made using the current value of rm_key and this new seed.

The returned value from fips_random_inner() is then checked against our previous return value to ensure it is a different 160 bit block.  If that check fails the system panics when in FIPS 140-2 mode, or returns EIO if FIPS mode is not enabled.

Before returning from the whole function the local state is zeroed out and the per-magazine lock released.
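
One iteration of that loop might look like the following sketch, reusing fips_random_inner_sketch() from the FIPS 186-2 block earlier; clock_gettime() stands in for gethrtime() and the rekey accounting is omitted:

#include <string.h>
#include <stdint.h>
#include <time.h>

#define SHA1BYTES 20

void fips_random_inner_sketch(uint8_t *, uint8_t *, const uint8_t *);

int
urandom_step_sketch(uint8_t rm_key[SHA1BYTES], uint8_t rm_seed[SHA1BYTES],
    uint8_t rm_previous[SHA1BYTES], uint8_t out[SHA1BYTES])
{
    struct timespec ts;
    uint64_t hrt;
    size_t i;

    /* re-seed: XOR the current high resolution time into rm_seed */
    clock_gettime(CLOCK_MONOTONIC, &ts);
    hrt = (uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec;
    for (i = 0; i < sizeof (hrt); i++)
        rm_seed[i] ^= (uint8_t)(hrt >> (8 * i));

    /* x_j = FIPS 186-2 output from rm_key and the new seed */
    fips_random_inner_sketch(rm_key, out, rm_seed);

    /* continuous test: the new 160 bit block must differ from the
     * previous one; the kernel panics (FIPS mode) or returns EIO */
    if (memcmp(out, rm_previous, SHA1BYTES) == 0)
        return (-1);
    memcpy(rm_previous, out, SHA1BYTES);
    return (0);
}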

5.0 Randomness for key generation

For asymmetric key generation inside the kernel a special random_get_nzero_bytes() API is provided.  It differs from random_get_bytes() in two ways.  First, it calls the random_get_bytes_fips140() function, which only returns once all FIPS 140-2 initialisation has been completed.  (The random_get_bytes() function needs to be available slightly earlier because some very early kernel functions need it, particularly setup of the VM system and ZFS if it needs to do any writes as part of mounting the root filesystem.)  Secondly, it ensures that no bytes in the output have the value 0; those are replaced with freshly extracted additional random bytes, continuing until the entire requested length is made up of non-zero bytes.
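
The zero-stripping can be sketched like this, with the byte source abstracted as a callback since the real code sits on top of random_get_bytes_fips140():

#include <stddef.h>
#include <stdint.h>

typedef int (*rnd_source_t)(uint8_t *, size_t);

int
get_nzero_bytes_sketch(rnd_source_t get_bytes, uint8_t *out, size_t len)
{
    size_t filled = 0;

    while (filled < len) {
        uint8_t tmp[64];
        size_t i, got = len - filled;

        if (got > sizeof (tmp))
            got = sizeof (tmp);
        if (get_bytes(tmp, got) != 0)
            return (-1);
        for (i = 0; i < got; i++) {
            if (tmp[i] != 0)        /* drop 0x00 bytes, keep the rest */
                out[filled++] = tmp[i];
        }
    }
    return (0);
}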

A corresponding random_get_nzero_pseudo_bytes() is also available for cases where we don't want 0 bytes in other random sequences, such as session keys, nonces and cookies.

The above two functions ensure that even though most of the random pool is available early in boot, we can't use it for key generation until the full FIPS 140-2 POST and integrity check has completed, eg on the swrand provider.

6.0 Userspace random numbers

Applications that need random numbers may read directly from /dev/random and /dev/urandom, or may use a function implementing
the FIPS 186-2 RNG requirements.

The cryptographic framework libraries in userspace provide the following
internal functions:

    pkcs11_get_random(), pkcs11_get_urandom()
    pkcs11_get_nzero_random(), pkcs11_get_nzero_urandom()

The above functions are available from the libcryptoutil.so library but are private to Solaris. As in the kernel, there are pkcs11_get_nzero_random() and pkcs11_get_nzero_urandom() variants that ensure none of the bytes are zero.  The pkcs11_ prefix is because these are private functions mostly used for the implementation of the PKCS#11 API.  The Solaris private ucrypto API does not provide key generation functions.

The pkcs11_softtoken C_GenerateRandom() function is implemented by calling pkcs11_get_urandom().

When pkcs11_softtoken is performing key generation, C_GenerateKey() or C_GenerateKeyPair(), it uses pkcs11_get_random() for persistent (token) keys and pkcs11_get_urandom() for ephemeral (session) keys.

The above mentioned internal functions generate random numbers in the following way.

While holding the pre_rnd_mutex (which is per userspace process), pkcs11_get_random() reads in 20 byte chunks from /dev/random, calls fips_get_random() on each 20 bytes, and continues looping, building up the output, until the caller-requested number of bytes has been retrieved or an unrecoverable error occurs (in which case it will kill the whole process using abort() when in FIPS 140-2 mode).

fips_get_random() performs a continuous test by comparing the bytes taken from /dev/random. It then performs a SHA1 digest of those bytes and calls fips_random_inner().  It then again performs the byte by byte continuous test.

When the caller-requested number of bytes has been read and post-processed, the pre_rnd_mutex is released and the bytes are returned to the caller from pkcs11_get_random().
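
Sketched in user space it looks something like this; fips_get_random_sketch() is a hypothetical stand-in for the continuous test plus fips_random_inner() post-processing described above:

#include <fcntl.h>
#include <pthread.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>

#define CHUNK 20

static pthread_mutex_t pre_rnd_mutex = PTHREAD_MUTEX_INITIALIZER;

int fips_get_random_sketch(uint8_t buf[CHUNK]);   /* hypothetical */

int
pkcs11_get_random_sketch(uint8_t *out, size_t len)
{
    size_t filled = 0;
    int fd = open("/dev/random", O_RDONLY);

    if (fd < 0)
        return (-1);
    pthread_mutex_lock(&pre_rnd_mutex);
    while (filled < len) {
        uint8_t chunk[CHUNK];
        size_t take;

        if (read(fd, chunk, sizeof (chunk)) != sizeof (chunk) ||
            fips_get_random_sketch(chunk) != 0) {
            pthread_mutex_unlock(&pre_rnd_mutex);
            close(fd);
            return (-1);    /* the library abort()s in FIPS 140 mode */
        }
        take = len - filled < CHUNK ? len - filled : CHUNK;
        memcpy(out + filled, chunk, take);
        filled += take;
    }
    pthread_mutex_unlock(&pre_rnd_mutex);
    close(fd);
    return (0);
}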

The initial seed and XKEY for fips_random_inner() are set up during the initialisation of the libcryptoutil library, before the main() of the application is called or any of the functions in libcryptoutil are available. XKEY is set up by feeding the current high resolution time into the seed48() and drand48() functions to create a buffer of 20 bytes that is then digested through SHA1 and becomes the initial XKEY value.  XKEY is then updated each time fips_random_inner() is called.

pkcs11_get_urandom() follows exactly the same algorithm as pkcs11_get_random() except that the reads are from /dev/urandom instead of /dev/random.

When a userspace program forks, pthread_atfork() handlers ensure that requests to retrieve randomness are locked out during the fork.


I hope this is useful and/or interesting insight into how Solaris generates randomness.

Update 2013-09-12 I was asked about how this applies to Illumos: To the best of my knowledge [ I have not read the Illumos source; the following is based on what I remember of the old OpenSolaris source ] most of what I said above should apply to Illumos as well.  The main exceptions are that fips_random_inner(), the POST and some of the continuous checks don't exist, and neither does the Intel RDRAND support. The source for the n2rng driver, random(7D), kcf and swrand were available as part of OpenSolaris.  Note that Illumos may have changed some of this so please verify for yourself.

Thursday Jan 24, 2013

Compliance reporting with SCAP

In Solaris 11.1 we added the early stages of our (security) Compliance framework.  We have (like some other OS vendors) selected to use the SCAP (Security Content Automation Protocol) standard from NIST.  There are a number of different parts to SCAP but for Compliance reporting one of the important parts is the OVAL (Open Vulnerability and Assessment Language) standard.  This is what allows us to write a checkable security policy and verify it against running systems.

The Solaris 11.1 repository includes the OpenSCAP tool, which allows us to evaluate policies written in the OVAL language and generate reports (it does other things too, but I'm only focusing on OVAL for now).

OVAL is expressed in XML with a number of generic and OS/application specific schema.  Over time we expect to deliver various sample security policies with Solaris to help customers with Compliance reporting in various industries (eg, PCI-DSS, DISA-STIG, HIPAA).

The XML in the OVAL language is passed to the OpenSCAP tool for evaluation, which produces either a simple text report of which checks passed and which failed, or an XML results file and an optional rendered HTML report.

Let's look at a simple example of a policy written in OVAL.  This contains just one check: that we have configured the FTP server on Solaris to display a banner.  We do this in Solaris 11 by updating /etc/proftpd.conf to add the "DisplayConnect /etc/issue" line - which is not there by default.   So on a default Solaris 11.1 system we should get a "fail" from this policy.

The OVAL for this check was generated by a tool called "Enhanced SCAP Editor (eSCAPe)" which is not included in Solaris; it could equally well have been hand edited in your text editor of choice. In a later blog posting I'll attempt to explain more of the OVAL language and give some more examples, including some Solaris specific ones, but for now here is the raw XML:

<?xml version="1.0" encoding="UTF-8"?>
<oval_definitions xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xmlns:oval="http://oval.mitre.org/XMLSchema/oval-common-5" 
xmlns:oval-def="http://oval.mitre.org/XMLSchema/oval-definitions-5" 
xmlns:independent-def="http://oval.mitre.org/XMLSchema/oval-definitions-5#independent" 
xsi:schemaLocation="http://oval.mitre.org/XMLSchema/oval-definitions-5 
   oval-definitions-schema.xsd http://oval.mitre.org/XMLSchema/oval-definitions-5#independent 
   independent-definitions-schema.xsd http://oval.mitre.org/XMLSchema/oval-common-5 oval-common-schema.xsd">

  <generator>
    <oval:product_name>Enhanced SCAP Editor</oval:product_name>
    <oval:product_version>0.0.11</oval:product_version>
    <oval:schema_version>5.8</oval:schema_version>
    <oval:timestamp>2012-10-11T10:33:25</oval:timestamp>
  </generator>
  <!--generated.oval.base.identifier=com.oracle.solaris11-->
  <definitions>
    <definition id="oval:com.oracle.solaris11:def:840" version="1" class="compliance">
      <metadata>
        <title>Enable a Warning Banner for the FTP Service</title>
        <affected family="unix">
          <platform>Oracle Solaris 11</platform>
        </affected>
        <description>/etc/proftpd.conf contains "DisplayConnect /etc/issue"</description>
      </metadata>
      <criteria operator="AND" negate="false" comment="Single test">
        <criterion comment="/etc/proftpd.conf contains &quot;DisplayConnect /etc/issue&quot;" 
          test_ref="oval:com.oracle.solaris11:tst:8400" negate="false"/>
      </criteria>
    </definition>
  </definitions>
  <tests>
    <textfilecontent54_test 
        xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#independent" 
        id="oval:com.oracle.solaris11:tst:8400" version="1" check="all" 
        comment="/etc/proftpd.conf contains &quot;DisplayConnect /etc/issue&quot;" 
        check_existence="all_exist">
      <object object_ref="oval:com.oracle.solaris11:obj:8400"/>
    </textfilecontent54_test>
  </tests>
  <objects>
    <textfilecontent54_object 
        xmlns="http://oval.mitre.org/XMLSchema/oval-definitions-5#independent" 
        id="oval:com.oracle.solaris11:obj:8400" version="1" 
        comment="/etc/proftpd.conf contains &quot;DisplayConnect /etc/issue&quot;">
      <path datatype="string" operation="equals">/etc</path>
      <filename datatype="string" operation="equals">proftpd.conf</filename>
      <pattern datatype="string" 
         operation="pattern match">^DisplayConnect\s/etc/issue\s$</pattern>
      <instance datatype="int" operation="greater than or equal">1</instance>
    </textfilecontent54_object>
  </objects>
</oval_definitions>

We can evaluate this policy on a given host by using OpenSCAP like this:

$ oscap oval eval ftp-banner.xml 
Definition oval:com.oracle.solaris11:def:840: false
Evaluation done.

As you can see we got the expected failure of the test, but that output isn't very useful; let's instead generate some HTML output:

$ oscap oval eval --results results.xml --report report.html ftp-banner.xml
Definition oval:com.oracle.solaris11:def:840: false
Evaluation done.
OVAL Results are exported correctly.

Now we have a report.html file which looks a bit like this:

OVAL Results Generator Information
  Schema Version:   5.8
  Product Name:     cpe:/a:open-scap:oscap
  Date / Time:      2013-01-24 14:18:55

OVAL Definition Generator Information
  Schema Version:   5.8
  Product Name:     Enhanced SCAP Editor
  Product Version:  0.0.11
  Date / Time:      2012-10-11 10:33:25

System Information
  Host Name:                 braveheart
  Operating System:          SunOS
  Operating System Version:  11.1
  Architecture:              i86pc
  Interface:                 net0, IP Address 192.168.1.1, MAC Address aa:bb:cc:dd:ee:ff

OVAL System Characteristics Generator Information
  Schema Version:   5.8
  Product Name:     cpe:/a:open-scap:oscap
  Date / Time:      2013-01-24 14:18:55

OVAL Definition Results
  OVAL ID                            Result  Class       Title
  oval:com.oracle.solaris11:def:841  true    compliance  Enable a Warning Banner for the SSH Service
  oval:com.oracle.solaris11:def:840  false   compliance  Enable a Warning Banner for the FTP Service

As you probably noticed right away, the report doesn't match the OVAL I gave above; the report is actually from a very slightly larger OVAL file which checks the banner exists for both SSH and FTP.  I did this purely to cut down on the amount of raw XML above, but also so the report would show both a success and a failure case.


Monday Oct 29, 2012

Solaris 11.1: Encrypted Immutable Zones on (ZFS) Shared Storage

Solaris 11 brought both ZFS encryption and the Immutable Zones feature, and I've talked about the combination in the past.  Solaris 11.1 adds a fully supported method of storing zones in their own ZFS pool on shared storage, so let's update things a little and put all three parts together.

When using iSCSI (or another supported shared storage target) for a Zone we can either let the Zones framework set up the ZFS pool or we can do it manually beforehand and tell the Zones framework to use the one we made earlier.  To enable encryption we have to take the second path so that we can set up the pool with encryption before we start to install the zone on it.

We start by configuring the zone and specifying a rootzpool resource:

# zonecfg -z eizoss
Use 'create' to begin configuring a new zone.
zonecfg:eizoss> create
create: Using system default template 'SYSdefault'
zonecfg:eizoss> set zonepath=/zones/eizoss
zonecfg:eizoss> set file-mac-profile=fixed-configuration
zonecfg:eizoss> add rootzpool
zonecfg:eizoss:rootzpool> add storage \
  iscsi://my7120.example.com/luname.naa.600144f09acaacd20000508e64a70001
zonecfg:eizoss:rootzpool> end
zonecfg:eizoss> verify
zonecfg:eizoss> commit
zonecfg:eizoss> 

Now let's create the pool and specify encryption:

# suriadm map \
   iscsi://my7120.example.com/luname.naa.600144f09acaacd20000508e64a70001
PROPERTY	VALUE
mapped-dev	/dev/dsk/c10t600144F09ACAACD20000508E64A70001d0
# echo "zfscrypto" > /zones/p
# zpool create -O encryption=on -O keysource=passphrase,file:///zones/p eizoss \
   /dev/dsk/c10t600144F09ACAACD20000508E64A70001d0
# zpool export eizoss

Note that the keysource above is just for this example; realistically you should probably use an Oracle Key Manager or some other better key storage, but that isn't the purpose of this example.  Note however that it does need to be one of file://, https:// or pkcs11:, and not prompt for the key location.  Also note that we exported the newly created pool.  The name we used here doesn't actually matter because it will get set properly on import anyway. So let's go ahead and do our install:

# zoneadm -z eizoss install -x force-zpool-import
Configured zone storage resource(s) from:
    iscsi://my7120.example.com/luname.naa.600144f09acaacd20000508e64a70001
Imported zone zpool: eizoss_rpool
Progress being logged to /var/log/zones/zoneadm.20121029T115231Z.eizoss.install
    Image: Preparing at /zones/eizoss/root.

 AI Manifest: /tmp/manifest.xml.ujaq54
  SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml
    Zonename: eizoss
Installation: Starting ...

              Creating IPS image
Startup linked: 1/1 done
              Installing packages from:
                  solaris
                      origin:  http://pkg.oracle.com/solaris/release/
              Please review the licenses for the following packages post-install:
                consolidation/osnet/osnet-incorporation  (automatically accepted,
                                                          not displayed)
              Package licenses may be viewed using the command:
                pkg info --license <pkg_fmri>
DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
Completed                            187/187   33575/33575  227.0/227.0  384k/s

PHASE                                          ITEMS
Installing new actions                   47449/47449
Updating package state database                 Done 
Updating image state                            Done 
Creating fast lookup database                   Done 
Installation: Succeeded

         Note: Man pages can be obtained by installing pkg:/system/manual

 done.

        Done: Installation completed in 929.606 seconds.


  Next Steps: Boot the zone, then log into the zone console (zlogin -C)

              to complete the configuration process.

Log saved in non-global zone as /zones/eizoss/root/var/log/zones/zoneadm.20121029T115231Z.eizoss.install

That was really all we had to do; when the install is done, boot it up as normal.

The zone administrator has no direct access to the ZFS wrapping keys used for the encrypted pool the zone is stored on.  Due to how inheritance works in ZFS he can still create new encrypted datasets that use those wrapping keys (without them ever being inside a process in the zone) or he can create encrypted datasets inside the zone that use keys of his own choosing; the output below shows the two cases:

rpool is inheriting the key material from the global zone (note we can see the value of the keysource property but we don't use it inside the zone, nor does that path need to be (or is it) accessible inside the zone), whereas rpool/export/home/bob has keysource set locally.

# zfs get encryption,keysource rpool rpool/export/home/bob
NAME                   PROPERTY    VALUE                       SOURCE
rpool                  encryption  on                          inherited from $globalzone
rpool                  keysource   passphrase,file:///zones/p  inherited from $globalzone
rpool/export/home/bob  encryption  on                          local
rpool/export/home/bob  keysource   passphrase,prompt           local

Wednesday Jul 04, 2012

Delegation of Solaris Zone Administration

In Solaris 11 'Zone Delegation' is a built-in feature. The Zones system now uses fine grained RBAC authorisations to allow delegation of management of distinct zones, rather than all zones, which is what the 'Zone Management' RBAC profile did in Solaris 10.

The data for this can be stored with the Zone or you could also create RBAC profiles (that can even be stored in NIS or LDAP) for granting access to specific lists of Zones to administrators.

For example let's say we have zones named zoneA through zoneF and we have three admins: alice, bob and carl.  We want to grant a subset of the zone management to each of them.

We could do that either by adding the admin resource to the appropriate zones via zonecfg(1M) or we could do something like this with RBAC data directly:

First let's look at an example of storing the data with the zone.

# zonecfg -z zoneA
zonecfg:zoneA> add admin
zonecfg:zoneA> set user=alice
zonecfg:zoneA> set auths=manage
zonecfg:zoneA> end
zonecfg:zoneA> commit
zonecfg:zoneA> exit

Now let's look at the alternate method of storing this directly in the RBAC database; this time we will show all our admins and zones:

# usermod -P +'Zone Management' -A +solaris.zone.manage/zoneA alice

# usermod -A +solaris.zone.login/zoneB alice


# usermod -P +'Zone Management' -A +solaris.zone.manage/zoneB bob
# usermod -A +solaris.zone.manage/zoneC bob


# usermod -P +'Zone Management' -A +solaris.zone.manage/zoneC carl
# usermod -A +solaris.zone.manage/zoneD carl
# usermod -A +solaris.zone.manage/zoneE carl
# usermod -A +solaris.zone.manage/zoneF carl

In the above, alice can only manage zoneA, bob can manage zoneB and zoneC, and carl can manage zoneC through zoneF.  The user alice can also login on the console of zoneB but she can't do the operations that require the solaris.zone.manage authorisation on it.

Or if you have a large number of zones and/or admins, or you just want to provide a layer of abstraction, you can collect the authorisation lists into an RBAC profile and grant that to the admins. For example let's create RBAC profiles for the things that alice and carl can do.

# profiles -p 'Zone Group 1'
profiles:Zone Group 1> set desc="Zone Group 1"
profiles:Zone Group 1> add profile="Zone Management"
profiles:Zone Group 1> add auths=solaris.zone.manage/zoneA
profiles:Zone Group 1> add auths=solaris.zone.login/zoneB
profiles:Zone Group 1> commit
profiles:Zone Group 1> exit
# profiles -p 'Zone Group 3'
profiles:Zone Group 3> set desc="Zone Group 3"
profiles:Zone Group 3> add profile="Zone Management"
profiles:Zone Group 3> add auths=solaris.zone.manage/zoneD
profiles:Zone Group 3> add auths=solaris.zone.manage/zoneE
profiles:Zone Group 3> add auths=solaris.zone.manage/zoneF
profiles:Zone Group 3> commit
profiles:Zone Group 3> exit


Now instead of granting carl and alice the 'Zone Management' profile and the authorisations directly we can just give them the appropriate profile.

# usermod -P +'Zone Group 3' carl

# usermod -P +'Zone Group 1' alice


If we wanted to store the profile data and the profiles granted to the users in LDAP we would just add '-S ldap' to the profiles and usermod commands.

For a documentation overview see the description of the "admin" resource in zonecfg(1M), profiles(1) and usermod(1M).

Tuesday May 01, 2012

Podcast: Immutable Zones in Oracle Solaris 11

In this episode of the "Oracle Solaris: In a Class By Itself" podcast series, the focus is a bit more technical. I was interviewed by host Charlie Boyle, Senior Director of Solaris Product Marketing. We talked about a new feature in Oracle Solaris 11: immutable zones. Those are read-only root zones for highly secure deployment scenarios.

See also my previous blog post on Encrypted Immutable Zones.

Wednesday Feb 29, 2012

Solaris 11 has the security solution Linus wants for Desktop Linux

Recently Linus Torvalds was venting (his words!) about the frustrating requirement to keep giving his root password for common desktop tasks such as connecting to a wifi network or configuring printers.

Well, I'm very pleased to say that the Solaris 11 desktop doesn't have this problem, thanks to our RBAC system and how it is used, including how tightly it is integrated into the desktop.

One of the new RBAC features in Solaris 11 is location context RBAC profiles. By default we grant the user on the system console (ie the one at the physical keyboard/screen of the laptop or workstation) the "Console User" profile, which on a default install has the necessary authorisations and execution profiles to do things like joining a wireless network, changing CPU power management, and using removable media.   The user created at initial install time also has the much more powerful "System Administrator" profile granted to them so they can do even more without being required to give a password for root (they also have access to the root role and the ability to use sudo).

Authorisations in Solaris RBAC (which date back in mainstream Solaris to Solaris 8, and even further - 17+ years - in Trusted Solaris) are checked by privileged programs, and the whole point is that you don't have to reauthenticate.  SMF is a very heavy user of RBAC authorisations.  In the case of things like joining a wireless network it is privileged daemons that check the authorisations of the clients connecting to them (usually over a door).

In addition to that, GNOME in Solaris 11 has been explicitly integrated with Solaris RBAC as well: any GNOME menu entry that needs to run with elevated privilege will be executed via Solaris RBAC mechanisms.  The panel works out the least intrusive way to get the program running for you.  For example if I select "Wireshark" from the GNOME panel menu it just starts - I don't get prompted for any root password - but it starts with the necessary privileges, because GNOME on Solaris 11 knows that I have the "Network Management" RBAC profile, which allows running /usr/sbin/wireshark with the net_rawaccess privilege.   If I didn't have "Network Management" directly but had an RBAC role that had it, then GNOME would use gksu to assume the role (which might be root) and in that case I would have been prompted for the role password.  If you are using roleauth=user that password is yours, and if you are using pam_tty_tickets you won't keep getting prompted.

GNOME can even go further and not even present menu entries to users who haven't been granted any RBAC profile that allows running those programs - this is useful in a large multi-user system like a Sun Ray deployment.

If you want to do it the "old way" and use the CLI and/or give a root password for every "mundane" little thing, you can still do that too if you really want to.

So maybe Linus could try Solaris 11 desktop ;-)

Monday Feb 20, 2012

Solaris 11 Common Criteria Evaluation

Oracle Solaris 11 is now "In Evaluation" for Common Criteria at EAL4+.  The protection profile is OSPP with the following extended packages: AM - Advanced Management, EIA - Extended Identification and Authentication, LS - Label Security, VIRT - Virtualization.  For information on other Oracle products that are evaluated under Common Criteria or FIPS 140 please see the general Oracle Security Evaluations page.

Please email seceval_us@oracle.com for all inquiries regarding Oracle security evaluations; I can't answer questions about the content of the evaluation on this blog or directly by email.

Tuesday Dec 20, 2011

How low can we go ? (Minimised install of Solaris 11)

I wondered how little we can actually install as a starting point for building a minimised system. The new IPS package system makes this much easier and makes it work in a supportable way, without all the pitfalls of the patches and packages we had previously.

For Solaris 11 I believe the currently smallest configuration we recommend is the solaris-small-server group package.

Note the following is not intended to imply this is a supported configuration and is illustrative only of a short amount of investigative work.

First let's look at a zone (it is easier since there are no driver issues to deal with): I discovered it is possible to get a 'working' zone by choosing a single package to install in the zone AI manifest, and that package is: pkg:/package/pkg

That resulted in 175M being reported as being transferred by pkg. Which results in 255M on disk of which about 55M is /var/pkg. I had 140 'real' packages (ie excluding the incorporations). We have 71 online SMF services (nothing in maintenance) with 96 known SMF services. Around 23 processes are running (excluding the ps I used to check this and the shell I was logged in on).

I have discovered some potential packaging that could result in this being a little bit smaller but not much unless a break up of pkg:/system/core-os was done.

Now onto the bare metal install case. This was a little harder and I've not gotten it to where I wanted yet.

Ignoring drivers the only thing I needed on an x86 system was: pkg:/package/pkg and pkg:/system/boot/grub

Which is good and not really different from the zones case. However that won't produce a bootable system - even though it produces one that will install!

To get it to boot I took the list of all the network and storage drivers from the solaris-small-server group package.  I removed all the wlan drivers and also any drivers I knew to be SPARC only.   My list of packages in the AI manifest had 113 driver packages in it. That got me a bootable system, though not one minimised with respect to drivers.

We have a few more processes in the global zone (again ignoring the ps and my shell); this time I counted 32.  These came from 89 online services. Again ignoring the incorporation packages I had 161 packages installed, of which 73 were in the pkg:/driver namespace.

The disk space footprint is much bigger at a total of 730M - but remember I've likely installed drivers that I might not need. This time /var/pkg is 174M of that.


Wednesday Nov 09, 2011

Completely disabling root logins on Solaris 11

Since Solaris 8 it has been possible to make the root account a role.  That means you can't login directly as root (except in single user mode) but have to login as an authorised user first and assume (via su) the root role.  This still required the root account to have a valid and known password as it is needed for the su step and for single user access.

With Solaris 11 it is possible to go one step further and completely disable all need for a root password even for access in single user mode.

There are two complementary new features that make this possible.  The first is the ability to change which password is used when authenticating to a role.  A new per role property called roleauth was added, if it isn't present the prior behaviour of using the role account password is retained, if roleauth=user is set instead then the password of the user assuming the role is used.

The second feature is one that existed in the Solaris 11 Express release and changed how the sulogin command works; prior releases just asked for the root password.  The sulogin program was changed to authenticate a specific user instead, so it now asks for a username and the password of that user.  The user must be one authorised to enter single user mode by being granted the 'solaris.system.maintenance' authorisation - and obviously be one that can actually connect to the system console (which I recommend is protected by "other means", eg ILOM level accounts or a central "terminal server").

The following sequence of commands takes root from being a normal root account (which, depending on how you install Solaris 11, it may be, or it might already be a role) to granting the user darrenm the ability to assume the root role and enter single user mode.

# usermod -K type=role root
# usermod -R +root -A +solaris.system.maintenance darrenm
# rolemod -K roleauth=user  root
# passwd -N root

Note that some of the install methods for Solaris 11 will have created an initial user account that is granted the root role and has been given the "System Administrator" profile; in those cases only the last two steps are required, as the equivalent of the first two will already have been done at install time for the initial non-root user.

Note that we do not lock (-l) the root account but instead ensure it has no valid password (-N). This is because the root account does still have some cron jobs that we ideally want to run, and if it was locked then the pam_unix_account.so.1 PAM module would prevent cron from running those jobs.

When root is a role like this you authenticate to the system first as yourself; in this case the user darrenm logs in first.  Once darrenm has logged in we use su(1M) to become root - just like we would have if root wasn't a role.  The main difference here is that the password given to su(1M) in the above config is darrenm's password.

If you boot the system in single user mode (boot -s) you will be asked for a username and password; we give the username darrenm and darrenm's password. Once you do that you get a # prompt that is truly root in single user mode.  The distinction here is that we have an audit trail and know it was darrenm that authenticated, and we have no separate root password to manage.

In some deployment cases there may not be any locally defined accounts; in those cases it is necessary to allow root to login directly on the system console in multiuser mode.  This is achieved by adding the following to /etc/pam.conf, and also giving the root account a valid password.

login account required	pam_unix_account.so.1

By having that entry we do not have pam_roles.so.1 active for console login, so the root account will be able to login directly.  The assumption here is that access to the system console is sufficiently secured (and audited) by means external to the Solaris instance.  For example the ILOM of the system is on an access restricted management network that has specific user accounts for SSH access to the ILOM.  You may also want to only give out that root password in emergency cases.  This will allow direct root login only on the console but require that users authenticate to root using their own password when using su.

If you have made root a role and you want to go back to a traditional direct login capability for root you can do so by simply running:

 # rolemod -K type=normal root

Update 1 to answer the first question: basically the same as if the password was locked, expired or forgotten when you just used root directly.  Failed-login account locking is not enabled by default.  As for forgetting who the authorised users are, that isn't a problem Solaris can fix on its own; that is part of your administrative procedures.  You can have any number of authorised users, and the userattr, roles and profiles commands can be used to tell you who they are and manage them.

Update 2 to make it clearer how you use this in multi-user and single user.

Update 3 add information on how to allow root on console.


Password (PAM) caching for Solaris su - "a la sudo"

I talk to a lot of users about Solaris RBAC but many of them prefer to use sudo for various reasons.  One of the common usability features that users like is that they don't have to continually type their password.  This is because sudo uses a "ticket" system for caching the authentication for a defined period (by default 5 minutes).

To bring this usability feature to Solaris 11 I wrote a new PAM module (pam_tty_tickets) that provides a similar style of caching for Solaris roles. 

By default the tickets are stored in /system/volatile/tty_tickets (/var/run is a symlink to /system/volatile now). 

When using su(1M) the user you currently are is set in PAM_USER and PAM_AUSER is the user you are becoming (ie the username argument to su, or root if one is not specified).  The PAM module implements the caching using tickets; the internal format of the tickets is the same as what sudo uses. The location can be changed to be compatible with sudo so the same ticket can be used for both su and sudo.

To enable pam_tty_tickets for su put the following into /etc/pam.conf (the module is in the pkg:/system/library package so it is always installed but not configured for use by default):

su      auth required           pam_unix_cred.so.1
su      auth sufficient         pam_tty_tickets.so.1
su      auth requisite          pam_authtok_get.so.1
su      auth required           pam_unix_auth.so.1

So what does it now look like:

braveheart:pts/3$ su -
Password: 
root@braveheart:~# id -a
uid=0(root) gid=0(root) groups=0(root),1(other),2(bin),3(sys),4(adm),5(uucp),6(mail),7(tty),8(lp),9(nuucp),12(daemon)
darrenm@braveheart:~# exit
braveheart:pts/3$ su -
root@braveheart:~# 

If you want to enable it in the desktop for gksu then you need to add a similar set of changes to /etc/pam.conf with the service name "embedded_su" and the same modules as listed above.  The default timeout matches the sudo default of 5 minutes; the timeout= module option allows specifying a different timeout.

[ NOTE: The man page for pam_tty_tickets was mistakenly placed in section 1 for Solaris 11; it should have been in section 5. ]

Update for Solaris 11.1: now that we have /etc/pam.d/ support, it is recommended that instead of updating /etc/pam.conf the following lines be placed into /etc/pam.d/su:

auth sufficient	pam_tty_tickets.so.1
auth definitive	pam_user_policy.so.1
auth requisite	pam_authtok_get.so.1
auth required	pam_unix_auth.so.1
auth required	pam_unix_cred.so.1


User home directory encryption with ZFS

ZFS encryption has a very flexible key management capability, including the option to delegate key management to individual users.  We can use this together with a PAM module I wrote to provide per user encrypted home directories.  My laptop and workstation at Oracle are configured like this:

First let's set up console login for encrypted home directories:

    root@ltz:~# cat >> /etc/pam.conf<<_EOM
    login auth     required pam_zfs_key.so.1 create
    other password required pam_zfs_key.so.1
    _EOM

The first line ensures that when we log in on the console bob's home directory is created as an encrypted ZFS file system if it doesn't already exist; the second ensures that the passphrase for it stays in sync with his login password.

Now let's create a new user 'bob' who looks after his own encryption key for his home directory.  Note that we do not specify '-m' to useradd so that pam_zfs_key will create the home directory when the user logs in.

root@ltz:~# useradd bob
root@ltz:~# passwd bob
New Password: 
Re-enter new Password: 
passwd: password successfully changed for bob
root@ltz:~# passwd -f bob
passwd: password information changed for bob

We have now created the user bob with an expired password. Let's log in as bob and see what happens:

    ltz console login: bob
    Password: 
    Choose a new password.
    New Password: 
    Re-enter new Password: 
    login: password successfully changed for bob
    Creating home directory with encryption=on.
    Your login password will be used as the wrapping key.
    Last login: Tue Oct 18 12:55:59 on console
    Oracle Corporation      SunOS 5.11      11.0    November 2011
    -bash-4.1$ /usr/sbin/zfs get encryption,keysource rpool/export/home/bob
    NAME                   PROPERTY    VALUE              SOURCE
    rpool/export/home/bob  encryption  on                 local
    rpool/export/home/bob  keysource   passphrase,prompt  local

Note that bob had to first change the expired password. After we provided a new login password, a new ZFS file system for bob's home directory was created. The new login password that bob chose is also the passphrase for this ZFS encrypted home directory. This means that at no time did the administrator ever know the passphrase for bob's home directory. After the machine reboots, bob's home directory won't be mounted until bob logs in again.  If we want bob's home directory to be unmounted and the key removed from the kernel when bob logs out (even if the system isn't rebooting) then we can add the 'force' option to the pam_zfs_key.so.1 module line in /etc/pam.conf.
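
For example, the console login line from earlier with the option added:

login auth     required pam_zfs_key.so.1 create force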

If users login with GDM or ssh then there is a little more configuration needed in /etc/pam.conf to enable pam_zfs_key for those services as well.

root@ltz:~# cat >> /etc/pam.conf<<_EOM
gdm     auth requisite          pam_authtok_get.so.1
gdm     auth required           pam_unix_cred.so.1
gdm     auth required           pam_unix_auth.so.1
gdm     auth required           pam_zfs_key.so.1 create
_EOM

root@ltz:~# cat >> /etc/pam.conf<<_EOM
sshd-kbdint     auth requisite          pam_authtok_get.so.1
sshd-kbdint     auth required           pam_unix_cred.so.1
sshd-kbdint     auth required           pam_unix_auth.so.1
sshd-kbdint     auth required           pam_zfs_key.so.1 create
_EOM

Note that this only works when we are logging in to SSH with a password, not when doing pubkey authentication, because in that case the encryption passphrase for the home directory hasn't been supplied. However pubkey and gssapi will work for later authentications once the home directory is mounted, since the ZFS passphrase was supplied during that first ssh or gdm login.

Thursday Aug 25, 2011

100% of Solaris users use RBAC

Some of us in the Solaris Security Engineering group have been asked a few times recently questions like "so how many customers actually use Solaris RBAC?"

The answer we give is usually a variant of "For Solaris 10 onwards, 100% of users use RBAC".

Surely that is wrong; we can't guarantee that 100% of users of Solaris 10 and Solaris 11 are or will be using RBAC, can we?  We don't have data to back that up because we don't even know who all the users of Solaris actually are.

It actually is correct, and we don't need usage data to back it up.  The reason is that you can't turn RBAC off in Solaris 10 onwards: it is always in use in parts of the system that 100% of Solaris users always use.

The kernel always checks Solaris's fine grained privileges (82 distinct privileges in Solaris 11 Express), even if the process is running "as root".  So 100% of Solaris systems make RBAC privilege checks.
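
You can see this for yourself with ppriv(1), for example by counting the defined privileges and inspecting the sets of your own shell (the pid and shell name below are illustrative):

$ ppriv -l | wc -l
      82
$ ppriv $$
104657: bash
flags = <none>
        E: basic
        I: basic
        P: basic
        L: all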

SMF always checks RBAC authorisations for any enable/disable operation and any change to or viewing of a property on a service - even if you are running 'svcadm/svccfg' as root.  Also SMF itself uses RBAC to set the privileges of services (sometimes defined in RBAC profiles, sometimes defined directly in the method credential of the service definition).  Solaris doesn't run without SMF, so 100% of Solaris systems are using RBAC authorisation checks.
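
As a sketch of the delegation this enables (solaris.smf.manage.ssh is the authorisation used by the stock ssh service; 'darrenm' is an illustrative user):

# usermod -A solaris.smf.manage.ssh darrenm

darrenm can now restart the ssh service without being root or using su:

$ svcadm restart svc:/network/ssh:default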

Several other parts of Solaris 10 also make authorisation checks, and in Solaris 11 there will be an increased number of those in some of the core administration utilities, giving us more fine-grained control and enhanced separation of duty for some common administration tasks.   I'll post more on this when Solaris 11 is released.

In ZFS the operations performed by the zfs(1M) command first check if the user has an 'allow' delegation and then check privilege - again even if the user is root.
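
For example (an illustrative dataset and user; output abbreviated):

# zfs allow bob snapshot,mount rpool/export/home/bob
# zfs allow rpool/export/home/bob
---- Permissions on rpool/export/home/bob ----
Local+Descendent permissions:
        user bob mount,snapshot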

So 100% of Solaris users really do use RBAC - there is no means to turn it off - and this applies even if you use sudo rather than using a profile shell (eg /usr/bin/pfksh) or pfexec directly.

Tuesday Jan 04, 2011

When setuid root no longer means setuid root (Forced Privileges)

Starting with Solaris 10 (though similar functionality had been available for more than 15 years in Trusted Solaris variants) it was no longer necessary to be "root" to perform privileged operations via the Solaris kernel.  Instead, individual fine-grained privileges are checked by the kernel (and in some cases the X server as well).  A number of core setuid root binaries on Solaris were updated to drop all privileges they knew they wouldn't need, but they were still setuid root and thus ran as root for a very short period of time; others that weren't explicitly modified continued to run as root for their lifetime.

The setuid bit on a file system binary (or script) has traditionally indicated to the Solaris kernel that on exec(2) the uid of the process should be switched to that of the owner.  The most common use of this is to make programs run with "root" privileges. 
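
For example, the setuid bit shows up as the 's' in the owner execute position (an illustrative listing; size and date will vary):

$ ls -l /usr/sbin/ping
-r-sr-xr-x   1 root     bin        57308 Nov  4 12:00 /usr/sbin/ping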

Trusted Solaris 8 and older had the concept of forced privileges on binaries.  This was achieved by storing with the binary the list of privileges it should always be run with; this required a modified version of the UFS file system and, as a result, customised backup tools as well (the pax/tar/cpio/ufsdump provided with Solaris knew about these and other UFS extensions, but not all 3rd party backup software did).  This functionality wasn't carried forward into Solaris 10, which in addition to UFS also has ZFS (which is now the default and only choice for the root file system in Solaris 11 Express).

Solaris 11 Express includes a new implementation of the forced privileges mechanism.  Unlike the version from Trusted Solaris 8 and earlier it is file system agnostic and doesn't store additional information on disk with the binary.

When the kernel is processing an exec(2) it now treats setuid to root differently (setuid to any other uid, or setgid, behaves as in Solaris 10).  The kernel now looks for an entry in the "Forced Privileges" RBAC profile in exec_attr(4) to determine which privileges the program should run with. Instead of starting with uid root and all privileges, which was the previous behaviour, the program now runs with the current uid and just those additional privileges the "Forced Privileges" RBAC execution profile assigned to that pathname.
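
You can inspect the entry for a given binary directly (a sketch; the exec_attr(4) line shown is illustrative of the format):

# grep ping /etc/security/exec_attr
Forced Privileges:solaris:cmd:::/usr/sbin/ping:privs=net_icmpaccess,sys_ip_config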

We can easily see this difference in behaviour by showing how this works for /usr/sbin/ping.

First let's look with truss at what it looked like prior to the introduction of Forced Privileges:

#  truss -f -texec -p 20549
3735:    execve("/usr/sbin/ping", 0xFED50640, 0x080477EC)  argc = 2
3735:        *** SUID: ruid/euid/suid = 101 / 0 / 0  ***

As we can see, the real uid is 101 while the effective and saved uids are 0 (root).  Now let's compare that to the new behaviour:

# truss -f -texec -p 20549
3590:    execve("/usr/sbin/ping", 0xFED50638, 0x080477EC)  argc = 2
3590:        *** FPRIV: P/E: net_icmpaccess,sys_ip_config ***

This time the ruid/euid/suid doesn't change (all three stay at 101 in this example); instead we set the permitted (P) and effective (E) privilege sets to be the set obtained for /usr/sbin/ping in the "Forced Privileges" RBAC profile (net_icmpaccess,sys_ip_config in this case).

This means that at no time was ping running with uid 0 nor did it ever have all privileges, so we no longer require the code for ping to do its own privilege management.

If there isn't an entry in the "Forced Privileges" RBAC profile for the program being exec'd then the traditional behaviour of setting euid to 0 remains.

As a result Solaris 11 Express currently has 15 fewer "true" setuid binaries - note that the list of binaries in the "Forced Privileges" RBAC profile may change over time, including in software updates.

So setuid root no longer means setuid root in all cases.
