Thursday Jan 28, 2016

Is ZFS Encryption PCI-DSS Compliant?

Is ZFS encryption PCI-DSS compliant? No, it isn't, and I'll explain why.

PCI-DSS applies to a given merchant or financial institution; it does not evaluate or validate products. This is very different from Common Criteria (CC) or FIPS 140.

One of the many requirements of PCI-DSS is that certain types of data (credit card numbers and card holder data) are encrypted on persistent storage and in transit. There are many ways to meet that requirement; ZFS encryption can be one of them, and using Oracle DB TDE is another.

There is a peer standard called PA-DSS (Payment Application Data Security Standard), but storage is not a payment application, so PA-DSS does not apply to ZFS encryption either.

Even using a PA-DSS compliant application does not imply you have a PCI-DSS compliant deployment. The distinction is covered very well in this article on the PCI compliance guide website; I particularly like this quote:

"The bottom line is that only an organization can be validated to be PCI-DSS compliant, never an application or a system."

So we can't claim ZFS is PCI-DSS compliant, but then no other storage or database vendor can make that claim either. What we can say is that ZFS encryption can be used as part of a PCI-DSS solution to encrypt card holder data at rest. We can also say that we know of cases where ZFS encryption has been used as part of meeting the PCI-DSS requirements and has successfully passed an audit.

So the answer is "NO" ZFS encryption is NOT PCI-DSS compliance because that is an invalid question to ask.

In this case a useful question would be:

"Has ZFS Encryption been used as part of a PCI-DSS deployment for encrypting credit card numbers and/or card holder data ?" Then the answer is YES.

Tuesday Jul 07, 2015

Solaris new system calls: getentropy(2) and getrandom(2)

The traditional UNIX/Linux method of getting random numbers (bit streams really) from the kernel to a user space process was to open(2) the /dev/random or /dev/urandom pseudo devices and read(2) an appropriate amount from them, remembering to close(2) the file descriptor if your application or library cached it.  Solaris 11.3 adds two new system calls, getrandom(2) and getentropy(2), for getting random bit streams or raw entropy. These are compatible with APIs recently introduced in OpenBSD and Linux.

OpenBSD introduced a getentropy(2) system call that reads a maximum of 256 bytes from the kernel entropy pool and returns it to user space for use in user space random number generators (such as in the OpenSSL library). The getentropy(2) system call returns 0 on success; it always provides the full amount of data requested or fails completely, setting errno.  It is an error to request more than 256 bytes of entropy, and doing so causes errno to be set to EIO.

On Solaris the output of getentropy(2) is raw entropy and should not be used directly where cryptographically secure random bits are needed; in particular it must not be used where an IV or nonce is needed when calling a cryptographic operation.  It is intended only for seeding a user space RBG (Random Bit Generator) system. More specifically, the data returned by getentropy(2) has not had the required FIPS 140-2 DRBG processing applied to it.

Recent Linux kernels have a getrandom(2) system call that reads between 1 and 1024 bytes of randomness from the kernel.  Unlike getentropy(2) it is intended to be used directly for cryptographic use cases such as an IV or nonce.  The getrandom(2) call can be told whether to use the kernel pool usually used for /dev/random or the one for /dev/urandom; the GRND_RANDOM flag requests the former. If GRND_RANDOM is specified then getrandom(2) will block until sufficient randomness can be generated if the pool is low; if non-blocking behaviour is required then the GRND_NONBLOCK flag can be passed.

SYNOPSIS
#include <sys/random.h>
int getrandom(void *buf, size_t buflen, unsigned int flags);
int getentropy(void *buf, size_t buflen);

On Solaris, if GRND_RANDOM is not specified then getrandom(2) is always a non-blocking call. Note that this differs slightly from Linux but not in a way that impacts its usage.  The other difference is that on Solaris getrandom(2) will either fail completely or return a buffer filled with the requested size, whereas the Linux implementation can return partial buffers.  In order to ensure code portability, developers must check the return value of getrandom(2) every time it is called, for example:

#include <sys/random.h>
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>

size_t bufsz = 128;
char *buf;
int ret;

...
buf = malloc(bufsz);
...
errno = 0;
ret = getrandom(buf, bufsz, GRND_RANDOM);
/* On Solaris a short read never happens, but check anyway for portability with Linux. */
if (ret < 0 || (size_t)ret != bufsz) {
    perror("getrandom failed");
    ...
}

The output of getrandom(2) on Solaris has been through a FIPS 140-2 approved DRBG function as defined in NIST SP 800-90A.

In addition to the above two system calls, the OpenBSD arc4random(3C), arc4random_buf(3C) and arc4random_uniform(3C) functions are also now provided by libc; these are available by including <stdlib.h>.

OpenSSH in Solaris 11.3

Solaris 9 was the first release where we included an implementation of the IETF SSH client and server protocols. I led that project, and at the time I was also the document editor for the IETF standards documents. We started with OpenSSH, but for various reasons it ended up over time being a Solaris specific fork called SunSSH. We have regularly resynced with OpenSSH for features and bug fixes, but SunSSH remains a fork.

Starting with Solaris 11.3 we supply OpenSSH in addition to SunSSH.  The intent is that in some future release SunSSH will be removed leaving only OpenSSH. 

For Solaris 11.3 both OpenSSH and SunSSH can be installed on a machine at the same time or the administrator can choose to install only one.  SunSSH is delivered by pkg:/network/ssh and OpenSSH by pkg:/network/openssh.

Both packages effectively deliver the same svc:/network/ssh:default service; really it comes from pkg:/network/ssh-common. When both OpenSSH and SunSSH are installed, an IPS package mediator is used to select which one is run by the SMF service and which one is /usr/bin/ssh.  A system with OpenSSH set as the default would look like this:

$ pkg mediator ssh
MEDIATOR     VER. SRC. VERSION IMPL. SRC. IMPLEMENTATION
ssh          system            local      openssh
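
If you want to switch which implementation the SMF service and /usr/bin/ssh use, the mediator can be changed with 'pkg set-mediator'. A minimal sketch follows; the implementation names are assumptions based on the mediator output above, so check 'pkg mediator -a' on your system for the exact values:

# pkg set-mediator -I sunssh ssh      (make SunSSH the default implementation)
# pkg set-mediator -I openssh ssh     (switch back to OpenSSH)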

Our intent is that the OpenSSH delivered in Solaris has as few Solaris-specific changes applied as possible. We have managed to push some bug fixes upstream to the OpenSSH community, but there are still some Solaris-specific changes for enhancements we felt were important to customers migrating from SunSSH. These Solaris-specific changes are applied to OpenSSH during the build process using patch(1) and are thus maintained in a directory called patches. Some of the patches are purely build related and some are features. The current list of feature patches includes:

  • GSS credential storage
  • PAM Service Name per SSH userauth method as per SunSSH
  • PAM can not be disabled with the UsePAM option
  • DisableBanner option for ssh client: see ssh_config(4)

While the intent going forward is to keep up with the OpenSSH releases, we may choose to backport a fix from a later version of OpenSSH to fix a bug or security vulnerability rather than delivering the whole release.  Often the reasons for doing this will be related to the available Solaris release train at the time and the size of change in the later release. This means that the version of OpenSSH could change in a Solaris SRU.

The OpenSSH releases also include some very useful features that we hadn't ported over to SunSSH. The pkg mediator allows for selecting which binaries are the system default, but it doesn't help with the per user configuration file. In order to use features from OpenSSH when the user's home directory may also be used by a SunSSH client, you need the options that tell each implementation to ignore unknown options.  This is a feature that originated in SunSSH, but when the equivalent feature arrived in OpenSSH the option name was different.  To overcome this I have the following in my ~/.ssh/config file:

IgnoreUnknown IgnoreIfUnknown
IgnoreIfUnknown IgnoreUnknown,ControlMaster,ControlPersist,ControlPath
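
With those two lines in place I can freely use OpenSSH-only options in my Host blocks; a minimal sketch (the host name and timeout value here are made up) would be:

Host buildhost
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m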

That allows me to have Host blocks that use ControlMaster configuration, which OpenSSH knows about but SunSSH doesn't, and ensures that neither of them complains about unknown options. Some of the other important differences between SunSSH and OpenSSH are:

SunSSH                       OpenSSH
UseOpenSSLEngine             No replacement; OpenSSL defaults to using hardware acceleration on modern SPARC and Intel CPUs
MaxAuthTriesLog              OpenSSH always logs at MaxAuthTries / 2
PreUserAuthHook              Use AuthorizedKeysCommand instead
IgnoreIfUnknown              IgnoreUnknown
Message localisation         Client and server messages are no longer localised
Use of /etc/default/login    No replacement; set policy using PAM and sshd_config

Customising Solaris Compliance Policies

When we introduced the compliance framework in Solaris 11.2 there was no easy way to customise (tailor) the policies to suit individual machine or site deployment needs. While it was certainly possible for users familiar with the XCCDF/OVAL policy language, it wasn't easy to do in a way that preserved your customisations while still allowing access to new and updated policy rules when the system was updated.

To address this a new subcommand for compliance(1M) has been added that allows creation of a tailoring.  The initial release of tailoring in Solaris 11.3 allows the enabling and disabling of individual checks, and the team is already working on enhancing it to support variables in a future release.

The default and simplest way of using 'compliance tailor' is to use the interactive pick tool:

# compliance tailor -t mysite
*** compliance tailor: No existing tailoring 'mysite', initializing
tailoring:mysite> pick



The above shows the interactive mode where using 'x' or 'space' allows us to enable or disable an individual test.  Note also that since the Solaris 11.2 release all the tests have been renumbered and now have unique rule identifiers that are stable across releases of Solaris.  The same rule number always refers to the same test in all of the security benchmark policy files delivered with Solaris.

When exiting from the interactive pick mode just type 'commit' to write this out to a locally installed tailoring; that will create an XCCDF tailoring file under /var/share/compliance/tailorings.  Those tailoring files should not be copied from release to release.

There is also an 'export' action for the tailoring subcommand that allows you to save off your customisations for importing into a different system; this works similarly to zonecfg(1M) export.

$ compliance tailor -t mysite export | tee /tmp/mysite.out
set tailoring=mysite
# version=2015-06-29T14:16:34.000+00:00
set benchmark=solaris
set profile=Baseline
# OSC-16005: All local filesystems are ZFS
exclude OSC-16005
# OSC-15000: Find and list files with extended attributes
include OSC-15000
# OSC-35000: /etc/motd and /etc/issue contain appropriate policy text
include OSC-35000

The saved command file can then be used for input redirection to create the same tailoring on another system.
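
For example, a sketch assuming the export above was saved to /tmp/mysite.out and copied over to the second system:

# compliance tailor -t mysite < /tmp/mysite.out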

To run an assessment of the system using a tailoring we simply need to do this:

# compliance assess -t mysite
Assessment will be named 'mysite.2015-06-29,15:22'
Title   Package integrity is verified
Rule    OSC-54005
Result  PASS
...

Wednesday Apr 15, 2015

OpenSSH sftp(1) 'ls -l' vs 'ls -lh' and uid/gid translation

I recently had someone question why, with OpenSSH, when they use sftp(1) and do 'ls -l' they get username/groupname in the output, but when they do 'ls -lh' the file sizes are translated into SI units while the output now shows the uid/gid. It took myself and another engineer a while to work through this, so I thought I would blog the explanation for what is going on.

The protocol used by sftp isn't actually an IETF standard.  OpenSSH (and SunSSH) uses this document:

http://tools.ietf.org/html/draft-ietf-secsh-filexfer-02 [This is actually protocol version 3]

In that version of the draft there was a 'longname' field in the SSH_FXP_NAME response.  The standard explicitly says:

    The SSH_FXP_NAME response has the following format:

       uint32     id
       uint32     count
       repeats count times:
           string     filename
           string     longname
           ATTRS      attrs

...

   The format of the `longname' field is unspecified by this protocol.
   It MUST be suitable for use in the output of a directory listing
   command (in fact, the recommended operation for a directory listing
   command is to simply display this data).  However, clients SHOULD NOT
   attempt to parse the longname field for file attributes; they SHOULD
   use the attrs field instead.

When you do 'ls -l' the sftp client is displaying longname, so it is the server that created that. The longname is generated on the server and looks like the output of 'ls -l'; the uid/gid to username/groupname translation was done on the server side.

When you add in '-h' the sftp client is obeying the draft standard and not parsing the longname field because it has to pretty print the size into SI units.  So it must just display the uid/gid.

The OpenSSH code explicitly does not attempt to translate the uid/gid because it has no way of knowing if the nameservice domain on the remote and local sides is the same.  This is why when you do 'lls -lh' you do get SI units and translated names, but when you do 'ls -lh' you get the untranslated uid/gid.

In the very next version of the draft:

https://filezilla-project.org/specs/draft-ietf-secsh-filexfer-03.txt [ Protocol version 4]

The format of SSH_FXP_NAME and importantly ATTRS changes very significantly.  The longname is dropped and the ATTRS no longer has a UNIX uid/gid but an NFSv4 owner/group.

OpenSSH never implemented anything past the -02 draft.  Attempts at standardising the SFTP protocol eventually stopped at the -13 draft (which was protocol version 6) in 2006.

The proftpd server also has the ability to do the sftp protocol and it implements the -13 draft.  However Solaris 11 doesn't currently ship with the mod_sftp module, and even if it did the /usr/bin/sftp client doesn't talk that protocol version so an alternate client would be needed too.

Wednesday Jan 07, 2015

ZFS Encryption in Oracle ZFS Storage Appliance

With the 2013.1.3.0 (aka OS8.3) release of the software for the Oracle ZFS Storage Appliance the underlying ZFS encryption functionality is now available for use.  This is the same ZFS encryption that is available in general purpose Solaris but with appliance interfaces added for key management.

I originally wrote the following quick start guide for our internal test engineers and other developers while we were developing the functionality, and since the functionality is now available I thought I'd share it here. It walks through the required steps to configure encryption on the ZFSSA and perform some basic steps with keys and encrypted shares. Note that the BUI and CLI screenshots are not showing exactly the same system and configuration.

Setup Encryption with LOCAL keystore (CLI)

The first step is to set up the master passphrase; then we can create keys that will be used for assigning to encrypted shares.

brm7330-020:> shares encryption 
brm7330-020:shares encryption> show
Children:
                              okm => Manage encryption keys
                            local => Manage encryption keys

brm7330-020:shares encryption> local
brm7330-020:shares encryption local> show
Properties:
             master_passphrase = 

Children:
                       keys => Manage this Keystore's Keys

brm7330-020:shares encryption local> set master_passphrase
Enter new master_passphrase: 
Re-enter new master_passphrase: 
             master_passphrase = *********

Setup Encryption with LOCAL keystore (BUI)

[BUI screenshot: encryption-local.png]

Creating Keys

Now let's create our first key. The only thing we have to provide is the keyname field; this is the name that is used in the CLI and BUI when assigning a key to a project or share.

It is possible to provide a hex encoded raw 256 bit key in the key field; if that is not provided, a new randomly generated key value is used instead.  Note that the keys are stored in an encrypted form using the master_passphrase supplied above. For this simple walkthrough we will let the system generate the key value for us.
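
If you did want to supply your own value, any source of 32 random bytes encoded as hex will do. As a sketch, on a separate Solaris or other UNIX host (this is not an appliance interface) you could generate one with:

$ openssl rand -hex 32

and paste the resulting 64 character string into the key field.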

brm7330-020:shares encryption local> keys create
brm7330-020:shares encryption local> show
Properties:
                        cipher = AES
                           key = 
                       keyname = (unset)
brm7330-020:shares encryption local key (uncommitted)> set keyname=MyFirstKey
                       keyname = MyFirstKey (uncommitted)
brm7330-020:shares encryption local key (uncommitted)> commit

If we were doing this from the BUI it would look like this:

[BUI screenshot: encryption-local-new-key.png]

Setup Encryption with OKM keystore (CLI)

For OKM you need to set the agent_id, the IP address (NOT hostname) in server_addr, and the registration_pin given to you by your OKM security officer. The example below shows an already configured setup for OKM.

brm7330-020:> shares encryption 
brm7330-020:shares encryption> show
Children:
                              okm => Manage encryption keys
                            local => Manage encryption keys
brm7330-020:shares encryption> okm
brm7330-020:shares encryption okm> show
Properties:
                      agent_id = ExternalClient041
              registration_pin = *********
                   server_addr = 10.80.180.109
Children:
                             keys => Manage this Keystore's Keys

We are now ready to create our first encrypted share/project.

Creating an Encrypted Share

Creation of an encrypted project results in all shares in that project being encrypted; by default the shares (filesystems & LUNs) will inherit the encryption properties from the parent project.

brm7330-020:shares> project myproject
brm7330-020:shares myproject (uncommitted)> set encryption=aes-128-ccm
                    encryption = aes-128-ccm (uncommitted)
brm7330-020:shares myproject (uncommitted)> set keystore=LOCAL
                      keystore = LOCAL (uncommitted)
brm7330-020:shares myproject (uncommitted)> set keyname=MyFirstKey 
                       keyname = MyFirstKey (uncommitted)
brm7330-020:shares myproject (uncommitted)> commit
brm7330-020:shares> 

That is it; now all shares we create under this project are automatically encrypted with AES 128 CCM using the key named "MyFirstKey" from the LOCAL keystore.

Let's now create a filesystem in our new project and show that it inherited the encryption properties:

brm7330-020:shares> select myproject
brm7330-020:shares myproject> filesystem f1
brm7330-020:shares myproject/f1 (uncommitted)> commit
brm7330-020:shares myproject> select f1
brm7330-020:shares myproject/f1> get encryption keystore keyname keystatus
                    encryption = aes-128-ccm (inherited)
                      keystore = LOCAL (inherited)
                       keyname = MyFirstKey (inherited)
                     keystatus = available
brm7330-020:shares myproject/f1> done

 For the BUI the filesystem and LUN creation dialogs allow selection of encryption properties.

[BUI screenshot: encryption-create-share.png]

Key Change

It is possible to change the key associated with a Project/Share at any time, even while it is in use by client systems.

Let's now create an additional key and perform a key change on the project we have just created.

brm7330">-00:> shares encryption local keys create
brm7330-020:shares encryption local key (uncommitted)> set keyname=MySecondKey
                       keyname = MySecondKey (uncommitted)
brm7330-020:sares encryption local key (uncommitted)> commit
    

Now let's change the key used for "myproject" and all the shares in it that are inheriting the key properties:

brm7330-020:> shares select myproject 
brm7330-020:shares myproject> set keyname=MySecondKey
                       keyname = MySecondKey (uncommitted)
brm7330-020:shares myproject> commit

If we look at the keyname property of our share "myproject/f1" we will see it has changed. The filesystem remained shared during the key change and was accessible to clients writing to it.

brm7330-020:shares myproject> select f1 get keyname
                       keyname = MySecondKey (inherited)
brm7330-020:shares myproject>
    

[BUI screenshot: encryption-project-general.png]

Deleting Keys

Deletion of a key is a very fast and effective way to make a large amount of data inaccessible.  Keys can be deleted even if they are in use.  If the key is in use a warning will be given and confirmation is required.  All shares using that key will be unshared and will no longer be able to be accessed by clients.

Example of deleting a key that is in use:

brm7330-020:shares encryption local keys> destroy keyname=MyFirstKey
This key has the following dependent shares:
  pool-010/local/HR
  pool-010/local/HR/EMEA
  pool-010/local/HR/US
Destroying this key will render the data inaccessible. Are you sure? (Y/N) Y

A similar message is displayed via a popup dialog in the BUI.

[BUI screenshot: encryption-delete-key.png]

Now let's look at a share in a project that was using that key:

brm7330-010:> shares select HR select EMEA
brm7330-010:shares HR/EMEA> get encryption keystore keyname keystatus
                    encryption = aes-128-ccm (inherited)
                      keystore = LOCAL (inherited)
                       keyname = 1 (inherited)
                     keystatus = unavailable
Errors:
        key_unavailable

Thursday Nov 27, 2014

CVE metadata in Solaris IPS packages for improved Compliance reporting

I'm pleased to announce that the Solaris 11 /support repository now contains metadata for tracking security vulnerability fixes by the assigned CVE number. This was a project that Pete Dennis and I have been working on for some time now.  While we worked together on the design of the metadata and how it should be presented I'm completely indebted to Pete for doing the automation of the package generation from the data in our bug databases - we don't want humans to generate this by hand because it is error prone.

What does this mean for you, the Solaris system admin or a compliance auditor?

The intent of this is to make it much easier to determine if your system has all the known and required security vulnerability fixes, and to save you from attempting to derive this information from other sources. This is critically important since sometimes in Solaris we choose to fix a bug in an upstream FOSS component by applying the code patch rather than consuming a whole new version - this is a risk/benefit trade off that we take on a case by case basis.  While all this data has been available for some time in My Oracle Support documents, it wasn't easily available in a scriptable form, since it was intended to be read by humans.

The implementation of this is designed to be applicable to any other Oracle, or 3rd party, products that are also delivered in IPS form.

For Solaris we have added a new metadata package called pkg:/support/critical-patch-update/solaris-11-cpu. This new package lives above 'entire' in the dependency hierarchy and contains only metadata that maps the CVE-IDs to the package version that contains the fix.  This allows us to retrospectively update the critical-patch-update metadata where an already shipping version contains the fix for a given CVE.

Each time we publish a new critical patch update, a new version of the package is published to the /support repository along with all the new versions of the packages that contain the fixes.  The versioning for this package is @YYYY.MM.VV, where VV is usually going to be '1' but allows us to respin/republish a given critical patch update within the same month; note that VV is not DD, it is NOT the day of the month.

We can search this data using either the web interface on https://pkg.oracle.com/solaris/support or using the CLI.  So let's look at some examples of the CLI searching.  Say we want to find out which packages contain the fix for the bash(1) Shellshock vulnerability, and we know that one of the CVE-IDs for that is CVE-2014-7187:

# pkg search :CVE-2014-7187:
INDEX         ACTION VALUE                                                PACKAGE
CVE-2014-7187 set    pkg://solaris/shell/bash@4.1.11,5.11-0.175.2.2.0.8.0 pkg:/support/critical-patch-update/solaris-11-cpu@2014.10-1
CVE-2014-7187 set    pkg://solaris/shell/bash@4.1.11,5.11-0.175.2.3.0.4.0 pkg:/support/critical-patch-update/solaris-11-cpu@2014.10-1

That output tells us which packages and versions contain a fix and which critical patch update it was provided in.

For another example, let's find out the whole list of fixes in the October 2014 critical patch update:

# pkg search -r info.cve: | grep 2014.10
info.cve   set    CVE-1999-0103 pkg:/support/critical-patch-update/solaris-11-cpu@2014.10-1
info.cve   set    CVE-2002-2443 pkg:/support/critical-patch-update/solaris-11-cpu@2014.10-1
info.cve   set    CVE-2003-0001 pkg:/support/critical-patch-update/solaris-11-cpu@2014.10-1
....

Note that the placement of the colon (:) is very significant in IPS, that the key name is 'info.cve', and that the values have CVE in upper case.

The way that metadata is setup allows for the cases where a given CVE-ID applies to multiple packages and also where a given package version contains fixes for multiple CVE-IDs.

If we simply want to know if the fix for a given CVE-ID is installed, then using 'pkg search -l' with the CVE-ID is sufficient, e.g.:

# pkg search -l CVE-2014-7187
INDEX      ACTION VALUE         PACKAGE
info.cve   set    CVE-2014-7187 pkg:/support/critical-patch-update/solaris-11-cpu@2014.10-1

If it wasn't installed then we would have gotten no output.

If you want to have a system track only the Critical Patch Updates and not every SRU available in the /support repository, then when you do an update, instead of doing 'pkg update' or 'pkg update entire@<latest>' do 'pkg update solaris-11-cpu@latest'.  Note that initially systems won't have the 'solaris-11-cpu' package installed since, as I mentioned previously, it is "above" the entire package.  When you update to the latest version of the Critical Patch Update package it will install any intermediate SRU packages that were released between this version and the prior one you had installed.
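
Put together, a sketch of moving a system onto the critical patch update train might look like this (the short package name should resolve to pkg:/support/critical-patch-update/solaris-11-cpu; the exact versions pulled in depend on what is in the /support repository at the time):

# pkg install solaris-11-cpu
# pkg update solaris-11-cpu@latest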

For some further examples have a look at MOS Doc ID 1948847.  Currently only the Solaris 11.2 critical patch update metadata is present but we intend to very soon backpublish the data for prior Solaris 11 critical patch updates as well.

Hope this helps with your compliance auditing.  If you are an auditor, no more arguments with the sys admin teams about whether a fix is installed. If you are a system admin, more time for your "real job" since you can now easily give the auditor the data they need.

The compliance team is also working on updating the Solaris and PCI-DSS security benchmarks we deliver with the compliance(1M) framework to use this new CVE metadata in determining if the system is up to date, since from a compliance view point we really want to know that all known and available security fixes are installed, not that you are running the absolutely latest and greatest version of Solaris.

I have also been working with the OVAL community (part of SCAP) to define a schema for the Solaris IPS system.  This is due to be released with a future version of the OVAL standard.  That will allow us to use this same framework of metadata to also deliver OVAL xml files that map the CVE-IDs to packages.  When that is available we may choose to deliver OVAL xml files as part of the critical-patch-update package as an additional method of querying the same metadata. 

-- Darren

Thursday Jul 31, 2014

OpenStack Security integration for Solaris 11.2

As a part-time member/meddler of the Solaris OpenStack engineering team I was asked to create some posts for the team's new OpenStack blog.

I've so far written up two short articles, one covering using ZFS encryption with Cinder, and one on Immutable OpenStack VMs.


Friday May 23, 2014

Overview of Solaris Zones Security Models

Over the years of explaining the security model of Solaris Zones and LDOMs to customers' "security people" I've encountered two basic "schools of thought".  The first is "shared kernel bad"; the second is "shared kernel good".

Which camp is right?  Well both are, because there are advantages to both models.

If you have a shared kernel then the policy engine has more information about what is going on and can make more informed access and data flow decisions; however, if an exploit should happen at the kernel level it has the potential to impact multiple (or all) guests.

If you have separate kernels then a kernel level exploit should only impact that single guest, except if it then results in a VM breakout.

Solaris non global zones fall into the "shared kernel" style.  Solaris Zones are included in the Solaris 11 Common Criteria Evaluation for the virtualisation extension (VIRT) to the OSPP.  Solaris Zones are also the foundation of our multi-level Trusted Extensions feature and are used for separation of classified data by many government/military deployments around the world.

LDOMs are "separate kernel", but LDOMs are also unlike hypervisors in the x86 world because we can shutdown the underlying control domain OS (assuming the guests have another path to the IO requirements they need, either on their own or from root-domains or io-domains).  So LDOMs can be deployed in a way that they are more protected from a VM breakout being used to cause a reconfiguration of resources.  The Solaris 11 CC evaluation still applies to Solaris instances running in an LDOM regardless of wither they are the control-domain, io-domain, root-domain or guest-domain.

Solaris 11.2 introduces a new brand of zone called "Kernel Zones". They look like native Zones, you configure them like native Zones, and they actually are Solaris zones, but with the ability to run a separate kernel.  Having their own kernel means that they support suspend/resume independent of the host OS - which gives us warm migration.  In particular "Kernel Zones" are not like popular x86 hypervisors such as VirtualBox, Xen or VMWare.

General purpose hypervisors, like VirtualBox, support multiple different guest operating systems so they have to virtualise the hardware. Some hypervisors (most type 2 but even some type 1) support multiple different host operating systems for providing the services as well. So this means the guest can only assume virtualised hardware.

Solaris Kernel Zones are different: the zone's kernel knows it is running on Solaris as the "host", and the Solaris global zone "host" also knows that the kernel zone is Solaris.  This means we get to make more informed decisions about resources, general requirements and security policy. Even with this we can still host Solaris 11.2 kernel zones on the internal development release of Solaris 12 and vice versa.

Note that what follows is an outline of implementation details that are subject to change at any time: the kernel of a Solaris Kernel Zone is represented as a user land process in a Solaris non global zone.  That non global zone is configured with less privilege than a normal non global zone would have and it is always configured as an immutable zone.  So if there happened to be an exploit of the guest kernel that resulted in a VM break out, you would end up in an immutable non global zone with lowered privilege.

This means that we can have the advantages of both the "shared kernel" and "separate kernel" security models with Solaris Kernel Zones, and we keep the management simplicity of traditional Solaris Zones (# zonecfg -z mykz 'create -t SYSsolaris-kz' && zoneadm -z mykz install && zoneadm -z mykz boot), as spelled out below.
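
Spelled out, those inline commands are simply the usual zone workflow, here using the hypothetical zone name mykz:

# zonecfg -z mykz 'create -t SYSsolaris-kz'
# zoneadm -z mykz install
# zoneadm -z mykz boot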

If you want even more layers of protection on SPARC it is possible to host Kernel Zones inside a guest LDOM.


Tuesday Apr 29, 2014

Using /etc/system.d rather than /etc/system to package your Solaris kernel config

The request for an easy way to package Solaris kernel configuration (/etc/system basically) came up both via the Solaris Customer Advisory Board meetings and in requests from customers with early access to Solaris 11.2 via the Platinum Customer Program.  I also had another fix for the Solaris Cryptographic Framework that I needed to implement to stop cryptoadm(1M) from writing to /etc/system (some of the background on why that is needed is in my recent blog post about FIPS 140-2).

So /etc/system.d was born.  My initial plan for the implementation was to read the "fragment" files directly from the kernel. However that is very complex to do at the time we need to read these; since it happens (in kernel boot time scales) eons before we have the root file system mounted. We can however read from a well known file name that is in the boot archive.

The way I ended up implementing this is that during boot archive creation (either manually running 'bootadm update-archive' or as a result of BE or packaging operations or just a system reboot) we assemble the content of /etc/system.d into a single well known file, /etc/system.d/.self-assembly (which is considered a Private interface).  We read the files in /etc/system.d/ in C locale collation order and ignore all files that start with a "." character; this ensures that the assembly is predictable and consistent across all systems.

I then had to choose whether /etc/system.d or /etc/system "wins" if a variable happens to get set in both.  The decision was that /etc/system is read second and thus wins; this preserves existing behaviours.

I also enhanced the diagnostic output from when the system file parser detects duplication so that we could indicate which file it was that caused the issue. When bootadm creates the .self-assembly file it includes START/END comment markers so that you will be able to easily determine which file from /etc/system.d delivered a given setting.
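
If you want to see the assembled result yourself, a sketch (remembering that .self-assembly is a Private interface, so don't build tooling around its format):

# bootadm update-archive
# cat /etc/system.d/.self-assembly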

So now you can much more easily deliver any Solaris kernel customisations you need by using IPS to deliver fragments (one line or  many) into /etc/system.d/ instead of attempting to modify /etc/system via first boot SMF services or other scripting.  This also means they apply on first boot of the image after install as well. 

So how do I pick which file name in /etc/system.d/ to use so that it doesn't clash with files delivered by other people?  The recommendation (which will be documented in the man pages and in /etc/system itself) is to use the full name of the IPS package (with '/' replaced by ':') as the prefix or name of any files you deliver to /etc/system.d/.
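
For example (a sketch; the package name, module and tunable below are made up): if your kernel module were delivered by pkg:/site/mydriver, you might deliver a one line fragment as /etc/system.d/site:mydriver containing:

set mydrv:mydrv_debug = 1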

As part of the same change I updated cryptoadm(1M) and dtrace(1M) to no longer write to /etc/system but instead write to files in /etc/system.d/ and I followed my own advice on file naming!

Information on how to get the Solaris 11.2 Beta is available from this OTN page.

Note that this particular change came in after the Solaris 11.2 Beta build was closed so you won't see this in Solaris 11.2 Beta (which is build 37).

Solaris 11 Compliance Framework

During the Solaris 11 launch (November 2011) one of the questions I was asked from the audience was from a retail customer asking for documentation on how to configure Solaris to pass a PCI-DSS audit.  At that time we didn't have anything beyond saying that Solaris was secure by default and it was no longer necessary to run the Solaris Security Toolkit to get there.  Since then we have produced a PCI-DSS white paper with Coalfire (a PCI-DSS QSA) and we have invested a significant amount of work in building a new Compliance Framework and making compliance a "lifestyle" feature in Solaris core development.

We delivered OpenSCAP in Solaris 11.1 since SCAP is the foundation language of how we will provide compliance reporting. So I'm pleased to be able to finally talk about the first really significant part of the Solaris compliance infrastructure, which is part of Solaris 11.2.

Starting with Solaris 11.2 we have a new command, compliance(1M), for running system assessments against security/compliance benchmarks and for generating html reports from those.  For now this only works on a single host, but the team is hard at work adding multi-node support (using the Solaris RAD infrastructure) for a future release.

The much more significant part of what the compliance team has been working on is "content".  A framework without any content is just a new "box of bits, lots of assembly required" and that doesn't meet the needs of busy Solaris administrators.  So starting with Solaris 11.2 we are delivering our interpretation of important security/compliance standards such as PCI-DSS.  We have also provided two Oracle authored policies: 'Solaris Baseline' and 'Solaris Recommended'; a freshly installed system should be getting all passes on the Baseline benchmark.  The checks in the Recommended benchmark are those that are a little more controversial and/or take longer to run.

Let's dive in and generate an assessment and report from one of the Solaris 11.2 compliance benchmarks we provide:

# pkg install security/compliance 
# compliance assess
# compliance report

That will give us an html report that we can then view.  Since we didn't give any compliance benchmark name it defaults to 'Solaris Baseline', so now let's run the PCI-DSS benchmark. The 'security/compliance' package has a group dependency for 'security/compliance/benchmark/pci-dss' so it will be installed already, but if you don't want it you can remove that benchmark and keep the others and the infrastructure.

# compliance assess -b pci-dss
Assessment will be named 'pci-dss.Solaris_PCI-DSS.2014-04-14,16:39'
# compliance report -a pci-dss.Solaris_PCI-DSS.2014-04-14,16:39

If we want the report to only show those tests that failed we can do that like this:

# compliance report -s fail -a pci-dss.Solaris_PCI-DSS.2014-04-14,16:39

We understand that many of your Solaris systems won't match up exactly to the benchmarks we have provided, and as a result we have delivered the content in a way that you can customise it. Over time the ability to build custom benchmarks from the checks we provide will become part of the compliance(1M) command (tailoring was added in Solaris 11.3, so the information below has been superseded), but for now you can enable/disable checks by editing a copy of the XML files. Yes, I know many of you don't like XML, but for just this part it isn't too scary; crafting a whole check from scratch is hard though, but that is the SCAP/XCCDF/OVAL language for you!

So for now here is the harder than it should be way to customise one of the delivered benchmarks, using the PCI-DSS benchmark as an example:

# cd /usr/lib/compliance/benchmarks
# mkdir example
# cd example
# cp ../pci-dss/pci-dss-xccdf.xml example-xccdf.xml
# ln -s ../../tests
# ln -s example-xccdf.xml xccdf.xml

# vi example-xccdf.xml

In your editor you are looking for lines that look like this to enable or disable a given test:

<select idref="OSC-27505" selected="true" />

You probably also want to update these lines to indicate that it is your benchmark rather than the original we delivered.

<status date="2013-12-12">draft</status>
<title>Payment Card Industry Data Security Standard</title>
<description>solaris-PCI-DSS-v.1</description>

Once you have made the changes you want, exit from your editor and run 'compliance list' and you should see your example benchmark listed; you can run assessments and generate reports from it just as above.  It is important you do this by making a copy of the xccdf.xml file, otherwise the 'pkg verify' test is always going to fail and, more importantly, your changes would be lost on package update.

Note that we re-numbered these tests in a Solaris 11.2 SRU and in 11.3 to provide a persistent unique identifier and namespace for each of the tests we deliver; it just didn't make the cut off for the Solaris 11.2 release.

I would really value feedback on the framework itself and probably even more importantly the actual compliance checks that our Solaris Baseline, Solaris Recommended, and PCI-DSS security benchmarks include.

Updated August 6th 2015 to add information about Solaris 11.3 changes.

Wednesday Apr 16, 2014

Is FIPS 140-2 Actively harmful to software?

Solaris 11 recently completed a FIPS 140-2 validation for the kernel and userspace cryptographic frameworks.  This was  a huge amount of work for the teams and it is something I had been pushing for since before we wrote a single line of code for the cryptographic framework back in 2000 during its initial design for Solaris 10.

So you would imagine I'm happy, right?  Well not exactly; I'm glad I won't have to keep answering questions from customers as to why we don't have a FIPS 140-2 validation, but I'm not happy with the process or what it has done to our code base.

FIPS 140-2 is an old standard that doesn't deal well with modern systems and especially doesn't fit nicely with software implementations.  It is very focused on standalone hardware devices, and plugin hardware security modules or similar physical devices.  My colleague Josh over in Oracle Seceval has already posted a great article on why we only managed to get FIPS 140-2 @ level 1 instead of level 2.  So I'm not going to cover that, but instead talk about some of the technical code changes we had to make in order to "pass" our validation of FIPS 140-2.

There are two main parts to completing a FIPS 140-2 validation. The first part is CAVP (Cryptographic Algorithm Validation Program); this is about proving your implementation of a given algorithm is correct using NIST assigned test vectors.  This part went relatively quickly and easily and has the potential to find bugs in crypto algorithms that otherwise appear to be working correctly.  The second part is CMVP (Cryptographic Module Validation Program); this part looks at the security model of the whole "FIPS 140-2 module". In our case we had separate validations for the kernel crypto framework and the userspace crypto framework.

CMVP requires we draw a boundary around the delivered software components that make up the FIPS 140-2 validation boundary - so files in the file system.  Ideally you want to keep this as small as possible so that non crypto relevant libraries and tools are not part of the FIPS 140-2 boundary. We certainly made some mistakes drawing our boundary in userspace since it was a little larger than it needed to be.  We ended up with some "utility" libraries inside the boundary, so the good software engineering practice of factoring out code actually made our FIPS 140-2 boundary bigger.

Why does the FIPS 140-2 boundary matter?  Well, unlike in Common Criteria with its flaw remediation, in the FIPS 140-2 validation world you can't make any changes to the compiled binaries that make up the boundary without potentially invalidating the existing validation, which means having to go through some or all of the process again. Importantly, this costs real money and a significant amount of elapsed time.

It isn't even possible to fix "obvious" bugs such as memory leaks, or even things that might lead to vulnerabilities, without at least engaging with a validation lab.  This is bad for overall system security; after all, isn't FIPS 140-2 supposed to be a security standard?  I can see, with a bit of squinting, how this can maybe make some sense in a hardware module world, but it doesn't make any sense for software.

We also had to add POST (Power On Self Test) code that runs known answer tests for all the FIPS 140-2 approved algorithms that are implemented inside the boundary at "startup time" and before any consumer outside of the framework can use the crypto interfaces. 

For our kernel framework we implemented this using the module init hooks and also leveraged the fact that the kcf module itself starts very early in boot (long before we even mount the root file system from inside the kernel).  Since kernel modules are generally only unloaded to be updated, the impact of having to do this self test on every startup isn't a big deal.

However in userspace we were forced, because of "Implementation Guidance" (I'll get back later to why it isn't really guidance), to do this in every process that directly or indirectly causes the cryptographic framework libraries to be loaded.  This is really bad and is counter to sensible software engineering practice. On general purpose modern operating systems (well, anything from the last 15+ years really) like Solaris, shared library pages are mapped shared so the same readonly pages of code are being used by all the processes that start up.  So this just wastes CPU resources and causes performance problems for short lived processes.  We measured the impact this had on Solaris boot time and it was, if I'm remembering correctly, about a 9% increase in the time it takes to boot to multi-user.

I've actually spoken with NIST about the "always on POST" and we tried hard to come up with an alternative solution, but so far we can't seem to agree on a method that would allow this to be done just once at system boot and only done again if the on disk binaries are actually changed (which we can easily detect!).

Now let's combine these last two things: we had to add code that runs every time our libraries load, and we can't make changes to the delivered binaries without possibly causing our validation to become invalid.  Solaris actually had a bug in some of the new FIPS 140-2 POST code in userspace that had a risk of a file descriptor leak (it wasn't an exploitable security vulnerability and it was only one single fd), but we couldn't change that without putting the validation of those binaries at risk.  This is bad for customers that are forced by their governments or other security standards to run with FIPS 140-2 validated crypto modules, because sometimes they might have to miss out on critical fixes.

I promised I'd get back to "Implementation Guidance". This is really a roundabout way of updating the standard with new interpretations that often look to developers like whole new requirements (that we were supposed to magically know about) without the standard being revised.  While the approved validation labs do get pre-review of these new or updated IGs, the impact for vendors is huge.   A module that passes FIPS 140-2 (which is a specific revision, the current one as of this time, of the standard) today might not pass FIPS 140-2 in the future - even if nothing was changed.

In fact we are potentially in this situation with Solaris 11.  We have completed and passed a FIPS 140-2 validation, but due to changes in the Implementation Guidance we aren't sure we would be able to submit the identical code again and pass. So we may have to make changes just to pass new or updated FIPS 140-2 IGs, changes that have no functional benefit to our customers.

This has serious implications for software implementations of cryptographic modules.  I can understand that if we change any of the core crypto algorithm code we should run the CAVP test vectors again - and in fact we do that internally using our test suite for all changes to the crypto framework anyway (our test suite is actually much more comprehensive than what FIPS 140 requires) - but not being able to make simple bug fixes or changes to non algorithm code is not good for software quality.

So what do we do in Solaris?  We make the bug fixes and add new non FIPS 140-2 relevant algorithms (such as Camellia) anyway, because most of our customers don't care about FIPS 140-2, and even many of those that do only care to "tick the box" that the vendor has completed the validation.

In Solaris the kernel and userland cryptographic frameworks always contain the FIPS 140-2 required code, but it is only enabled if you run 'cryptoadm enable fips-140'.  This turns on the FIPS 140-2 POST checking and a few other runtime checks.
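
For example (the enable command is from above; I believe cryptoadm(1M) also has matching 'disable' and 'list' forms for fips-140, but check the man page on your release):

# cryptoadm enable fips-140
# cryptoadm list fips-140
# cryptoadm disable fips-140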

So should I run Solaris 11 with FIPS 140-2 mode enabled?

My personal opinion is that unless you have a very hard requirement to do so I wouldn't - it is the same crypto algorithm and key management code you are running anyway but you won't have the pointless POST code running that will hurt the start up time of short lived processes. Now having said that my day to day Solaris workstation (which runs the latest bi weekly builds of the Solaris 12 development train) does actually run in FIPS 140-2 mode so that I can help detect any possible issues in the FIPS 140-2 mode of operating long before a code release gets to customers.  We also run our test suites with it enabled and disabled.

I really hope that when a revision to FIPS 140 finally does come around (it is already many years behind schedule) it will deal better with software implementations. When FIPS 140-3 was first in public review I sent in a lot of comments on that area.   I really hope that the FIPS 140 program can adopt a sensible approach to allowing vendors to provide bug fixes without having to redo validations - in particular it should not cost the vendor any time or money beyond what they normally spend themselves.

In the meantime the Solaris Cryptographic Framework team is hard at work: fixing bugs, improving performance, adding new features and algorithms, and (grudgingly) adding what we think will allow us to pass a future FIPS 140 validation based on the currently known IGs.

-- Darren

Tuesday Mar 18, 2014

NIAP Recommends no further Common Criteria Evaluation of Operating Systems & DBMS

NIAP has posted its public position statements for future Common Criteria evaluation profiles for General Purpose Operating Systems (e.g. Solaris, Linux, Windows, MacOS, AIX, ...) and DBMS (e.g. Oracle Database, MySQL, DB2, ...).

They are recommending against further development of Common Criteria profiles and against evaluation of GPOS and DBMS systems, due to complexity and cost reasons.

Note that this is neither a personal statement nor an Oracle statement on the applicability of CC evaluation to GPOS or DBMS systems.

For the status of Oracle products in evaluation please visit this page.

Monday Dec 16, 2013

Kernel Cryptographic Framework FIPS 140-2 Validation

After many years of preparation work by the Solaris Crypto Team, and almost a year (just shy by about two weeks) since we submitted our documentation to NIST for validation, the Kernel Crypto Framework has its first FIPS 140-2 validation completed.  This applies to Solaris 11.1 SRU 5 when running on SPARC T4, T5 (the M5 and M6 use the same crypto core and we expect to vendor affirm those), M3000 class (Fujitsu SPARC64) and on Intel (with and without AES-NI).

Many thanks to all those at Oracle who have helped make this happen and to our consultants and validation lab.


Monday Oct 21, 2013

Do your filesystems have un-owned files?

As part of our work for integrated compliance reporting in Solaris we plan to provide a check for determining if the system has "un-owned files", i.e. those which are owned by a uid that does not exist in the configured nameservice.  Tests such as this already exist in the Solaris CIS Benchmark (9.24 Find Un-owned Files and Directories) and other security benchmarks.

The obvious method of doing this would be using find(1) with the -nouser flag.  However that requires we bring into memory the metadata for every single file and directory in every local file system we have mounted.  That is probably not an acceptable thing to do on a production system that has a large amount of storage, and it is potentially going to take a long time.

Just as I went to bed last night an idea for a much faster way of listing file systems that have un-owned files came to me. I've now implemented it and I'm happy to report it works very well and performs many orders of magnitude better than using find(1) ever will.   ZFS (since pool version 15) has per user space accounting and quotas.  We can report very quickly, and without actually reading any files at all, how much space any given user id is using on a ZFS filesystem.  Using that information we can implement a check to very quickly list which filesystems contain un-owned files.

First a few caveats: the output data won't be exactly the same as what you get with find, but it answers the same basic question.  This only works for ZFS and it will only tell you which filesystems have files owned by unknown users, not the actual files.  If you really want to know what the files are (i.e. to give them an owner) you still have to run find(1).  However it has the huge advantage that it doesn't use find(1), so it won't be dragging the metadata for every single file and directory on the system into memory. It also has the advantage that it can check filesystems that are not mounted currently (which find(1) can't do).

It ran in about 4 seconds on a system with 300 ZFS datasets from 2 pools totalling about 3.2T of allocated space, and that includes the uid lookups and output.

#!/bin/sh
#
# List ZFS filesystems that contain files owned by uids unknown to the
# configured nameservice, using the per-user space accounting data
# rather than walking every file with find(1).

for fs in $(zfs list -H -o name -t filesystem) ; do
        unknowns=""
        # zfs userspace reports the space consumed per (numeric) uid.
        for uid in $(zfs userspace -Hipn -o name,used $fs | cut -f1); do
                if [ -z "$(getent passwd $uid)" ]; then
                        unknowns="$unknowns$uid "
                fi
        done
        if [ ! -z "$unknowns" ]; then
                mountpoint=$(zfs list -H -o mountpoint $fs)
                mounted=$(zfs list -H -o mounted $fs)
                echo "ZFS File system $fs mounted ($mounted) on $mountpoint \c"
                echo "has files owned by unknown user ids: $unknowns";
        fi
done
Sample output:

ZFS File system rpool/ROOT/solaris-30/var mounted (no) on /var has files owned by unknown user ids: 6435 33667 101
ZFS File system rpool/ROOT/solaris-32/var mounted (yes) on /var has files owned by unknown user ids: 6435 33667
ZFS File system builds/bob mounted (yes) on /builds/bob has files owned by unknown user ids: 101

Note that the above might not appear exactly like that in any future Solaris product or feature; it is provided just as an example of what you can do with ZFS user space accounting to answer questions like the above.
