Friday Dec 02, 2016

20 years at Sun/Oracle

20 years ago today I turned up to Sun Microsystems in Watchmoor Park in a suit to start my job in SunService supporting the SunOS 4.1.3 based Trusted Solaris. By January I was on my first international business trip to California for training and to meet the SunFederal engineering team.

In 1999 I moved to Solaris Sustaining Engineering to work on Kerberos and NIS+. In 2000 my wife (Bronwyn) and I moved to California so I could join the newly forming Solaris Security Technologies Group.

5 years later we returned to the UK (via three months in Dublin) with me still doing the same job.

I'm now one of the architects for Solaris Security, and today at Oracle I still work with people I met on my first day at Sun. Since we became part of Oracle I've had the pleasure of working with many more smart people doing things in different parts of the software/hardware stack.

I've worked (and continue to work) on some amazing projects with fantastically smart people. I continue to learn every day from people of all experience levels, some of them barely born the day I started and some who could almost say the same about me.

I love working on a product that is innovating, making admins' jobs easier, and being used to store and process a huge variety of data in all industries. Oracle Solaris is alive, well and kicking butt in compute (SPARC & x86) and storage (ZFS), and I look forward to many more years working with it at Oracle, to what we do with it in the future, and to the amazing team of people. Many of them aren't just colleagues but people I trust, respect and call my friends.

I've come a long way from that first day. I don't normally wear a suit and tie to work anymore, but some things haven't changed: SunOS, zsh, vi and the Sun US UNIX keyboard layout are still the foundation of my toolbox.

Now back to the terminal window, I've got Solaris bugs to fix and features to finish....

Wednesday Nov 02, 2016

GNOME 3 on Solaris

A fantastic amount of porting and validation work by a small team of Solaris engineers has brought GNOME 3 to the in-development release.

This essentially started with a "grumbling session" between some of the core OS senior engineers.

We knew we had to do something and there were false internal rumours going around that the desktop was dead on Solaris going forward. So I led the effort to come up with a realistic plan.

My original feeling was that we had to consider options other than GNOME 3, since it was a) very big and b) reported to depend on many Linuxisms. It turned out that yes, it was big, but the core requirements were similar to those for doing something like Xfce, and thanks to the work done by others to get GNOME 3 working on *BSD it was much more viable than initially thought.

I can't take credit for the actual work; that goes to Alan Coopersmith and a very dedicated group of engineers. They really care about Solaris running not just in or as the cloud, but also about giving us a modern desktop UI to complement the unique core Solaris functionality and current GNU tools and other FOSS.

Now where is my cheese? My desktop looks really different.

Monday Oct 17, 2016

Solaris is immune from viruses/malware, right?

During an internal interest list discussion recently, someone attempted to assert that Oracle Solaris was immune from virus attacks and therefore didn't need anti-virus.

There is really no immunity to almost any security threat; there are layers of defence against specific threats. Just like in the physical world there is no complete immunity from all viruses - they can and do adapt and evolve, and they impact differently depending on the host.

Someone in the discussion suggested that we were immune or better than others because we have FIPS 140-2 validation, CC evaluation and a Compliance framework.

  • Trusted Extensions is about controlling flows between applications and networks via labeling.
    • This is at heart about data loss prevention and data classification.
  • Compliance (eg PCI-DSS or DISA STIG) is about providing evidence the system is configured to an approved configuration policy;
    • I like to remind people that Compliance != Security: you can be compliant and insecure or secure (against your threat model) yet out of compliance.
  • FIPS 140-2 is a 3rd party (NIST) validation that the vendor implemented the cryptographic primitives correctly.
    • Good cryptography alone isn't enough, it can certainly help against some threats but not all. Most of the time data needs to be decrypted to be operated on anyway.
  • Common Criteria is a 3rd party validation that we implemented an agreed feature set correctly.

Oracle Solaris does have a number of features that can be deployed to reduce the risks where malware is the threat.

  • Immutable Zones (including bare metal and LDOMs):
    • This provides protection against malware persisting.
  • Verified Boot:
    • Detect corrupt or malicious kernel and modules
  • Privileges / Extended Policy:
    • Reduce the damage from security exploit bugs to prevent escalation and potential virus propagation.
  • Role-Based Access Control (RBAC) administration
    • Use the least privilege possible to get the job done, and where required provide separation of duty (eg require two different admins to configure a new user account).
    • Built on top of privileges and user authorisations.
  • Signed Packages / Install over TLS
    • Ensure we start out with, and update to, an OS and applications that haven't been tampered with since they left our (or your) release engineering process.
  • Signed ELF binaries for userspace
    • Manual/periodic checking for modified binaries - 'pkg verify' is often a better option though, since it covers more than just ELF binaries.
  • Silicon Secured Memory (specifically ADI)
    • Hardware assist for a specific class of bad programming errors that can (and often do) lead to corruption and/or security exploits in running software.
  • ZFS can require Virus Scanning (using a 3rd party engine) on file access (local or remote)

The above is far from a comprehensive list of the security features available in Oracle Solaris; some of them have equivalents in other operating systems as well.

So do you need anti-virus software when deploying Solaris?

That depends on your environment and threat model. 

Personally, if I was serving out file systems over SMB to Windows and macOS clients then I would seriously consider using the ZFS virus scanning integration; it provides a useful additional layer of defence. It might not be required or appropriate everywhere though.

On the other hand, if I was building an OpenStack based cloud infrastructure my focus would be much more on using Immutable Zones and ensuring all the cloud infrastructure used TLS (or IPsec) to communicate securely.

Monday Oct 10, 2016

Requiring both Public Key (or GSSAPI/Kerberos) and OTP for OpenSSH

I've had two conversations recently on how to configure OpenSSH to require public key or GSSAPI as well as OTP, rather than the user's UNIX password. The way to do this isn't completely obvious unless you know how OpenSSH and PAM interact. On Oracle Solaris we have an additional patch to OpenSSH that allows for per-SSH-userauth PAM stacks; the configuration below takes advantage of this.

When OpenSSH authenticates the user using either pubkey or GSSAPI it does not call pam_authenticate(3PAM); there is no path to communicate with the user when using those userauth methods, so there is nothing to be gained from asking PAM. However, to use OTP we need to communicate with the user, so we need to configure both sshd and our PAM stack. OpenSSH has a very useful feature here whereby we can require multiple SSH userauth methods to pass.

Add the following to: /etc/ssh/sshd_config

AuthenticationMethods publickey,keyboard-interactive

Or in the case of requiring GSSAPI (Kerberos credentials):

AuthenticationMethods gssapi-with-mic,keyboard-interactive

Then create /etc/pam.d/sshd-kbdint with the following content:

auth required
auth required

With the above configuration all users will be required to authenticate with pubkey (or GSSAPI) and will then be prompted for an OTP by the Google Authenticator module. So let's see what this looks like from the user's view; I'm going to use the GSSAPI case but it will be similar for pubkey:

$ ssh
Permission denied (gssapi-with-mic).
$ kinit 
Password for darren@EXAMPLE.COM:
$ ssh
Verification code: 

How do we know that gssapi was used though? We can look at the debug output by adding -v to the ssh invocation on the client, where we will see output that looks like this:

debug1: Next authentication method: gssapi-with-mic
debug1: Delegating credentials
debug1: Delegating credentials
Authenticated with partial success.
debug1: Authentications that can continue: keyboard-interactive
debug1: Next authentication method: keyboard-interactive

In the pubkey case the output would be very similar (the first line would say publickey instead of gssapi-with-mic). If you don't need the above configuration for all users you can use the Match block directive in sshd_config and/or the pam_user_policy facility in Solaris to select which users require the multiple factors.
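If it helps, a sketch of such a Match block (the group name "sshadmins" is hypothetical) could look like this in /etc/ssh/sshd_config:

```
# Hypothetical sketch: require the extra factor only for one group;
# other users keep the default authentication policy.
Match Group sshadmins
    AuthenticationMethods publickey,keyboard-interactive
```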

Thursday Jan 28, 2016

Is ZFS Encryption PCI-DSS Compliant ?

Is ZFS Encryption PCI-DSS Compliant ? No it isn't, and I'll explain why.

PCI-DSS applies to a given merchant or financial institution; it does not evaluate or validate products. This is very different from Common Criteria (CC) or FIPS 140.

One of the many requirements of PCI-DSS is that certain types of data (credit card numbers and card holder data) are encrypted on persistent storage and in transit. There are many ways to achieve that PCI-DSS requirement: ZFS encryption can be one of them; using Oracle DB TDE is another.

There is a peer standard called PA-DSS (Payment Application Data Security Standard) but storage is not a payment application so again ZFS encryption doesn't apply here.

Even using a PA-DSS compliant application does not imply you have a PCI-DSS compliant deployment. The distinction is covered very well in this article on the PCI compliance guide website; I particularly like this quote:

"The bottom line is that only an organization can be validated to be PCI-DSS compliant, never an application or a system."

So we can't claim ZFS is PCI-DSS compliant, but then no other storage or database vendor can make those claims either. What we can say is that ZFS encryption can be used as part of a PCI-DSS solution to encrypt card holder data at rest. We can also say that we know of cases where ZFS encryption has been used as part of meeting the PCI-DSS requirements and has successfully passed an audit.

So the answer is "NO", ZFS encryption is NOT PCI-DSS compliant, because that is an invalid question to ask.

In this case a useful question would be:

"Has ZFS Encryption been used as part of a PCI-DSS deployment for encrypting credit card numbers and/or card holder data ?" Then the answer is YES.

Tuesday Jul 07, 2015

Solaris new system calls: getentropy(2) and getrandom(2)

The traditional UNIX/Linux method of getting random numbers (bit streams really) from the kernel to a user space process was to open(2) the /dev/random or /dev/urandom pseudo devices and read(2) an appropriate amount from it, remembering to close(2) the file descriptor if your application or library cached it.  Solaris 11.3 adds two new system calls, getrandom(2) and getentropy(2), for getting random bit streams or raw entropy. These are compatible with APIs recently introduced in OpenBSD and Linux.

OpenBSD introduced a getentropy(2) system call that reads a maximum of 256 bytes from the kernel entropy pool and returns it to user space for use in user space random number generators (such as in the OpenSSL library). The getentropy(2) system call returns 0 on success and always returns the amount of data requested or fails completely, setting errno.  It is an error to request more than 256 bytes of entropy and doing so causes errno to be set to EIO. 

On Solaris the output of getentropy(2) is raw entropy and should not be used where randomness is needed; in particular it must not be used where an IV or nonce is needed for a cryptographic operation. It is intended only for seeding a user space RBG (Random Bit Generator). More specifically, the data returned by getentropy(2) has not had the FIPS 140-2 required DRBG processing applied to it.

Recent Linux kernels have a getrandom(2) system call that reads between 1 and 1024 bytes of randomness from the kernel. Unlike getentropy(2) it is intended to be used directly for cryptographic use cases such as an IV or nonce. The getrandom(2) call can be told whether to use the kernel pool usually used for /dev/random or the one for /dev/urandom, by passing the GRND_RANDOM flag to request the former. If GRND_RANDOM is specified then getrandom(2) will block until sufficient randomness can be generated if the pool is low; if non-blocking behaviour is required, the GRND_NONBLOCK flag can be passed.

#include <sys/random.h>
int getrandom(void *buf, size_t buflen, unsigned int flags);
int getentropy(void *buf, size_t buflen);

On Solaris, if GRND_RANDOM is not specified then getrandom(2) is always a non-blocking call. Note that this differs slightly from Linux, but not in a way that impacts its usage. The other difference is that on Solaris getrandom(2) will either fail completely or return a buffer filled to the requested size, whereas the Linux implementation can return partial buffers. To ensure code portability, developers must check the return value of getrandom(2) every time it is called, for example:

#include <sys/random.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
size_t bufsz = 128;
char *buf;
int ret;
buf = malloc(bufsz);
errno = 0;
ret = getrandom(buf, bufsz, GRND_RANDOM);
if (ret < 0 || ret != bufsz) {
    perror("getrandom failed");
}
The output of getrandom(2) on Solaris has been through a FIPS 140-2 approved DRBG function as defined in NIST SP 800-90A.

In addition to the above two system calls, the OpenBSD arc4random(3C), arc4random_buf(3C) and arc4random_uniform(3C) functions are now also provided by libc; these are available by including <stdlib.h>.

OpenSSH in Solaris 11.3

Solaris 9 was the first release where we included an implementation of the IETF SSH client and server protocols. I led that project, and at the time I was also the document editor for the IETF standards documents. We started with OpenSSH, but for various reasons it ended up over time becoming a Solaris-specific fork called SunSSH. We have regularly resynced with OpenSSH for features and bug fixes, but SunSSH remains a fork.

Starting with Solaris 11.3 we supply OpenSSH in addition to SunSSH.  The intent is that in some future release SunSSH will be removed leaving only OpenSSH. 

For Solaris 11.3 both OpenSSH and SunSSH can be installed on a machine at the same time or the administrator can choose to install only one.  SunSSH is delivered by pkg:/network/ssh and OpenSSH by pkg:/network/openssh.

Both packages effectively deliver the same svc:/network/ssh:default service; really it comes from pkg:/network/ssh-common. When both OpenSSH and SunSSH are installed, an IPS package mediator is used to select which one is run by the SMF service and which one is /usr/bin/ssh. A system with OpenSSH set as the default would look like this:

$ pkg mediator ssh
ssh          system            local      openssh
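To change which implementation the mediator selects, something like the following should work (a sketch; consult pkg(1) on your Solaris release for the exact options):

```
# pkg set-mediator -I openssh ssh
```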

Our intent is that the OpenSSH delivered in Solaris has as few Solaris-specific changes applied as possible. We have managed to push some bug fixes upstream to the OpenSSH community, but there are still some Solaris-specific changes for enhancements we felt were important to customers migrating from SunSSH. These Solaris-specific changes are applied to OpenSSH during the build process using patch(1) and are thus maintained in a directory called patches. Some of the patches are purely build related and some are features. The current list of feature patches includes:

  • GSS credential storage
  • PAM Service Name per SSH userauth method as per SunSSH
  • PAM cannot be disabled with the UsePAM option
  • DisableBanner option for ssh client: see ssh_config(4)

While the intent going forward is to keep up with the OpenSSH releases, we may choose to backport a fix from a later version of OpenSSH to fix a bug or security vulnerability rather than delivering the whole release. Often the reasons for doing this will be related to the available Solaris release trains at the time and the size of the change in the later release. This means that the version of OpenSSH could change in a Solaris SRU.

The OpenSSH releases also include some very useful features that we hadn't ported over to SunSSH. The pkg mediator allows for selecting which binaries are the system default, but it doesn't help with the per-user configuration file. In order to use features from OpenSSH when the user's home directory may also be used by a SunSSH client, we need the options for ignoring unknown options. This is a feature that originated in SunSSH, but when the equivalent feature arrived in OpenSSH the option name was different. To overcome this I have the following in my ~/.ssh/config file:

IgnoreUnknown IgnoreIfUnknown
IgnoreIfUnknown IgnoreUnknown,ControlMaster,ControlPersist,ControlPath

That allows me to have Host blocks that use ControlMaster configuration that OpenSSH knows about but SunSSH doesn't and ensures that neither of them complains about unknown options. Some of the other important differences between SunSSH and OpenSSH are:

  • UseOpenSSLEngine: No replacement; OpenSSL defaults to using hardware acceleration on modern SPARC and Intel CPUs.
  • MaxAuthTriesLog: OpenSSH always logs at MaxAuthTries / 2.
  • PreUserAuthHook: Use AuthorizedKeysCommand instead.
  • IgnoreIfUnknown: Use IgnoreUnknown instead.
  • Message localisation: Client and server messages are no longer localised.
  • Use of /etc/default/login: No replacement; set policy using PAM and sshd_config.

Customising Solaris Compliance Policies

When we introduced the compliance framework in Solaris 11.2 there was no easy way to customise (tailor) the policies to suit individual machine or site deployment needs. While it was certainly possible for users familiar with the XCCDF/OVAL policy language, it wasn't easy to do in a way that preserved your customisations while still allowing access to new policy rules when the system was updated.

To address this a new subcommand for compliance(1M) has been added that allows creation of a tailoring.  The initial release of tailoring in Solaris 11.3 allows the enabling and disabling of individual checks, and the team is already working on enhancing it to support variables in a future release.

The default and simplest way of using 'compliance tailor' is to use the interactive pick tool:

# compliance tailor -t mysite
*** compliance tailor: No existing tailoring 'mysite', initializing
tailoring:mysite> pick

The above shows the interactive mode where using 'x' or 'space' allows us to enable or disable an individual test.  Note also that since the Solaris 11.2 release all the tests have been renumbered and now have unique rule identifiers that are stable across releases of Solaris.  The same rule number always refers to the same test in all of the security benchmark policy files delivered with Solaris.

When exiting from the interactive pick mode just type 'commit' to write this out to a locally installed tailoring; that will create an XCCDF tailoring file under /var/share/compliance/tailorings.  Those tailoring files should not be copied from release to release.

There is also an 'export' action for the tailoring subcommand that allows you to save off your customisations for importing into a different system; this works similarly to zonecfg(1M) export.

$ compliance tailor -t mysite export | tee /tmp/mysite.out
set tailoring=mysite
# version=2015-06-29T14:16:34.000+00:00
set benchmark=solaris
set profile=Baseline
# OSC-16005: All local filesystems are ZFS
exclude OSC-16005
# OSC-15000: Find and list files with extended attributes
include OSC-15000
# OSC-35000: /etc/motd and /etc/issue contain appropriate policy text
include OSC-35000

The saved command file can then be used for input redirection to create the same tailoring on another system.
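For example, assuming the exported file from above has been copied to /tmp/mysite.out on the target system, the tailoring can be recreated with:

```
# compliance tailor -t mysite < /tmp/mysite.out
```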

To run an assessment of the system using a tailoring we simply need to do this:

# compliance assess -t mysite
Assessment will be named 'mysite.2015-06-29,15:22'
Title   Package integrity is verified
Rule    OSC-54005
Result  PASS



Wednesday Apr 15, 2015

OpenSSH sftp(1) 'ls -l' vs 'ls -lh' and uid/gid translation

Someone recently asked why, when using OpenSSH's sftp(1), 'ls -l' shows the username/groupname in the output, but 'ls -lh' translates the file sizes into SI units while showing the raw uid/gid. It took me and another engineer a while to work through this, so I thought I would blog the explanation for what is going on.

The protocol used by sftp isn't actually an IETF standard. OpenSSH (and SunSSH) implements the -02 draft of the IETF secsh-filexfer document [this is actually protocol version 3].

In that version of the draft there was a 'longname' field in the SSH_FXP_NAME response. The draft explicitly says:

    The SSH_FXP_NAME response has the following format:

       uint32     id
       uint32     count
       repeats count times:
           string     filename
           string     longname
           ATTRS      attrs


   The format of the `longname' field is unspecified by this protocol.
   It MUST be suitable for use in the output of a directory listing
   command (in fact, the recommended operation for a directory listing
   command is to simply display this data).  However, clients SHOULD NOT
   attempt to parse the longname field for file attributes; they SHOULD
   use the attrs field instead.

When you do 'ls -l' the sftp client is displaying longname, so it is the server that created it. The longname is generated on the server and looks like the output of 'ls -l'; the uid/gid to username/groupname translation was done on the server side.

When you add '-h', the sftp client is obeying the draft standard and not parsing the longname field, because it has to pretty-print the size into SI units. So it must just display the uid/gid from the attrs field.

The OpenSSH code explicitly does not attempt to translate the uid/gid because it has no way of knowing if the nameservice domain on the remote and local sides is the same.  This is why when you do 'lls -lh' you do get SI units and translated names but when you do 'ls -l' you get untranslated names.

In the very next version of the draft [protocol version 4], the format of SSH_FXP_NAME and, importantly, ATTRS changes very significantly. The longname is dropped and the ATTRS no longer has a UNIX uid/gid but an NFSv4 owner/group.

OpenSSH never implemented anything past the -02 draft.  Attempts at standardising the SFTP protocol eventually stopped at the -13 draft (which was protocol version 6) in 2006.

The proftpd server also has the ability to speak the sftp protocol, and it implements the -13 draft. However Solaris 11 doesn't currently ship with the mod_sftp module, and even if it did, the /usr/bin/sftp client doesn't talk that protocol version, so an alternate client would be needed too.

Wednesday Jan 07, 2015

ZFS Encryption in Oracle ZFS Storage Appliance

With the 2013.1.3.0 (aka OS8.3) release of the software for the Oracle ZFS Storage Appliance the underlying ZFS encryption functionality is now available for use.  This is the same ZFS encryption that is available in general purpose Solaris but with appliance interfaces added for key management.

I originally wrote the following quick start guide for our internal test engineers and other developers while we were developing the functionality, and since it is now available I thought I'd share it here. It walks through the steps required to configure encryption on the ZFSSA and perform some basic operations with keys and encrypted shares. Note that the BUI and CLI screenshots are not showing exactly the same system and configuration.

Setup Encryption with LOCAL keystore (CLI)

The first step is to setup the master passphrase, then we can create keys that will be used for assigning to encrypted shares.

brm7330-020:> shares encryption 
brm7330-020:shares encryption> show
                              okm => Manage encryption keys
                            local => Manage encryption keys

brm7330-020:shares encryption> local
brm7330-020:shares encryption local> show
             master_passphrase = 

                       keys => Manage this Keystore's Keys

brm7330-020:shares encryption local> set master_passphrase
Enter new master_passphrase: 
Re-enter new master_passphrase: 
             master_passphrase = *********

Setup Encryption with LOCAL keystore (BUI)


Creating Keys

Now let's create our first key. The only thing we have to provide is the keyname field; this is the name that is used in the CLI and BUI when assigning a key to a project or share.

It is possible to provide a hex encoded raw 256 bit key in the key field; if that is not provided, a new randomly generated key value is used instead. Note that the keys are stored in an encrypted form using the master_passphrase supplied above. For this simple walkthrough we will let the system generate the key value for us.

brm7330-020:shares encryption local> keys create
brm7330-020:shares encryption local>
                        cipher = AES
                           key = 
                       keyname = (unset)
brm7330-020:shares encryption local key (uncommitted)>set keyname=MyFirstKey
                       keyname = MyFirstKey (uncommitted)
brm7330-020:shares encryption local key (uncommitted)> commit

If we were doing this from the BUI it would look like this:


Setup Encryption with OKM keystore (CLI)

For OKM you need to set the agent_id, the IP address (NOT hostname) and the registration_pin given to you by your OKM security officer. The example below shows an already configured setup for OKM.

brm7330-020:> shares encryption 
brm7330-020:shares encryption> show
                              okm => Manage encryption keys
                            local => Manage encryption keys
brm7330-020:shares encryption> okm
brm7330-020:shares encryption okm> show
                      agent_id = ExternalClient041
              registration_pin = *********
                   server_addr =
                             keys => Manage this Keystore's Keys

We are now ready to create our first encrypted share/project.

Creating an Encrypted Share

Creating an encrypted project results in all shares in that project being encrypted; by default the shares (filesystems & LUNs) will inherit the encryption properties from the parent project.

brm7330-020:shares> project myproject
brm7330-020:shares myproject (uncommitted)> set encryption=aes-128-ccm
                    encryption = aes-128-ccm (uncommitted)
brm7330-020:shares myproject (uncommitted)> set keystore=LOCAL
                      keystore = LOCAL (uncommitted)
brm7330-020:shares myproject (uncommitted)> set keyname=MyFirstKey 
                       keyname = MyFirstKey (uncommitted)
brm7330-020:shares myproject (uncommitted)> commit

That is it; now all shares we create under this project are automatically encrypted with AES 128 CCM using the key named "MyFirstKey" from the LOCAL keystore.

Let's now create a filesystem in our new project and show that it inherited the encryption properties:

brm7330-020:shares> select myproject
brm7330-020:shares myproject> filesystem f1
brm7330-020:shares myproject/f1 (uncommitted)> commit
brm7330-020:shares myproject> select f1
brm7330-020:shares myproject/f1> get encryption keystore keyname keystatus
                    encryption = aes-128-ccm (inherited)
                      keystore = LOCAL (inherited)
                       keyname = MyFirstKey (inherited)
                     keystatus = available
brm7330-020:shares myproject/f1> done

For the BUI, the filesystem and LUN creation dialogs allow selection of encryption properties.


Key Change

It is possible to change the key associated with a Project/Share at any time, even while it is in use by client systems.

Let's now create an additional key and perform a key change on the project we have just created.

brm7330-020:> shares encryption local keys create
brm7330-020:shares encryption local key (uncommitted)> set keyname=MySecondKey
                       keyname = MySecondKey (uncommitted)
brm7330-020:shares encryption local key (uncommitted)> commit

Now let's change the key used for "myproject" and all the shares in it that are inheriting the key properties:

brm7330-020:> shares select myproject 
brm7330-020:shares myproject> set keyname=MySecondKey
                       keyname = MySecondKey (uncommitted)
brm7330-020:shares myproject> commit

If we look at the keyname property of our share "myproject/f1" we will see it has changed. The filesystem remained shared during the key change and was accessible to clients writing to it.

brm7330-020:shares myproject> select f1 get keyname
                       keyname = MySecondKey (inherited)
brm7330-020:shares myproject>


Deleting Keys

Deletion of a key is a very fast and effective way to make a large amount of data inaccessible. Keys can be deleted even if they are in use. If the key is in use, a warning is given and confirmation is required. All shares using that key will be unshared and will no longer be accessible by clients.

Example of deleting a key that is in use:

brm7330-020:shares encryption local keys> destroy keyname=MyFirstKey
This key has the following dependent shares:
Destroying this key will render the data inaccessible. Are you sure? (Y/N) Y

A similar message is displayed via a popup dialog in the BUI.


Now let's look at a share in a project that was using that key:

brm7330-010:> shares select HR select EMEA
brm7330-010:shares HR/EMEA> get encryption keystore keyname keystatus
                    encryption = aes-128-ccm (inherited)
                      keystore = LOCAL (inherited)
                       keyname = 1 (inherited)
                     keystatus = unavailable

Thursday Nov 27, 2014

CVE metadata in Solaris IPS packages for improved Compliance reporting

I'm pleased to announce that the Solaris 11 /support repository now contains metadata for tracking security vulnerability fixes by the assigned CVE number. This is a project that Pete Dennis and I have been working on for some time. While we worked together on the design of the metadata and how it should be presented, I'm completely indebted to Pete for automating the package generation from the data in our bug databases - we don't want humans to generate this by hand because it is error prone.

What does this mean for you, the Solaris system admin or compliance auditor?

The intent of this is to make it much easier to determine if your system has all the known and required security vulnerability fixes, and to stop you having to derive this information from other sources. This is critically important since sometimes in Solaris we choose to fix a bug in an upstream FOSS component by applying the code patch rather than consuming a whole new version - a risk/benefit trade-off we make on a case by case basis. While all this data has been available for some time in My Oracle Support documents, it wasn't easily available in a scriptable form since it was intended to be read by humans.

The implementation of this is designed to be applicable to any other Oracle, or 3rd party, products that are also delivered in IPS form.

For Solaris we have added a new metadata package: pkg:/support/critical-patch-update/solaris-11-cpu. This new package lives above entire in the dependency hierarchy and contains only metadata that maps CVE-IDs to the package versions that contain the fixes. This allows us to retrospectively update the critical-patch-update metadata where an already shipping version contains the fix for a given CVE.

Each time we publish a new critical patch update, a new version of the package is published to the /support repository along with all the new versions of the packages that contain the fixes.  The versioning for this package is @YYYY.MM.VV, where VV is usually going to be '1'; this allows us to respin/republish a given critical patch update within the same month.  Note that VV is not DD: it is NOT the day of the month.
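To see which versions of the metadata package have been published before deciding what to update to, the standard IPS version listing can be used (a sketch; the exact output depends on your configured repositories):

```shell
# List every published version of the CPU metadata package,
# including versions that are not installed (-a) and not just
# the newest one (-f).
pkg list -af support/critical-patch-update/solaris-11-cpu
```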

We can search this data using either the web interface or the CLI.  So let's look at some examples of CLI searching.  Say we want to find out which packages contain the fix for the bash(1) Shellshock vulnerability, and we know that one of the CVE-IDs for that is CVE-2014-7187:

# pkg search :CVE-2014-7187:
INDEX         ACTION VALUE                                                PACKAGE
CVE-2014-7187 set    pkg://solaris/shell/bash@4.1.11,5.11- pkg:/support/critical-patch-update/solaris-11-cpu@2014.10-1
CVE-2014-7187 set    pkg://solaris/shell/bash@4.1.11,5.11- pkg:/support/critical-patch-update/solaris-11-cpu@2014.10-1

That output tells us which packages and versions contain a fix and which critical patch update it was provided in.

For another example, let's find the whole list of fixes in the October 2014 critical patch update:

# pkg search -r info.cve: | grep 2014.10
info.cve   set    CVE-1999-0103 pkg:/support/critical-patch-update/solaris-11-cpu@2014.10-1
info.cve   set    CVE-2002-2443 pkg:/support/critical-patch-update/solaris-11-cpu@2014.10-1
info.cve   set    CVE-2003-0001 pkg:/support/critical-patch-update/solaris-11-cpu@2014.10-1

Note that the placement of the colon (:) is very significant in IPS, that the key name is 'info.cve', and that the values have CVE in upper case.

The way that metadata is setup allows for the cases where a given CVE-ID applies to multiple packages and also where a given package version contains fixes for multiple CVE-IDs.

If we simply want to know whether the fix for a given CVE-ID is installed, then using 'pkg search -l' with the CVE-ID is sufficient, e.g.:

# pkg search -l CVE-2014-7187
info.cve   set    CVE-2014-7187 pkg:/support/critical-patch-update/solaris-11-cpu@2014.10-1

If it wasn't installed we would get no output.
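Building on that behaviour, a short loop can sweep a whole list of CVE-IDs at once. This is only a sketch; it assumes, as in the example above, that 'pkg search -l' exits non-zero when no local match is found:

```shell
#!/bin/sh
# Check each CVE-ID in the list against the locally installed
# critical-patch-update metadata.
for cve in CVE-2014-7187 CVE-2014-7186; do
    if pkg search -l "$cve" >/dev/null 2>&1; then
        echo "$cve: fix installed"
    else
        echo "$cve: fix not found on this system"
    fi
done
```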

If you want a system to track only the Critical Patch Updates and not every SRU available in the /support repository, then when you update, instead of doing 'pkg update' or 'pkg update entire@<latest>', do 'pkg update solaris-11-cpu@latest'.  Note that initially systems won't have the 'solaris-11-cpu' package installed since, as I mentioned previously, it is "above" the entire package.  When you update to the latest version of the Critical Patch Update package it will install any intermediate SRUs that were released between this version and the prior one you had installed.
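Put together, tracking only the Critical Patch Updates looks something like this (a sketch, using the package name described above):

```shell
# One-time: install the CPU metadata package, since it sits
# above 'entire' and is not present by default.
pkg install support/critical-patch-update/solaris-11-cpu

# Thereafter, update to the latest critical patch update rather
# than to every SRU available in the /support repository.
pkg update solaris-11-cpu@latest
```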

For some further examples have a look at MOS Doc ID 1948847.  Currently only the Solaris 11.2 critical patch update metadata is present, but we intend very soon to back-publish the data for prior Solaris 11 critical patch updates as well.

Hope this helps with your compliance auditing.  If you are an auditor, no more arguments with the sysadmin teams about whether a fix is installed. If you are a system admin, more time for your "real job", since you can now easily give the auditor the data they need.

The compliance team is also working on updating the Solaris and PCI-DSS security benchmarks we deliver with the compliance(1M) framework to use this new CVE metadata when determining if the system is up to date, since from a compliance viewpoint we really want to know that all known and available security fixes are installed, not that you are running the absolutely latest and greatest version of Solaris.

I have also been working with the OVAL community (part of SCAP) to define a schema for the Solaris IPS system.  This is due to be released with a future version of the OVAL standard.  That will allow us to use this same framework of metadata to also deliver OVAL XML files that map the CVE-IDs to packages.  When that is available we may choose to deliver OVAL XML files as part of the critical-patch-update package as an additional method of querying the same metadata.

-- Darren

Thursday Jul 31, 2014

OpenStack Security integration for Solaris 11.2

As a part-time member/meddler of the Solaris OpenStack engineering team, I was asked to create some posts for the team's new OpenStack blog.

I've so far written up two short articles, one covering using ZFS encryption with Cinder, and one on Immutable OpenStack VMs.

Friday May 23, 2014

Overview of Solaris Zones Security Models

Over the years of explaining the security model of Solaris Zones and LDOMs to customers' "security people", I've encountered two basic "schools of thought".  The first is "shared kernel bad"; the second is "shared kernel good".

Which camp is right?  Well, both are, because there are advantages to both models.

If you have a shared kernel then the policy engine has more information about what is going on and can make more informed access and data-flow decisions; however, if an exploit should happen at the kernel level it has the potential to impact multiple (or all) guests.

If you have separate kernels then a kernel-level exploit should only impact that single guest, unless it then results in a VM breakout.

Solaris non-global zones fall into the "shared kernel" style.  Solaris Zones are included in the Solaris 11 Common Criteria Evaluation for the virtualisation extension (VIRT) to the OSPP.  Solaris Zones are also the foundation of our multi-level Trusted Extensions feature and are used for separation of classified data by many government/military deployments around the world.

LDOMs are "separate kernel", but LDOMs are also unlike hypervisors in the x86 world because we can shut down the underlying control domain OS (assuming the guests have another path to the IO requirements they need, either on their own or from root-domains or io-domains).  So LDOMs can be deployed in a way that they are more protected from a VM breakout being used to cause a reconfiguration of resources.  The Solaris 11 CC evaluation still applies to Solaris instances running in an LDOM regardless of whether they are the control-domain, io-domain, root-domain or guest-domain.

Solaris 11.2 introduces a new brand of zone called "Kernel Zones".  They look like native zones, you configure them like native zones, and they actually are Solaris zones, but with the ability to run a separate kernel.  Having their own kernel means that they support suspend/resume independent of the host OS, which gives us warm migration.  In particular, "Kernel Zones" are not like popular x86 hypervisors such as VirtualBox, Xen or VMWare.

General purpose hypervisors, like VirtualBox, support multiple different guest operating systems so they have to virtualise the hardware. Some hypervisors (most type 2 but even some type 1) support multiple different host operating systems for providing the services as well. So this means the guest can only assume virtualised hardware.

Solaris Kernel Zones are different: the zone's kernel knows it is running on Solaris as the "host", and the Solaris global zone "host" also knows that the kernel zone is Solaris.  This means we get to make more informed decisions about resources, general requirements and security policy. Even with this we can still host Solaris 11.2 kernel zones on the internal development release of Solaris 12 and vice versa.

Note that what follows is an outline of implementation details that are subject to change at any time: the kernel of a Solaris Kernel Zone is represented as a userland process in a Solaris non-global zone.  That non-global zone is configured with less privilege than a normal non-global zone would have, and it is always configured as an immutable zone.  So if there happened to be an exploit of the guest kernel that resulted in a VM breakout, you would end up in an immutable non-global zone with lowered privilege.

This means that with Solaris Kernel Zones we can have the advantages of both the "shared kernel" and "separate kernel" security models, and we keep the management simplicity of traditional Solaris Zones (# zonecfg -z mykz 'create -t SYSsolaris-kz' && zoneadm -z mykz install && zoneadm -z mykz boot).
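Spelled out, that one-liner is just the familiar zone workflow (using the hypothetical zone name 'mykz'):

```shell
# Create a kernel zone from the supplied template, then install
# and boot it with the usual zoneadm subcommands.
zonecfg -z mykz 'create -t SYSsolaris-kz'
zoneadm -z mykz install
zoneadm -z mykz boot
```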

If you want even more layers of protection on SPARC it is possible to host Kernel Zones inside a guest LDOM.

Tuesday Apr 29, 2014

Using /etc/system.d rather than /etc/system to package your Solaris kernel config

The request for an easy way to package Solaris kernel configuration (/etc/system, basically) came up both via the Solaris Customer Advisory Board meetings and in requests from customers with early access to Solaris 11.2 via the Platinum Customer Program.  I also had another fix for the Solaris Cryptographic Framework that I needed to implement to stop cryptoadm(1M) from writing to /etc/system (some of the background to why that is needed is in my recent blog post about FIPS 140-2).

So /etc/system.d was born.  My initial plan for the implementation was to read the "fragment" files directly from the kernel. However, that is very complex to do at the time we need to read them, since it happens (on kernel boot time scales) eons before we have the root file system mounted. We can, however, read from a well-known file name that is in the boot archive.

The way I ended up implementing this is that during boot archive creation (either manually running 'bootadm update-archive', or as a result of BE or packaging operations, or just a system reboot) we assemble the content of /etc/system.d into a single well-known file, /etc/system.d/.self-assembly (which is considered a Private interface).  We read the files in /etc/system.d/ in C locale collation order and ignore all files that start with a "." character; this ensures that the assembly is predictable and consistent across all systems.

I then had to choose whether /etc/system.d or /etc/system "wins" if a variable happens to get set in both.  The decision was that /etc/system is read second and thus wins; this preserves existing behaviours.

I also enhanced the diagnostic output from when the system file parser detects duplication so that we could indicate which file it was that caused the issue. When bootadm creates the .self-assembly file it includes START/END comment markers so that you will be able to easily determine which file from /etc/system.d delivered a given setting.

So now you can much more easily deliver any Solaris kernel customisations you need by using IPS to deliver fragments (one line or many) into /etc/system.d/, instead of attempting to modify /etc/system via first-boot SMF services or other scripting.  This also means they apply on the first boot of the image after install.

So how do you pick a file name in /etc/system.d/ that doesn't clash with other people's? The recommendation (which will be documented in the man pages and in /etc/system itself) is to use the full name of the IPS package (with '/' replaced by ':') as the prefix or name of any files you deliver to /etc/system.d/.
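For example, a hypothetical package pkg:/site/myapp would deliver a fragment named 'site:myapp'. A minimal sketch (the tunable shown is purely illustrative, not a recommendation):

```shell
# Fragment as it would be delivered by the hypothetical
# package pkg:/site/myapp ('*' starts a comment in system(4) files).
cat > /etc/system.d/site:myapp <<'EOF'
* Delivered by pkg:/site/myapp - example tunable only.
set user_reserve_hint_pct=80
EOF

# Rebuild the boot archive so the fragments are reassembled into
# /etc/system.d/.self-assembly for the next boot.
bootadm update-archive
```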

As part of the same change I updated cryptoadm(1M) and dtrace(1M) to no longer write to /etc/system but instead write to files in /etc/system.d/ and I followed my own advice on file naming!

Information on how to get the Solaris 11.2 Beta is available from this OTN page.

Note that this particular change came in after the Solaris 11.2 Beta build was closed so you won't see this in Solaris 11.2 Beta (which is build 37).

Solaris 11 Compliance Framework

During the Solaris 11 launch (November 2011) one of the questions I was asked from the audience was from a retail customer asking for documentation on how to configure Solaris to pass a PCI-DSS audit.  At that time we didn't have anything beyond saying that Solaris was secure by default and that it was no longer necessary to run the Solaris Security Toolkit to get there.  Since then we have produced a PCI-DSS white paper with Coalfire (a PCI-DSS QSA) and we have invested a significant amount of work in building a new Compliance Framework and making compliance a "lifestyle" feature in Solaris core development.

We delivered OpenSCAP in Solaris 11.1, since SCAP is the foundation language of how we will provide compliance reporting. So I'm pleased to be able to finally talk about the first really significant part of the Solaris compliance infrastructure, which is part of Solaris 11.2.

Starting with Solaris 11.2 we have a new command, compliance(1M), for running system assessments against security/compliance benchmarks and for generating HTML reports from those.  For now this only works on a single host, but the team is hard at work adding multi-node support (using the Solaris RAD infrastructure) for a future release.

The much more significant part of what the compliance team has been working on is "content".  A framework without any content is just a new "box of bits, lots of assembly required", and that doesn't meet the needs of busy Solaris administrators.  So starting with Solaris 11.2 we are delivering our interpretation of important security/compliance standards such as PCI-DSS.  We have also provided two Oracle-authored policies: 'Solaris Baseline' and 'Solaris Recommended'; a freshly installed system should get all passes on the Baseline benchmark.  The checks in the Recommended benchmark are those that are a little more controversial and/or take longer to run.

Let's dive in and generate an assessment and report from one of the Solaris 11.2 compliance benchmarks we provide:

# pkg install security/compliance 
# compliance assess
# compliance report

That will give us an HTML report that we can then view.  Since we didn't give any compliance benchmark name it defaults to 'Solaris Baseline', so now let's run the PCI-DSS benchmark. The 'security/compliance' package has a group dependency on 'security/compliance/benchmark/pci-dss', so it will be installed already; but if you don't want it you can remove that benchmark and keep the others and the infrastructure.
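If you do want to drop just the PCI-DSS benchmark while keeping the framework and the other benchmarks, that should be a single uninstall (a sketch, using the benchmark package name above):

```shell
# Remove only the PCI-DSS benchmark package; the framework in
# 'security/compliance' and the other benchmarks stay installed.
pkg uninstall security/compliance/benchmark/pci-dss
```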

# compliance assess -b pci-dss
Assessment will be named 'pci-dss.Solaris_PCI-DSS.2014-04-14,16:39'
# compliance report -a pci-dss.Solaris_PCI-DSS.2014-04-14,16:39

If we want the report to only show those tests that failed we can do that like this:

# compliance report -s fail -a pci-dss.Solaris_PCI-DSS.2014-04-14,16:39

We understand that many of your Solaris systems won't match up exactly to the benchmarks we have provided, and as a result we have delivered the content in a way that you can customise. Over time the ability to build custom benchmarks from the checks we provide will become part of the compliance(1M) command (tailoring was added in Solaris 11.3, so the information below has been superseded), but for now you can enable/disable checks by editing a copy of the XML files. Yes, I know many of you don't like XML, but for just this part it isn't too scary; crafting a whole check from scratch is hard, though, but that is the SCAP/XCCDF/OVAL language for you!

So for now here is the harder than it should be way to customise one of the delivered benchmarks, using the PCI-DSS benchmark as an example:

# cd /usr/lib/compliance/benchmarks
# mkdir example
# cd example
# cp ../pci-dss/pci-dss-xccdf.xml example-xccdf.xml
# ln -s ../../tests
# ln -s example-xccdf.xml xccdf.xml

# vi example-xccdf.xml

In your editor you are looking for lines that look like this to enable or disable a given test:

<select idref="OSC-27505" selected="true" />

You probably also want to update these lines to indicate that it is your benchmark rather than the original we delivered.

<status date="2013-12-12">draft</status>
<title>Payment Card Industry Data Security Standard</title>

Once you have made the changes you want, exit from your editor and run 'compliance list'; you should see your example benchmark listed, and you can run assessments and generate reports from it just as above.  It is important you do this by making a copy of the xccdf.xml file, otherwise the 'pkg verify' test is always going to fail and, more importantly, your changes would be lost on package update.
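Once the copy is in place, running the customised benchmark follows the same pattern as before (a sketch; the assessment name printed on your system will differ):

```shell
# Confirm the new benchmark is visible, then assess with it.
compliance list
compliance assess -b example
# Generate a failures-only report, substituting the assessment
# name that 'compliance assess' printed.
compliance report -s fail -a <assessment-name>
```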

Note that we re-numbered these tests in the Solaris 11.2 SRUs and 11.3 to provide a persistent unique identifier and namespace for each of the tests we deliver; it just didn't make the cutoff for the Solaris 11.2 release.

I would really value feedback on the framework itself and probably even more importantly the actual compliance checks that our Solaris Baseline, Solaris Recommended, and PCI-DSS security benchmarks include.

Updated August 6th 2015 to add information about Solaris 11.3 changes.


Darren Moffat-Oracle

