Friday May 23, 2014

Overview of Solaris Zones Security Models

Over the years of explaining the security model of Solaris Zones and LDOMs to customers' "security people" I've encountered two basic schools of thought.  The first is "shared kernel bad"; the second is "shared kernel good".

Which camp is right?  Both are, because there are advantages to each model.

If you have a shared kernel, the policy engine has more information about what is going on and can make more informed access and data flow decisions. However, should an exploit happen at the kernel level, it has the potential to impact multiple (or all) guests.

If you have separate kernels, then a kernel level exploit should only impact that single guest, unless it then results in a VM breakout.

Solaris non global zones fall into the "shared kernel" style.  Solaris Zones are included in the Solaris 11 Common Criteria evaluation under the virtualisation extension (VIRT) to the OSPP.  Solaris Zones are also the foundation of our multi-level Trusted Extensions feature and are used for separation of classified data in many government/military deployments around the world.

LDOMs are "separate kernel", but LDOMs are also unlike hypervisors in the x86 world because we can shut down the underlying control domain OS (assuming the guests have another path to the IO requirements they need, either on their own or from root-domains or io-domains).  So LDOMs can be deployed in a way that better protects them against a VM breakout being used to cause a reconfiguration of resources.  The Solaris 11 CC evaluation still applies to Solaris instances running in an LDOM regardless of whether they are the control-domain, io-domain, root-domain or guest-domain.

Solaris 11.2 introduces a new brand of zone called "Kernel Zones". They look like native zones, you configure them like native zones, and they actually are Solaris zones, but with the ability to run a separate kernel.  Having their own kernel means that they support suspend/resume independent of the host OS, which gives us warm migration.  Note, however, that Kernel Zones are not like popular x86 hypervisors such as VirtualBox, Xen or VMware.

General purpose hypervisors, like VirtualBox, support multiple different guest operating systems, so they have to virtualise the hardware. Some hypervisors (most type 2, but even some type 1) also support multiple different host operating systems for providing the services. This means the guest can only assume virtualised hardware.

Solaris Kernel Zones are different: the zone's kernel knows it is running on Solaris as the "host", and the Solaris global zone "host" also knows that the kernel zone is running Solaris.  This means we can make more informed decisions about resources, general requirements and security policy. Even so, we can still host Solaris 11.2 kernel zones on the internal development release of Solaris 12 and vice versa.

Note that what follows is an outline of implementation details that are subject to change at any time: the kernel of a Solaris Kernel Zone is represented as a userland process in a Solaris non global zone.  That non global zone is configured with less privilege than a normal non global zone would have, and it is always configured as an immutable zone.  So if there happened to be an exploit of the guest kernel that resulted in a VM breakout, you would end up in an immutable non global zone with lowered privilege.

This means that with Solaris Kernel Zones we get the advantages of both the "shared kernel" and "separate kernel" security models, together with the management simplicity of traditional Solaris Zones:

# zonecfg -z mykz 'create -t SYSsolaris-kz'
# zoneadm -z mykz install
# zoneadm -z mykz boot

If you want even more layers of protection on SPARC it is possible to host Kernel Zones inside a guest LDOM.


Monday Oct 29, 2012

Solaris 11.1: Encrypted Immutable Zones on (ZFS) Shared Storage

Solaris 11 brought both ZFS encryption and the Immutable Zones feature, and I've talked about the combination in the past.  Solaris 11.1 adds a fully supported method of storing zones in their own ZFS pool on shared storage, so let's update things a little and put all three parts together.

When using an iSCSI (or other supported shared storage) target for a zone, we can either let the Zones framework set up the ZFS pool or we can create it manually beforehand and tell the Zones framework to use the one we made earlier.  To enable encryption we have to take the second path, so that we can set up the pool with encryption before we start to install the zone on it.

We start by configuring the zone and specifying a rootzpool resource:

# zonecfg -z eizoss
Use 'create' to begin configuring a new zone.
zonecfg:eizoss> create
create: Using system default template 'SYSdefault'
zonecfg:eizoss> set zonepath=/zones/eizoss
zonecfg:eizoss> set file-mac-profile=fixed-configuration
zonecfg:eizoss> add rootzpool
zonecfg:eizoss:rootzpool> add storage \
  iscsi://my7120.example.com/luname.naa.600144f09acaacd20000508e64a70001
zonecfg:eizoss:rootzpool> end
zonecfg:eizoss> verify
zonecfg:eizoss> commit
zonecfg:eizoss> 

Now lets create the pool and specify encryption:

# suriadm map \
   iscsi://my7120.example.com/luname.naa.600144f09acaacd20000508e64a70001
PROPERTY	VALUE
mapped-dev	/dev/dsk/c10t600144F09ACAACD20000508E64A70001d0
# echo "zfscrypto" > /zones/p
# zpool create -O encryption=on -O keysource=passphrase,file:///zones/p eizoss \
   /dev/dsk/c10t600144F09ACAACD20000508E64A70001d0
# zpool export eizoss

Note that the keysource above is just for this example; realistically you should probably use Oracle Key Manager or some other better key storage, but that isn't the purpose of this example.  It does, however, need to be one of file://, https:// or pkcs11:, and not prompt, for the key location.  Also note that we exported the newly created pool.  The name we used here doesn't actually matter because it will get set properly on import anyway.  So let's go ahead and do our install:

# zoneadm -z eizoss install -x force-zpool-import
Configured zone storage resource(s) from:
    iscsi://my7120.example.com/luname.naa.600144f09acaacd20000508e64a70001
Imported zone zpool: eizoss_rpool
Progress being logged to /var/log/zones/zoneadm.20121029T115231Z.eizoss.install
    Image: Preparing at /zones/eizoss/root.

 AI Manifest: /tmp/manifest.xml.ujaq54
  SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml
    Zonename: eizoss
Installation: Starting ...

              Creating IPS image
Startup linked: 1/1 done
              Installing packages from:
                  solaris
                      origin:  http://pkg.oracle.com/solaris/release/
              Please review the licenses for the following packages post-install:
                consolidation/osnet/osnet-incorporation  (automatically accepted,
                                                          not displayed)
              Package licenses may be viewed using the command:
                pkg info --license <pkg_fmri>
DOWNLOAD                                PKGS         FILES    XFER (MB)   SPEED
Completed                            187/187   33575/33575  227.0/227.0  384k/s

PHASE                                          ITEMS
Installing new actions                   47449/47449
Updating package state database                 Done 
Updating image state                            Done 
Creating fast lookup database                   Done 
Installation: Succeeded

         Note: Man pages can be obtained by installing pkg:/system/manual

 done.

        Done: Installation completed in 929.606 seconds.


  Next Steps: Boot the zone, then log into the zone console (zlogin -C)

              to complete the configuration process.

Log saved in non-global zone as /zones/eizoss/root/var/log/zones/zoneadm.20121029T115231Z.eizoss.install

That was really all we had to do; when the install is done, boot it up as normal.
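Incidentally, the flat passphrase file from the setup can be replaced after the fact: ZFS lets you change the wrapping key location without re-encrypting any data. A hedged sketch using zfs key -c (the new file name is illustrative; as noted above, the location for a zone pool must be file://, https:// or pkcs11:, not prompt — check zfs(1M) on your release for the exact options):

```shell
# Generate a new passphrase file and point the pool's wrapping key at it
# echo "newzfscrypto" > /zones/p2
# zfs key -c -o keysource=passphrase,file:///zones/p2 eizoss_rpool
```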

The zone administrator has no direct access to the ZFS wrapping keys used for the encrypted pool the zone is stored on.  Due to how inheritance works in ZFS, he can still create new encrypted datasets that use those wrapping keys (without them ever being inside a process in the zone), or he can create encrypted datasets inside the zone that use keys of his own choosing. The output below shows the two cases:

rpool inherits the key material from the global zone (note that we can see the value of the keysource property, but we don't use it inside the zone, nor does that path need to be — or is it — accessible inside the zone), whereas rpool/export/home/bob has keysource set locally.

# zfs get encryption,keysource rpool rpool/export/home/bob
NAME                   PROPERTY    VALUE                       SOURCE
rpool                  encryption  on                          inherited from $globalzone
rpool                  keysource   passphrase,file:///zones/p  inherited from $globalzone
rpool/export/home/bob  encryption  on                          local
rpool/export/home/bob  keysource   passphrase,prompt           local
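For reference, the two cases could be produced inside the zone roughly like this (a minimal sketch; the alice dataset is an illustrative addition). The first dataset simply inherits encryption and keysource from rpool, so the wrapping key material never enters a process inside the zone; the second sets a key of the zone administrator's own choosing:

```shell
# Inherits encryption=on and the global zone's wrapping keys from rpool
# zfs create rpool/export/home/alice

# Encrypted with a passphrase chosen by the zone administrator
# zfs create -o encryption=on -o keysource=passphrase,prompt rpool/export/home/bob
```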

 

 

Wednesday Jul 04, 2012

Delegation of Solaris Zone Administration

In Solaris 11 'Zone Delegation' is a built-in feature. The Zones system now uses fine-grained RBAC authorisations to allow delegation of management of individual zones, rather than all zones, which is all the 'Zone Management' RBAC profile allowed in Solaris 10.

The data for this can be stored with the zone, or you can create RBAC profiles (which can even be stored in NIS or LDAP) granting administrators access to specific lists of zones.

For example, let's say we have zones named zoneA through zoneF and three admins: alice, bob and carl.  We want to grant a subset of zone management to each of them.

We can do that either by adding the admin resource to the appropriate zones via zonecfg(1M), or by manipulating the RBAC data directly:

First, let's look at an example of storing the data with the zone.

# zonecfg -z zoneA
zonecfg:zoneA> add admin
zonecfg:zoneA> set user=alice
zonecfg:zoneA> set auths=manage
zonecfg:zoneA> end
zonecfg:zoneA> commit
zonecfg:zoneA> exit

Now let's look at the alternate method of storing this directly in the RBAC database; we will show all our admins and zones in this example:

# usermod -P +'Zone Management' -A +solaris.zone.manage/zoneA alice
# usermod -A +solaris.zone.login/zoneB alice

# usermod -P +'Zone Management' -A +solaris.zone.manage/zoneB bob
# usermod -A +solaris.zone.manage/zoneC bob

# usermod -P +'Zone Management' -A +solaris.zone.manage/zoneC carl
# usermod -A +solaris.zone.manage/zoneD carl
# usermod -A +solaris.zone.manage/zoneE carl
# usermod -A +solaris.zone.manage/zoneF carl

In the above, alice can only manage zoneA, bob can manage zoneB and zoneC, and carl can manage zoneC through zoneF.  The user alice can also log in on the console of zoneB, but she can't perform operations that require the solaris.zone.manage authorisation on it.
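Each grant can be checked with auths(1); for example, alice's list should include the solaris.zone.manage/zoneA and solaris.zone.login/zoneB authorisations given above:

```shell
# List the authorisations granted to a user
$ auths alice
```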

Alternatively, if you have a large number of zones and/or admins, or you just want a layer of abstraction, you can collect the authorisation lists into an RBAC profile and grant that to the admins. For example, let's create RBAC profiles for the things that alice and carl can do.

# profiles -p 'Zone Group 1'
profiles:Zone Group 1> set desc="Zone Group 1"
profiles:Zone Group 1> add profile="Zone Management"
profiles:Zone Group 1> add auths=solaris.zone.manage/zoneA
profiles:Zone Group 1> add auths=solaris.zone.login/zoneB
profiles:Zone Group 1> commit
profiles:Zone Group 1> exit
# profiles -p 'Zone Group 3'
profiles:Zone Group 3> set desc="Zone Group 3"
profiles:Zone Group 3> add profile="Zone Management"
profiles:Zone Group 3> add auths=solaris.zone.manage/zoneD
profiles:Zone Group 3> add auths=solaris.zone.manage/zoneE
profiles:Zone Group 3> add auths=solaris.zone.manage/zoneF
profiles:Zone Group 3> commit
profiles:Zone Group 3> exit


Now instead of granting carl and alice the 'Zone Management' profile and the authorisations directly, we can just give them the appropriate profile.

# usermod -P +'Zone Group 3' carl
# usermod -P +'Zone Group 1' alice


If we want to store the profile data, and the profiles granted to the users, in LDAP, we just add '-S ldap' to the profiles and usermod commands.
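A sketch of that LDAP variant (assuming the system is already configured as an LDAP naming client):

```shell
# Create the profile in the LDAP repository instead of local files
# profiles -S ldap -p 'Zone Group 1'

# Grant the LDAP-stored profile to the user
# usermod -S ldap -P +'Zone Group 1' alice
```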

For a documentation overview, see the description of the "admin" resource in zonecfg(1M), and the profiles(1) and usermod(1M) man pages.

Tuesday May 01, 2012

Podcast: Immutable Zones in Oracle Solaris 11

In this episode of the "Oracle Solaris: In a Class By Itself" podcast series, the focus is a bit more technical. I was interviewed by host Charlie Boyle, Senior Director of Solaris Product Marketing. We talked about a new feature in Oracle Solaris 11: immutable zones. Those are read-only root zones for highly secure deployment scenarios.

See also my previous blog post on Encrypted Immutable Zones.

Monday Jan 08, 2007

OpenSolaris Zones: TX & BrandZ

Trusted Extensions (TX) integrated into OpenSolaris before BrandZ did, so TX doesn't use the branded zones concept.  BrandZ wasn't just about providing the ability to run userland Linux code in a zone; it also provided an infrastructure to support different styles of zones, and a brand doesn't even need to be something as complex as a Linux zone.  There are instructions on the brandz mail alias for creating a Belenix zone hosted on Solaris Express; in this case the branded zone doesn't require a different kernel module.
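For context, the brand is chosen at zone configuration time via the template; a sketch for an lx branded zone (the zonepath here is illustrative):

```shell
# zonecfg -z lxzone
zonecfg:lxzone> create -t SUNWlx
zonecfg:lxzone> set zonepath=/zones/lxzone
zonecfg:lxzone> commit
zonecfg:lxzone> exit
```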

I think it should be possible to use the BrandZ hooks to avoid some of the additional zone creation work that currently has to be done manually for TX zones, or via tools like txzonemgr.

Currently lx branded zones (Linux ones) aren't allowed to exist if TX labeling is enabled, but that is a different issue from using the BrandZ infrastructure to create TX zones.
 

About

Darren Moffat-Oracle
