Monday Apr 04, 2016

ASM Scoped Security - A Realistic Example

If you run multiple grid infrastructures (aka RAC clusters) on SuperCluster that share the same set of Exadata Storage Servers (aka cells), adding ASM Scoped Security to the setup is a good idea.  Even if there are no security reasons like multi-tenancy, simply preventing accidental use of one cluster's diskgroups by another cluster should be reason enough to implement this simple precaution.

Of course, there is good documentation on this feature, available here.   However, as is often the case, the devil is in the details, so here's a comprehensive example of how to do this:

  1. Shut down the cluster you want to modify.  Use "crsctl stop crs" on all cluster nodes.
  2. Create a key for this cluster
    On any storage cell, use "cellcli -e create key".  It will give you an ASCII string to use as a key.  Copy that string to a temporary place.  In this example, I'll use the key '9e9a606a461a1abc6af43626e85af3b7'
  3. Invent a unique name to use for this cluster.  In this example, I'll use "marsc1" to denote the first cluster running on mars.
  4. Create a name/key pair on all cells using this unique name and the key from above.  On all cells, execute this cellcli command:
    assign key for 'marsc1'='9e9a606a461a1abc6af43626e85af3b7'
  5. Here's the most difficult part.  We'll need to assign all griddisks that are used by our cluster to this unique name.  Cellcli's filters and wildcards don't help much here.  Here's how I did it:
    1. On all cells, create a list of all disks belonging to marsc1.  In cellcli, do:
      spool /tmp/disks
      list griddisk where asmdiskgroupname='DATAC1' attributes name
      list griddisk where asmdiskgroupname='RECOC1' attributes name
    2. In /tmp/disks on each cell, there will now be a number of lines, one griddisk name each, similar to this:
      DATAC1_CD_00_marsceladm04
      DATAC1_CD_01_marsceladm04
      RECOC1_CD_00_marsceladm04
    3. Using your favorite file manipulation tools (I used awk and vi), use this file to create a command file that contains one "alter griddisk" command for each griddisk.  Mine looked like this afterwards:
      alter griddisk DATAC1_CD_00_marsceladm04 availableTo='marsc1'
      alter griddisk DATAC1_CD_01_marsceladm04 availableTo='marsc1'
      alter griddisk DATAC1_CD_02_marsceladm04 availableTo='marsc1'
      alter griddisk DATAC1_CD_03_marsceladm04 availableTo='marsc1'
    4. Run this command script on each cell.  Of course, each cell will have its own script.
      # cellcli < script
    5. Check that it worked using cellcli:
      list griddisk attributes name,availableTo
  6. Finally, enter the unique name and the key in a file called "cellkey.ora".  On Solaris, this file is located in /etc/oracle/cell/network-config
    My file looks like this:
      key=9e9a606a461a1abc6af43626e85af3b7
      asm=marsc1
  7. Restart crs on all nodes:
    crsctl start crs
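Step 5.3 above leaves the file manipulation up to you; here is one possible sketch using awk.  The cluster name 'marsc1' and the sample griddisk names are just the ones from this example; adjust them to your environment.

```shell
# Create a small sample of what /tmp/disks looks like after spooling
# "list griddisk ... attributes name" in cellcli (names are examples):
printf '%s\n' \
  DATAC1_CD_00_marsceladm04 \
  DATAC1_CD_01_marsceladm04 \
  RECOC1_CD_00_marsceladm04 > /tmp/disks

# Turn each griddisk name into one "alter griddisk" command:
awk '{ printf "alter griddisk %s availableTo='\''marsc1'\''\n", $1 }' \
  /tmp/disks > /tmp/assign.cli

# On each cell, the generated file would then be fed to cellcli:
#   cellcli < /tmp/assign.cli
```

Remember that each cell has its own set of griddisks, so each cell gets its own /tmp/disks and its own generated command file.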

That should be all.  You can easily verify that your other clusters no longer see these diskgroups or disks: from another cluster's ASM, run "asmcmd lsdg --discovery".

Now, repeat this for all of your clusters.  The end result will be exclusive access to each cluster's disks, with no danger of intentional snooping or unintentional use.

One tool that comes in very handy for doing things on all cells at the same time is "cssh" - a one-to-many command line tool included in recent versions of Solaris.

Monday Sep 10, 2012

Secure Deployment of Oracle VM Server for SPARC - updated

Quite a while ago, I published a paper with recommendations for a secure deployment of LDoms.  Many things happened in the meantime, and an update to that paper was due.  Besides some minor spelling corrections, many obsolete or changed links were updated.  However, the main reason for the update was the introduction of a second usage model for LDoms.  In short: With the success especially of the T4-4, many deployments make use of the hardware partitioning capabilities of that platform, assigning full PCIe root complexes to domains, mimicking dynamic system domains if you will.  This different way of using the hypervisor needed to be addressed in the paper.  You can find the updated version here:

Secure Deployment of Oracle VM Server for SPARC
Second Edition

I hope it'll be useful!

Wednesday Feb 29, 2012

Solaris Fingerprint Database - How it's done in Solaris 11

Many remember the Solaris Fingerprint Database. It was a great tool to verify the integrity of a Solaris binary.  Unfortunately, it went away with the rest of SunSolve and was not revived in its replacement, "My Oracle Support".  Here's the good news:  It's back for Solaris 11, and it's better than ever!

It is now totally integrated with IPS...  Read more


Monday Feb 20, 2012

Solaris 11 submitted for EAL4+ certification

Solaris 11 has been submitted for certification by the Canadian Common Criteria Scheme at level EAL4+. They will be certifying against the protection profile "Operating System Protection Profile (OS PP)" as well as the extensions:

  • Advanced Management (AM)
  • Extended Identification and Authentication (EIA)
  • Labeled Security (LS)
  • Virtualization (VIRT)

EAL4+ is the highest level typically achievable for commercial software,
and is the highest level mutually recognized by 26 countries, including Germany and the USA. Completion of the certification lies in the hands of the certification authority.

You can check the current status of this certification (as well as other certified Oracle software) on the page Oracle Security Evaluations.

Wednesday Jun 08, 2011

Erasing disks securely

Actually, both the question and the answer are old and well known.  However, these things tend to be forgotten and pop up as questions from time to time.  Hence a little reminder for all of us:

Solaris makes it easy to erase a disk so that the data can't be restored, even with sophisticated methods.  There is a subcommand "analyze/purge" in the command format(1M) that does it all for you.  It will overwrite the selected area of your disk (usually s2) a total of four times with different patterns to achieve this.  Of course, depending on the size of the disk, this might take a while.  But it's secure enough to comply with the Department of Defense (DoD) disk wiping standard 5220.22-M.  Note however that as of June 28, 2007, overwriting in general is no longer accepted as a method to securely erase data.  Here is a link to the relevant DSS publication.
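For illustration only, here is what multi-pass overwriting boils down to, sketched on a scratch file instead of a raw disk slice.  The file name, size, and the use of random data are my own assumptions; format's analyze/purge works on the selected slice with its own fixed patterns.

```shell
# Illustrative sketch only -- NOT a replacement for format(1M)'s analyze/purge,
# and, as noted above, overwriting alone no longer satisfies current standards.
f=/tmp/scratch.img
dd if=/dev/zero of="$f" bs=1024 count=16 2>/dev/null    # create a scratch "device"

# Overwrite the whole area four times, as analyze/purge does on a slice:
for pass in 1 2 3 4; do
  dd if=/dev/urandom of="$f" bs=1024 count=16 conv=notrunc 2>/dev/null
done
```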

Some more details are here:

Note that this method does not apply to SSDs of any kind!  And of course, to avoid any risk of losing your data with your disk, simply encrypt it!  It's quite easy using ZFS or Oracle TDE :-)

Update 2015-05-29:

  • The link to the original DoD standard doesn't work anymore and has been replaced by a link to Wikipedia.
  • Here's an additional link to a more recent NIST publication.
  • Note that with modern drives, destroying data with OS or application level tools will not satisfy higher security requirements.  The sector management of these drives might make defective sectors with sensitive data unavailable to such tools - but not to more intrusive methods of active data recovery.  If you want to protect against those, physical destruction is your only reliable option.

Update 2015-09-29:

This is my final comment on this matter:

  • If you are worried about the data on storage devices you no longer use, physical destruction of those devices is the only truly secure option.
  • Encrypt your data right from the start to avoid this issue.  Encryption is easily and in many cases freely available.  If you don't care enough about your data to encrypt it, you are unlikely to worry about data on decommissioned storage devices.
  • If you are worried enough not to trust encryption, no erasing technique will be good enough to satisfy your requirements.  And the cost of physically destroying those devices will not matter to you.

Wednesday Jan 26, 2011

Logical Domains - sure secure

LDoms (Oracle VM Server for SPARC) are being used far and wide.  And I've been asked several times how secure they actually are.  One customer especially wanted to be very sure, so we asked for independent expertise on the subject matter.  The results were quite pleasing, but not exactly bedtime reading.  So I decided to add some generic deployment recommendations to the core results and came up with a whitepaper.  Publishing was delayed a bit due to the change of ownership, which resulted in a significant change in process.  The good thing about that is that it's now also up to date with the latest release of the software.  I am now happy and proud to present:

Secure Deployment of Oracle VM for SPARC

A big Thank You to Steffen Gundel of Cirosec, who laid the foundation for this paper with his study.

I do hope that it will be useful to some of you!


Wednesday Nov 24, 2010

Encrypting Your Filesystem with ZFS and AES128

ZFS filesystem encryption is finally available in Solaris 11 Express.  This closes a gap in Solaris that hurt all those that carried their data around with them.  But of course there are many good reasons to encrypt data living on disks well secured in a datacenter.  After all, they will all leave the datacenter in one way or another eventually...

Enough introduction, here's how simple this is:

  1. You will need to upgrade the zpool intended to host the encrypted filesystem to version 30.  Issue a simple "zpool upgrade <poolname>".  Of course, you can skip this step on a newly installed Solaris 11 Express.
  2. Now create a new filesystem with encryption enabled: zfs create -o encryption=on <poolname>/<newfs>
    The command will interactively prompt for a passphrase which will be used to generate the key for this filesystem.  You're done!  Note that you cannot encrypt an already existing filesystem.  Of course, there are several more options on how and where to store the key.  Just have a look at the manpage :-)

Likewise, you also have a choice of three different key lengths for AES, the algorithm used for encryption.  The default used for "encryption=on" is AES-128 in CCM mode.  But you can also choose the longer 192 or 256 bit keys.  While developing ZFS crypto, it was discussed which default key length to choose.  AES-128 was chosen for two reasons:  First, of course, the 128 bit variant is faster than the longer key lengths, especially without hardware acceleration like that available in the SPARC T2/T3 and Intel 5600 chips.  Second, there is new research, including successful attacks on AES-256 and AES-192 that require a search of only 2^39.  These attacks don't work for AES-128, which is therefore, as of today, not only faster, but also more secure than the variants with longer keys.

More details about ZFS Crypto in the ZFS Admin Guide.

Tuesday Sep 21, 2010

No Excuse for no Security

Ever since Sun shipped the UltraSPARC T2 CPU, there has been no excuse for not using SSL security for web services.  With the new T3 chips, this is more true than ever.  Not only have the supported algorithms been modernized; the required documentation on how to use this feature has also been updated.  Find the first two papers here.  I expect there to be more soon.

Happy encrypting!

Tuesday Jun 01, 2010

SCA 6000 for Oracle TDE

In February, I described how to configure the softtoken store of the Solaris Cryptographic Framework as a "software HSM" for Oracle TDE.  In the meantime, the SCA 6000 card was certified for use with Oracle TDE.  There is also a whitepaper available, describing SCA 6000 setup and configuration for TDE.  I was lucky enough to get my hands on one of these cards and test for myself.  It works, of course.  What makes using the SCA 6000 so attractive is the additional possibilities the card has to offer.  You can lock and unlock the keystore to prevent any further wallet and column encryption operations.  You can also implement a two-person rule, using the card's software.  This allows you to separate access to the master key from "normal" database administration, which is often required in high-security environments.


News, tips, and interesting facts about SPARC, CMT, performance and its analysis, as well as experiences with Solaris on servers and laptops.

This is a bilingual blog (most of the time).
The views expressed on this blog are my own and do not necessarily reflect the views of Oracle.

