Monday Apr 04, 2016

ASM Scoped Security - A Realistic Example

If you run multiple grid infrastructures (aka RAC clusters) on SuperCluster that share the same set of Exadata Storage Servers (aka cells), adding ASM Scoped Security to the setup is a good idea.  Even if there are no security reasons like multi-tenancy, simply preventing accidental use of one cluster's diskgroups by another cluster should be reason enough to implement this simple precaution.

Of course, there is good documentation on this feature, available here.   However, as is often the case, the devil is in the details, so here's a comprehensive example of how to do this:

  1. Shut down the cluster you want to modify.  Use "crsctl stop crs" on all cluster nodes.
  2. Create a key for this cluster
    On any storage cell, use "cellcli -e create key".  It will give you an ASCII string to use as a key.  Copy that string to a temporary place.  In this example, I'll use the key '9e9a606a461a1abc6af43626e85af3b7'
  3. Invent a unique name to use for this cluster.  In this example, I'll use "marsc1" to denote the first cluster running on mars.
  4. Create a name/key pair on all cells using this unique name and the key from above.  On all cells, execute this cellcli command:
    assign key for 'marsc1'='9e9a606a461a1abc6af43626e85af3b7'
  5. Here's the most difficult part.  We'll need to assign all griddisks that are used by our cluster to this unique name.  Cellcli's filters and wildcards don't help much here.  Here's how I did it:
    1. On all cells, create a list of all disks belonging to marsc1.  In cellcli, do:
      spool /tmp/disks
      list griddisk where asmdiskgroupname='DATAC1' attributes name
      list griddisk where asmdiskgroupname='RECOC1' attributes name
    2. In /tmp/disks on each cell, there will now be a number of lines similar to this:
      DATAC1_CD_00_marsceladm04
      DATAC1_CD_01_marsceladm04
    3. Using your favorite file manipulation tools (I used awk and vi), use this file to create a command file that contains one "alter griddisk" command for each griddisk.  Mine looked like this afterwards:
      alter griddisk DATAC1_CD_00_marsceladm04 availableTo='marsc1'
      alter griddisk DATAC1_CD_01_marsceladm04 availableTo='marsc1'
      alter griddisk DATAC1_CD_02_marsceladm04 availableTo='marsc1'
      alter griddisk DATAC1_CD_03_marsceladm04 availableTo='marsc1'
    4. Run this command script on each cell.  Of course, each cell will have its own script.
      # cellcli < script
    5. Check that it worked using cellcli:
      list griddisk attributes name,availableTo
  6. Finally, enter the unique name and the key in a file called "cellkey.ora".  On Solaris, this file is located in /etc/oracle/cell/network-config
    My file looks like this:
      key=9e9a606a461a1abc6af43626e85af3b7
      asm=marsc1
  7. Restart crs on all nodes:
    crsctl start crs
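The file manipulation in step 5 can easily be scripted.  Here's a minimal sketch of the awk part, assuming the spooled griddisk list contains one disk name per line (the disk names below are just examples from my setup); the final cellcli invocation is left as a comment, since it only works on a cell:

```shell
# Turn a spooled griddisk name list into "alter griddisk" commands.
# The disk names and the cluster name 'marsc1' are examples - adjust
# them to your environment.
CLUSTER=marsc1

# Sample of what "list griddisk ... attributes name" spooled to /tmp/disks:
cat > /tmp/disks <<'EOF'
DATAC1_CD_00_marsceladm04
DATAC1_CD_01_marsceladm04
RECOC1_CD_00_marsceladm04
EOF

# Generate one "alter griddisk" command per disk:
awk -v c="$CLUSTER" -v q="'" \
    '{print "alter griddisk " $1 " availableTo=" q c q}' \
    /tmp/disks > /tmp/assign.cli

# Review /tmp/assign.cli, then run it on the cell:
#   cellcli < /tmp/assign.cli
```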

That should be all.  You can easily verify that your other clusters can no longer see these diskgroups or disks from another cluster's ASM instance:  asmcmd lsdg --discovery

Now, repeat this for all of your clusters.  The end result will be exclusive access to each cluster's disks, with no danger of intentional snooping or unintentional use.

One tool that comes in very handy for doing stuff on all cells at the same time is "cssh" - a one-to-many command line tool included in recent versions of Solaris.

Friday Aug 28, 2015

Open a MOS Document by its DocID

I've always wondered whether there was an easier way to view a MOS document than going to the portal and "searching" for a DocID I already know.  Using a Firefox search plugin had been an idea for a while.  Now I finally found a few minutes to try it.  It works - find the plugin below.  Simply save it as "DocID.xml" in the searchplugins directory of your Firefox profile, restart Firefox, and enter the DocID in the search field of the browser.

<SearchPlugin xmlns="" xmlns:os="">
<os:ShortName>MOS DocID</os:ShortName>
<os:Description>MOS DocID Search</os:Description>
<os:Image width="16" height="16">data:image/x-icon;base64,
<os:Url type="text/html" method="GET" template="{searchTer

(The full text of the plugin might not be displayed, but copy & paste to a file will work.)

Wednesday Nov 07, 2012

20 Years of Solaris - 25 Years of SPARC!

I don't usually duplicate what can be found elsewhere.  But this is worth an exception.

20 Years of Solaris - Guess who got all those innovation awards!
25 Years of SPARC - And the future has just begun :-)

Check out those pages for some links pointing to the past and, more interestingly, to the future...

There are also some nice videos: 20 Years of Solaris - 25 Years of SPARC

(Come to think of it - I got to be part of all but the first 4 years of Solaris.  I must be getting older...)

Tuesday Nov 08, 2011

Oracle TDE and Hardware Accelerated Crypto

Finally, there's a clear and brief description of the hardware and software required to use hardware-accelerated encryption with Oracle TDE.  Here's an even shorter summary ;-)

  • SPARC T4 or Intel CPU with AES-NI
  • Solaris 11 or Linux (a patch for Solaris 10 will be provided eventually)
  • Oracle
    • If you use Linux, Oracle DB with patch 10296641 will also work.

The longer version of this summary is available as MOS-Note 1365021.1
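To find out whether a given x86 CPU actually advertises AES-NI, you can look at the CPU flags.  Here's a small sketch (the helper name check_aes is mine, not from the MOS note); on Solaris, "isainfo -v" serves the same purpose:

```shell
# check_aes: report whether a CPU flags string contains the "aes" flag.
# This helper is just an illustration; on Linux, feed it the flags line
# from /proc/cpuinfo, on Solaris use "isainfo -v | grep -i aes" instead.
check_aes() {
  case " $1 " in
    *" aes "*) echo "AES supported" ;;
    *)         echo "no AES support" ;;
  esac
}

# Real use on Linux:
#   check_aes "$(grep -m1 '^flags' /proc/cpuinfo)"
check_aes "fpu sse2 aes avx"   # -> AES supported
```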

Happy crypting!

Friday Jan 28, 2011

Cash for Clunkers 2.0

This ad is too nice not to repeat it here... Enjoy!
Of course, this is quite serious: trading in an HP Superdome for an M8000 or M9000 is indeed a deal you can only win!

Thursday Jul 15, 2010

Bye Bye Sun - Hello Oracle

Back from vacation, I finally managed to switch my email address.  So what was


is now


The blog remains unchanged :-)

Tuesday Jun 01, 2010

SCA 6000 for Oracle TDE

In February, I described how to configure the softtoken store of the Solaris Cryptographic Framework as a "software HSM" for Oracle TDE.  In the meantime, the SCA 6000 card has been certified for use with Oracle TDE.  There is also a whitepaper available, describing SCA 6000 setup and configuration for TDE.  I was lucky enough to get my hands on one of these cards and test it for myself.  It works, of course.  What makes using the SCA 6000 so attractive are the additional possibilities the card has to offer.  You can lock and unlock the keystore to prevent any further wallet and column encryption operations.  You can also implement a two-person rule using the card's software.  This allows you to separate access to the master key from "normal" database administration, which is often required in high-security environments.
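For reference, switching TDE from a file-based wallet to a PKCS#11 HSM such as the SCA 6000 is configured in sqlnet.ora; a minimal sketch (see the whitepaper for the full setup, including installing the card's PKCS#11 library):

```
# sqlnet.ora - keep the TDE master key in an HSM (via its PKCS#11
# library) instead of a file-based wallet:
ENCRYPTION_WALLET_LOCATION=
  (SOURCE=(METHOD=HSM))
```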

Wednesday Mar 31, 2010

Improved Memory Interface for M8000 and M9000

Memory throughput of servers, especially large servers, is a significant contributor to overall system performance. This is especially true for applications with a large memory footprint. Oracle's three largest Sun SPARC Enterprise Servers recently had their memory interface enhanced, along with the recent CPU upgrade. This improved the measured memory bandwidth significantly, as is shown by the Stream benchmark results:

Server     Stream Triad on SPARC64 VI   Stream Triad on SPARC64 VII   Improvement
M8000      69.6 GB/sec                  81.75 GB/sec                  17.5%
M9000-32   134.4 GB/sec                 164.6 GB/sec                  22.3%
M9000-64   227.1 GB/sec                 Not submitted (yet?)          -

All of these values are taken directly from the official Stream website, dated March 31st, 2010.

It is important to note that performance in this benchmark depends mostly on the memory subsystem. CPU clock rates contribute little here.


News, tips, and things worth knowing about SPARC, CMT, performance and its analysis, as well as experiences with Solaris on servers and laptops.

This is a bilingual blog (most of the time).
The views expressed on this blog are my own and do not necessarily reflect the views of Oracle.

