Tuesday Apr 17, 2012

Solaris Zones: Virtualization that Speeds up Benchmarks

One of the first questions that typically comes up when I talk to customers about virtualization is the overhead involved.  Now we all know that virtualization with hypervisors comes with an overhead of some sort.  We should also all know that exactly how big that overhead is depends on the type of workload as much as it depends on the hypervisor used.  While there have been attempts to create standard benchmarks for this, quantifying hypervisor overhead is still mostly hidden in the mists of marketing and benchmark uncertainty.  However, what always raises eyebrows is when I come to Solaris Zones (called Containers in Solaris 10) as an alternative to hypervisor virtualization.  Since Zones are, greatly simplified, nothing more than a group of Unix processes contained by a set of rules which are enforced by the Solaris kernel, it is quite evident that there can't be much overhead involved.  Nevertheless, since many people think in hypervisor terms, there is almost always some doubt about this claim of zero overhead.  And as much as I find the explanation with technical details compelling, I also understand that seeing is so much better than believing.  So - look and see:

The Oracle benchmark teams are so convinced of the advantages of Solaris Zones that they actually use them in the configurations for public benchmarking.  Solaris resource management will also work in a non-Zones environment, but Zones make it just so much easier to handle, especially with some of the more complex benchmark configurations.  There are numerous benchmark publications available using Solaris Containers, dating back to the days of the T5440.  Some recent examples, all of them world records, are:

The use of Solaris Zones is documented in all of these benchmark publications.

The benchmarking team also published a blog entry detailing how they make use of resource management with Solaris Zones to actually increase application performance.  That almost begs to be called "negative overhead", if the term weren't somewhat misleading.

So, if you ever need to substantiate why Solaris Zones have no virtualization overhead, point to these (and probably some more) published benchmarks.

Monday Mar 19, 2012

Setting up a local AI server - easy with Solaris 11

Many things are new in Solaris 11, and Autoinstall (AI) is one of them.  If, like me, you've known Jumpstart for the last 2 centuries or so, you'll have to start from scratch.  Well, almost, as the concepts are similar, and it's not all that difficult.  Just new.

I wanted to have an AI server that I could use for demo purposes, on the train if need be.  That answers the question of hardware requirements: portable.  But let's start at the beginning.

First, you need an OS image, of course.  In the new world of Solaris 11, it is now called a repository.  The original can be downloaded from the Solaris 11 page at Oracle.   What you want is the "Oracle Solaris 11 11/11 Repository Image", which comes in two parts that can be combined using cat.  MD5 checksums for these (and all other downloads from that page) are available closer to the top of the page.
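Combining the two parts really is just a byte-wise concatenation.  Here's a minimal, self-contained sketch of the idea; the file names are dummy stand-ins, not the real download names:

```shell
# Dummy stand-ins for the two downloaded ISO parts -- with the real
# download, use the two part files from the Oracle page instead.
printf 'first-half-' > repo.part1
printf 'second-half' > repo.part2

# The actual combining step is just cat:
cat repo.part1 repo.part2 > repo-full.iso

# Afterwards, compare the result against the MD5 sum published on the
# download page (on Solaris: digest -a md5 repo-full.iso).
```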

With that, building the repository is quick and simple:

# zfs create -o mountpoint=/export/repo rpool/ai/repo
# zfs create rpool/ai/repo/sol11
# mount -o ro -F hsfs /tmp/sol-11-1111-repo-full.iso /mnt
# rsync -aP /mnt/repo /export/repo/sol11
# umount /mnt
# pkgrepo rebuild -s /export/repo/sol11/repo
# zfs snapshot rpool/ai/repo/sol11@fcs
# pkgrepo info -s  /export/repo/sol11/repo
PUBLISHER PACKAGES STATUS           UPDATED
solaris   4292     online           2012-03-12T20:47:15.378639Z
That's all there is to it.  I also took a snapshot, just to be on the safe side.  You never know when one will come in handy.  To use this repository, you could just add it as a file-based publisher:
# pkg set-publisher -g file:///export/repo/sol11/repo solaris
Since I might want to access this repository over a (virtual) network, I'll now quickly activate the repository service:
# svccfg -s application/pkg/server \
setprop pkg/inst_root=/export/repo/sol11/repo
# svccfg -s application/pkg/server setprop pkg/readonly=true
# svcadm refresh application/pkg/server
# svcadm enable application/pkg/server

That's all you need - now point your browser to http://localhost/ to view your beautiful repository server.  Step 1 is done.  All of this, by the way, is nicely documented in the README file contained in the repository image.

Of course, there are already updates to the original release.  You can find them in MOS in the Oracle Solaris 11 Support Repository Updates (SRU) Index.  You can simply add these to your existing repository or create separate repositories for each SRU.  The individual SRUs are self-sufficient and cumulative - SRU4 includes all updates from SRU2 and SRU3.  With ZFS, you can also have both: a full repository with all updates and, at the same time, snapshots representing the state up to each individual update:

# mount -o ro -F hsfs /tmp/sol-11-1111-sru4-05-incr-repo.iso /mnt
# pkgrecv -s /mnt/repo -d /export/repo/sol11/repo '*'
# umount /mnt
# pkgrepo rebuild -s /export/repo/sol11/repo
# zfs snapshot rpool/ai/repo/sol11@sru4
# zfs set snapdir=visible rpool/ai/repo/sol11
# svcadm restart svc:/application/pkg/server:default
The normal repository is now updated to SRU4.  Thanks to the ZFS snapshot, there is also still a valid repository of Solaris 11 11/11 without the update, located at /export/repo/sol11/.zfs/snapshot/fcs/repo.  If you like, you can also create another repository service for each update, running on a separate port.
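If you do want such a separate service for the pre-SRU state, a second instance of the pkg/server service can serve the snapshot on its own port.  A hedged sketch - the instance name "fcs" and port 10001 are my own arbitrary choices, not from the original setup:

```shell
# Sketch: a second pkg/server instance serving the 11/11 snapshot.
# Instance name and port number are assumptions for illustration.
svccfg -s application/pkg/server add fcs
svccfg -s application/pkg/server:fcs addpg pkg application
svccfg -s application/pkg/server:fcs \
    setprop pkg/inst_root = astring: /export/repo/sol11/.zfs/snapshot/fcs/repo
svccfg -s application/pkg/server:fcs setprop pkg/port = count: 10001
svcadm refresh application/pkg/server:fcs
svcadm enable application/pkg/server:fcs
```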

But now let's continue with the AI server.  Just a little bit of reading in the documentation makes it clear that we will need to run a DHCP server for this.  Since I already have one active (for my SunRay installation) and since it's a good idea to keep these kinds of services separate anyway, I decided to create this in a Zone.  So, let's create one first:

# zfs create -o mountpoint=/export/install rpool/ai/install
# zfs create -o mountpoint=/zones rpool/zones
# zonecfg -z ai-server
zonecfg:ai-server> create
create: Using system default template 'SYSdefault'
zonecfg:ai-server> set zonepath=/zones/ai-server
zonecfg:ai-server> add dataset
zonecfg:ai-server:dataset> set name=rpool/ai/install
zonecfg:ai-server:dataset> set alias=install
zonecfg:ai-server:dataset> end
zonecfg:ai-server> commit
zonecfg:ai-server> exit
# zoneadm -z ai-server install
# zoneadm -z ai-server boot ; zlogin -C ai-server
Give it a hostname and IP address at first boot, and there's the Zone.  As its publisher for Solaris packages, it will be bound to the "System Publisher" of the Global Zone.  The /export/install filesystem, of course, is intended to be used by the AI server.  Let's configure it now:
# zlogin ai-server
root@ai-server:~# pkg install install/installadm
root@ai-server:~# installadm create-service -n x86-fcs -a i386 \
-s pkg://solaris/install-image/solaris-auto-install@5.11,5.11- \
-d /export/install/fcs -i -c 3

With that, the core AI server is already done.  What happened here?  First, I installed the AI server software.  IPS makes that nice and easy.  If necessary, it'll also pull in the required DHCP-Server and anything else that might be missing.  Watch out for that DHCP server software.  In Solaris 11, there are two different versions.  There's the one you might know from Solaris 10 and earlier, and then there's a new one from ISC.  The latter is the one we need for AI.  The SMF service names of both are very similar.  The "old" one is "svc:/network/dhcp-server:default". The ISC-server comes with several SMF-services. We at least need "svc:/network/dhcp/server:ipv4". 
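A quick way to see which of the two DHCP servers is present and enabled is to check the SMF services (just a sketch; the output will vary with your installation):

```shell
# List all DHCP-related SMF services; AI needs the ISC one:
svcs -a | grep dhcp
# After create-service, the ISC IPv4 instance should end up enabled:
svcs svc:/network/dhcp/server:ipv4
```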

The command "installadm create-service" creates the installation service.  It's called "x86-fcs", serves the "i386" architecture and gets its boot image from the repository of the system publisher, using version 5.11,5.11-, which is Solaris 11 11/11.  (The option "-a i386" in this example is optional, since the install server itself runs on an x86 machine.)  The boot environment for clients is created in /export/install/fcs, and the DHCP server is configured for 3 IP addresses starting at the address given with "-i".  This configuration is stored in a very human-readable form in /etc/inet/dhcpd4.conf.  An AI service for SPARC systems could be created in the very same way, using "-a sparc" as the architecture option.
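For illustration, a hypothetical excerpt of what such a generated /etc/inet/dhcpd4.conf might look like - all addresses here are made-up documentation addresses (RFC 5737), not from the original setup:

```
# Hypothetical excerpt -- addresses are examples only.
subnet 192.0.2.0 netmask 255.255.255.0 {
  range 192.0.2.40 192.0.2.42;   # the 3 addresses from "-c 3"
  option broadcast-address 192.0.2.255;
  option routers 192.0.2.1;
  next-server 192.0.2.10;        # the AI server itself
}
```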

Now we would be ready to register and install the first client.  It would be installed with the default "solaris-large-server" package group, using the publisher "http://pkg.oracle.com/solaris/release", and would query its configuration interactively at first boot.  This makes it very clear that an AI server is really only a boot server.  The true source of packages to install can be different.  Since I don't like these defaults for my demo setup, I did some extra config work for my clients.

The configuration of a client is controlled by manifests and profiles.  The manifest controls which packages are installed and how the filesystems are laid out.  In that, it's very much like the old "rules.ok" file in Jumpstart.  Profiles contain additional configuration like root passwords, primary user accounts, IP addresses, keyboard layout etc.  Hence, profiles are very similar to the old sysidcfg file.

The easiest way to get your hands on a manifest is to ask the AI server we just created for its default one.  Then modify that to our liking and give it back to the install server to use:

root@ai-server:~# mkdir -p /export/install/configs/manifests
root@ai-server:~# cd /export/install/configs/manifests
root@ai-server:~# installadm export -n x86-fcs -m orig_default \
-o orig_default.xml
root@ai-server:~# cp orig_default.xml s11-fcs.small.local.xml
root@ai-server:~# vi s11-fcs.small.local.xml
root@ai-server:~# more s11-fcs.small.local.xml
<!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1">
  <ai_instance name="S11 Small fcs local">
        <zpool name="rpool" is_root="true">
          <filesystem name="export" mountpoint="/export"/>
          <filesystem name="export/home"/>
          <be name="solaris"/>
    <software type="IPS">
          <!-- Specify locales to install -->
          <facet set="false">facet.locale.*</facet>
          <facet set="true">facet.locale.de</facet>
          <facet set="true">facet.locale.de_DE</facet>
          <facet set="true">facet.locale.en</facet>
          <facet set="true">facet.locale.en_US</facet>
        <publisher name="solaris">
          <origin name=""/>
        <!-- By default the latest build available, in the specified IPS
        repository, is installed.  If another build is required, the
        build number has to be appended to the 'entire' package in the
        following form: -->

      <software_data action="install">

root@ai-server:~# installadm create-manifest -n x86-fcs -d \
-f ./s11-fcs.small.local.xml 
root@ai-server:~# installadm list -m -n x86-fcs
Manifest             Status    Criteria 
--------             ------    -------- 
S11 Small fcs local  Default   None
orig_default         Inactive  None

The major points in this new manifest are:

  • Install "solaris-small-server"
  • Install a few locales less than the default.  I'm not that fluent in French or Japanese...
  • Use my own package service as publisher, running on IP address
  • Install the initial release of Solaris 11:  pkg:/entire@0.5.11,5.11-

Using a similar approach, I'll create a default profile interactively and use it as a template for a few customized building blocks, each defining a part of the overall system configuration.  The modular approach makes it easy to configure numerous clients later on:

root@ai-server:~# mkdir -p /export/install/configs/profiles
root@ai-server:~# cd /export/install/configs/profiles
root@ai-server:~# sysconfig create-profile -o default.xml
root@ai-server:~# cp default.xml general.xml; cp default.xml mars.xml
root@ai-server:~# cp default.xml user.xml
root@ai-server:~# vi general.xml mars.xml user.xml
root@ai-server:~# more general.xml mars.xml user.xml
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<service_bundle type="profile" name="sysconfig">
  <service version="1" type="service" name="system/timezone">
    <instance enabled="true" name="default">
      <property_group type="application" name="timezone">
        <propval type="astring" name="localtime" value="Europe/Berlin"/>
  <service version="1" type="service" name="system/environment">
    <instance enabled="true" name="init">
      <property_group type="application" name="environment">
        <propval type="astring" name="LANG" value="C"/>
  <service version="1" type="service" name="system/keymap">
    <instance enabled="true" name="default">
      <property_group type="system" name="keymap">
        <propval type="astring" name="layout" value="US-English"/>
  <service version="1" type="service" name="system/console-login">
    <instance enabled="true" name="default">
      <property_group type="application" name="ttymon">
        <propval type="astring" name="terminal_type" value="vt100"/>
  <service version="1" type="service" name="network/physical">
    <instance enabled="true" name="default">
      <property_group type="application" name="netcfg">
        <propval type="astring" name="active_ncp" value="DefaultFixed"/>
  <service version="1" type="service" name="system/name-service/switch">
    <property_group type="application" name="config">
      <propval type="astring" name="default" value="files"/>
      <propval type="astring" name="host" value="files dns"/>
      <propval type="astring" name="printer" value="user files"/>
    <instance enabled="true" name="default"/>
  <service version="1" type="service" name="system/name-service/cache">
    <instance enabled="true" name="default"/>
  <service version="1" type="service" name="network/dns/client">
    <property_group type="application" name="config">
      <property type="net_address" name="nameserver">
          <value_node value=""/>
    <instance enabled="true" name="default"/>
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<service_bundle type="profile" name="sysconfig">
  <service version="1" type="service" name="network/install">
    <instance enabled="true" name="default">
      <property_group type="application" name="install_ipv4_interface">
        <propval type="astring" name="address_type" value="static"/>
        <propval type="net_address_v4" name="static_address" 
        <propval type="astring" name="name" value="net0/v4"/>
        <propval type="net_address_v4" name="default_route" 
      <property_group type="application" name="install_ipv6_interface">
        <propval type="astring" name="stateful" value="yes"/>
        <propval type="astring" name="stateless" value="yes"/>
        <propval type="astring" name="address_type" value="addrconf"/>
        <propval type="astring" name="name" value="net0/v6"/>
  <service version="1" type="service" name="system/identity">
    <instance enabled="true" name="node">
      <property_group type="application" name="config">
        <propval type="astring" name="nodename" value="mars"/>
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<service_bundle type="profile" name="sysconfig">
  <service version="1" type="service" name="system/config-user">
    <instance enabled="true" name="default">
      <property_group type="application" name="root_account">
        <propval type="astring" name="login" value="root"/>
        <propval type="astring" name="password" 
        <propval type="astring" name="type" value="role"/>
      <property_group type="application" name="user_account">
        <propval type="astring" name="login" value="stefan"/>
        <propval type="astring" name="password" 
        <propval type="astring" name="type" value="normal"/>
        <propval type="astring" name="description" value="Stefan Hinker"/>
        <propval type="count" name="uid" value="12345"/>
        <propval type="count" name="gid" value="10"/>
        <propval type="astring" name="shell" value="/usr/bin/bash"/>
        <propval type="astring" name="roles" value="root"/>
        <propval type="astring" name="profiles" value="System Administrator"/>
        <propval type="astring" name="sudoers" value="ALL=(ALL) ALL"/>
root@ai-server:~# installadm create-profile -n x86-fcs -f general.xml
root@ai-server:~# installadm create-profile -n x86-fcs -f user.xml
root@ai-server:~# installadm create-profile -n x86-fcs -f mars.xml \
-c ipv4=
root@ai-server:~# installadm list -p

Service Name  Profile     
------------  -------     
x86-fcs       general.xml

root@ai-server:~# installadm list -n x86-fcs -p

Profile      Criteria 
-------      -------- 
general.xml  None
mars.xml     ipv4 =
user.xml     None

Here's the idea behind these files:

  • "general.xml" contains settings valid for all my clients.  Stuff like DNS servers, for example, which in my case will always be the same.
  • "user.xml" only contains user definitions.  That is, a root password and a primary user.
    Both of these profiles will be valid for all clients (for now).
  • "mars.xml" defines network settings for an individual client.  This profile is associated with an IP address.  For this to work, I'll have to tweak the DHCP settings in the next step:
root@ai-server:~# installadm create-client -e 08:00:27:AA:3D:B1 -n x86-fcs
root@ai-server:~# vi /etc/inet/dhcpd4.conf
root@ai-server:~# tail -5 /etc/inet/dhcpd4.conf
host 080027AA3DB1 {
  hardware ethernet 08:00:27:AA:3D:B1;
  filename "01080027AA3DB1";
}

This completes the client preparations.  I manually added the IP address for mars to /etc/inet/dhcpd4.conf.  This is needed for the "mars.xml" profile.  Disabling arbitrary DHCP replies will quiet down this DHCP server, making my life in a shared environment a lot more peaceful ;-)
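That manual tweak amounts to adding a fixed address to the generated host block.  A hypothetical example - the address is a made-up documentation address, since the original post left the real one out:

```
host 080027AA3DB1 {
  hardware ethernet 08:00:27:AA:3D:B1;
  filename "01080027AA3DB1";
  fixed-address 192.0.2.41;   # made-up example address
}
```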

Note: The above example shows the configuration for x86 clients.  SPARC clients have a slightly different entry in the dhcp config file, again with some manual tweaking to create a fixed IP address for my client:

subnet netmask {
  option broadcast-address;
  option routers;
}

class "SPARC" {
  match if not (substring(option vendor-class-identifier, 0, 9) = "PXEClient");
  filename "";
}

host sparcy {
   hardware ethernet 00:14:4f:fb:52:3c ;
   fixed-address ;
}
Now, I of course want this installation to be completely hands-off.  For this to work, I'll need to modify the GRUB boot menu for this client slightly.  You can find it in /etc/netboot.  "installadm create-client" will create a new boot menu for every client, identified by the client's MAC address.  The template for this can be found in a subdirectory with the name of the install service, /etc/netboot/x86-fcs in our case.  If you don't want to change this manually for every client, modify that template to your liking instead.
root@ai-server:~# cd /etc/netboot
root@ai-server:~# cp menu.lst.01080027AA3DB1 menu.lst.01080027AA3DB1.org
root@ai-server:~# vi menu.lst.01080027AA3DB1
root@ai-server:~# diff menu.lst.01080027AA3DB1 menu.lst.01080027AA3DB1.org
< default=1
< timeout=10
> default=0
> timeout=30
root@ai-server:~# more menu.lst.01080027AA3DB1

title Oracle Solaris 11 11/11 Text Installer and command line
	kernel$ /x86-fcs/platform/i86pc/kernel/$ISADIR/unix -B install_media=htt
	module$ /x86-fcs/platform/i86pc/$ISADIR/boot_archive

title Oracle Solaris 11 11/11 Automated Install
	kernel$ /x86-fcs/platform/i86pc/kernel/$ISADIR/unix -B install=true,inst
	module$ /x86-fcs/platform/i86pc/$ISADIR/boot_archive

Now just boot the client off the network using PXE-boot.  For my demo purposes, that's a client from VirtualBox, of course.   Again, if this were a SPARC system, you'd instead be typing "boot net:dhcp - install" at the OK prompt and then just watch the installation.

That's all there is to it.  And despite the fact that this blog entry is a little longer - that wasn't that hard now, was it?

Wednesday Feb 29, 2012

Solaris Fingerprint Database - How it's done in Solaris 11

Many remember the Solaris Fingerprint Database.  It was a great tool to verify the integrity of a Solaris binary.  Unfortunately, it went away with the rest of SunSolve, and was not revived in the replacement, "My Oracle Support".  Here's the good news:  It's back for Solaris 11, and it's better than ever!

It is now totally integrated with IPS...  Read more
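The details are behind the "Read more" link; as a hedged sketch of the IPS-integrated approach, verifying installed binaries against their package manifests looks roughly like this (the package name is just an example):

```shell
# Verify the installed files of a package against its IPS manifest:
pkg verify -v system/core-os
# Repair anything that fails verification:
pkg fix system/core-os
```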


Monday Feb 20, 2012

Solaris 11 submitted for EAL4+ certification

Solaris 11 has been submitted to the Canadian Common Criteria Scheme for certification at level EAL4+.  It will be certified against the protection profile "Operating System Protection Profile (OS PP)" as well as the extensions

  • Advanced Management (AM)
  • Extended Identification and Authentication (EIA)
  • Labeled Security (LS)
  • Virtualization (VIRT)

EAL4+ is the highest level typically achievable for commercial software,
and is the highest level mutually recognized by 26 countries, including Germany and the USA. Completion of the certification lies in the hands of the certification authority.

You can check the current status of this certification (as well as other certified Oracle software) on the page Oracle Security Evaluations.

Wednesday Dec 21, 2011

Which IO Option for which Server?

For those of you who always wanted to know what IO option cards were available for which server, there is now a new portal on wikis.oracle.com.  This wiki contains a full list of IO options, ordered by server, and maintained for all current systems. Also included is the number of cards supported on each system.  The same information, for all current as well as for all older models, is available in the Systems Handbook, the ultimate answerbook for all hardware questions ;-)

(For those that have been around for a while: This service is the replacement for the previous "Cross Platform IO Wiki", which is no longer available.)

Monday Dec 05, 2011

Hard Partitioning!

Good news for all users of Oracle VM Server for SPARC (aka LDoms) running Oracle software:  Since December 2, LDoms count as "Hard Partitioning".  This makes it possible to license Oracle software for only those cores of a server that you really need.  Details are available from License Management Services.

Tuesday Nov 08, 2011

Oracle TDE and Hardware Accelerated Crypto

Finally, there's a clear and brief description of the hardware and software required to use hardware accelerated encryption with Oracle TDE.  Here's an even shorter summary ;-)

  • SPARC T4 or Intel CPU with AES-NI
  • Solaris 11 or Linux (a patch for Solaris 10 will be provided eventually)
  • Oracle
    • If you use Linux, Oracle DB with patch 10296641 will also work.

The longer version of this summary is available as MOS-Note 1365021.1

Happy crypting!

Friday Oct 07, 2011

Solaris 11 Launch

There have been many questions and rumors about the upcoming launch of Solaris 11.  Now it's out:  Watch the webcast on

November 9, 2011
at 10am ET

Consider yourself invited to join!

(I hope to get around to summarizing all the OpenWorld announcements, especially around T4, soon...)

Thursday Sep 08, 2011

Core Factor for T4 published

Oracle has published an update to the Processor Core Factor Table that lists the (yet to be released) T4 CPU with a factor of 0.5.  This leaves the license cost per socket the same compared to T3 and puts T4 in the same league as SPARC64 VII+ and all current x86 CPUs.  We will have to wait for the announcement of the CPU until we can actually speak about performance.  But this core factor (which is by no means a measure of CPU performance!) seems to confirm what the few other available bits of information seem to be hinting at:  T4 will deliver on Oracle's performance claims. 
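As a worked example of what the 0.5 core factor means per socket - the arithmetic below is the standard "cores × core factor, rounded up" rule; list prices are deliberately left out:

```shell
# 8-core T4 socket, core factor 0.5:
cores=8
# factor expressed in hundredths to stay in integer arithmetic
factor_hundredths=50
licenses=$(( (cores * factor_hundredths + 99) / 100 ))
echo "$licenses processor licenses per socket"   # prints: 4 processor licenses per socket
```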

Monday Aug 01, 2011

Oracle Solaris Studio 12.3 Beta Program

The beta program for Oracle Solaris Studio 12.3 is now open for participation.  Anyone willing to test the newest compiler and developer tools is welcome to join.  You may expect performance improvements over earlier versions of Studio as well as GCC that make testing worth your while.

Happy testing!

Thursday Jul 28, 2011

No humor left?

Just in case we all forgot that originally, Unix people were blessed with lots of humor:
OTN ID 1172413.1: How to make a grilled cheese sandwich

Tuesday Jul 05, 2011

Some Thoughts about the new SPARC Roadmap

As most of you know by now, Oracle has recently released an updated roadmap for its SPARC based servers.  Of course, others also publish roadmaps, and we all know how CPUs tend to slip and roadmaps sort of evolve backwards over time.  So what's the value in this new roadmap?

When Oracle acquired Sun, there were all these big promises about increased investment in SPARC and Solaris.  We've already seen the first delivery on these promises:

  • The SPARC T3 CPU doubled the throughput per socket for the T-Series line of servers.
  • Solaris 11 Express gives customers a supported (!) preview of what's coming with Solaris 11
  • M-Series servers can be upgraded yet again for up to an additional 20% performance gain.

All this is shown in the new roadmap as ticked-off items.  All this was already well into completion at the time of the acquisition, although the speed with which T3 came to market can already be attributed to Oracle's accelerated investment.  Nevertheless, the real proof point will be the next promise on that roadmap.  It says: "3x Single Strand" for a T-Series CPU.  By now, we all know that the name will probably be "T4".  Rick Hetherington told us a while ago that the chip will have 8 cores and will execute single-threaded workloads up to 5 times faster than T3.  That would be even more than the 3x promised by the roadmap.  The only question that remains is:  Will Oracle deliver?

While no one can tell until the chip and systems are actually announced, chances are good it will.  Why else would Oracle invite customers to participate in a T4 Beta Program?

Back to the original question:  What's the value of this new roadmap?  Well, the foundation of trust, based on delivering on promises, looks good for that part of the roadmap arrow that's already past.  Oracle delivered, and it delivered on time.  That's more than some others can claim.

A little disclaimer to keep everyone happy:  This is my personal view and not an official statement from Oracle.

Thursday Jun 09, 2011

OVM Server for SPARC 2.1 is here!

The newest version of OVM Server for SPARC aka LDoms is released!  Here's the press release...

What, already a new version again?  Well, the most missed feature in the previous versions was finally completed, and we didn't want to keep everyone waiting ;-)  The new version 2.1 turns "Warm Migration" into "Live Migration".  All the other improvements can be found in "What's New".  Once I have further details about Live Migration, I'll post them here.  You can find the download on MOS and the documentation on OTN.

Wednesday Jun 08, 2011

Erasing disks securely

Actually, both the question and the answer are old and well known.  However, these things tend to be forgotten and pop up as questions from time to time.  Hence a little reminder for all of us:

Solaris makes it easy to erase a disk so that the data can't be restored, even with sophisticated methods.  There is a subcommand "analyze/purge" in the command format(1M) that does it all for you.  It will overwrite the selected area of your disk (usually s2) a total of four times with different patterns to achieve this.  Of course, depending on the size of the disk, this might take a while.  But it's secure enough to comply with the Department of Defense (DoD) disk wiping standard 5220.22-M.  Note however that as of June 28, 2007, overwriting in general is no longer accepted as a method to securely erase data.  Here is a link to the relevant DSS publication.
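What analyze/purge does under the hood is repeated overwriting.  Here's a tiny, portable sketch of the idea on an ordinary file - format(1M) does this properly on the raw device with four distinct patterns; the zero-fill below is a simplification for illustration:

```shell
# Create a demo file standing in for a disk slice:
printf 'sensitive-data' > demo.bin
size=$(wc -c < demo.bin)

# Overwrite it in place several times (purge uses 4 passes with
# different patterns; we just use zeros in this sketch):
for pass in 1 2 3 4; do
  dd if=/dev/zero of=demo.bin bs=1 count="$size" conv=notrunc 2>/dev/null
done
# demo.bin now has the same size but contains only zero bytes.
```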

Note that this method does not apply to SSDs of any kind!  And of course, to avoid any risk of losing your data with your disk, simply encrypt it!  It's quite easy using ZFS or Oracle TDE :-)

Update 2015-05-29:

  • The link to the original DoD standard doesn't work anymore and has been replaced by a link to Wikipedia.
  • Here's an additional link to a more recent NIST publication.
  • Note that with modern drives, destroying data with OS or application level tools will not satisfy higher security requirements.  The sector management of these drives might make defective sectors with sensitive data unavailable to such tools - but not to more intrusive methods of active data recovery.  If you want to protect against those, physical destruction is your only reliable option.

Update 2015-09-29:

This is my final comment on this matter:

  • If you are worried about the data on storage devices you no longer use, physical destruction of those devices is the only truly secure option.
  • Encrypt your data right from the start to avoid this issue.  Encryption is easily and in many cases freely available.  If you don't care enough about your data to encrypt it, you are unlikely to worry about data on decommissioned storage devices.
  • If you are worried enough not to trust encryption, no erasing technique will be good enough to satisfy your requirements.  And the cost of physically destroying those devices will not matter to you.

Tuesday May 24, 2011

ILOM for ALOM Users

ALOM is dead, long live ILOM!  The current T3 systems use the new ILOM 3.0.  This version no longer supports the legacy ALOM commands.  That's a change for all those that got used to the ALOM command syntax over the years.  A table comparing the old and new commands would be helpful.  But where would one find such a table?  In the documentation!

But it's not always easy to find what you're looking for right away. To help, here's a link to exactly that table:

ALOM to ILOM Command Comparison


News, tips and useful knowledge about SPARC, CMT, performance and its analysis, as well as experiences with Solaris on servers and laptops.

This is a bilingual blog (most of the time).
The views expressed on this blog are my own and do not necessarily reflect the views of Oracle.

