Monday, May 14, 2012

Benchware Test of T4

There's a rather thorough performance comparison between an M5000 and a T4-2 that I highly recommend to anyone still wondering whether those TPC-H world records are really possible:

Find the test report on the Benchware website - look for T4 in the "Benchmark" section.

And of course, check out the TPC-H results, too.  Look for 1000GB and 3000GB ;-)

(No, I didn't transfer to marketing.  I just think this test is worth being mentioned on a blog that's about performance, among other things.)

Thursday, May 10, 2012

T4 Crypto Cheat Sheet

In an earlier post, I already mentioned what's needed to make use of T4 crypto acceleration for Oracle TDE.  This hasn't changed - the patch for Solaris 10 is still under development.  However, there are of course other use cases for hardware crypto on T4.  Since the code path to this functionality has changed considerably compared to earlier CPUs, there have also been some changes in how it's used and observed.  Here's a short summary of these changes.

Using it:

  • SSH
      - T3 and before*: Automatically enabled with Solaris 10 5/09 and later. Disable/enable with the "UseOpenSSLEngine" clause in /etc/ssh/sshd_config.
      - T4 / Solaris 10: Requires patch 147707-01. Disable/enable with the "UseOpenSSLEngine" clause in /etc/ssh/sshd_config.
      - T4 / Solaris 11: Automatically enabled. Disable/enable with the "UseOpenSSLEngine" clause in /etc/ssh/sshd_config.
  • Java / JCE
      - All platforms: Automatically enabled. Configure in $JAVA_HOME/jre/lib/security/java.security.
  • ZFS Crypto
      - T3 and before*, T4 / Solaris 10: Not available.
      - T4 / Solaris 11: Hardware crypto automatically enabled if the dataset is encrypted.
  • IPsec
      - All platforms: Automatically enabled.
  • OpenSSL
      - T3 and before*: Use "-engine pkcs11".
      - T4 / Solaris 10: Requires patch 147707-01. Use "-engine pkcs11".
      - T4 / Solaris 11: The engine "t4" is used automatically. Optionally use "-engine pkcs11"; pkcs11 is recommended for RSA/DSA at this time.
  • KSSL (Kernel SSL proxy)
      - All platforms: Automatically enabled.
  • Oracle TDE
      - T3 and before*: Not supported.
      - T4 / Solaris 10: Pending patch.
      - T4 / Solaris 11: Automatically enabled with Oracle DB 11.2.0.3 and ASO.
  • Apache SSL
      - All platforms: Configure with "SSLCryptoDevice pkcs11".
  • Logical Domains
      - T3 and before*: Assign crypto units to domains.
      - T4 / Solaris 10 and 11: Functionality always available, no configuration required.

* T1 CPUs do not support symmetric ciphers like AES.  Consumers like SSH will therefore use software crypto on T1.
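To see the difference the hardware makes from the command line, a quick OpenSSL speed test is an easy check - a minimal sketch, assuming an AES workload (on T4 / Solaris 11 the "t4" engine is picked up automatically when the EVP interface is used):

$ openssl speed aes-256-cbc                        # software baseline, engines bypassed
$ openssl speed -evp aes-256-cbc                   # T4/Solaris 11: t4 engine used automatically
$ openssl speed -engine pkcs11 -evp aes-256-cbc    # explicit pkcs11 engine (T3, or T4/Solaris 10)

Comparing the numbers against the plain software run shows the acceleration quite clearly.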

Observability:
  • Note that unlike T3 and before, T4 crypto doesn't require kernel modules like ncp or n2cp.  As a consequence, there is no visibility of crypto hardware with kstats or cryptoadm.
  • T4 does provide hardware counters for crypto operations.  You can see these using cpustat:
    cpustat -c pic0=Instr_FGU_crypto 5
    
  • You can check the availability of the openssl engine with the command "openssl engine", and the general crypto support of the hardware and OS with the command "isainfo -v".
  • Since the T4 crypto implementation now allows direct userland access, there are no "crypto units" visible to cryptoadm.  For the same reason, there are no "crypto units" visible in LDoms Manager.  In LDoms, the functionality is always available and does not need to be configured separately.  Note that you should have the latest LDoms Manager patch (147507) installed.

Tuesday, April 17, 2012

Solaris Zones: Virtualization that Speeds up Benchmarks

One of the first questions that typically comes up when I talk to customers about virtualization is the overhead involved.  Now we all know that virtualization with hypervisors comes with an overhead of some sort.  We should also all know that exactly how big that overhead is depends on the type of workload as much as it depends on the hypervisor used.  While there have been attempts to create standard benchmarks for this, quantifying hypervisor overhead is still mostly hidden in the mists of marketing and benchmark uncertainty.  However, what always raises eyebrows is when I come to Solaris Zones (called Containers in Solaris 10) as an alternative to hypervisor virtualization.  Since Zones are, greatly simplified, nothing more than a group of Unix processes contained by a set of rules which are enforced by the Solaris kernel, it is quite evident that there can't be much overhead involved.  Nevertheless, since many people think in hypervisor terms, there is almost always some doubt about this claim of zero overhead.  And as much as I find the explanation with technical details compelling, I also understand that seeing is so much better than believing.  So - look and see:

The Oracle benchmark teams are so convinced of the advantages of Solaris Zones that they actually use them in the configurations for public benchmarking.  Solaris resource management will also work in a non-Zones environment, but Zones make it just so much easier to handle, especially with some of the more complex benchmark configurations.  There are numerous benchmark publications available using Solaris Containers, dating back to the days of the T5440.  Some recent examples, all of them world records, are:

The use of Solaris Zones is documented in all of these benchmark publications.

The benchmarking team also published a blog entry detailing how they make use of resource management with Solaris Zones to actually increase application performance.  One is almost tempted to call this "negative overhead", if the term weren't somewhat misleading.
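For illustration, here's a minimal sketch of the kind of resource control involved - giving an existing zone (here called "benchzone", a made-up name) its own dedicated CPUs:

# zonecfg -z benchzone
zonecfg:benchzone> add dedicated-cpu
zonecfg:benchzone:dedicated-cpu> set ncpus=16
zonecfg:benchzone:dedicated-cpu> end
zonecfg:benchzone> commit
zonecfg:benchzone> exit

The kernel then runs the zone's processes on their own processor set - resource management enforced by the scheduler, with no hypervisor in between.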

So, if you ever need to substantiate why Solaris Zones have no virtualization overhead, point to these (and probably some more) published benchmarks.

Monday, March 19, 2012

Setting up a local AI server - easy with Solaris 11

Many things are new in Solaris 11, and the Automated Installer (AI) is one of them.  If, like me, you've known Jumpstart for the last 2 centuries or so, you'll have to start from scratch.  Well, almost - the concepts are similar, and it's not all that difficult.  Just new.

I wanted to have an AI server that I could use for demo purposes, on the train if need be.  That answers the question of hardware requirements: portable.  But let's start at the beginning.

First, you need an OS image, of course.  In the new world of Solaris 11, it is now called a repository.  The original can be downloaded from the Solaris 11 page at Oracle.   What you want is the "Oracle Solaris 11 11/11 Repository Image", which comes in two parts that can be combined using cat.  MD5 checksums for these (and all other downloads from that page) are available closer to the top of the page.
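Combining the two parts is a one-liner - a sketch, since the exact part file names on the download page may differ from what's shown here:

$ cat sol-11-1111-repo-full.iso-a sol-11-1111-repo-full.iso-b > sol-11-1111-repo-full.iso
$ digest -a md5 sol-11-1111-repo-full.iso    # compare against the MD5 from the download page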

With that, building the repository is quick and simple:

# zfs create -o mountpoint=/export/repo rpool/ai/repo
# zfs create rpool/ai/repo/sol11
# mount -o ro -F hsfs /tmp/sol-11-1111-repo-full.iso /mnt
# rsync -aP /mnt/repo /export/repo/sol11
# umount /mnt
# pkgrepo rebuild -s /export/repo/sol11/repo
# zfs snapshot rpool/ai/repo/sol11@fcs
# pkgrepo info -s  /export/repo/sol11/repo
PUBLISHER PACKAGES STATUS           UPDATED
solaris   4292     online           2012-03-12T20:47:15.378639Z
That's all there is to it.  Let's make a snapshot, just to be on the safe side.  You never know when one will come in handy.  To use this repository, you could just add it as a file-based publisher:
# pkg set-publisher -g file:///export/repo/sol11/repo solaris
Since I also want to access this repository through a (virtual) network, I'll now quickly activate the repository service:
# svccfg -s application/pkg/server \
setprop pkg/inst_root=/export/repo/sol11/repo
# svccfg -s application/pkg/server setprop pkg/readonly=true
# svcadm refresh application/pkg/server
# svcadm enable application/pkg/server

That's all you need - now point your browser to http://localhost/ to view your beautiful repository server.  Step 1 is done.  All of this, by the way, is nicely documented in the README file contained in the repository image.

Of course, we already have updates to the original release.  You can find them in MOS in the Oracle Solaris 11 Support Repository Updates (SRU) Index.  You can simply add these to your existing repository or create separate repositories for each SRU.  The individual SRUs are cumulative - SRU4, for example, includes all updates from SRU2 and SRU3.  With ZFS, you can even have both: a full repository with all updates and, at the same time, incremental ones up to each of the updates:

# mount -o ro -F hsfs /tmp/sol-11-1111-sru4-05-incr-repo.iso /mnt
# pkgrecv -s /mnt/repo -d /export/repo/sol11/repo '*'
# umount /mnt
# pkgrepo rebuild -s /export/repo/sol11/repo
# zfs snapshot rpool/ai/repo/sol11@sru4
# zfs set snapdir=visible rpool/ai/repo/sol11
# svcadm restart svc:/application/pkg/server:default
The normal repository is now updated to SRU4.  Thanks to the ZFS snapshots, there is also a valid repository of Solaris 11 11/11 without the update located at /export/repo/sol11/.zfs/snapshot/fcs . If you like, you can also create another repository service for each update, running on a separate port.
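Such an additional service would just be another instance of pkg/server - here's a sketch, with the instance name "fcs" and port 10081 as arbitrary examples:

# svccfg -s pkg/server add fcs
# svccfg -s pkg/server:fcs addpg pkg application
# svccfg -s pkg/server:fcs setprop pkg/inst_root = astring: /export/repo/sol11/.zfs/snapshot/fcs/repo
# svccfg -s pkg/server:fcs setprop pkg/port = count: 10081
# svccfg -s pkg/server:fcs setprop pkg/readonly = boolean: true
# svcadm refresh pkg/server:fcs
# svcadm enable pkg/server:fcs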

But now let's continue with the AI server.  Just a little bit of reading in the documentation makes it clear that we will need to run a DHCP server for this.  Since I already have one active (for my SunRay installation) and since it's a good idea to keep these kinds of services separate anyway, I decided to create this in a Zone.  So, let's create one first:

# zfs create -o mountpoint=/export/install rpool/ai/install
# zfs create -o mountpoint=/zones rpool/zones
# zonecfg -z ai-server
zonecfg:ai-server> create
create: Using system default template 'SYSdefault'
zonecfg:ai-server> set zonepath=/zones/ai-server
zonecfg:ai-server> add dataset
zonecfg:ai-server:dataset> set name=rpool/ai/install
zonecfg:ai-server:dataset> set alias=install
zonecfg:ai-server:dataset> end
zonecfg:ai-server> commit
zonecfg:ai-server> exit
# zoneadm -z ai-server install
# zoneadm -z ai-server boot ; zlogin -C ai-server
Give it a hostname and IP address at first boot, and there's the Zone.  As publisher for Solaris packages, it will be bound to the "system publisher" of the global zone.  The /export/install filesystem, of course, is intended to be used by the AI server.  Let's configure it now:
# zlogin ai-server
root@ai-server:~# pkg install install/installadm
root@ai-server:~# installadm create-service -n x86-fcs -a i386 \
-s pkg://solaris/install-image/solaris-auto-install@5.11,5.11-0.175.0.0.0.2.1482 \
-d /export/install/fcs -i 192.168.2.20 -c 3

With that, the core AI server is already done.  What happened here?  First, I installed the AI server software.  IPS makes that nice and easy.  If necessary, it'll also pull in the required DHCP server and anything else that might be missing.  Watch out for that DHCP server software: in Solaris 11, there are two different versions.  There's the one you might know from Solaris 10 and earlier, and then there's a new one from ISC.  The latter is the one we need for AI.  The SMF service names of the two are very similar.  The "old" one is "svc:/network/dhcp-server:default".  The ISC server comes with several SMF services; we need at least "svc:/network/dhcp/server:ipv4".
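To check what's installed and running before (or after) create-service does its thing, something like this helps (illustrative commands only):

# pkg list | grep dhcp     # is the ISC DHCP server package present?
# svcs -a | grep dhcp      # old and new DHCP services, and their states

In this setup (with the -i and -c options given below), installadm configures and enables svc:/network/dhcp/server:ipv4 itself, so normally there's nothing to do here manually.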

The command "installadm create-service" creates the installation-service. It's called "x86-fcs", serves the "i386" architecture and gets its boot image from the repository of the system publisher, using version 5.11,5.11-0.175.0.0.0.2.1482, which is Solaris 11 11/11.  (The option "-a i386" in this example is optional, since the installserver itself runs on a x86 machine.) The boot-environment for clients is created in /export/install/fcs and the DHCP-server is configured for 3 IP-addresses starting at 192.168.2.20.  This configuration is stored in a very human readable form in /etc/inet/dhcpd4.conf.  An AI-service for SPARC systems could be created in the very same way, using "-a sparc" as the architecture option.

Now we would be ready to register and install the first client.  It would be installed with the default "solaris-large-server" group package, using the publisher "http://pkg.oracle.com/solaris/release", and would query its configuration interactively at first boot.  This makes it very clear that an AI server is really only a boot server.  The true source of packages to install can be somewhere else entirely.  Since I don't like these defaults for my demo setup, I did some extra config work for my clients.

The configuration of a client is controlled by manifests and profiles.  The manifest controls which packages are installed and how the filesystems are laid out.  In that, it's very much like the old "rules.ok" file in Jumpstart.  Profiles contain additional configuration like root passwords, primary user accounts, IP addresses, keyboard layout etc.  Hence, profiles are very similar to the old sysidcfg file.

The easiest way to get your hands on a manifest is to ask the AI server we just created for its default one.  Then modify that to our liking and give it back to the install server to use:

root@ai-server:~# mkdir -p /export/install/configs/manifests
root@ai-server:~# cd /export/install/configs/manifests
root@ai-server:~# installadm export -n x86-fcs -m orig_default \
-o orig_default.xml
root@ai-server:~# cp orig_default.xml s11-fcs.small.local.xml
root@ai-server:~# vi s11-fcs.small.local.xml
root@ai-server:~# more s11-fcs.small.local.xml
<!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1">
<auto_install>
  <ai_instance name="S11 Small fcs local">
    <target>
      <logical>
        <zpool name="rpool" is_root="true">
          <filesystem name="export" mountpoint="/export"/>
          <filesystem name="export/home"/>
          <be name="solaris"/>
        </zpool>
      </logical>
    </target>
    <software type="IPS">
      <destination>
        <image>
          <!-- Specify locales to install -->
          <facet set="false">facet.locale.*</facet>
          <facet set="true">facet.locale.de</facet>
          <facet set="true">facet.locale.de_DE</facet>
          <facet set="true">facet.locale.en</facet>
          <facet set="true">facet.locale.en_US</facet>
        </image>
      </destination>
      <source>
        <publisher name="solaris">
          <origin name="http://192.168.2.12/"/>
        </publisher>
      </source>
      <!--
        By default the latest build available, in the specified IPS
        repository, is installed.  If another build is required, the
        build number has to be appended to the 'entire' package in the
        following form:

            <name>pkg:/entire@0.5.11-0.build#</name>
      -->
      <software_data action="install">
        <name>pkg:/entire@0.5.11,5.11-0.175.0.0.0.2.0</name>
        <name>pkg:/group/system/solaris-small-server</name>
      </software_data>
    </software>
  </ai_instance>
</auto_install>

root@ai-server:~# installadm create-manifest -n x86-fcs -d \
-f ./s11-fcs.small.local.xml 
root@ai-server:~# installadm list -m -n x86-fcs
Manifest             Status    Criteria 
--------             ------    -------- 
S11 Small fcs local  Default   None
orig_default         Inactive  None

The major points in this new manifest are:

  • Install "solaris-small-server"
  • Install a few locales less than the default.  I'm not that fluent in French or Japanese...
  • Use my own package service as publisher, running on IP address 192.168.2.12
  • Install the initial release of Solaris 11:  pkg:/entire@0.5.11,5.11-0.175.0.0.0.2.0

Using a similar approach, I'll create a default profile interactively and use it as a template for a few customized building blocks, each defining a part of the overall system configuration.  The modular approach makes it easy to configure numerous clients later on:

root@ai-server:~# mkdir -p /export/install/configs/profiles
root@ai-server:~# cd /export/install/configs/profiles
root@ai-server:~# sysconfig create-profile -o default.xml
root@ai-server:~# cp default.xml general.xml; cp default.xml mars.xml
root@ai-server:~# cp default.xml user.xml
root@ai-server:~# vi general.xml mars.xml user.xml
root@ai-server:~# more general.xml mars.xml user.xml
::::::::::::::
general.xml
::::::::::::::
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<service_bundle type="profile" name="sysconfig">
  <service version="1" type="service" name="system/timezone">
    <instance enabled="true" name="default">
      <property_group type="application" name="timezone">
        <propval type="astring" name="localtime" value="Europe/Berlin"/>
      </property_group>
    </instance>
  </service>
  <service version="1" type="service" name="system/environment">
    <instance enabled="true" name="init">
      <property_group type="application" name="environment">
        <propval type="astring" name="LANG" value="C"/>
      </property_group>
    </instance>
  </service>
  <service version="1" type="service" name="system/keymap">
    <instance enabled="true" name="default">
      <property_group type="system" name="keymap">
        <propval type="astring" name="layout" value="US-English"/>
      </property_group>
    </instance>
  </service>
  <service version="1" type="service" name="system/console-login">
    <instance enabled="true" name="default">
      <property_group type="application" name="ttymon">
        <propval type="astring" name="terminal_type" value="vt100"/>
      </property_group>
    </instance>
  </service>
  <service version="1" type="service" name="network/physical">
    <instance enabled="true" name="default">
      <property_group type="application" name="netcfg">
        <propval type="astring" name="active_ncp" value="DefaultFixed"/>
      </property_group>
    </instance>
  </service>
  <service version="1" type="service" name="system/name-service/switch">
    <property_group type="application" name="config">
      <propval type="astring" name="default" value="files"/>
      <propval type="astring" name="host" value="files dns"/>
      <propval type="astring" name="printer" value="user files"/>
    </property_group>
    <instance enabled="true" name="default"/>
  </service>
  <service version="1" type="service" name="system/name-service/cache">
    <instance enabled="true" name="default"/>
  </service>
  <service version="1" type="service" name="network/dns/client">
    <property_group type="application" name="config">
      <property type="net_address" name="nameserver">
        <net_address_list>
          <value_node value="192.168.2.1"/>
        </net_address_list>
      </property>
    </property_group>
    <instance enabled="true" name="default"/>
  </service>
</service_bundle>
::::::::::::::
mars.xml
::::::::::::::
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<service_bundle type="profile" name="sysconfig">
  <service version="1" type="service" name="network/install">
    <instance enabled="true" name="default">
      <property_group type="application" name="install_ipv4_interface">
        <propval type="astring" name="address_type" value="static"/>
        <propval type="net_address_v4" name="static_address" 
                 value="192.168.2.100/24"/>
        <propval type="astring" name="name" value="net0/v4"/>
        <propval type="net_address_v4" name="default_route" 
                 value="192.168.2.1"/>
      </property_group>
      <property_group type="application" name="install_ipv6_interface">
        <propval type="astring" name="stateful" value="yes"/>
        <propval type="astring" name="stateless" value="yes"/>
        <propval type="astring" name="address_type" value="addrconf"/>
        <propval type="astring" name="name" value="net0/v6"/>
      </property_group>
    </instance>
  </service>
  <service version="1" type="service" name="system/identity">
    <instance enabled="true" name="node">
      <property_group type="application" name="config">
        <propval type="astring" name="nodename" value="mars"/>
      </property_group>
    </instance>
  </service>
</service_bundle>
::::::::::::::
user.xml
::::::::::::::
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<service_bundle type="profile" name="sysconfig">
  <service version="1" type="service" name="system/config-user">
    <instance enabled="true" name="default">
      <property_group type="application" name="root_account">
        <propval type="astring" name="login" value="root"/>
        <propval type="astring" name="password" 
                 value="noIWillNotTellYouMyPasswordNotEvenEncrypted"/>
        <propval type="astring" name="type" value="role"/>
      </property_group>
      <property_group type="application" name="user_account">
        <propval type="astring" name="login" value="stefan"/>
        <propval type="astring" name="password" 
                 value="noIWillNotTellYouMyPasswordNotEvenEncrypted"/>
        <propval type="astring" name="type" value="normal"/>
        <propval type="astring" name="description" value="Stefan Hinker"/>
        <propval type="count" name="uid" value="12345"/>
        <propval type="count" name="gid" value="10"/>
        <propval type="astring" name="shell" value="/usr/bin/bash"/>
        <propval type="astring" name="roles" value="root"/>
        <propval type="astring" name="profiles" value="System Administrator"/>
        <propval type="astring" name="sudoers" value="ALL=(ALL) ALL"/>
      </property_group>
    </instance>
  </service>
</service_bundle>
root@ai-server:~# installadm create-profile -n x86-fcs -f general.xml
root@ai-server:~# installadm create-profile -n x86-fcs -f user.xml
root@ai-server:~# installadm create-profile -n x86-fcs -f mars.xml \
-c ipv4=192.168.2.100
root@ai-server:~# installadm list -p

Service Name  Profile     
------------  -------     
x86-fcs       general.xml
              mars.xml
              user.xml

root@ai-server:~# installadm list -n x86-fcs -p

Profile      Criteria 
-------      -------- 
general.xml  None
mars.xml     ipv4 = 192.168.2.100
user.xml     None

Here's the idea behind these files:

  • "general.xml" contains settings valid for all my clients.  Stuff like DNS servers, for example, which in my case will always be the same.
  • "user.xml" only contains user definitions.  That is, a root password and a primary user.
    Both of these profiles will be valid for all clients (for now).
  • "mars.xml" defines network settings for an individual client.  This profile is associated with an IP-Address.  For this to work, I'll have to tweak the DHCP-settings in the next step:
root@ai-server:~# installadm create-client -e 08:00:27:AA:3D:B1 -n x86-fcs
root@ai-server:~# vi /etc/inet/dhcpd4.conf
root@ai-server:~# tail -5 /etc/inet/dhcpd4.conf
host 080027AA3DB1 {
  hardware ethernet 08:00:27:AA:3D:B1;
  fixed-address 192.168.2.100;
  filename "01080027AA3DB1";
}

This completes the client preparations.  I manually added the IP address for mars to /etc/inet/dhcpd4.conf; this is needed for the "mars.xml" profile.  Disabling arbitrary DHCP replies keeps this DHCP server quiet, which makes my life in a shared environment a lot more peaceful ;-)
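The "disabling" I mean is done in dhcpd4.conf itself, using standard ISC syntax - a sketch, to be adapted to your own network before copying:

# Only answer clients explicitly listed with a host entry:
deny unknown-clients;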

Note: The above example shows the configuration for x86 clients.  SPARC clients have a slightly different entry in the DHCP config file, again with some manual tweaking to assign a fixed IP address to my client:

subnet 192.168.2.0 netmask 255.255.255.0 {
  range 192.168.2.200 192.168.2.201;
  option broadcast-address 192.168.2.255;
  option routers 192.168.2.1;
  next-server 192.168.2.13;
}

class "SPARC" {
  match if not (substring(option vendor-class-identifier, 0, 9) = "PXEClient");
  filename "http://192.168.2.13:5555/cgi-bin/wanboot-cgi";
}

host sparcy {
   hardware ethernet 00:14:4f:fb:52:3c ;
   fixed-address 192.168.2.202 ;
}
Now, I of course want this installation to be completely hands-off.  For this to work, I'll need to modify the GRUB boot menu for this client slightly.  You can find it in /etc/netboot.  "installadm create-client" will create a new boot menu for every client, identified by the client's MAC address.  The template for this can be found in a subdirectory named after the install service, /etc/netboot/x86-fcs in our case.  If you don't want to change this manually for every client, modify that template to your liking instead.
root@ai-server:~# cd /etc/netboot
root@ai-server:~# cp menu.lst.01080027AA3DB1 menu.lst.01080027AA3DB1.org
root@ai-server:~# vi menu.lst.01080027AA3DB1
root@ai-server:~# diff menu.lst.01080027AA3DB1 menu.lst.01080027AA3DB1.org
1,2c1,2
< default=1
< timeout=10
---
> default=0
> timeout=30
root@ai-server:~# more menu.lst.01080027AA3DB1
default=1
timeout=10
min_mem64=0

title Oracle Solaris 11 11/11 Text Installer and command line
	kernel$ /x86-fcs/platform/i86pc/kernel/$ISADIR/unix -B install_media=http://$serverIP:5555//export/install/fcs,install_service=x86-fcs,install_svc_address=$serverIP:5555
	module$ /x86-fcs/platform/i86pc/$ISADIR/boot_archive

title Oracle Solaris 11 11/11 Automated Install
	kernel$ /x86-fcs/platform/i86pc/kernel/$ISADIR/unix -B install=true,install_media=http://$serverIP:5555//export/install/fcs,install_service=x86-fcs,install_svc_address=$serverIP:5555,livemode=text
	module$ /x86-fcs/platform/i86pc/$ISADIR/boot_archive

Now just boot the client off the network using PXE boot.  For my demo purposes, that's a VirtualBox client, of course.  Again, if this were a SPARC system, you'd instead type "boot net:dhcp - install" at the "ok" prompt and then just watch the installation.

That's all there is to it.  And despite the fact that this blog entry is a little longer - that wasn't that hard now, was it?

Wednesday, February 29, 2012

Solaris Fingerprint Database - How it's done in Solaris 11

Many remember the Solaris Fingerprint Database.  It was a great tool to verify the integrity of a Solaris binary.  Unfortunately, it went away with the rest of SunSolve, and was not revived in its replacement, "My Oracle Support".  Here's the good news: it's back for Solaris 11, and it's better than ever!

It is now totally integrated with IPS...  Read more
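Without giving away the whole post: since IPS knows the correct hash of every file it delivers, a fingerprint check boils down to something like this sketch (the package name is just an example, and the hash placeholder is to be filled in):

$ digest -a sha1 /usr/bin/ls       # compute the binary's fingerprint
$ pkg search -r <sha1-from-above>  # look it up in the repository
$ pkg verify system/core-os        # or verify an installed package in place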


Monday, February 20, 2012

Solaris 11 submitted for EAL4+ certification

Solaris 11 has been submitted to the Canadian Common Criteria Scheme for certification at level EAL4+.  It will be certified against the protection profile "Operating System Protection Profile (OS PP)" as well as the extensions

  • Advanced Management (AM)
  • Extended Identification and Authentication (EIA)
  • Labeled Security (LS)
  • Virtualization (VIRT)

EAL4+ is the highest level typically achievable for commercial software, and it is the highest level mutually recognized by 26 countries, including Germany and the USA.  Completion of the certification lies in the hands of the certification authority.

You can check the current status of this certification (as well as other certified Oracle software) on the page Oracle Security Evaluations.

Wednesday, December 21, 2011

Which IO Option for which Server?

For those of you who always wanted to know which IO option cards are available for which server, there is now a new portal on wikis.oracle.com.  This wiki contains a full list of IO options, ordered by server and maintained for all current systems.  Also included is the number of cards supported on each system.  The same information, for all current as well as all older models, is available in the Systems Handbook, the ultimate answerbook for all hardware questions ;-)

(For those who have been around for a while: this service replaces the previous "Cross Platform IO Wiki", which is no longer available.)

Monday, December 5, 2011

Hard Partitioning!

Good news for all users of Oracle VM Server for SPARC (aka LDoms) running Oracle software: since December 2, LDoms count as "Hard Partitioning".  This makes it possible to license Oracle software for only those cores of a server that you really need.  Details are available from License Management Services.
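In practice, hard partitioning means binding the domain to whole cores instead of individual strands - roughly like this sketch (the domain name is made up; the LMS document has the authoritative rules):

# ldm set-core 4 ldom1    # allocate 4 whole cores to domain ldom1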

Tuesday, November 8, 2011

Oracle TDE and Hardware Accelerated Crypto

Finally, there's a clear and brief description of the hardware and software required to use hardware-accelerated encryption with Oracle TDE.  Here's an even shorter summary ;-)

  • SPARC T4 or Intel CPU with AES-NI
  • Solaris 11 or Linux (a patch for Solaris 10 will be provided eventually)
  • Oracle 11.2.0.3
    • If you use Linux, Oracle DB 11.2.0.2 with patch 10296641 will also work.

The longer version of this summary is available as MOS-Note 1365021.1
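A quick way to check whether a given machine qualifies on the hardware side - this works on SPARC T4 as well as on AES-NI capable x86 systems:

$ isainfo -v | grep -i aes    # should list the aes instruction set extension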

Happy crypting!

Friday, October 7, 2011

Solaris 11 Launch

There have been many questions and rumors about the upcoming launch of Solaris 11.  Now it's out:  Watch the webcast on

November 9, 2011
at 10am ET

Consider yourself invited to join!

(I hope to get around to summarizing all the OpenWorld announcements, especially around T4, soon...)

Thursday, September 8, 2011

Core Factor for T4 published

Oracle has published an update to the Processor Core Factor Table that lists the (yet to be released) T4 CPU with a factor of 0.5.  This leaves the license cost per socket unchanged compared to T3 and puts T4 in the same league as SPARC64 VII+ and all current x86 CPUs.  We will have to wait for the announcement of the CPU before we can actually speak about performance.  But this core factor (which is by no means a measure of CPU performance!) seems to confirm what the few other available bits of information hint at: T4 will deliver on Oracle's performance claims.

Monday, August 1, 2011

Oracle Solaris Studio 12.3 Beta Program

The beta program for Oracle Solaris Studio 12.3 is now open for participation.  Anyone willing to test the newest compiler and developer tools is welcome to join.  You may expect performance improvements over earlier versions of Studio as well as GCC that make testing worth your while.

Happy testing!

Thursday, July 28, 2011

No humor left?

Just in case we all forgot that Unix people were originally blessed with lots of humor:
OTN ID 1172413.1: How to make a grilled cheese sandwich

Tuesday, July 5, 2011

Some Thoughts about the new SPARC Roadmap

As most of you know by now, Oracle has recently released an updated roadmap for its SPARC based servers.  Of course, others also publish roadmaps, and we all know how CPUs tend to slip and roadmaps sort of evolve backwards over time.  So what's the value in this new roadmap?

When Oracle acquired Sun, there were all these big promises about increased investment in SPARC and Solaris.  We've already seen the first deliveries on these promises:

  • The SPARC T3 CPU doubled the throughput per socket for the T-Series line of servers.
  • Solaris 11 Express gives customers a supported (!) preview of what's coming with Solaris 11
  • M-Series servers can be upgraded yet again for up to an additional 20% performance gain.

All this is shown in the new roadmap as ticked-off items.  All of it was already well underway at the time of the acquisition, although the speed with which T3 came to market can already be attributed to Oracle's accelerated investment.  Nevertheless, the real proof point will be the next promise on that roadmap.  It says: "3x Single Strand" for a T-Series CPU.  By now, we all know that the name will probably be "T4".  Rick Hetherington told us a while ago that the chip will have 8 cores and will execute single-threaded workloads up to 5 times faster than T3.  That would be even more than the 3x promised by the roadmap.  The only question that remains is: will Oracle deliver?

While no one can tell until the chip and systems are actually announced, chances are good it will.  Why else would Oracle invite customers to participate in a T4 Beta Program?

Back to the original question:  What's the value of this new roadmap?  Well, the foundation of trust, based on delivering on promises, looks good for that part of the roadmap arrow that's already past.  Oracle delivered, and it delivered on time.  That's more than some others can claim.

A little disclaimer to keep everyone happy:  This is my personal view and not an official statement from Oracle.

Thursday, June 9, 2011

OVM Server for SPARC 2.1 is here!

The newest version of OVM Server for SPARC aka LDoms is released!  Here's the press release...


What, already a new version again?  Well, the most-missed feature of the previous versions was finally completed, and we didn't want to keep everyone waiting ;-)  The new version 2.1 turns "Warm Migration" into "Live Migration".  All the other improvements can be found in "What's New".  Once I have further details about Live Migration, I'll post them here.  You can find the download on MOS and the documentation on OTN.
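For the impatient, a live migration is started with a single command - a sketch with made-up domain and host names:

# ldm migrate-domain ldom1 root@target-host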

About

News, tips, and other things worth knowing about SPARC, CMT, performance and performance analysis, as well as experiences with Solaris on servers and laptops.

This is a bilingual blog (most of the time). Please select your preferred language.
The views expressed on this blog are my own and do not necessarily reflect the views of Oracle.
