Monday Jul 20, 2009

Sun T5440 SPECjbb2005 Beats IBM POWER6 Chip-to-Chip

A Sun SPARC Enterprise T5440 server equipped with four UltraSPARC T2 Plus processors running at 1.6GHz delivered a result of 841380 SPECjbb2005 bops and 26293 SPECjbb2005 bops/JVM on the SPECjbb2005 benchmark.

  • One Sun SPARC Enterprise T5440 (four 1.6GHz UltraSPARC T2 Plus chips, 4RU) demonstrated 5% better performance than the IBM Power 570 (16RU) result of 798752 SPECjbb2005 bops using eight 4.7 GHz POWER6 chips. The IBM system requires twice as many processor chips and four times the rack space of the T5440.
  • One Sun SPARC Enterprise T5440 (four 1.6GHz UltraSPARC T2 Plus chips, 4RU) has 2.3 times better power/performance than the IBM Power 570 (16RU) that used eight 4.7 GHz POWER6 chips.
  • Sun's 1.6GHz UltraSPARC T2 Plus processor delivers more than twice the per-chip SPECjbb2005 performance of the 4.7 GHz IBM POWER6 processor (210345 vs. 99844 bops per chip).
  • One Sun SPARC Enterprise T5440 (four 1.6GHz UltraSPARC T2 Plus chips) demonstrated 21% better performance when compared to the Sun SPARC Enterprise T5440 result of 692736 SPECjbb2005 bops using four 1.4GHz UltraSPARC T2 Plus chips.
  • The Sun SPARC Enterprise T5440 used OpenSolaris 2009.06 and the Sun JDK 1.6.0_14 Performance Release to obtain this result.
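The comparisons above follow directly from the published scores; a quick arithmetic sketch (the per-chip figures are simply total bops divided by chip count, a derivation of ours rather than a SPEC-reported metric):

```python
# Deriving the headline comparisons from the published SPECjbb2005 scores.
t5440_bops, t5440_chips = 841380, 4    # Sun SPARC Enterprise T5440, 1.6GHz US-T2 Plus
p570_bops, p570_chips = 798752, 8      # IBM Power 570, 4.7GHz POWER6
t5440_14_bops = 692736                 # T5440 with 1.4GHz US-T2 Plus chips

system_advantage = t5440_bops / p570_bops - 1                            # ~5% better system result
per_chip_ratio = (t5440_bops / t5440_chips) / (p570_bops / p570_chips)   # per-chip throughput ratio
speedup_16_vs_14 = t5440_bops / t5440_14_bops - 1                        # ~21% over the 1.4GHz result

print(f"{system_advantage:.1%}, {per_chip_ratio:.2f}x per chip, {speedup_16_vs_14:.1%}")
```

The per-chip ratio works out to roughly 2.1x in Sun's favor, before accounting for the power draw of the two systems.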

Performance Landscape

SPECjbb2005 Performance Chart (ordered by performance)

bops : SPECjbb2005 Business Operations per Second (bigger is better)

System                      Chips  Cores  Threads  GHz  Type         bops     bops/JVM
HP DL585 G6                   4     24      24     2.8  AMD Opteron  937207   234302
Sun SPARC Enterprise T5440    4     32     256     1.6  US-T2 Plus   841380    26293
IBM Power 570                 8     16      32     4.7  POWER6       798752    99844
Sun SPARC Enterprise T5440    4     32     256     1.4  US-T2 Plus   692736    21648

Complete benchmark results may be found at the SPEC benchmark website.

Results and Configuration Summary

Hardware Configuration:

    Sun SPARC Enterprise T5440
      4 x 1.6 GHz UltraSPARC T2 Plus processors
      256 GB

Software Configuration:

    OpenSolaris 2009.06
    Java HotSpot(TM) 32-Bit Server, Version 1.6.0_14 Performance Release

Benchmark Description

SPECjbb2005 (Java Business Benchmark) measures the performance of a Java implemented application tier (server-side Java). The benchmark is based on the order processing in a wholesale supplier application. The performance of the user tier and the database tier are not measured in this test. The metrics given are number of SPECjbb2005 bops (Business Operations per Second) and SPECjbb2005 bops/JVM (bops per JVM instance).
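As a minimal illustration of how the two metrics relate, bops/JVM is the total throughput divided by the number of JVM instances. The instance count of 32 below is our inference (one JVM per core on the 32-core T5440, consistent with the Key Points section), not a number stated in this summary:

```python
# Relating the two SPECjbb2005 metrics: total bops and bops per JVM instance.
total_bops = 841380
num_jvms = 32          # assumed: one JVM per core on the 32-core T5440

bops_per_jvm = total_bops // num_jvms   # matches the published 26293 bops/JVM
print(bops_per_jvm)
```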

Key Points and Best Practices

  • Enhancements to the JVM had a major impact on performance.
  • Each JVM was run in the FX scheduling class to improve performance by reducing the frequency of context switches.
  • Each JVM was bound to a separate processor set containing one core, reducing memory access latency by using the physical memory closest to that processor.

See Also

Disclosure Statement:

SPECjbb2005 Sun SPARC Enterprise T5440 (4 chips, 32 cores) 841380 SPECjbb2005 bops, 26293 SPECjbb2005 bops/JVM. Results submitted to SPEC. HP DL585 G6 (4 chips, 24 cores) 937207 SPECjbb2005 bops, 234302 SPECjbb2005 bops/JVM. IBM Power 570 (8 chips, 16 cores) 798752 SPECjbb2005 bops, 99844 SPECjbb2005 bops/JVM. Sun SPARC Enterprise T5440 (4 chips, 32 cores) 692736 SPECjbb2005 bops, 21648 SPECjbb2005 bops/JVM. SPEC and SPECjbb are registered trademarks of the Standard Performance Evaluation Corporation. Results as of 7/20/09.

Sun watts were measured on the system during the test.

IBM p 570 8P (4 building blocks) power specifications were calculated as 80% of the maximum input power reported 7/8/09 in the “Facts and Features Report”.

Friday Jul 03, 2009

SPECmail2009 on Sun Fire X4275+Sun Storage 7110: Mail Server System Solution

Significance of Results

Sun has a new SPECmail2009 result on a Sun Fire X4275 server and Sun Storage 7110 Unified Storage System running Sun Java System Messaging Server 6.2. OpenStorage and ZFS were a key part of the new SPECmail2009 World Record.

  • The Sun Fire X4275 server, equipped with two 2.93GHz quad-core Intel Xeon X5570 processors and running Sun Java System Messaging Server 6.2 on Solaris 10, achieved a new World Record of 8000 SPECmail_Ent2009 IMAP4 users at 38,348 Sessions/hour.

  • The Sun result was obtained with about half the disk spindles of Apple's direct-attached storage solution, thanks to the Sun Storage 7110 Unified Storage System. The Sun submission is the first result using a NAS solution, specifically two Sun Storage 7110 Unified Storage Systems.
  • This benchmark result clearly demonstrates that the Sun Fire X4275 server, together with Sun Java System Messaging Server 6.2 and Solaris 10 on Sun Storage 7110 Unified Storage Systems, can support a large, enterprise-level IMAP mail server environment as a reliable, low-cost solution, delivering the best performance and maximizing data integrity with ZFS.

SPECmail2009 Performance Landscape (ordered by performance)

System                      Processor      GHz   Ch, Co, Th  SPECmail_Ent2009 Users  Sessions/hour
Sun Fire X4275              Intel X5570    2.93  2, 8, 16    8000                    38,348
Apple Xserv3,1              Intel X5570    2.93  2, 8, 16    6000                    28,887
Sun SPARC Enterprise T5220  UltraSPARC T2  1.4   1, 8, 64    3600                    17,316

    Number of SPECmail_Ent2009 users (bigger is better)
    SPECmail2009 Sessions/hour (bigger is better)
    Ch, Co, Th: Chips, Cores, Threads

Complete benchmark results may be found at the SPEC benchmark website.

Results and Configuration Summary

Hardware Configuration:
    Sun Fire X4275
      2 x 2.93 GHz QC Intel Xeon X5570 processors
      72 GB
      12 x 300GB, 10000 RPM SAS disk

    2 x Sun Storage 7110 Unified Storage System, each with
      16 x 146GB SAS 10K RPM

Software Configuration:

    O/S: Solaris 10
    Mail Server: Sun Java System Messaging Server 6.2

Benchmark Description

The SPECmail2009 benchmark measures the ability of corporate e-mail systems to meet the needs of today's demanding e-mail users over fast corporate local area networks (LANs). The SPECmail2009 benchmark simulates corporate mail server workloads ranging from 250 to 10,000 or more users, using the industry-standard SMTP and IMAP4 protocols. The benchmark creates client workloads based on a 40,000-user corporation, and uses folder and message MIME structures that include both traditional office documents and a variety of rich media content. It also adds support for encrypted network connections using industry-standard SSL v3.0 and TLS 1.0 technology. SPECmail2009 replaces all versions of SPECmail2008, first released in August 2008. The results from the two benchmarks are not comparable.

Software on one or more client machines generates a benchmark load for a System Under Test (SUT) and measures the SUT response times. A SUT can be a mail server running on a single system or a cluster of systems.

A SPECmail2009 'run' simulates a 100% load level associated with the specific number of users, as defined in the configuration file. The mail server must maintain a specific Quality of Service (QoS) at the 100% load level to produce a valid benchmark result. If the mail server does maintain the specified QoS at the 100% load level, the performance of the mail server is reported as SPECmail_Ent2009 SMTP and IMAP Users at SPECmail2009 Sessions per hour. The SPECmail_Ent2009 users at SPECmail2009 Sessions per Hour metric reflects the unique workload combination for a SPEC IMAP4 user.
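To illustrate how the two halves of the metric relate, one can divide the session rate by the supported user count; the per-user rates below are derived figures of ours, not SPEC-defined constants:

```python
# Sessions/hour per supported user, derived from the published SPECmail2009 results.
results = {
    "Sun Fire X4275": (8000, 38348),    # (SPECmail_Ent2009 users, sessions/hour)
    "Apple Xserv3,1": (6000, 28887),
}
for system, (users, sessions_per_hour) in results.items():
    print(f"{system}: {sessions_per_hour / users:.2f} sessions/user/hour")
```

The near-identical per-user rates (~4.8 sessions/user/hour) reflect that the benchmark drives a fixed per-user workload, so the session-rate metric scales with the number of users supported at the required QoS.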

Key Points and Best Practices

  • Each XTA7110 was configured as one RAID1 volume (14 x 143GB) with 2 NFS shared LUNs, accessed by the SUT via NFSv4. The mailstore volumes were mounted with the NFS mount options nointr, hard, and xattr. There was a total of 4 Gb/sec of network connectivity between the SUT and the 7110 Unified Storage Systems, using 2 NorthStar dual 1 Gb/sec NICs.
  • The clients used these Java options: java -d64 -Xms4096m -Xmx4096m -XX:+AggressiveHeap
  • See the SPEC Report for all OS, network and messaging server tunings.

See Also

Disclosure Statement

SPEC, SPECmail reg tm of Standard Performance Evaluation Corporation. Results as of 07/06/2009 on SPECmail2009: Sun Fire X4275 (8 cores, 2 chips) SPECmail_Ent2009 8000 users at 38,348 SPECmail2009 Sessions/hour. Apple Xserv3,1 (8 cores, 2 chips) SPECmail_Ent2009 6000 users at 28,887 SPECmail2009 Sessions/hour.

Friday Jun 19, 2009

Pointers to Java Performance Tuning resources

For pointers to Java performance tuning resources, please see Madhu Konda's Weblog.

Friday Jun 05, 2009

Interpreting Sun's SPECpower_ssj2008 Publications

Sun recently entered the SPECpower fray with the publication of three results on the SPECpower_ssj2008 benchmark.  Strangely, the three publications documented results on the same hardware platform (Sun Netra X4250) running identical software stacks, but the results were markedly different.  What exactly were we trying to get at?

 Benchmark Configurations

Sun produces robust industrial-grade servers with a range of redundancy features we believe benefit our customers.   These features increase reliability, at the cost of additional power consumption. For example, redundant power supplies and redundant fans allow servers to tolerate faults, and hot-swap capabilities further minimize downtime.

The benchmark run and reporting rules require that the tested configuration include all components implied by the model name. Within these limitations, the first publication was intended to be the best result (that is, the lowest power consumption per unit of performance) achievable on the Sun Netra X4250 platform, by minimizing the configured hardware to the greatest extent possible.

Common Components

All tested configurations had the following components in common:

  • System:  Sun Netra X4250
  • Processor: 2 x Intel L5408 QC @ 2.13GHz
  • 2 x 658 watt redundant AC power supplies
  • redundant fans
  • standard I/O expansion mezzanine
  • standard Telco dry contact alarm

And the same software stack:

  • OS: Windows Server 2003 R2 Enterprise X64 Edition SP2
  • Drivers: platform-specific drivers from Sun Netra X4250 Tools and Drivers DVD Version 2.1N
  • JVM: Java HotSpot 32-Bit Server VM on Windows, version 1.6.0_14

Tiny Configuration

In addition to the common hardware components, the tiny configuration was limited to:

  • 8 GB of Memory (4 x 2048 MB as PC2-5300F 2Rx8)
  • 1 x Sun 146 GB 10K RPM SAS internal drive

This is called the tiny configuration because it seems unlikely that most customers would configure an 8-core server with only one disk and only 1 GB available per core. Nevertheless, from a benchmark point of view, this configuration gave the best result.

Typical Configuration

The other two results were both produced on a configuration we considered much more typical of the configurations actually ordered by customers. In addition to the common hardware, this typical configuration included:

  • 32 GB of Memory (8 x 4096 MB as PC2-5300F)
  • 4 x Sun 146 GB 10K RPM SAS internal drives
  • 1 x Sun x8 PCIe Quad Gigabit Ethernet option card (X4447A-Z)

Nothing special was done with the additional components. The added memory increased the performance component of the benchmark. The other components were installed and configured but allowed to sit idle, so they consumed less power than they would have under load.

One Other Thing: Tuning for Performance

So one thing we're getting at is the difference in power consumption between a small configuration optimized for a power-performance benchmark and a typical configuration optimized for customer workloads.  Hardware (power consumption) is only half of the benchmark--the other half being the performance achieved by the System Under Test (SUT).
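For reference, the benchmark's overall metric divides the sum of ssj_ops delivered across all target load levels by the sum of average power at each level, active idle included. A minimal sketch with purely hypothetical measurements (none of these numbers come from our submissions):

```python
# SPECpower_ssj2008 overall metric: sum of ssj_ops across all target load
# levels divided by the sum of average power, active idle included.
# All measurements below are hypothetical, for illustration only.
ssj_ops = [250000, 225000, 200000, 175000, 150000, 125000,
           100000, 75000, 50000, 25000, 0]              # 100% .. 10% loads, then active idle
watts   = [300, 285, 270, 255, 240, 225,
           210, 195, 180, 165, 150]                     # avg power at each level

overall = sum(ssj_ops) / sum(watts)
print(f"{overall:.0f} overall ssj_ops/watt")
```

Note that active idle contributes zero ssj_ops but its power still counts in the denominator, which is one reason small, lightly configured systems score well.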

Tuning Choices 

In all three publications the identical tunings were applied at the software level: identical java command-line arguments and JVM-to-processor affinity. We also applied, in the case of the better results, the common (but usually non-default) BIOS-level optimization of disabling the hardware prefetcher and adjacent cache line prefetch. These optimizations are commonly applied to produce optimized SPECpower_ssj2008 results, but it is unlikely that many production applications would benefit from these settings. To demonstrate the effect of this tuning, the final result was generated with standard BIOS settings.

 And just so we couldn't be accused of sand-bagging the results, the number of JVMs was increased in the typical configurations to take advantage of the additional memory populated over and above the tiny configuration.  Additional performance was achieved but sadly it doesn't compensate for the higher power consumption of all that memory.

So in summary we tuned:

  • Tiny Configuration: non-default BIOS settings
  • Typical Configuration 1: non-default BIOS settings; additional JVMs to utilize added memory
  • Typical Configuration 2: default BIOS settings; additional JVMs to utilize added memory

At the OS level, all tunings  were identical.


The results are summarized in the SPEC full disclosure reports for the three tested configurations:

  • Sun Netra X4250 (8GB, non-default BIOS)
  • Sun Netra X4250 (32GB, non-default BIOS)
  • Sun Netra X4250 (32GB, default BIOS)

  • The measurement and reporting methods of the benchmark encourage small memory configurations. Comparing the first and second results, adding memory yielded a minimal performance improvement (from 244832 to 251555) but a large increase in power consumption: 68 watts at peak.

  • In our opinion, unrealistically small configurations yield the best results on this benchmark.  On the more typical system, the benchmark overall metric decreased from 600 overall ssj_ops per watt to 478 overall ssj_ops per watt, despite our best effort to utilize the additional configured memory.

  • On typical configurations, reverting to default BIOS settings resulted in a significant decrease in performance (from 251555 to 229828) with no corresponding decrease in power consumption (essentially identical for both results).
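Expressed as percentages, the memory trade-off looks even starker; a quick check of ours against the published numbers:

```python
# Cost/benefit of the extra memory in the typical configuration,
# computed from the published results.
tiny_perf, typical_perf = 244832, 251555      # ssj_ops, tiny vs. typical (tuned BIOS)
tiny_metric, typical_metric = 600, 478        # overall ssj_ops/watt

perf_gain = typical_perf / tiny_perf - 1        # ~2.7% more throughput
metric_drop = 1 - typical_metric / tiny_metric  # ~20% worse overall metric
print(f"+{perf_gain:.1%} performance, -{metric_drop:.1%} overall metric")
```

A roughly 3% throughput gain cost about a fifth of the overall power-performance metric.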

Configurations typical of customer systems (with adequate memory, internal disks, and option cards) consume more power than configurations which are commonly benchmarked, while providing no corresponding improvement in SPECpower_ssj2008 benchmark performance. The result is a lower overall power-performance metric on typical configurations and a lack of published benchmark results on robust systems with the capacities and redundancies that enterprise customers desire.

Fair Use Disclosure

SPEC, SPECpower, and SPECpower_ssj are trademarks of the Standard Performance Evaluation Corporation. All results from the SPEC website as of June 5, 2009. For a complete set of accepted results, refer to that site.

Wednesday Jun 03, 2009

Wide Variety of Topics to be discussed on BestPerf

A sample of the various Sun and partner technologies to be discussed:
OpenSolaris, Solaris, Linux, Windows, VMware, gcc, Java, GlassFish, MySQL, Sun Studio, ZFS, DTrace, perflib, Oracle, DB2, Sybase, OpenStorage, CMT, SPARC64, X64, X86, Intel, AMD


BestPerf is the source for Oracle performance expertise. In this blog, Oracle's Strategic Applications Engineering group explores Oracle's performance results and shares best practices learned from working on enterprise-wide applications.
