Thursday Nov 19, 2015

SPECvirt_2013: SPARC T7-2 World Record Performance for Two- and Four-Chip Systems

Oracle's SPARC T7-2 server delivered a world record SPECvirt_sc2013 result for systems with two to four chips.

  • The SPARC T7-2 server produced a SPECvirt_sc2013 result of 3198 @ 179 VMs.

  • The two-chip SPARC T7-2 server beat the best four-chip x86 Intel E7-8890 v3 server (HP ProLiant DL580 Gen9), demonstrating that the SPARC M7 processor is 2.1 times faster than the Intel Xeon Processor E7-8890 v3 (chip-to-chip comparison).

  • The two-chip SPARC T7-2 server beat the best two-chip x86 Intel E5-2699 v3 server results by nearly 2 times (Huawei FusionServer RH2288H V3, HP ProLiant DL360 Gen9).

  • The two-chip SPARC T7-2 server delivered nearly 2.2 times the performance of the four-chip IBM Power System S824 server solution, which used 3.5 GHz six-core POWER8 chips.

  • The SPARC T7-2 server, running the Oracle Solaris 11.3 operating system, uses built-in virtualization products such as Oracle Solaris Zones, which provide a low-overhead, flexible, scalable, and manageable virtualization environment.

  • The SPARC T7-2 server result used Oracle VM Server for SPARC 3.3 and Oracle Solaris Zones, providing a flexible, scalable, and manageable virtualization environment.

Performance Landscape

Complete benchmark results are at the SPEC website, SPECvirt_sc2013 Results. The following table highlights the leading two- and four-chip results for the benchmark; bigger is better.

SPECvirt_sc2013: Leading Two- to Four-Chip Results

System | Processor | Chips | Result @ VMs | Virtualization Software
SPARC T7-2 | SPARC M7 (4.13 GHz, 32-core) | 2 | 3198 @ 179 | Oracle VM Server for SPARC 3.3, Oracle Solaris Zones
HP ProLiant DL580 Gen9 | Intel E7-8890 v3 (2.5 GHz, 18-core) | 4 | 3020 @ 168 | Red Hat Enterprise Linux 7.1 KVM
Lenovo System x3850 X6 | Intel E7-8890 v3 (2.5 GHz, 18-core) | 4 | 2655 @ 147 | Red Hat Enterprise Linux 6.6 KVM
Huawei FusionServer RH2288H V3 | Intel E5-2699 v3 (2.3 GHz, 18-core) | 2 | 1616 @ 95 | Huawei FusionSphere V1R5C10
HP ProLiant DL360 Gen9 | Intel E5-2699 v3 (2.3 GHz, 18-core) | 2 | 1614 @ 95 | Red Hat Enterprise Linux 7.1 KVM
IBM Power S824 | POWER8 (3.5 GHz, 6-core) | 4 | 1370 @ 79 | PowerVM Enterprise Edition 2.2.3

Configuration Summary

System Under Test Highlights:

Hardware:
1 x SPARC T7-2 server, with
2 x 4.13 GHz SPARC M7
1 TB memory
2 x Sun Dual Port 10GBase-T Adapter
2 x Sun Storage Dual 16 Gb Fibre Channel PCIe Universal HBA

Software:
Oracle Solaris 11.3
Oracle VM Server for SPARC 3.3 (LDom)
Oracle Solaris Zones
Oracle iPlanet Web Server 7.0.20
Oracle PHP 5.3.29
Dovecot v2.2.18
Oracle WebLogic Server Standard Edition Release 10.3.6
Oracle Database 12c Enterprise Edition (12.1.0.2.0)
Java HotSpot(TM) 64-Bit Server VM on Solaris, version 1.7.0_85-b15

Storage:
3 x Oracle Server X5-2L, with
2 x Intel Xeon Processor E5-2630 v3 8-core 2.4 GHz
32 GB memory
4 x Oracle Flash Accelerator F160 PCIe Card
Oracle Solaris 11.3

1 x Oracle Server X5-2L, with
2 x Intel Xeon Processor E5-2630 v3 8-core 2.4 GHz
32 GB memory
4 x Oracle Flash Accelerator F160 PCIe Card
4x 400 GB SSDs
Oracle Solaris 11.3

Benchmark Description

SPECvirt_sc2013 is SPEC's updated benchmark addressing performance evaluation of datacenter servers used in virtualized server consolidation. SPECvirt_sc2013 measures the end-to-end performance of all system components including the hardware, virtualization platform, and the virtualized guest operating system and application software. It utilizes several SPEC workloads representing applications that are common targets of virtualization and server consolidation. The workloads were made to match a typical server consolidation scenario of CPU resource requirements, memory, disk I/O, and network utilization for each workload. These workloads are modified versions of SPECweb2005, SPECjAppServer2004, SPECmail2008, and SPEC CPU2006. The client-side SPECvirt_sc2013 harness controls the workloads. Scaling is achieved by running additional sets of virtual machines, called "tiles", until overall throughput reaches a peak.

Key Points and Best Practices

  • The SPARC T7-2 server, running Oracle Solaris 11.3, uses built-in virtualization products, Oracle VM Server for SPARC and Oracle Solaris Zones, which provide a low-overhead, flexible, scalable, and manageable virtualization environment.

  • To provide a high level of data integrity and availability, all the benchmark data sets are stored on mirrored (RAID1) storage.

  • Using Oracle VM Server for SPARC to bind the SPARC M7 processor with its local memory optimized the memory use in this virtual environment.

  • This improved result used a fractional tile to fully saturate the system.
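The processor/memory binding mentioned above can be sketched with Oracle VM Server for SPARC ldm(1M) commands; the domain name and resource sizes below are hypothetical illustrations, not the benchmark's actual domain layout:

```shell
# A minimal sketch: create a guest domain whose cores and memory
# all come from one SPARC M7 socket, so memory accesses stay local.
# Domain name and sizes are illustrative only.
ldm add-domain appdom1
ldm set-core 32 appdom1        # the 32 cores of one M7 processor
ldm set-memory 480g appdom1    # memory attached to the same socket
ldm bind-domain appdom1
ldm start-domain appdom1
```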

See Also

Disclosure Statement

SPEC and the benchmark name SPECvirt_sc are registered trademarks of the Standard Performance Evaluation Corporation. Results from www.spec.org as of 11/19/2015. SPARC T7-2, SPECvirt_sc2013 3198@179 VMs; HP ProLiant DL580 Gen9, SPECvirt_sc2013 3020@168 VMs; Lenovo x3850 X6, SPECvirt_sc2013 2655@147 VMs; Huawei FusionServer RH2288H V3, SPECvirt_sc2013 1616@95 VMs; HP ProLiant DL360 Gen9, SPECvirt_sc2013 1614@95 VMs; IBM Power S824, SPECvirt_sc2013 1370@79 VMs.

Friday Nov 13, 2015

SPECjbb2015: SPARC T7-1 World Record for 1 Chip Result

Updated November 30, 2015 to point to published results and add latest, best x86 two-chip result.

Oracle's SPARC T7-1 server, using Oracle Solaris and Oracle JDK, produced world record one-chip SPECjbb2015 benchmark (MultiJVM metric) results, beating all previous one- and two-chip results in the process. This benchmark was designed by the industry to showcase Java performance in the enterprise. Performance is expressed in terms of two metrics: max-jOPS, the maximum throughput, and critical-jOPS, the critical throughput under service level agreements (SLAs).

  • The SPARC T7-1 server achieved 120,603 SPECjbb2015-MultiJVM max-jOPS and 60,280 SPECjbb2015-MultiJVM critical-jOPS on the SPECjbb2015 benchmark.

  • The one-chip SPARC T7-1 server delivered 2.5 times more max-jOPS performance per chip than the best two-chip result which was run on the Cisco UCS C220 M4 server using Intel v3 processors. The SPARC T7-1 server also produced 4.3 times more critical-jOPS performance per chip compared to the Cisco UCS C220 M4. The Cisco result enabled the COD BIOS option.

  • The SPARC T7-1 server delivered 2.7 times more max-jOPS performance per chip than the IBM Power S812LC using POWER8 chips. The SPARC T7-1 server also produced 4.6 times more critical-jOPS performance per chip compared to the IBM server. The SPARC M7 processor also delivered 1.45 times more critical-jOPS performance per core than the IBM POWER8 processor.

  • The one-chip SPARC T7-1 server delivered 3 times more max-jOPS performance per chip than the two-chip result on the Lenovo Flex System x240 M5 using Intel v3 processors. The SPARC T7-1 server also produced 2.8 times more critical-jOPS performance per chip compared to the Lenovo. The Lenovo result did not enable the COD BIOS option.

  • The SPARC T5-2 server achieved 80,889 SPECjbb2015-MultiJVM max-jOPS and 37,422 SPECjbb2015-MultiJVM critical-jOPS on the SPECjbb2015 benchmark.

  • The one-chip SPARC T7-1 server demonstrated a 3 times max-jOPS performance improvement per chip compared to the previous generation two-chip SPARC T5-2 server.

From SPEC's press release: "The SPECjbb2015 benchmark is based on the usage model of a worldwide supermarket company with an IT infrastructure that handles a mix of point-of-sale requests, online purchases, and data-mining operations. It exercises Java 7 and higher features, using the latest data formats (XML), communication using compression, and secure messaging."

The Cluster on Die (COD) mode is a BIOS setting that effectively splits the chip in half, making the operating system think it has twice as many chips as it does (in this case, four 9-core chips). Intel has said that COD is appropriate only for highly NUMA-optimized workloads. Dell has shown that bandwidth to the other half of a chip split by COD is 3.7x lower.

Performance Landscape

One- and two-chip results of SPECjbb2015 MultiJVM from www.spec.org as of November 30, 2015.

SPECjbb2015: Leading One- and Two-Chip MultiJVM Results

System | Processor | max-jOPS | critical-jOPS | OS | JDK | Notes
SPARC T7-1 | 1 x SPARC M7 (4.13 GHz, 32-core) | 120,603 | 60,280 | Oracle Solaris 11.3 | 8u66 | -
Cisco UCS C220 M4 | 2 x Intel E5-2699 v3 (2.3 GHz, 18-core) | 97,551 | 28,318 | Red Hat 6.5 | 8u60 | COD
Dell PowerEdge R730 | 2 x Intel E5-2699 v3 (2.3 GHz, 18-core) | 94,903 | 29,033 | SUSE 12 | 8u60 | COD
Cisco UCS C220 M4 | 2 x Intel E5-2699 v3 (2.3 GHz, 18-core) | 92,463 | 31,654 | Red Hat 6.5 | 8u60 | COD
Lenovo Flex System x240 M5 | 2 x Intel E5-2699 v3 (2.3 GHz, 18-core) | 80,889 | 43,654 | Red Hat 6.5 | 8u60 | -
SPARC T5-2 | 2 x SPARC T5 (3.6 GHz, 16-core) | 80,889 | 37,422 | Oracle Solaris 11.2 | 8u66 | -
Oracle Server X5-2L | 2 x Intel E5-2699 v3 (2.3 GHz, 18-core) | 76,773 | 26,458 | Oracle Solaris 11.2 | 8u60 | -
Sun Server X4-2 | 2 x Intel E5-2697 v2 (2.7 GHz, 12-core) | 52,482 | 19,614 | Oracle Solaris 11.1 | 8u60 | -
HP ProLiant DL120 Gen9 | 1 x Intel Xeon E5-2699 v3 (2.3 GHz, 18-core) | 47,334 | 9,876 | Red Hat 7.1 | 8u51 | -
IBM Power S812LC | 1 x POWER8 (2.92 GHz, 10-core) | 44,883 | 13,032 | Ubuntu 14.04.3 | J9 VM | -

* Note COD: result uses the non-default Cluster on Die (COD) BIOS setting, which splits the chip in two. This requires specific NUMA optimization, because memory traffic to the other half of the chip can see a 3.7x decrease in bandwidth.

Configuration Summary

Systems Under Test:

SPARC T7-1
1 x SPARC M7 processor (4.13 GHz)
512 GB memory (16 x 32 GB DIMMs)
Oracle Solaris 11.3 (11.3.1.5.0)
Java HotSpot 64-Bit Server VM, version 1.8.0_66

SPARC T5-2
2 x SPARC T5 processors (3.6 GHz)
512 GB memory (32 x 16 GB DIMMs)
Oracle Solaris 11.2
Java HotSpot 64-Bit Server VM, version 1.8.0_66

Benchmark Description

The following benchmark description is from the SPEC website.

The SPECjbb2015 benchmark has been developed from the ground up to measure performance based on the latest Java application features. It is relevant to all audiences who are interested in Java server performance, including JVM vendors, hardware developers, Java application developers, researchers and members of the academic community.

Features include:

  • A usage model based on a world-wide supermarket company with an IT infrastructure that handles a mix of point-of-sale requests, online purchases and data-mining operations.
  • Both a pure throughput metric and a metric that measures critical throughput under service level agreements (SLAs) specifying response times ranging from 10ms to 100ms.
  • Support for multiple run configurations, enabling users to analyze and overcome bottlenecks at multiple layers of the system stack, including hardware, OS, JVM and application layers.
  • Exercising new Java 7 features and other important performance elements, including the latest data formats (XML), communication using compression, and messaging with security.
  • Support for virtualization and cloud environments.

Key Points and Best Practices

  • For the SPARC T5-2 server results, processor sets were used to isolate the different JVMs used during the test.
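Processor-set isolation of this kind can be sketched with the Solaris psrset(1M) utility; the CPU IDs, the set id, and the jar name below are hypothetical:

```shell
# Create a processor set from four CPUs; the command prints the
# id of the new set (assumed here to be 1).
psrset -c 0 1 2 3
# Start one benchmark JVM and bind it to that set, so its threads
# run only on those CPUs, isolated from the other JVMs.
java -jar backend.jar &
psrset -b 1 $!
```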

See Also

Disclosure Statement

SPEC and the benchmark name SPECjbb are registered trademarks of Standard Performance Evaluation Corporation (SPEC). Results from http://www.spec.org as of 11/30/2015. SPARC T7-1 120,603 SPECjbb2015-MultiJVM max-jOPS, 60,280 SPECjbb2015-MultiJVM critical-jOPS; Cisco UCS C220 M4 97,551 SPECjbb2015-MultiJVM max-jOPS, 28,318 SPECjbb2015-MultiJVM critical-jOPS; Dell PowerEdge R730 94,903 SPECjbb2015-MultiJVM max-jOPS, 29,033 SPECjbb2015-MultiJVM critical-jOPS; Cisco UCS C220 M4 92,463 SPECjbb2015-MultiJVM max-jOPS, 31,654 SPECjbb2015-MultiJVM critical-jOPS; Lenovo Flex System x240 M5 80,889 SPECjbb2015-MultiJVM max-jOPS, 43,654 SPECjbb2015-MultiJVM critical-jOPS; SPARC T5-2 80,889 SPECjbb2015-MultiJVM max-jOPS, 37,422 SPECjbb2015-MultiJVM critical-jOPS; Oracle Server X5-2L 76,773 SPECjbb2015-MultiJVM max-jOPS, 26,458 SPECjbb2015-MultiJVM critical-jOPS; Sun Server X4-2 52,482 SPECjbb2015-MultiJVM max-jOPS, 19,614 SPECjbb2015-MultiJVM critical-jOPS; HP ProLiant DL120 Gen9 47,334 SPECjbb2015-MultiJVM max-jOPS, 9,876 SPECjbb2015-MultiJVM critical-jOPS; IBM Power S812LC 44,883 SPECjbb2015-MultiJVM max-jOPS, 13,032 SPECjbb2015-MultiJVM critical-jOPS.

Monday Oct 26, 2015

SPECvirt_sc2013: SPARC T7-2 World Record for 2 and 4 Chip Systems

Oracle has had a new result accepted by SPEC as of November 19, 2015. This new result may be found here.

Oracle's SPARC T7-2 server delivered a world record SPECvirt_sc2013 result for systems with two to four chips.

  • The SPARC T7-2 server produced a SPECvirt_sc2013 result of 3026 @ 168 VMs.

  • The two-chip SPARC T7-2 server beat the best two-chip x86 Intel E5-2699 v3 server results by nearly 1.9 times (Huawei FusionServer RH2288H V3, HP ProLiant DL360 Gen9).

  • The two-chip SPARC T7-2 server delivered nearly 2.2 times the performance of the four-chip IBM Power System S824 server solution, which used 3.5 GHz six-core POWER8 chips.

  • The SPARC T7-2 server, running the Oracle Solaris 11.3 operating system, uses built-in virtualization products such as Oracle Solaris Zones, which provide a low-overhead, flexible, scalable, and manageable virtualization environment.

  • The SPARC T7-2 server result used Oracle VM Server for SPARC 3.3 and Oracle Solaris Zones, providing a flexible, scalable, and manageable virtualization environment.

Performance Landscape

Complete benchmark results are at the SPEC website, SPECvirt_sc2013 Results. The following table highlights the leading two- and four-chip results for the benchmark; bigger is better.

SPECvirt_sc2013: Leading Two- to Four-Chip Results

System | Processor | Chips | Result @ VMs | Virtualization Software
SPARC T7-2 | SPARC M7 (4.13 GHz, 32-core) | 2 | 3026 @ 168 | Oracle VM Server for SPARC 3.3, Oracle Solaris Zones
HP DL580 Gen9 | Intel E7-8890 v3 (2.5 GHz, 18-core) | 4 | 3020 @ 168 | Red Hat Enterprise Linux 7.1 KVM
Lenovo System x3850 X6 | Intel E7-8890 v3 (2.5 GHz, 18-core) | 4 | 2655 @ 147 | Red Hat Enterprise Linux 6.6 KVM
Huawei FusionServer RH2288H V3 | Intel E5-2699 v3 (2.3 GHz, 18-core) | 2 | 1616 @ 95 | Huawei FusionSphere V1R5C10
HP DL360 Gen9 | Intel E5-2699 v3 (2.3 GHz, 18-core) | 2 | 1614 @ 95 | Red Hat Enterprise Linux 7.1 KVM
IBM Power S824 | POWER8 (3.5 GHz, 6-core) | 4 | 1370 @ 79 | PowerVM Enterprise Edition 2.2.3

Configuration Summary

System Under Test Highlights:

Hardware:
1 x SPARC T7-2 server, with
2 x 4.13 GHz SPARC M7
1 TB memory
2 x Sun Dual Port 10GBase-T Adapter
2 x Sun Storage Dual 16 Gb Fibre Channel PCIe Universal HBA

Software:
Oracle Solaris 11.3
Oracle VM Server for SPARC 3.3 (LDom)
Oracle Solaris Zones
Oracle iPlanet Web Server 7.0.20
Oracle PHP 5.3.29
Dovecot v2.2.18
Oracle WebLogic Server Standard Edition Release 10.3.6
Oracle Database 12c Enterprise Edition (12.1.0.2.0)
Java HotSpot(TM) 64-Bit Server VM on Solaris, version 1.7.0_85-b15

Storage:
3 x Oracle Server X5-2L, with
2 x Intel Xeon Processor E5-2630 v3 8-core 2.4 GHz
32 GB memory
4 x Oracle Flash Accelerator F160 PCIe Card
Oracle Solaris 11.3

1 x Oracle Server X5-2L, with
2 x Intel Xeon Processor E5-2630 v3 8-core 2.4 GHz
32 GB memory
4 x Oracle Flash Accelerator F160 PCIe Card
4x 400 GB SSDs
Oracle Solaris 11.3

Benchmark Description

SPECvirt_sc2013 is SPEC's updated benchmark addressing performance evaluation of datacenter servers used in virtualized server consolidation. SPECvirt_sc2013 measures the end-to-end performance of all system components including the hardware, virtualization platform, and the virtualized guest operating system and application software. It utilizes several SPEC workloads representing applications that are common targets of virtualization and server consolidation. The workloads were made to match a typical server consolidation scenario of CPU resource requirements, memory, disk I/O, and network utilization for each workload. These workloads are modified versions of SPECweb2005, SPECjAppServer2004, SPECmail2008, and SPEC CPU2006. The client-side SPECvirt_sc2013 harness controls the workloads. Scaling is achieved by running additional sets of virtual machines, called "tiles", until overall throughput reaches a peak.

Key Points and Best Practices

  • The SPARC T7-2 server, running Oracle Solaris 11.3, uses built-in virtualization products, Oracle VM Server for SPARC and Oracle Solaris Zones, which provide a low-overhead, flexible, scalable, and manageable virtualization environment.

  • To provide a high level of data integrity and availability, all the benchmark data sets are stored on mirrored (RAID1) storage.

  • Using Oracle VM Server for SPARC to bind the SPARC M7 processor with its local memory optimized system memory use in this virtual environment.

See Also

Disclosure Statement

SPEC and the benchmark name SPECvirt_sc are registered trademarks of the Standard Performance Evaluation Corporation. Results from www.spec.org as of 10/25/2015. SPARC T7-2, SPECvirt_sc2013 3026@168 VMs; HP DL580 Gen9, SPECvirt_sc2013 3020@168 VMs; Lenovo x3850 X6, SPECvirt_sc2013 2655@147 VMs; Huawei FusionServer RH2288H V3, SPECvirt_sc2013 1616@95 VMs; HP ProLiant DL360 Gen9, SPECvirt_sc2013 1614@95 VMs; IBM Power S824, SPECvirt_sc2013 1370@79 VMs.

Oracle E-Business Suite Applications R12.1.3 (OLTP X-Large): SPARC M7-8 World Record

Oracle's SPARC M7-8 server, using a four-chip Oracle VM Server for SPARC (LDom) virtualized server, produced a world record 20,000 users running the Oracle E-Business OLTP X-Large benchmark. The benchmark runs five Oracle E-Business online workloads concurrently: Customer Service, iProcurement, Order Management, Human Resources Self-Service, and Financials.

  • The virtualized four-chip LDom on the SPARC M7-8 was able to handle more users than the previous best result which used eight processors of Oracle's SPARC M6-32 server.

  • The SPARC M7-8 server using Oracle VM Server for SPARC provides enterprise applications with high availability: each application executes in its own environment, insulated from and independent of the others.

Performance Landscape

Oracle E-Business (3-tier) OLTP X-Large Benchmark

System | Chips | Total Online Users | Weighted Average Response Time (s) | 90th Percentile Response Time (s)
SPARC M7-8 | 4 | 20,000 | 0.70 | 1.13
SPARC M6-32 | 8 | 18,500 | 0.61 | 1.16

Breakdown of the total number of users by component:

Users per Component

Component | SPARC M7-8 | SPARC M6-32
HR Self-Service | 5,000 | 4,000
Order-to-Cash | 2,500 | 2,300
iProcurement | 2,700 | 2,400
Customer Service | 7,000 | 7,000
Financial | 2,800 | 2,800
Total Online Users | 20,000 | 18,500

Configuration Summary

System Under Test:

SPARC M7-8 server
8 x SPARC M7 processors (4.13 GHz)
4 TB memory
2 x 600 GB SAS-2 HDD
using a Logical Domain with
4 x SPARC M7 processors (4.13 GHz)
2 TB memory
2 x Sun Storage Dual 16Gb Fibre Channel PCIe Universal HBA
2 x Sun Dual Port 10GBase-T Adapter
Oracle Solaris 11.3
Oracle E-Business Suite 12.1.3
Oracle Database 11g Release 2

Storage Configuration:

4 x Oracle ZFS Storage ZS3-2 appliances each with
2 x Read Flash Accelerator SSD
1 x Storage Drive Enclosure DE2-24P containing:
20 x 900 GB 10K RPM SAS-2 HDD
4 x Write Flash Accelerator SSD
1 x Sun Storage Dual 8Gb FC PCIe HBA
Used for Database files, Zones OS, EBS Mid-Tier Apps software stack
and db-tier Oracle Server
2 x Sun Server X4-2L server with
2 x Intel Xeon Processor E5-2650 v2
128 GB memory
1 x Sun Storage 6Gb SAS PCIe RAID HBA
4 x 400 GB SSD
14 x 600 GB HDD
Used for Redo log files, db backup storage.

Benchmark Description

The Oracle E-Business OLTP X-Large benchmark simulates thousands of online users executing transactions typical of an internal Enterprise Resource Planning (ERP) deployment, simultaneously executing five application modules: Customer Service, Human Resources Self Service, iProcurement, Order Management, and Financial.

Each database tier uses a database instance of about 600 GB in size, supporting thousands of application users, accessing hundreds of objects (tables, indexes, SQL stored procedures, etc.).

Key Points and Best Practices

This test demonstrates virtualization technologies concurrently running various multi-tier, business-critical Oracle applications and databases on four SPARC M7 processors in a single SPARC M7-8 server, supporting thousands of users executing a high volume of complex transactions with a constrained (<1 sec) weighted average response time.

The Oracle E-Business LDom is further configured using Oracle Solaris Zones.

This result of 20,000 users was achieved by load balancing the Oracle E-Business Suite Applications 12.1.3 five online workloads across two Oracle Solaris processor sets and redirecting all network interrupts to a dedicated third processor set.

Each applications processor set (set-1 and set-2) was running concurrently two Oracle E-Business Suite Application servers and two database servers instances, each within its own Oracle Solaris Zone (4 x Zones per set).

Each application server network interface (to a client) was configured to map to the locality group associated with the CPUs processing the related workload, to guarantee memory locality of network structures and application server hardware resources.

All external storage was connected with at least two paths to the host's multipath-capable Fibre Channel controller ports, and the Oracle Solaris I/O multipathing feature was enabled.
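The processor-set and interrupt layout described above can be sketched with the Solaris psrset(1M) and psradm(1M) utilities; the CPU IDs and pid below are hypothetical and far smaller than the real configuration:

```shell
# Two application/database processor sets, plus the remaining CPUs
# left in the default set to service interrupts. IDs are illustrative.
psrset -c 0 1 2 3          # app/db set-1 (prints its id, e.g. 1)
psrset -c 4 5 6 7          # app/db set-2 (e.g. 2)
# Disable interrupt handling on the application CPUs, so network
# interrupts are serviced only by the remaining dedicated CPUs.
psradm -i 0 1 2 3 4 5 6 7
# Bind an application-server process (pid 1234) to set-1.
psrset -b 1 1234
```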

See Also

Disclosure Statement

Oracle E-Business Suite R12 extra-large multiple-online module benchmark, SPARC M7-8, SPARC M7, 4.13 GHz, 4 chips, 128 cores, 1024 threads, 2 TB memory, 20,000 online users, average response time 0.70 sec, 90th percentile response time 1.13 sec, Oracle Solaris 11.3, Oracle Solaris Zones, Oracle VM Server for SPARC, Oracle E-Business Suite 12.1.3, Oracle Database 11g Release 2, Results as of 10/25/2015.

Wednesday Mar 05, 2014

SPARC T5-2 Delivers World Record 2-Socket SPECvirt_sc2010 Benchmark

Oracle's SPARC T5-2 server delivered a world record two-chip SPECvirt_sc2010 result of 4270 @ 264 VMs, establishing the performance superiority of the SPARC T5 processor in virtualized environments with Oracle Solaris 11, which includes Oracle VM Server for SPARC and Oracle Solaris Zones as standard virtualization products.

  • The SPARC T5-2 server has 2.3x better performance than an HP BL620c G7 blade server (with two Westmere EX processors) which used VMware ESX 4.1 U1 virtualization software (best SPECvirt_sc2010 result on two-chip servers using VMware software).

  • The SPARC T5-2 server has 1.6x better performance than an IBM Flex System x240 server (with two Sandy Bridge processors) which used Kernel-based Virtual Machines (KVM).

  • This is the first SPECvirt_sc2010 result using Oracle production-level software: Oracle Solaris 11.1, Oracle WebLogic Server 10.3.6, Oracle Database 11g Enterprise Edition, Oracle iPlanet Web Server 7, and Oracle Java Development Kit 7 (JDK). The only exception was the Dovecot mail server.

Performance Landscape

Complete benchmark results are at the SPEC website, SPECvirt_sc2010 Results. The following table highlights the leading two-chip results for the benchmark; bigger is better.

SPECvirt_sc2010: Leading Two-Chip Results

System | Processor | Result @ VMs | Virtualization Software
SPARC T5-2 | 2 x SPARC T5, 3.6 GHz | 4270 @ 264 | Oracle VM Server for SPARC 3.0, Oracle Solaris Zones
IBM Flex System x240 | 2 x Intel E5-2690, 2.9 GHz | 2741 @ 168 | Red Hat Enterprise Linux 6.4 KVM
HP ProLiant BL620c G7 | 2 x Intel E7-2870, 2.4 GHz | 1878 @ 120 | VMware ESX 4.1 U1

Configuration Summary

System Under Test Highlights:

1 x SPARC T5-2 server, with
2 x 3.6 GHz SPARC T5 processors
1 TB memory
Oracle Solaris 11.1
Oracle VM Server for SPARC 3.0
Oracle iPlanet Web Server 7.0.15
Oracle PHP 5.3.14
Dovecot 2.1.17
Oracle WebLogic Server 11g (10.3.6)
Oracle Database 11g (11.2.0.3)
Java HotSpot(TM) 64-Bit Server VM on Solaris, version 1.7.0_51

Benchmark Description

The SPECvirt_sc2010 benchmark is SPEC's first benchmark addressing performance of virtualized systems. It measures the end-to-end performance of all system components that make up a virtualized environment.

The benchmark utilizes several previous SPEC benchmarks representing tasks commonly run in virtualized environments. The workloads included are derived from SPECweb2005, SPECjAppServer2004, and SPECmail2008. Scaling of the benchmark is achieved by running additional sets of virtual machines until overall throughput reaches a peak. The benchmark includes quality-of-service criteria that must be met for a successful run.

Key Points and Best Practices

  • The SPARC T5-2 server, running Oracle Solaris 11.1, uses built-in virtualization products, Oracle VM Server for SPARC and Oracle Solaris Zones, which provide a low-overhead, flexible, scalable, and manageable virtualization environment.

  • In order to provide a high level of data integrity and availability, all the benchmark data sets are stored on mirrored (RAID1) storage.

See Also

Disclosure Statement

SPEC and the benchmark name SPECvirt_sc are registered trademarks of the Standard Performance Evaluation Corporation. Results from www.spec.org as of 3/5/2014. SPARC T5-2, SPECvirt_sc2010 4270 @ 264 VMs; IBM Flex System x240, SPECvirt_sc2010 2741 @ 168 VMs; HP Proliant BL620c G7, SPECvirt_sc2010 1878 @ 120 VMs.

Tuesday Feb 18, 2014

SPARC T5-2 Produces SPECjbb2013-MultiJVM World Record for 2-Chip Systems

From www.spec.org

Defects Identified in SPECjbb®2013

December 9, 2014 - SPEC has identified a defect in its SPECjbb®2013 benchmark suite. SPEC has suspended sales of the benchmark software and is no longer accepting new submissions of SPECjbb®2013 results for publication on SPEC's website. Current SPECjbb®2013 licensees will receive a free copy of the new version of the benchmark when it becomes available.

SPEC is advising SPECjbb®2013 licensees and users of the SPECjbb®2013 metrics that the recently discovered defect impacts the comparability of results. This defect can significantly impact the amount of work done during the measurement period, resulting in an inflated SPECjbb®2013 metric. SPEC recommends that users not utilize these results for system comparisons without a full understanding of the impact of these defects on each benchmark result.

Additional information is available here.

The SPECjbb2013 benchmark shows modern Java application performance. Oracle's SPARC T5-2 set a two-chip world record, which is 1.8x faster than the best two-chip x86-based server. Using Oracle Solaris and Oracle Java, Oracle delivered this two-chip world record result on the MultiJVM SPECjbb2013 metric.

  • The SPARC T5-2 server achieved 114,492 SPECjbb2013-MultiJVM max-jOPS and 43,963 SPECjbb2013-MultiJVM critical-jOPS on the SPECjbb2013 benchmark. This result is a two-chip world record.

  • The SPARC T5-2 server running SPECjbb2013 is 1.8x faster than the Cisco UCS C240 M3 server (2.7 GHz Intel Xeon E5-2697 v2) based on both the SPECjbb2013-MultiJVM max-jOPS and SPECjbb2013-MultiJVM critical-jOPS metrics.

  • The SPARC T5-2 server running SPECjbb2013 is 2x faster than the HP ProLiant ML350p Gen8 server (2.7 GHz Intel Xeon E5-2697 v2) based on SPECjbb2013-MultiJVM max-jOPS and 1.3x faster based on SPECjbb2013-MultiJVM critical-jOPS.

  • The new Oracle results were obtained using Oracle Solaris 11 along with Oracle Java SE 8 on the SPARC T5-2 server.

  • The SPARC T5-2 server running SPECjbb2013 on a per chip basis is 1.3x faster than the NEC Express5800/A040b server (2.8 GHz Intel Xeon E7-4890 v2) based on both the SPECjbb2013-MultiJVM max-jOPS and SPECjbb2013-MultiJVM critical-jOPS metrics.

  • There are no IBM POWER7 or POWER7+ based server results on the SPECjbb2013 benchmark. IBM has published results for POWER7+ based servers on SPECjbb2005, which was retired by SPEC in 2013.

Performance Landscape

Results of SPECjbb2013 from www.spec.org as of March 6, 2014. These are the leading 2-chip SPECjbb2013 MultiJVM results.

SPECjbb2013: Leading Two-Chip MultiJVM Results

System | Processor | max-jOPS | critical-jOPS | JDK
SPARC T5-2 | 2 x SPARC T5, 3.6 GHz | 114,492 | 43,963 | Oracle Java SE 8
Cisco UCS C240 M3 | 2 x Intel E5-2697 v2, 2.7 GHz | 63,079 | 23,797 | Oracle Java SE 7u45
HP ProLiant ML350p Gen8 | 2 x Intel E5-2697 v2, 2.7 GHz | 62,393 | 24,310 | Oracle Java SE 7u45
IBM System x3650 M4 BD | 2 x Intel E5-2695 v2, 2.4 GHz | 59,124 | 22,275 | IBM SDK V7 SR6 (*)
HP ProLiant ML350p Gen8 | 2 x Intel E5-2697 v2, 2.7 GHz | 57,594 | 32,103 | Oracle Java SE 7u40
HP ProLiant BL460c Gen8 | 2 x Intel E5-2697 v2, 2.7 GHz | 56,367 | 30,078 | Oracle Java SE 7u40
Sun Server X4-2, DDR3-1600 | 2 x Intel E5-2697 v2, 2.7 GHz | 52,664 | 20,553 | Oracle Java SE 7u40
HP ProLiant DL360e Gen8 | 2 x Intel E5-2470 v2, 2.4 GHz | 48,772 | 17,915 | Oracle Java SE 7u40

* IBM SDK V7 SR6 – IBM SDK, Java Technology Edition, Version 7, Service Refresh 6

The following table compares the SPARC T5 processor to the Intel E7 v2 processor.

SPECjbb2013: Per-Chip Comparison, Results Using JDK 8

System | Processor | max-jOPS | critical-jOPS | max-jOPS/Chip | critical-jOPS/Chip | JDK
SPARC T5-2 | 2 x SPARC T5, 3.6 GHz | 114,492 | 43,963 | 57,246 | 21,981 | Oracle Java SE 8
NEC Express5800/A040b | 4 x Intel E7-4890 v2, 2.8 GHz | 177,753 | 65,529 | 44,438 | 16,382 | Oracle Java SE 8

SPARC per-chip advantage: 1.29x max-jOPS, 1.34x critical-jOPS

Configuration Summary

System Under Test:

SPARC T5-2 server
2 x SPARC T5, 3.60 GHz
512 GB memory (32 x 16 GB DIMMs)
Oracle Solaris 11.1
Oracle Java SE 8

Benchmark Description

The SPECjbb2013 benchmark has been developed from the ground up to measure performance based on the latest Java application features. It is relevant to all audiences who are interested in Java server performance, including JVM vendors, hardware developers, Java application developers, researchers and members of the academic community.

From SPEC's press release, "SPECjbb2013 replaces SPECjbb2005. The new benchmark has been developed from the ground up to measure performance based on the latest Java application features. It is expected to be used widely by all those interested in Java server performance, including JVM vendors, hardware developers, Java application developers, researchers and members of the academic community."

SPECjbb2013 features include:

  • A usage model based on a world-wide supermarket company with an IT infrastructure that handles a mix of point-of-sale requests, online purchases and data-mining operations.
  • Both a pure throughput metric and a metric that measures critical throughput under service-level agreements (SLAs) specifying response times ranging from 10ms to 500ms.
  • Support for multiple run configurations, enabling users to analyze and overcome bottlenecks at multiple layers of the system stack, including hardware, OS, JVM and application layers.
  • Exercising new Java 7 features and other important performance elements, including the latest data formats (XML), communication using compression, and messaging with security.
  • Support for virtualization and cloud environments.

See Also

Disclosure Statement

SPEC and the benchmark name SPECjbb are registered trademarks of Standard Performance Evaluation Corporation (SPEC). Results as of 3/6/2014, see http://www.spec.org for more information.  SPARC T5-2 114,492 SPECjbb2013-MultiJVM max-jOPS, 43,963 SPECjbb2013-MultiJVM critical-jOPS; NEC Express5800/A040b 177,753 SPECjbb2013-MultiJVM max-jOPS, 65,529 SPECjbb2013-MultiJVM critical-jOPS; Cisco UCS c240 M3 63,079 SPECjbb2013-MultiJVM max-jOPS, 23,797 SPECjbb2013-MultiJVM critical-jOPS; HP ProLiant ML350p Gen8 62,393 SPECjbb2013-MultiJVM max-jOPS, 24,310 SPECjbb2013-MultiJVM critical-jOPS; IBM System X3650 M4 BD 59,124 SPECjbb2013-MultiJVM max-jOPS, 22,275 SPECjbb2013-MultiJVM critical-jOPS; HP ProLiant ML350p Gen8 57,594 SPECjbb2013-MultiJVM max-jOPS, 32,103 SPECjbb2013-MultiJVM critical-jOPS; HP ProLiant BL460c Gen8 56,367 SPECjbb2013-MultiJVM max-jOPS, 30,078 SPECjbb2013-MultiJVM critical-jOPS; Sun Server X4-2 52,664 SPECjbb2013-MultiJVM max-jOPS, 20,553 SPECjbb2013-MultiJVM critical-jOPS; HP ProLiant DL360e Gen8 48,772 SPECjbb2013-MultiJVM max-jOPS, 17,915 SPECjbb2013-MultiJVM critical-jOPS.

Monday Sep 23, 2013

SPARC T5-2 Delivers Best 2-Chip MultiJVM SPECjbb2013 Result

From www.spec.org

Defects Identified in SPECjbb®2013

December 9, 2014 - SPEC has identified a defect in its SPECjbb®2013 benchmark suite. SPEC has suspended sales of the benchmark software and is no longer accepting new submissions of SPECjbb®2013 results for publication on SPEC's website. Current SPECjbb®2013 licensees will receive a free copy of the new version of the benchmark when it becomes available.

SPEC is advising SPECjbb®2013 licensees and users of the SPECjbb®2013 metrics that the recently discovered defect impacts the comparability of results. This defect can significantly impact the amount of work done during the measurement period, resulting in an inflated SPECjbb®2013 metric. SPEC recommends that users not utilize these results for system comparisons without a full understanding of the impact of these defects on each benchmark result.

Additional information is available here.

SPECjbb2013 is a new benchmark designed to show modern Java server performance. Oracle's SPARC T5-2 server set a world record as the fastest two-chip system, beating recently introduced two-chip x86-based servers. Oracle, using Oracle Solaris and the Oracle JDK, delivered this two-chip world record result on the MultiJVM SPECjbb2013 metric. SPECjbb2013 is the replacement for SPECjbb2005 (SPECjbb2005 will soon be retired by SPEC).

  • Oracle's SPARC T5-2 server achieved 81,084 SPECjbb2013-MultiJVM max-jOPS and 39,129 SPECjbb2013-MultiJVM critical-jOPS on the SPECjbb2013 benchmark. This result is a two-chip world record.

  • There are no IBM POWER7 or POWER7+ based server results on the SPECjbb2013 benchmark. IBM has published results for POWER7+ based servers on SPECjbb2005, which will soon be retired by SPEC.

  • The 2-chip SPARC T5-2 server running SPECjbb2013 is 30% faster than the 2-chip Cisco UCS B200 M3 server (2.7 GHz E5-2697 v2 Ivy Bridge-based) based on SPECjbb2013-MultiJVM max-jOPS.

  • The 2-chip SPARC T5-2 server running SPECjbb2013 is 66% faster than the 2-chip Cisco UCS B200 M3 server (2.7 GHz E5-2697 v2 Ivy Bridge-based) based on SPECjbb2013-MultiJVM critical-jOPS.

  • These results were obtained using Oracle Solaris 11 along with Java Platform, Standard Edition, JDK 7 Update 40 on the SPARC T5-2 server.
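The 30% and 66% comparisons above follow directly from the published metric values; a quick check of the arithmetic:

```python
# SPECjbb2013-MultiJVM results quoted in the bullets above
t5_2_max, t5_2_crit = 81_084, 39_129   # SPARC T5-2
ucs_max, ucs_crit = 62_393, 23_505     # Cisco UCS B200 M3

max_speedup = t5_2_max / ucs_max - 1     # advantage on max-jOPS
crit_speedup = t5_2_crit / ucs_crit - 1  # advantage on critical-jOPS

print(f"max-jOPS: {max_speedup:.0%} faster")        # 30% faster
print(f"critical-jOPS: {crit_speedup:.0%} faster")  # 66% faster
```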

From SPEC's press release, "SPECjbb2013 replaces SPECjbb2005. The new benchmark has been developed from the ground up to measure performance based on the latest Java application features. It is expected to be used widely by all those interested in Java server performance, including JVM vendors, hardware developers, Java application developers, researchers and members of the academic community."

Performance Landscape

Results of SPECjbb2013 from www.spec.org as of September 22, 2013 and this report.

SPECjbb2013 MultiJVM Results
System                        Processor                  Chips  max-jOPS  critical-jOPS  JDK
SPARC T5-2                    SPARC T5, 3.6 GHz          2      81,084    39,129         Oracle JDK 7u40
Cisco UCS B200 M3, DDR3-1866  Intel E5-2697 v2, 2.7 GHz  2      62,393    23,505         Oracle JDK 7u40
Sun Server X4-2, DDR3-1600    Intel E5-2697 v2, 2.7 GHz  2      52,664    20,553         Oracle JDK 7u40
Cisco UCS C220 M3             Intel E5-2690, 2.9 GHz     2      41,954    16,545         Oracle JDK 7u11

The above table represents all of the published results on www.spec.org. SPEC allows for self publication of SPECjbb2013 results. See below for locations where full reports were made available.

Configuration Summary

System Under Test:

SPARC T5-2 server
2 x SPARC T5, 3.60 GHz
512 GB memory (32 x 16 GB DIMMs)
Oracle Solaris 11.1
Oracle JDK 7 Update 40

Benchmark Description

The SPECjbb2013 benchmark has been developed from the ground up to measure performance based on the latest Java application features. It is relevant to all audiences who are interested in Java server performance, including JVM vendors, hardware developers, Java application developers, researchers and members of the academic community.

SPECjbb2013 replaces SPECjbb2005. New features include:

  • A usage model based on a world-wide supermarket company with an IT infrastructure that handles a mix of point-of-sale requests, online purchases and data-mining operations.
  • Both a pure throughput metric and a metric that measures critical throughput under service-level agreements (SLAs) specifying response times ranging from 10ms to 500ms.
  • Support for multiple run configurations, enabling users to analyze and overcome bottlenecks at multiple layers of the system stack, including hardware, OS, JVM and application layers.
  • Exercising new Java 7 features and other important performance elements, including the latest data formats (XML), communication using compression, and messaging with security.
  • Support for virtualization and cloud environments.

See Also

Disclosure Statement

SPEC and the benchmark name SPECjbb are registered trademarks of Standard Performance Evaluation Corporation (SPEC). Results as of 9/23/2013, see http://www.spec.org for more information. SPARC T5-2 81,084 SPECjbb2013-MultiJVM max-jOPS, 39,129 SPECjbb2013-MultiJVM critical-jOPS, result from https://blogs.oracle.com/BestPerf/resource/jbb2013/sparct5-922.pdf Cisco UCS B200 M3 62,393 SPECjbb2013-MultiJVM max-jOPS, 23,505 SPECjbb2013-MultiJVM critical-jOPS, result from http://www.cisco.com/en/US/prod/collateral/ps10265/le_41704_pb_specjbb2013b200.pdf; Sun Server X4-2 52,664 SPECjbb2013-MultiJVM max-jOPS, 20,553 SPECjbb2013-MultiJVM critical-jOPS, result from https://blogs.oracle.com/BestPerf/entry/20130918_x4_2_specjbb2013; Cisco UCS C220 M3 41,954 SPECjbb2013-MultiJVM max-jOPS, 16,545 SPECjbb2013-MultiJVM critical-jOPS result from www.spec.org.

Wednesday Sep 18, 2013

Sun Server X4-2 Performance Running SPECjbb2013 MultiJVM Benchmark

From www.spec.org

Defects Identified in SPECjbb®2013

December 9, 2014 - SPEC has identified a defect in its SPECjbb®2013 benchmark suite. SPEC has suspended sales of the benchmark software and is no longer accepting new submissions of SPECjbb®2013 results for publication on SPEC's website. Current SPECjbb®2013 licensees will receive a free copy of the new version of the benchmark when it becomes available.

SPEC is advising SPECjbb®2013 licensees and users of the SPECjbb®2013 metrics that the recently discovered defect impacts the comparability of results. This defect can significantly impact the amount of work done during the measurement period, resulting in an inflated SPECjbb®2013 metric. SPEC recommends that users not utilize these results for system comparisons without a full understanding of the impact of these defects on each benchmark result.

Additional information is available here.

Oracle's Sun Server X4-2 system, using Oracle Solaris and Oracle JDK, produced a SPECjbb2013 benchmark (MultiJVM metric) result. This benchmark was designed by the industry to showcase Java server performance.

  • The Sun Server X4-2 system is 24% faster than the fastest published two-socket Intel Xeon E5-2600 (Sandy Bridge) based system, the Dell PowerEdge R720, on SPECjbb2013-MultiJVM max-jOPS.

  • The Sun Server X4-2 system is 22% faster than the Dell PowerEdge R720 on SPECjbb2013-MultiJVM critical-jOPS.

  • The Sun Server X4-2 delivers 70% of the published SPARC T5-2 SPECjbb2013-MultiJVM max-jOPS.

  • The Sun Server X4-2 delivers 88% of the published SPARC T5-2 SPECjbb2013-MultiJVM critical-jOPS.

  • The combination of Oracle Solaris 11.1 and Oracle JDK 7 update 40 delivered a result of 52,664 SPECjbb2013-MultiJVM max-jOPS and 20,553 SPECjbb2013-MultiJVM critical-jOPS on the SPECjbb2013 benchmark.
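Two different framings appear in the bullets above: "24% faster than" is a ratio against the slower system, while "70% of" is a fraction of the faster one. Both fall out of the published numbers:

```python
# Metric values quoted in this post
x4_2_max, x4_2_crit = 52_664, 20_553   # Sun Server X4-2
r720_max = 42_431                      # Dell PowerEdge R720
t5_2_max, t5_2_crit = 75_658, 23_334   # SPARC T5-2 (JDK 7u17 result)

faster = x4_2_max / r720_max - 1     # "X% faster than" the slower system
fraction_max = x4_2_max / t5_2_max   # "X% of" the faster system
fraction_crit = x4_2_crit / t5_2_crit

print(f"{faster:.0%} faster than the R720 on max-jOPS")  # 24%
print(f"{fraction_max:.0%} of the T5-2 max-jOPS")        # 70%
print(f"{fraction_crit:.0%} of the T5-2 critical-jOPS")  # 88%
```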

From SPEC's press release, "SPECjbb2013 replaces SPECjbb2005. The new benchmark has been developed from the ground up to measure performance based on the latest Java application features. It is expected to be used widely by all those interested in Java server performance, including JVM vendors, hardware developers, Java application developers, researchers and members of the academic community."

Performance Landscape

Top two-socket results of SPECjbb2013 MultiJVM as of October 8, 2013.

SPECjbb2013 MultiJVM Results
System               Processor                       DDR3  max-jOPS  critical-jOPS  OS            JDK
SPARC T5-2           2 x 3.6 GHz SPARC T5            1600  75,658    23,334         Solaris 11.1  7u17
Cisco UCS B200 M3    2 x 2.7 GHz Intel E5-2697 v2    1866  62,393    23,505         RHEL 6.4      7u40
Sun Server X4-2      2 x 2.7 GHz Intel E5-2697 v2    1600  52,664    20,553         Solaris 11.1  7u40
Dell PowerEdge R720  2 x 2.9 GHz Intel Xeon E5-2690  1600  42,431    16,779         RHEL 6.4      7u21

The above table includes published results from www.spec.org.

Configuration Summary

System Under Test:

Sun Server X4-2
2 x Intel E5-2697 v2, 2.7 GHz
Hyper-Threading enabled
Turbo Boost enabled
128 GB memory (16 x 8 GB DIMMs)
Oracle Solaris 11.1 (11.1.4.2.0)
Oracle JDK 7u40

Benchmark Description

The SPECjbb2013 benchmark has been developed from the ground up to measure performance based on the latest Java application features. It is relevant to all audiences who are interested in Java server performance, including JVM vendors, hardware developers, Java application developers, researchers and members of the academic community.

SPECjbb2013 replaces SPECjbb2005. New features include:

  • A usage model based on a world-wide supermarket company with an IT infrastructure that handles a mix of point-of-sale requests, online purchases and data-mining operations.
  • Both a pure throughput metric and a metric that measures critical throughput under service-level agreements (SLAs) specifying response times ranging from 10ms to 500ms.
  • Support for multiple run configurations, enabling users to analyze and overcome bottlenecks at multiple layers of the system stack, including hardware, OS, JVM and application layers.
  • Exercising new Java 7 features and other important performance elements, including the latest data formats (XML), communication using compression, and messaging with security.
  • Support for virtualization and cloud environments.

See Also

Disclosure Statement

SPEC and the benchmark name SPECjbb are registered trademarks of Standard Performance Evaluation Corporation (SPEC). Results from http://www.spec.org as of 10/8/2013. SPARC T5-2, 75,658 SPECjbb2013-MultiJVM max-jOPS, 23,334 SPECjbb2013-MultiJVM critical-jOPS; Cisco UCS B200 M3 62,393 SPECjbb2013-MultiJVM max-jOPS, 23,505 SPECjbb2013-MultiJVM critical-jOPS; Dell PowerEdge R720 42,431 SPECjbb2013-MultiJVM max-jOPS, 16,779 SPECjbb2013-MultiJVM critical-jOPS; Sun Server X4-2 52,664 SPECjbb2013-MultiJVM max-jOPS, 20,553 SPECjbb2013-MultiJVM critical-jOPS.

Tuesday Mar 26, 2013

SPARC T5-2 Achieves SPECjbb2013 Benchmark World Record Result

From www.spec.org

Defects Identified in SPECjbb®2013

December 9, 2014 - SPEC has identified a defect in its SPECjbb®2013 benchmark suite. SPEC has suspended sales of the benchmark software and is no longer accepting new submissions of SPECjbb®2013 results for publication on SPEC's website. Current SPECjbb®2013 licensees will receive a free copy of the new version of the benchmark when it becomes available.

SPEC is advising SPECjbb®2013 licensees and users of the SPECjbb®2013 metrics that the recently discovered defect impacts the comparability of results. This defect can significantly impact the amount of work done during the measurement period, resulting in an inflated SPECjbb®2013 metric. SPEC recommends that users not utilize these results for system comparisons without a full understanding of the impact of these defects on each benchmark result.

Additional information is available here.

Oracle, using Oracle Solaris and Oracle JDK, delivered a two socket server world record result on the SPECjbb2013 benchmark, Multi-JVM metric. This benchmark was designed by the industry to showcase Java server performance. SPECjbb2013 is the replacement for SPECjbb2005 (SPECjbb2005 will soon be retired by SPEC).

  • Oracle's SPARC T5-2 server achieved 75,658 SPECjbb2013-MultiJVM max-jOPS and 23,334 SPECjbb2013-MultiJVM critical-jOPS on the SPECjbb2013 benchmark. This result is a two-chip world record. (Oracle has submitted this result for review by SPEC.)

  • There are no IBM POWER7 or POWER7+ based server results on the SPECjbb2013 benchmark. IBM has published results for POWER7+ based servers on SPECjbb2005, which will soon be retired by SPEC.

  • The 2-chip SPARC T5-2 server is 1.9x faster than the 2-chip HP ProLiant ML350p server (2.9 GHz E5-2690 Sandy Bridge-based) based on SPECjbb2013-MultiJVM max-jOPS.

  • The 2-chip SPARC T5-2 server is 15% faster than the 4-chip HP ProLiant DL560p server (2.7 GHz E5-4650 Sandy Bridge-based) based on SPECjbb2013-MultiJVM max-jOPS.

  • The 2-chip SPARC T5-2 server is 6.1x faster than the 1-chip HP ProLiant ML310e Gen8 (3.6 GHz E3-1280v2 Ivy Bridge-based) based on SPECjbb2013-MultiJVM max-jOPS.

  • The Sun Server X3-2 system running Oracle Solaris 11 is 5% faster than the HP ProLiant ML350p Gen8 server running Windows Server 2008 based on SPECjbb2013-MultiJVM max-jOPS.

  • Oracle's SPARC T4-2 server achieved 34,804 SPECjbb2013-MultiJVM max-jOPS and 10,101 SPECjbb2013-MultiJVM critical-jOPS on the SPECjbb2013 benchmark.
    (Oracle has submitted this result for review by SPEC.)

  • Oracle's Sun Server X3-2 system achieved 41,954 SPECjbb2013-MultiJVM max-jOPS and 13,305 SPECjbb2013-MultiJVM critical-jOPS on the SPECjbb2013 benchmark. (Oracle has submitted this result for review by SPEC.)

  • Oracle's Sun Server X2-4 system achieved 65,211 SPECjbb2013-MultiJVM max-jOPS and 22,057 SPECjbb2013-MultiJVM critical-jOPS on the SPECjbb2013 benchmark. (Oracle has submitted this result for review by SPEC.)

  • SPECjbb2013 demonstrates better performance on Oracle hardware and software, engineered to work together, than alternatives from HP.

  • These results were obtained using Oracle Solaris 11 along with Java Platform, Standard Edition, JDK 7 Update 17 on the SPARC T5-2 server, SPARC T4-2 server, Sun Server X3-2 and Sun Server X2-4.

From SPEC's press release, "SPECjbb2013 replaces SPECjbb2005. The new benchmark has been developed from the ground up to measure performance based on the latest Java application features. It is expected to be used widely by all those interested in Java server performance, including JVM vendors, hardware developers, Java application developers, researchers and members of the academic community."

Performance Landscape

Results of SPECjbb2013 from www.spec.org as of March 26, 2013 and this report.

SPECjbb2013 MultiJVM Results
System             Processor            max-jOPS  critical-jOPS  OS                   JDK
SPARC T5-2         2 x SPARC T5         75,658    23,334         Oracle Solaris 11.1  Oracle JDK 7u17
HP DL560p Gen8     4 x Intel E5-4650    66,007    16,577         Windows 2008 R2      Oracle JDK 7u15
Sun Server X2-4    4 x Intel E7-4870    65,211    22,057         Oracle Solaris 11.1  Oracle JDK 7u17
Sun Server X3-2    2 x Intel E5-2690    41,954    13,305         Oracle Solaris 11.1  Oracle JDK 7u17
HP ML350p Gen8     2 x Intel E5-2690    40,047    12,308         Windows 2008 R2      Oracle JDK 7u15
SPARC T4-2         2 x SPARC T4         34,804    10,101         Oracle Solaris 11.1  Oracle JDK 7u17
Supermicro X8DTN+  2 x Intel X5690      20,977    6,188          RHEL 6.3             Oracle JDK 7u11
HP ML310e Gen8     1 x Intel E3-1280v2  12,315    2,908          Windows 2008 R2      Oracle JDK 7u15
Intel R1304BT      1 x Intel E3-1260L   6,198     1,722          Windows 2008 R2      Oracle JDK 7u11

The above table represents all of the published results on www.spec.org. SPEC allows for self publication of SPECjbb2013 results.

Configuration Summary

Systems Under Test:

SPARC T5-2 server
2 x SPARC T5, 3.60 GHz
512 GB memory (32 x 16 GB DIMMs)
Oracle Solaris 11.1
Oracle JDK 7 Update 17

Sun Server X2-4
4 x Intel Xeon E7-4870, 2.40 GHz
Hyper-Threading enabled
Turbo Boost enabled
128 GB memory (32 x 4 GB DIMMs)
Oracle Solaris 11.1
Oracle JDK 7 Update 17

Sun Server X3-2
2 x Intel E5-2690, 2.90 GHz
Hyper-Threading enabled
Turbo Boost enabled
128 GB memory (32 x 4 GB DIMMs)
Oracle Solaris 11.1
Oracle JDK 7 Update 17

SPARC T4-2 server
2 x SPARC T4, 2.85 GHz
256 GB memory (32 x 8 GB DIMMs)
Oracle Solaris 11.1
Oracle JDK 7 Update 17

Benchmark Description

The SPECjbb2013 benchmark has been developed from the ground up to measure performance based on the latest Java application features. It is relevant to all audiences who are interested in Java server performance, including JVM vendors, hardware developers, Java application developers, researchers and members of the academic community.

SPECjbb2013 replaces SPECjbb2005. New features include:

  • A usage model based on a world-wide supermarket company with an IT infrastructure that handles a mix of point-of-sale requests, online purchases and data-mining operations.
  • Both a pure throughput metric and a metric that measures critical throughput under service-level agreements (SLAs) specifying response times ranging from 10ms to 500ms.
  • Support for multiple run configurations, enabling users to analyze and overcome bottlenecks at multiple layers of the system stack, including hardware, OS, JVM and application layers.
  • Exercising new Java 7 features and other important performance elements, including the latest data formats (XML), communication using compression, and messaging with security.
  • Support for virtualization and cloud environments.

See Also

Disclosure Statement

SPEC and the benchmark name SPECjbb are registered trademarks of Standard Performance Evaluation Corporation (SPEC). Results as of 3/26/2013, see http://www.spec.org for more information. SPARC T5-2 75,658 SPECjbb2013-MultiJVM max-jOPS, 23,334 SPECjbb2013-MultiJVM critical-jOPS. Sun Server X2-4 65,211 SPECjbb2013-MultiJVM max-jOPS, 22,057 SPECjbb2013-MultiJVM critical-jOPS. Sun Server X3-2 41,954 SPECjbb2013-MultiJVM max-jOPS, 13,305 SPECjbb2013-MultiJVM critical-jOPS. SPARC T4-2 34,804 SPECjbb2013-MultiJVM max-jOPS, 10,101 SPECjbb2013-MultiJVM critical-jOPS. HP ProLiant DL560p Gen8 66,007 SPECjbb2013-MultiJVM max-jOPS, 16,577 SPECjbb2013-MultiJVM critical-jOPS. HP ProLiant ML350p Gen8 40,047 SPECjbb2013-MultiJVM max-jOPS, 12,308 SPECjbb2013-MultiJVM critical-jOPS. Supermicro X8DTN+ 20,977 SPECjbb2013-MultiJVM max-jOPS, 6,188 SPECjbb2013-MultiJVM critical-jOPS. HP ProLiant ML310e Gen8 12,315 SPECjbb2013-MultiJVM max-jOPS, 2,908 SPECjbb2013-MultiJVM critical-jOPS. Intel R1304BT 6,198 SPECjbb2013-MultiJVM max-jOPS, 1,722 SPECjbb2013-MultiJVM critical-jOPS.

SPARC T5-2 Achieves JD Edwards EnterpriseOne Benchmark World Records

Oracle produced world record single-system batch throughput on Oracle's JD Edwards EnterpriseOne Day-in-the-Life benchmark using Oracle's SPARC T5-2 server running Oracle Solaris Containers and consolidating JD Edwards EnterpriseOne, Oracle WebLogic servers and Oracle Database 11g Release 2. Two workloads were tested: an online plus batch workload and a batch-only workload.

Online plus batch workload:

  • The SPARC T5-2 server delivered a result of 12,000 online users at 180 msec average response time while concurrently executing a mix of JD Edwards EnterpriseOne long and short batch processes at 198.5 UBEs/min (Universal Batch Engines per minute).

  • The SPARC T5-2 server online plus batch throughput is 2.7x higher than the IBM Power 770 server, both running 12,000 online users.

  • The SPARC T5-2 server online plus batch throughput is 6x higher per chip than the IBM Power 770 server. The SPARC T5-2 server has 2 chips and the IBM Power 770 has 4 chips, both ran 12,000 online users.

  • The SPARC T5-2 server online plus batch throughput is 3x higher per core than the IBM Power 770 server. Both servers have 32 cores and ran 12,000 online users.

Batch-only workload:

  • The SPARC T5-2 server delivered throughput of 880 UBEs/min while executing the batch-only workload (Long and Short batch processes).

  • The SPARC T5-2 server batch-only throughput is 2.7x faster per chip than the IBM Power 770 server. The SPARC T5-2 server has 2 chips and the IBM Power 770 has 4 chips.

  • The SPARC T5-2 server batch-only throughput is 1.4x higher per core than the IBM Power 770 server. Both servers have 32 cores.

  • The SPARC T5-2 server batch-only throughput is 61% faster than the Cisco multiple system solution.

  • The SPARC T5-2 server batch-only throughput is 5x faster per chip than the Cisco UCS B200/B250 M2 servers. The SPARC T5-2 server has 2 chips and the Cisco 3 server solution has 6 chips.

  • The SPARC T5-2 server batch-only throughput is 1.8x higher per core than the Cisco UCS B200/B250 M2 servers. The SPARC T5-2 server has 32 cores while the Cisco solution utilized 36 cores.
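The per-chip and per-core claims above are plain normalizations of the batch-only rates by the chip and core counts quoted in this post; a quick check:

```python
# (batch rate in UBEs/min, chips, cores) from the batch-only results above
systems = {
    "SPARC T5-2":    (880, 2, 32),
    "IBM Power 770": (643, 4, 32),
    "Cisco 3-node":  (546, 6, 36),  # 2 x B200 M2 + 1 x B250 M2
}

per_chip = {name: rate / chips for name, (rate, chips, _) in systems.items()}
per_core = {name: rate / cores for name, (rate, _, cores) in systems.items()}

print(per_chip["SPARC T5-2"] / per_chip["IBM Power 770"])  # ~2.7x per chip
print(per_core["SPARC T5-2"] / per_core["IBM Power 770"])  # ~1.4x per core
print(per_chip["SPARC T5-2"] / per_chip["Cisco 3-node"])   # ~4.8x per chip
```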

Both workloads:

  • The SPARC T5-2 server offers a 5.4x cost savings for the application server when compared to the IBM Power 770 application server.

  • The SPARC T5-2 server running Oracle Solaris Containers and consolidating JD Edwards EnterpriseOne, Oracle WebLogic servers and Oracle Database 11g Release 2 utilized a maximum of 65% of the available CPU power, leaving headroom for additional processing.

  • The database server in a shared-server configuration allows for optimized CPU resource utilization and significant memory savings on the SPARC T5-2 server without sacrificing performance.

Performance Landscape

JD Edwards EnterpriseOne Day in the Life (DIL) Benchmark
Consolidated Online with Batch Workload
System                               Rack Units (U)  Batch Rate (UBEs/min)  Online Users  Users/U  UBEs/Core  UBEs/Chip  Version
SPARC T5-2 (2 x SPARC T5, 3.6 GHz)   3               198.5                  12000         4000     6.2        99         9.0.2
IBM Power 770 (4 x POWER7, 3.3 GHz)  8               65                     12000         1500     2.0        16         9.0.2

Batch Rate (UBEs/min) — Batch transaction rate in UBEs per minute.

JD Edwards EnterpriseOne Batch Only Benchmark
System                                     Rack Units (U)  Batch Rate (UBEs/min)  UBEs/U  UBEs/Core  UBEs/Chip  Version
SPARC T5-2 (2 x SPARC T5, 3.6 GHz)         3               880                    267     25         440        9.0.2
IBM Power 770 (4 x POWER7, 3.3 GHz)        8               643                    81      20         161        9.0.2
2 x Cisco B200 M2 (2 x X5690, 3.46 GHz)
+ 1 x Cisco B250 M2 (2 x X5680, 3.33 GHz)  3               546                    182     15         91         9.0.2

Configuration Summary

Hardware Configuration:

1 x SPARC T5-2 server with
2 x SPARC T5 processors, 3.6 GHz
512 GB memory
4 x 300 GB 10K RPM SAS internal disk
2 x 300 GB internal SSD
4 x Sun Flash Accelerator F40 PCIe Card (4 x 93 GB)

Software Configuration:

Oracle Solaris 10 1/13
Oracle Solaris Containers
JD Edwards EnterpriseOne 9.0.2
JD Edwards EnterpriseOne Tools (8.98.4.2)
Oracle WebLogic Server 11g (10.3.4)
Oracle HTTP Server 11g
Oracle Database 11g Release 2 (11.2.0.3)

Benchmark Description

JD Edwards EnterpriseOne is an integrated applications suite of Enterprise Resource Planning (ERP) software. Oracle offers 70 JD Edwards EnterpriseOne application modules to support a diverse set of business operations.

Oracle's Day in the Life (DIL) kit is a suite of scripts that exercises most common transactions of JD Edwards EnterpriseOne applications, including business processes such as payroll, sales order, purchase order, work order, and manufacturing processes, such as ship confirmation. These are labeled by industry acronyms such as SCM, CRM, HCM, SRM and FMS. The kit's scripts execute transactions typical of a mid-sized manufacturing company.

  • The workload consists of online transactions and a UBE (Universal Batch Engine) workload of 61 short and 4 long UBEs.

  • LoadRunner runs the DIL workload, collects the users' transaction response times, and reports the key metric of Combined Weighted Average Transaction Response time.

  • The UBE process workload runs from the JD Edwards EnterpriseOne Application server.

    • Oracle's UBE processes come in three flavors:
      • Short UBEs (< 1 minute) run business reports and summary analysis,
      • Mid UBEs (> 1 minute) create large reports of accounts, balances, and full addresses,
      • Long UBEs (> 2 minutes) simulate payroll, sales order, and night-only jobs.
    • The UBE workload generates large numbers of PDF reports and log files.
    • The UBE queues are categorized as QBATCHD, a single-threaded queue for long and mid UBEs, and QPROCESS, a queue for short UBEs, which run concurrently.

Oracle's UBE performance metric is the maximum number of concurrent UBE processes sustained at a given transaction rate, in UBEs/minute.
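The UBEs/minute figure is simply completed batch jobs per minute of the measurement window; a toy sketch with invented timestamps:

```python
# Hypothetical UBE completion times (seconds from the start of the
# measurement window) -- invented data for illustration only.
completions_s = [12.0, 30.5, 47.2, 65.0, 88.9, 110.3, 118.7]
window_s = 120.0

ubes_per_min = len(completions_s) / (window_s / 60.0)
print(f"{ubes_per_min:.1f} UBEs/min")  # 3.5 UBEs/min
```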

Key Points and Best Practices

Four Oracle Solaris processor sets were used, with Oracle Solaris Containers assigned to the processor sets as follows:

  • one JD Edwards EnterpriseOne Application server and two Oracle WebLogic Server 11g Release 1 instances, each coupled with an Oracle Web Tier HTTP server instance (online workload), each in an Oracle Solaris Container (three containers total),

  • one JD Edwards EnterpriseOne Application server (for batch only workload) in an Oracle Solaris Container,

  • an Oracle Database 11g Release 2 (11.2.0.3) database in an Oracle Solaris Container,

  • the Oracle database log writer.

Other items of note:

  • Each Oracle WebLogic vertical cluster, with twelve managed instances, was configured in a dedicated web server container to load balance user requests and to provide the infrastructure to support a high number of users with ease of deployment and high availability.

  • The database redo logs were configured on the raw disk partitions.

  • The mixed batch workload of 44 short UBEs and 8 long UBEs was executed concurrently with the 12,000 online application users, producing a sustained rate of 198.5 UBE/min.

See Also

Disclosure Statement

Copyright 2013, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 03/26/2013

SPARC T5-2 (SPARC T5-2 Server base package, 2x SPARC T5 16-core processors, 32x 16GB-1066 DIMMs, 4x 600GB 10K RPM 2.5" SAS-2 HDD, 2x 300GB SSDs, 4x Sun Flash Accelerator F40 PCIe Cards, 2x Power Cables) List Price $98,190. IBM Power 770 (IBM Power 770: 9917 Model MMC, 2x 3.3GHz 16-core, 32x one processor activation, 2x CEC Enclosure with IBM Bezel, I/O Backplane and System Midplane, 2x Service Processor, 16x 0/64GB DDR3 Memory (4x16GB) DIMMs-1066MHz Power7 CoD Memory, 24x Activation of 1 GB DDR3 Power7 Memory, 10x Activation of 100GB DDR3 Power7 Memory, 2x Disk/Media Backplane, 2x 300GB SAS 15K RPM 2.5" HDD (AIX/Linux only), 1x SATA slimline DVD-RAM drive, 4x AC Power Supply 1925W) List Price $532,143. Source: ibm.com, collected 03/18/2013.

Friday Feb 22, 2013

Oracle Produces World Record SPECjbb2013 Result with Oracle Solaris and Oracle JDK

From www.spec.org

Defects Identified in SPECjbb®2013

December 9, 2014 - SPEC has identified a defect in its SPECjbb®2013 benchmark suite. SPEC has suspended sales of the benchmark software and is no longer accepting new submissions of SPECjbb®2013 results for publication on SPEC's website. Current SPECjbb®2013 licensees will receive a free copy of the new version of the benchmark when it becomes available.

SPEC is advising SPECjbb®2013 licensees and users of the SPECjbb®2013 metrics that the recently discovered defect impacts the comparability of results. This defect can significantly impact the amount of work done during the measurement period, resulting in an inflated SPECjbb®2013 metric. SPEC recommends that users not utilize these results for system comparisons without a full understanding of the impact of these defects on each benchmark result.

Additional information is available here.

Oracle, using Oracle Solaris and Oracle JDK, delivered a world record result on the SPECjbb2013 benchmark (Composite metric). This benchmark was designed by the industry to showcase Java server performance. SPECjbb2013 is the replacement for SPECjbb2005 (SPECjbb2005 will soon be retired by SPEC).

  • Oracle Solaris is 1.8x faster on the SPECjbb2013-Composite max-jOPS metric than the Red Hat Enterprise Linux result.

  • Oracle Solaris is 2.2x faster on the SPECjbb2013-Composite critical-jOPS metric than the Red Hat Enterprise Linux result.

  • The combination of Oracle Solaris 11.1 and Oracle JDK 7 update 15 delivered a result of 37,007 SPECjbb2013-Composite max-jOPS and 13,812 SPECjbb2013-Composite critical-jOPS on the SPECjbb2013 benchmark.
    (Oracle has submitted this result for review by SPEC and it is currently under review.)

From SPEC's press release, "SPECjbb2013 replaces SPECjbb2005. The new benchmark has been developed from the ground up to measure performance based on the latest Java application features. It is expected to be used widely by all those interested in Java server performance, including JVM vendors, hardware developers, Java application developers, researchers and members of the academic community."

Performance Landscape

Results of SPECjbb2013 from www.spec.org as of February 22, 2013 and this report.

SPECjbb2013 Composite Results
System             Processor           max-jOPS  critical-jOPS  OS               JDK
Sun Server X2-4    4 x Intel E7-4870   37,007    13,812         Solaris 11.1     Oracle JDK 7u15
Supermicro X8DTN+  2 x Intel X5690     20,977    6,188          RHEL 6.3         Oracle JDK 7u11
Intel R1304BT      1 x Intel E3-1260L  6,198     1,722          Windows 2008 R2  Oracle JDK 7u11

The above table represents all of the published results on www.spec.org. SPEC allows for self-publication of SPECjbb2013 results. AnandTech has taken advantage of this and has some results on its website, which were run on Intel Xeon E5-2660, AMD Opteron 6380, and AMD Opteron 6376 systems. This information can be viewed at www.anandtech.com. Unfortunately, AnandTech did not follow SPEC's Fair Use requirements in disclosing information about their runs, so it is not possible to include the results in the table above.

SPECjbb2013 MultiJVM Results
System                   Processor            max-jOPS  critical-jOPS  OS                   JDK
HP ProLiant DL560p Gen8  4 x Intel E5-4650    66,007    16,577         Windows Server 2008  Oracle JDK 7u15
HP ProLiant ML350p Gen8  2 x Intel E5-2690    40,047    12,308         Windows Server 2008  Oracle JDK 7u15
HP ProLiant ML310e Gen8  1 x Intel E3-1280v2  12,315    2,908          Windows 2008 R2      Oracle JDK 7u15

Configuration Summary

System Under Test:

Sun Server X2-4
4 x Intel Xeon E7-4870, 2.40 GHz
Hyper-Threading enabled
Turbo Boost enabled
128 GB memory (32 x 4 GB DIMMs)
Oracle Solaris 11.1
Oracle JDK 7 update 15

Benchmark Description

The SPECjbb2013 benchmark has been developed from the ground up to measure performance based on the latest Java application features. It is relevant to all audiences who are interested in Java server performance, including JVM vendors, hardware developers, Java application developers, researchers and members of the academic community.

SPECjbb2013 replaces SPECjbb2005. New features include:

  • A usage model based on a world-wide supermarket company with an IT infrastructure that handles a mix of point-of-sale requests, online purchases and data-mining operations.
  • Both a pure throughput metric and a metric that measures critical throughput under service-level agreements (SLAs) specifying response times ranging from 10ms to 500ms.
  • Support for multiple run configurations, enabling users to analyze and overcome bottlenecks at multiple layers of the system stack, including hardware, OS, JVM and application layers.
  • Exercising new Java 7 features and other important performance elements, including the latest data formats (XML), communication using compression, and messaging with security.
  • Support for virtualization and cloud environments.

See Also

Disclosure Statement

SPEC and the benchmark name SPECjbb are registered trademarks of Standard Performance Evaluation Corporation (SPEC). Results as of 2/22/2013, see http://www.spec.org for more information. Sun Server X2-4 37007 SPECjbb2013-Composite max-jOPS, 13812 SPECjbb2013-Composite critical-jOPS.

Monday Oct 01, 2012

World Record Batch Rate on Oracle JD Edwards Consolidated Workload with SPARC T4-2

Oracle produced a world record single-system batch throughput on Oracle's JD Edwards EnterpriseOne Day-in-the-Life benchmark using Oracle's SPARC T4-2 server running Oracle Solaris Containers and consolidating JD Edwards EnterpriseOne, Oracle WebLogic servers and Oracle Database 11g Release 2. The workload includes both online and batch components.

  • The SPARC T4-2 server delivered a result of 8,000 online users while concurrently executing a mix of JD Edwards EnterpriseOne Long and Short batch processes at 95.5 UBEs/min (Universal Batch Engines per minute).

  • In order to obtain this record benchmark result, the JD Edwards EnterpriseOne, Oracle WebLogic and Oracle Database 11g Release 2 servers were each executed in separate Oracle Solaris Containers, which enabled optimal distribution of system resources and performance together with scalable and manageable virtualization.

  • One SPARC T4-2 server running Oracle Solaris Containers and consolidating JD Edwards EnterpriseOne, Oracle WebLogic servers and the Oracle Database 11g Release 2 utilized only 55% of the available CPU power.

  • The Oracle DB server in a Shared Server configuration allows for optimized CPU resource utilization and significant memory savings on the SPARC T4-2 server without sacrificing performance.

  • This SPARC T4-2 configuration achieved 33% more Users/core, 47% more UBEs/min and 78% more Users/rack unit than the IBM Power 770 server.

  • The SPARC T4-2 server with 2 processors ran the JD Edwards "Day-in-the-Life" benchmark and supported 8,000 concurrent online users while concurrently executing mixed batch workloads at 95.5 UBEs per minute. The IBM Power 770 server, with twice as many processors, supported 12,000 concurrent online users but executed mixed batch workloads at only 65 UBEs per minute.

  • This benchmark demonstrates more than 2x cost savings by consolidating the complete solution on a single SPARC T4-2 server, compared to earlier published results of 10,000 users and 67 UBEs per minute on two SPARC T4-2 servers and a SPARC T4-1 server.

  • The Oracle DB server used mirrored (RAID 1) volumes for the database providing high availability for the data without impacting performance.
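The percentage claims above follow directly from the figures in the Performance Landscape table. As a quick arithmetic check (illustrative Python, not part of the benchmark kit; `pct_gain` is a hypothetical helper):

```python
# Sanity check of the quoted SPARC T4-2 vs. IBM Power 770 ratios,
# using the Users/core, UBEs/min and Users/U figures from the table.
def pct_gain(sparc: float, ibm: float) -> float:
    """Percent advantage of the SPARC figure over the IBM figure."""
    return (sparc / ibm - 1) * 100

users_per_core = pct_gain(500, 375)      # Users/core: 500 vs 375
ube_rate       = pct_gain(95.5, 65)      # UBEs/min:   95.5 vs 65
users_per_u    = pct_gain(2667, 1500)    # Users/U:    2,667 vs 1,500

print(f"Users/core: +{users_per_core:.0f}%")  # +33%
print(f"UBEs/min:   +{ube_rate:.0f}%")        # +47%
print(f"Users/U:    +{users_per_u:.0f}%")     # +78%
```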

Performance Landscape

JD Edwards EnterpriseOne Day in the Life (DIL) Benchmark
Consolidated Online with Batch Workload

System                                         Rack    Batch     Online   Users   Users   Version
                                               Units   Rate      Users    /U      /Core
                                               (U)     (UBEs/m)
SPARC T4-2 (2 x SPARC T4, 2.85 GHz)            3       95.5      8,000    2,667   500     9.0.2
IBM Power 770 (4 x POWER7, 3.3 GHz, 32 cores)  8       65        12,000   1,500   375     9.0.2

Batch Rate (UBEs/m) — Batch transaction rate in UBEs per minute

Configuration Summary

Hardware Configuration:

1 x SPARC T4-2 server with
2 x SPARC T4 processors, 2.85 GHz
256 GB memory
4 x 300 GB 10K RPM SAS internal disk
2 x 300 GB internal SSD
2 x Sun Storage F5100 Flash Arrays

Software Configuration:

Oracle Solaris 10
Oracle Solaris Containers
JD Edwards EnterpriseOne 9.0.2
JD Edwards EnterpriseOne Tools (8.98.4.2)
Oracle WebLogic Server 11g (10.3.4)
Oracle HTTP Server 11g
Oracle Database 11g Release 2 (11.2.0.1)

Benchmark Description

JD Edwards EnterpriseOne is an integrated applications suite of Enterprise Resource Planning (ERP) software. Oracle offers 70 JD Edwards EnterpriseOne application modules to support a diverse set of business operations.

Oracle's Day in the Life (DIL) kit is a suite of scripts that exercises most common transactions of JD Edwards EnterpriseOne applications, including business processes such as payroll, sales order, purchase order, work order, and manufacturing processes, such as ship confirmation. These are labeled by industry acronyms such as SCM, CRM, HCM, SRM and FMS. The kit's scripts execute transactions typical of a mid-sized manufacturing company.

  • The workload consists of online transactions and a Universal Batch Engine (UBE) workload of 61 short and 4 long UBEs.

  • LoadRunner runs the DIL workload, collects the users' transaction response times and reports the key metric of Combined Weighted Average Transaction Response time.

  • The UBE workload runs from the JD Edwards Enterprise Application server.

    • Oracle's UBE processes come in three flavors:

      • Short UBEs (< 1 minute) perform Business Report and Summary Analysis jobs,

      • Mid UBEs (> 1 minute) create large reports of Account, Balance, and Full Address,

      • Long UBEs (> 2 minutes) simulate Payroll, Sales Order, and night-only jobs.

    • The UBE workload generates large numbers of PDF reports and log files.

    • The UBE queues are categorized as QBATCHD, a single-threaded queue for large and medium UBEs, and QPROCESS, a queue in which short UBEs run concurrently.

Oracle's UBE performance metric is the maximum number of concurrent UBE processes at a given transaction rate, in UBEs/minute.

Key Points and Best Practices

Two JD Edwards EnterpriseOne Application Servers, two Oracle WebLogic Servers 11g Release 1 coupled with two Oracle Web Tier HTTP server instances, and one Oracle Database 11g Release 2 database were hosted in separate Oracle Solaris Containers on a single SPARC T4-2 server, bound to four processor sets, to demonstrate consolidation of multiple applications, web servers and the database with optimal resource utilization.

  • Interrupt fencing was configured on all Oracle Solaris Containers to channel the interrupts to processors other than the processor sets used for the JD Edwards Application server, Oracle WebLogic servers and the database server.

  • An Oracle WebLogic vertical cluster was configured in each web server Container, with twelve managed instances each, to load-balance users' requests and to provide an infrastructure that enables scaling to a high number of users with ease of deployment and high availability.

  • The database log writer was run in the real-time (RT) scheduling class and bound to a processor set.

  • The database redo logs were configured on the raw disk partitions.

  • The Oracle Solaris Container running the Enterprise Application server completed 61 Short UBEs, 4 Long UBEs concurrently as the mixed size batch workload.

  • The mixed size UBEs ran concurrently from the Enterprise Application server with the 8,000 online users driven by the LoadRunner.

See Also

Disclosure Statement

Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 09/30/2012.

Thursday Jan 12, 2012

Netra SPARC T4-2 SPECjvm2008 World Record Performance

Oracle's Netra SPARC T4-2 server equipped with two SPARC T4 processors running at 2.85 GHz delivered a World Record result of 454.52 SPECjvm2008 Peak ops/m on the SPECjvm2008 benchmark. This result just eclipsed the previous record, which was set on a similar product, Oracle's SPARC T4-2 server, also a two-processor SPARC T4 system.

  • The Netra SPARC T4-2 server demonstrates 41% better performance than the SPARC T3-2 server and similar performance to Oracle's SPARC T4-2 server.

  • The Netra SPARC T4-2 server running the SPECjvm2008 benchmark achieved a score of 454.52 SPECjvm2008 Peak ops/m while the Sun Blade X6270 server module achieved 317.13 SPECjvm2008 Base ops/m.

  • The Netra SPARC T4-2 server with hardware cryptography acceleration greatly increases performance with subtests using AES and RSA encryption ciphers.

  • This result was produced using Oracle Solaris 11 and Oracle JDK 7 Update 2.

  • There are no SPECjvm2008 results published by IBM on POWER7 based systems.

  • The Netra SPARC T4-2 server demonstrates Oracle's position of leadership in Java-based computing by publishing world record results for the SPECjvm2008 benchmark.

Performance Landscape

Complete benchmark results are at the SPECjvm2008 website.

SPECjvm2008 Performance Chart
(ordered by performance)

System             Processors                   base     peak
Netra SPARC T4-2   2 x 2.85 GHz SPARC T4        -        454.52
SPARC T4-2         2 x 2.85 GHz SPARC T4        -        454.25
SPARC T3-2         2 x 1.65 GHz SPARC T3        -        320.52
Sun Blade X6270    2 x 2.93 GHz Intel X5570     317.13   -

base: SPECjvm2008 Base ops/m (bigger is better)
peak: SPECjvm2008 Peak ops/m (bigger is better)

SPEC allows base and peak results to be submitted separately. The base metric does not allow any optimization of the JVM; the peak metric does.

Configuration Summary

Hardware Configuration:

Netra SPARC T4-2 server
2 x 2.85 GHz SPARC T4 processors
256 GB memory

Software Configuration:

Oracle Solaris 11 11/11
Java Platform, Standard Edition, JDK 7 Update 2

Benchmark Description

SPECjvm2008 (Java Virtual Machine Benchmark) is a benchmark suite for measuring the performance of a Java Runtime Environment (JRE), containing several real life applications and benchmarks focusing on core Java functionality. The suite focuses on the performance of the JRE executing a single application; it reflects the performance of the hardware processor and memory subsystem, but has low dependence on file I/O and includes no network I/O across machines.

The SPECjvm2008 workload mimics a variety of common general purpose application computations. These characteristics reflect the intent that this benchmark will be applicable to measuring basic Java performance on a wide variety of both client and server systems.

SPECjvm2008 benchmark highlights:

  • Leverages real life applications (like derby, sunflow, and javac) and area-focused benchmarks (like xml, serialization, crypto, and scimark).
  • Also measures the performance of the operating system and hardware in the context of executing the JRE.

The current rules for the benchmark allow either base or peak to be run. The base run is done without any tuning of the JVM to improve the out of the box performance. The peak run allows tuning of the JVM.

Key Points and Best Practices

  • Enhancements to the JVM had a major impact on performance, especially for the security tests.

See Also

Disclosure Statement

SPEC and SPECjvm are registered trademarks of Standard Performance Evaluation Corporation. Results from www.spec.org and this report as of 1/9/2012. Netra SPARC T4-2 454.52 SPECjvm2008 Peak ops/m submitted for review, SPARC T4-2 454.25 SPECjvm2008 Peak ops/m, SPARC T3-2 320.52 SPECjvm2008 Peak ops/m, Sun Blade X6270 317.13 SPECjvm2008 Base ops/m.

Wednesday Nov 09, 2011

SPARC T4-2 Delivers World Record SPECjvm2008 Result with Oracle Solaris 11

Oracle's SPARC T4-2 server equipped with two SPARC T4 processors running at 2.85 GHz delivered a World Record result of 454.25 SPECjvm2008 Peak ops/m on the SPECjvm2008 benchmark.

  • The SPARC T4-2 server demonstrates 41% better performance than the SPARC T3-2 server.

  • The SPARC T4-2 server with hardware cryptography acceleration greatly increases performance with subtests using AES and RSA encryption ciphers.

  • This result was produced using Oracle Solaris 11 and Oracle JDK 7 Update 2.

  • There are no SPECjvm2008 results published by IBM on POWER7 based systems.

  • The SPARC T4-2 server demonstrates Oracle's position of leadership in Java-based computing by publishing world record results for the SPECjvm2008 benchmark.

Performance Landscape

Complete benchmark results are at the SPECjvm2008 website.

SPECjvm2008 Performance Chart
(ordered by performance)

System            Processors                   base     peak
SPARC T4-2        2 x 2.85 GHz SPARC T4        -        454.25
SPARC T3-2        2 x 1.65 GHz SPARC T3        -        320.52
Sun Blade X6270   2 x 2.93 GHz Intel X5570     317.13   -

base: SPECjvm2008 Base ops/m (bigger is better)
peak: SPECjvm2008 Peak ops/m (bigger is better)

SPEC allows base and peak results to be submitted separately. The base metric does not allow any optimization of the JVM; the peak metric does.

Configuration Summary

Hardware Configuration:

SPARC T4-2 server
2 x 2.85 GHz SPARC T4 processors
256 GB memory

Software Configuration:

Oracle Solaris 11 11/11
Java Platform, Standard Edition, JDK 7 Update 2

Benchmark Description

SPECjvm2008 (Java Virtual Machine Benchmark) is a benchmark suite for measuring the performance of a Java Runtime Environment (JRE), containing several real life applications and benchmarks focusing on core Java functionality. The suite focuses on the performance of the JRE executing a single application; it reflects the performance of the hardware processor and memory subsystem, but has low dependence on file I/O and includes no network I/O across machines.

The SPECjvm2008 workload mimics a variety of common general purpose application computations. These characteristics reflect the intent that this benchmark will be applicable to measuring basic Java performance on a wide variety of both client and server systems.

SPECjvm2008 benchmark highlights:

  • Leverages real life applications (like derby, sunflow, and javac) and area-focused benchmarks (like xml, serialization, crypto, and scimark).
  • Also measures the performance of the operating system and hardware in the context of executing the JRE.

The current rules for the benchmark allow either base or peak to be run. The base run is done without any tuning of the JVM to improve the out of the box performance. The peak run allows tuning of the JVM.

Key Points and Best Practices

  • Enhancements to the JVM had a major impact on performance, especially for the security tests.

See Also

Disclosure Statement

SPEC and SPECjvm are registered trademarks of Standard Performance Evaluation Corporation. Results from www.spec.org and this report as of 11/9/2011. SPARC T4-2 454.25 SPECjvm2008 Peak ops/m submitted for review, SPARC T3-2 320.52 SPECjvm2008 Peak ops/m, Sun Blade X6270 317.13 SPECjvm2008 Base ops/m.

Tuesday Sep 27, 2011

SPARC T4-2 Servers Set World Record on JD Edwards EnterpriseOne Day in the Life Benchmark with Batch, Outperforms IBM POWER7

Using Oracle's SPARC T4-2 server for the application tier and a SPARC T4-1 server for the database tier, a world record result was produced running the Oracle's JD Edwards EnterpriseOne application Day in the Life (DIL) benchmark concurrently with a batch workload.

  • The SPARC T4-2 server running online and batch with JD Edwards EnterpriseOne 9.0.2 is 1.7x faster and has better response time than the IBM Power 750 system, which ran only the online component of the JD Edwards EnterpriseOne 9.0 Day in the Life test.

  • The combination of SPARC T4 servers delivered a Day in the Life benchmark result of 10,000 online users with 0.35 seconds of average transaction response time running concurrently with 112 Universal Batch Engine (UBE) processes at 67 UBEs/minute.

  • This is the first JD Edwards EnterpriseOne benchmark of 10,000 users with payroll batch, run with a SPARC T4-2 server in the application tier and a SPARC T4-1 server in the database tier with Oracle Database 11g Release 2. All servers ran the Oracle Solaris 10 operating system.

  • The single-thread performance of the SPARC T4 processor produced sub-second response for the online components and provided dramatic performance for the batch jobs.

  • The SPARC T4 servers, JD Edwards EnterpriseOne 9.0.2, and Oracle WebLogic Server 11g Release 1 support 17% more users per JAS (Java Application Server) than the SPARC T3-1 server for this benchmark.

  • The SPARC T4-2 server provided a 6.7x better batch processing rate than the previous SPARC T3-1 server record result and had 2.5x faster response time.

  • The SPARC T4-2 server used Oracle Solaris Containers, which provide flexible, scalable and manageable virtualization.

  • JD Edwards EnterpriseOne uses Oracle Fusion Middleware WebLogic Server 11g R1 and Oracle Fusion Middleware Cluster Web Tier Utilities 11g HTTP server.

  • The combination of the SPARC T4-2 server and Oracle JD Edwards EnterpriseOne in the application tier with a SPARC T4-1 server in the database tier measured low CPU utilization providing headroom for growth.

Performance Landscape

JD Edwards EnterpriseOne Day in the Life Benchmark
Online with Batch Workload

System                         Online   Resp        Batch Concur   Batch Rate   Version
                               Users    Time (sec)  (# of UBEs)    (UBEs/m)
2 x SPARC T4-2 (app+web)
SPARC T4-1 (db)                10,000   0.35        112            67           9.0.2
SPARC T3-1 (app+web)
SPARC Enterprise M3000 (db)    5,000    0.88        19             10           9.0.1

Resp Time (sec) — Response time of online jobs reported in seconds
Batch Concur (# of UBEs) — Batch concurrency presented in the number of UBEs
Batch Rate (UBEs/m) — Batch transaction rate in UBEs per minute
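The "6.7x better batch processing rate" and "2.5x faster response time" claims above can be checked against the table figures for the SPARC T4-2 combination versus the previous SPARC T3-1 record (illustrative Python, not part of the benchmark kit):

```python
# Sanity check of the quoted SPARC T4-2 vs. SPARC T3-1 ratios
# from the Online with Batch Workload table.
t4_batch_rate, t3_batch_rate = 67, 10    # UBEs/minute
t4_resp, t3_resp = 0.35, 0.88            # average response time, seconds

batch_speedup = t4_batch_rate / t3_batch_rate   # higher rate is better
resp_speedup  = t3_resp / t4_resp               # lower time is better

print(f"Batch rate:    {batch_speedup:.1f}x")   # 6.7x
print(f"Response time: {resp_speedup:.1f}x")    # 2.5x
```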

JD Edwards EnterpriseOne Day in the Life Benchmark
Online Workload Only

System                                                   Online   Response     Version
                                                         Users    Time (sec)
SPARC T3-1, 1 x SPARC T3 (1.65 GHz), Solaris 10 (app)
M3000, 1 x SPARC64 VII (2.75 GHz), Solaris 10 (db)       5,000    0.52         9.0.1
IBM Power 750, POWER7 (3.55 GHz) (app+db)                4,000    0.61         9.0

IBM result from http://www-03.ibm.com/systems/i/advantages/oracle/, IBM used WebSphere

Configuration Summary

Application Tier Configuration:

1 x SPARC T4-2 server with
2 x 2.85 GHz SPARC T4 processors
128 GB main memory
6 x 300 GB 10K RPM SAS internal HDD
Oracle Solaris 10 9/10
JD Edwards EnterpriseOne 9.0.2 with Tools 8.98.3.3

Web Tier Configuration:

1 x SPARC T4-2 server with
2 x 2.85 GHz SPARC T4 processors
256 GB main memory
2 x 300 GB SSD
4 x 300 GB 10K RPM SAS internal HDD
Oracle Solaris 10 9/10
Oracle WebLogic Server 11g Release 1

Database Tier Configuration:

1 x SPARC T4-1 server with
1 x 2.85 GHz SPARC T4 processor
128 GB main memory
6 x 300 GB 10K RPM SAS internal HDD
2 x Sun Storage F5100 Flash Array
Oracle Solaris 10 9/10
Oracle Database 11g Release 2

Benchmark Description

JD Edwards EnterpriseOne is an integrated applications suite of Enterprise Resource Planning (ERP) software. Oracle offers 70 JD Edwards EnterpriseOne application modules to support a diverse set of business operations.

Oracle's Day in the Life (DIL) kit is a suite of scripts that exercises most common transactions of JD Edwards EnterpriseOne applications, including business processes such as payroll, sales order, purchase order, work order, and manufacturing processes, such as ship confirmation. These are labeled by industry acronyms such as SCM, CRM, HCM, SRM and FMS. The kit's scripts execute transactions typical of a mid-sized manufacturing company.

  • The workload consists of online transactions and a Universal Batch Engine (UBE) workload of 42 short, 8 medium and 4 long UBEs.

  • LoadRunner runs the DIL workload, collects the users' transaction response times and reports the key metric of Combined Weighted Average Transaction Response time.

  • The UBE workload runs from the JD Edwards Enterprise Application server.

    • Oracle's UBE processes come in three flavors:
      • Short UBEs (< 1 minute) perform Business Report and Summary Analysis jobs,
      • Mid UBEs (> 1 minute) create large reports of Account, Balance, and Full Address,
      • Long UBEs (> 2 minutes) simulate Payroll, Sales Order, and night-only jobs.
    • The UBE workload generates large numbers of PDF reports and log files.
    • The UBE queues are categorized as QBATCHD, a single-threaded queue for large and medium UBEs, and QPROCESS, a queue in which short UBEs run concurrently.

Oracle's UBE performance metric is the maximum number of concurrent UBE processes at a given transaction rate, in UBEs/minute.

Key Points and Best Practices

One JD Edwards EnterpriseOne Application Server and two Oracle WebLogic Servers 11g R1, coupled with two Oracle Fusion Middleware 11g Web Tier HTTP Server instances, were hosted in three separate Oracle Solaris Containers on the SPARC T4-2 servers to demonstrate consolidation of multiple application and web servers.

  • Interrupt fencing was configured on all Oracle Solaris Containers to channel the interrupts to processors other than the processor sets used for the JD Edwards Application server and WebLogic servers.

  • Processor 0 was reserved for clock interrupts.

  • The applications were executed in the FX scheduling class to improve performance by reducing the frequency of context switches.

  • A WebLogic vertical cluster was configured in each web server Container, with twelve managed instances each, to load-balance users' requests and to provide an infrastructure that enables scaling to a high number of users with ease of deployment and high availability.

  • The database server was run in an Oracle Solaris Container hosted on the SPARC T4-1 server.

  • The database log writer was run in the real-time (RT) scheduling class and bound to a processor set.

  • The database redo logs were configured on the raw disk partitions.

  • The private network between the SPARC T4-2 servers was configured with a 10 GbE interface.

  • The Oracle Solaris Container on the Enterprise Application server ran 42 Short UBEs, 8 Medium UBEs and 4 Long UBEs concurrently as the mixed size batch workload.

  • The mixed size UBEs ran concurrently from the application server with the 10000 online users driven by the LoadRunner.

See Also

Disclosure Statement

Copyright 2011, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 9/26/2011.

Friday Jul 01, 2011

SPARC T3-1 Record Results Running JD Edwards EnterpriseOne Day in the Life Benchmark with Added Batch Component

Using Oracle's SPARC T3-1 server for the application tier and Oracle's SPARC Enterprise M3000 server for the database tier, a world record result was produced running the Oracle's JD Edwards EnterpriseOne applications Day in the Life benchmark run concurrently with a batch workload.

  • The SPARC T3-1 server-based result has 25% better performance than the IBM Power 750 POWER7 server, even though the IBM result did not include a batch component.

  • The SPARC T3-1 server-based result has 25% better space/performance than the IBM Power 750 POWER7 server, as measured by the online component.

  • The SPARC T3-1 server-based result is 5x faster than the x86-based IBM x3650 M2 server when executing the online component of the JD Edwards EnterpriseOne 9.0.1 Day in the Life benchmark. The IBM result did not include a batch component.

  • The SPARC T3-1 server-based result has 2.5x better space/performance than the x86-based IBM x3650 M2 server, as measured by the online component.

  • The combination of SPARC T3-1 and SPARC Enterprise M3000 servers delivered a Day in the Life benchmark result of 5000 online users with 0.875 seconds of average transaction response time running concurrently with 19 Universal Batch Engine (UBE) processes at 10 UBEs/minute. The solution exercises various JD Edwards EnterpriseOne applications while running Oracle WebLogic Server 11g Release 1 and Oracle Web Tier Utilities 11g HTTP server in Oracle Solaris Containers, together with the Oracle Database 11g Release 2.

  • The SPARC T3-1 server showed that it could handle the additional workload of batch processing while maintaining the same number of online users for the JD Edwards EnterpriseOne Day in the Life benchmark. This was accomplished with minimal loss in response time.

  • JD Edwards EnterpriseOne 9.0.1 takes advantage of the large number of compute threads available in the SPARC T3-1 server at the application tier and achieves excellent response times.

  • The SPARC T3-1 server consolidates the application/web tier of the JD Edwards EnterpriseOne 9.0.1 application using Oracle Solaris Containers. Containers provide flexibility, easier maintenance and better CPU utilization of the server leaving processing capacity for additional growth.

  • A number of advanced Oracle technologies and features were used to obtain this result: Oracle Solaris 10, Oracle Solaris Containers, Oracle Java HotSpot Server VM, Oracle WebLogic Server 11g Release 1, Oracle Web Tier Utilities 11g, Oracle Database 11g Release 2, and the SPARC T3 and SPARC64 VII+ based servers.

  • This is the first published result running both online and batch workload concurrently on the JD Enterprise Application server. No published results are available from IBM running the online component together with a batch workload.

  • The 9.0.1 version of the benchmark saw some minor performance improvements relative to 9.0. When comparing between 9.0.1 and 9.0 results, the reader should take this into account when the difference between results is small.

Performance Landscape

JD Edwards EnterpriseOne Day in the Life Benchmark
Online with Batch Workload

This is the first publication on the Day in the Life benchmark run concurrently with batch jobs. The batch workload was provided by Oracle's Universal Batch Engine.

System                                           Rack    Online   Resp        Batch Concur   Batch Rate   Version
                                                 Units   Users    Time (sec)  (# of UBEs)    (UBEs/m)
SPARC T3-1, 1 x SPARC T3 (1.65 GHz), Solaris 10
M3000, 1 x SPARC64 VII+ (2.86 GHz), Solaris 10   4       5,000    0.88        19             10           9.0.1

Resp Time (sec) — Response time of online jobs reported in seconds
Batch Concur (# of UBEs) — Batch concurrency presented in the number of UBEs
Batch Rate (UBEs/m) — Batch transaction rate in UBEs/minute.

JD Edwards EnterpriseOne Day in the Life Benchmark
Online Workload Only

These results are for the Day in the Life benchmark. They are run without any batch workload.

System                                            Rack    Online   Response     Version
                                                  Units   Users    Time (sec)
SPARC T3-1, 1 x SPARC T3 (1.65 GHz), Solaris 10
M3000, 1 x SPARC64 VII (2.75 GHz), Solaris 10     4       5,000    0.52         9.0.1
IBM Power 750, 1 x POWER7 (3.55 GHz), IBM i7.1    4       4,000    0.61         9.0
IBM x3650 M2, 2 x Intel X5570 (2.93 GHz), OVM     2       1,000    0.29         9.0

IBM result from http://www-03.ibm.com/systems/i/advantages/oracle/, IBM used WebSphere
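The user-count and space/performance comparisons quoted in the bullets above follow from the "Online Workload Only" table (users and rack units). A quick check in illustrative Python, not part of the benchmark kit:

```python
# Sanity check of the online-user comparisons from the table above.
t3_users, t3_u       = 5000, 4   # SPARC T3-1 + M3000, 4 rack units
p750_users, p750_u   = 4000, 4   # IBM Power 750
x3650_users, x3650_u = 1000, 2   # IBM x3650 M2

print(f"vs. Power 750:  {t3_users / p750_users:.2f}x users")   # 1.25x (25% more)
print(f"vs. x3650 M2:   {t3_users / x3650_users:.0f}x users")  # 5x

# Space/performance = online users per rack unit.
ratio = (t3_users / t3_u) / (x3650_users / x3650_u)
print(f"space/perf vs. x3650 M2: {ratio:.1f}x users/U")        # 2.5x
```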

Configuration Summary

Hardware Configuration:

1 x SPARC T3-1 server
1 x 1.65 GHz SPARC T3
128 GB memory
16 x 300 GB 10000 RPM SAS
1 x Sun Flash Accelerator F20 PCIe Card, 96 GB
1 x 10 GbE NIC
1 x SPARC Enterprise M3000 server
1 x 2.86 GHz SPARC64 VII+
64 GB memory
1 x 10 GbE NIC
2 x StorageTek 2540 + 2501

Software Configuration:

JD Edwards EnterpriseOne 9.0.1 with Tools 8.98.3.3
Oracle Database 11g Release 2
Oracle WebLogic Server 11g Release 1 (10.3.2)
Oracle Web Tier Utilities 11g
Oracle Solaris 10 9/10
Mercury LoadRunner 9.10 with Oracle Day in the Life kit for JD Edwards EnterpriseOne 9.0.1
Oracle’s Universal Batch Engine - Short UBEs and Long UBEs

Benchmark Description

JD Edwards EnterpriseOne is an integrated applications suite of Enterprise Resource Planning (ERP) software. Oracle offers 70 JD Edwards EnterpriseOne application modules to support a diverse set of business operations.

Oracle's Day in the Life (DIL) kit is a suite of scripts that exercises most common transactions of JD Edwards EnterpriseOne applications, including business processes such as payroll, sales order, purchase order, work order, and other manufacturing processes, such as ship confirmation. These are labeled by industry acronyms such as SCM, CRM, HCM, SRM and FMS. The kit's scripts execute transactions typical of a mid-sized manufacturing company.

  • The workload consists of online transactions and a Universal Batch Engine (UBE) workload of 15 short and 4 long UBEs.

  • LoadRunner runs the DIL workload, collects the users' transaction response times and reports the key metric of Combined Weighted Average Transaction Response time.

  • The UBE workload runs from the JD Edwards Enterprise Application server.

    • Oracle's UBE processes come in three flavors:

      • Short UBEs (< 1 minute) perform Business Report and Summary Analysis jobs,
      • Mid UBEs (> 1 minute) create large reports of Account, Balance, and Full Address,
      • Long UBEs (> 2 minutes) simulate Payroll, Sales Order, and night-only jobs.
    • The UBE workload generates large numbers of PDF reports and log files.

    • The UBE queues are categorized as QBATCHD, a single-threaded queue for large UBEs, and QPROCESS, a queue in which short UBEs run concurrently.

  • One of the Oracle Solaris Containers ran the 4 long UBEs, while another Container ran the 15 short UBEs concurrently.

  • The mixed-size UBEs ran concurrently from the SPARC T3-1 server with the 5,000 online users driven by LoadRunner.

  • Oracle's UBE performance metric is the maximum number of concurrent UBE processes at a given transaction rate, in UBEs/minute.

Key Points and Best Practices

Two JD Edwards EnterpriseOne Application Servers and two Oracle Fusion Middleware WebLogic Servers 11g R1 coupled with two Oracle Fusion Middleware 11g Web Tier HTTP Server instances on the SPARC T3-1 server were hosted in four separate Oracle Solaris Containers to demonstrate consolidation of multiple application and web servers.

See Also

Disclosure Statement

Copyright 2011, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 6/27/2011.

Wednesday Mar 23, 2011

SPARC T3-1B Doubles Performance on Oracle Fusion Middleware WebLogic Avitek Medical Records Sample Application

The Oracle WebLogic Server 11g software was used to demonstrate the performance of the Avitek Medical Records sample application. A configuration using Oracle's SPARC T3-1B and SPARC Enterprise M5000 servers showed excellent scaling across different configurations and doubled the performance of the previous-generation SPARC blade.

  • A SPARC T3-1B server, running a typical real-world J2EE application on Oracle WebLogic Server 11g, together with a SPARC Enterprise M5000 server running the Oracle database, had 2.1x the transactional throughput of the previous-generation UltraSPARC T2 processor based Sun Blade T6320 server module.

  • The SPARC T3-1B server shows linear scaling as the number of SPARC T3 processor cores used in the SPARC T3-1B server module is doubled.

  • The Avitek Medical Records application instances were deployed in Oracle Solaris zones on the SPARC T3-1B server, allowing for flexible, scalable and lightweight architecture of the application tier.

Performance Landscape

Performance for the application tier is presented. Results are the maximum transactions per second (TPS).

Server            Processor                             Memory   Maximum TPS
SPARC T3-1B       1 x SPARC T3, 1.65 GHz, 16 cores      128 GB   28,156
SPARC T3-1B       1 x SPARC T3, 1.65 GHz, 8 cores       128 GB   14,030
Sun Blade T6320   1 x UltraSPARC T2, 1.4 GHz, 8 cores   64 GB    13,386

The same SPARC Enterprise M5000 server from Oracle was used in each case as the database server. Internal disk storage was used.
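The "2.1x" generational claim and the linear-scaling claim above can be checked against the TPS table (illustrative Python, not part of the benchmark kit):

```python
# Sanity check of the scaling claims from the maximum-TPS table.
tps_16core = 28156   # SPARC T3-1B, 16 cores
tps_8core  = 14030   # SPARC T3-1B, 8 cores
tps_t6320  = 13386   # Sun Blade T6320 (UltraSPARC T2, 8 cores)

gen_gain  = tps_16core / tps_t6320   # vs. prior-generation blade
core_scal = tps_16core / tps_8core   # doubling cores on the same server

print(f"vs. UltraSPARC T2 blade: {gen_gain:.1f}x")    # 2.1x
print(f"8 -> 16 core scaling:    {core_scal:.2f}x")   # ~2.01x (near-linear)
```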

Configuration Summary

Hardware Configuration:

1 x SPARC T3-1B
1 x 1.65 GHz SPARC T3
128 GB memory

1 x Sun Blade T6320
1 x 1.4 GHz UltraSPARC T2
64 GB memory

1 x SPARC Enterprise M5000
8 x 2.53 GHz SPARC64 VII
128 GB memory

Software Configuration:

Avitek Medical Records
Oracle Database 10g Release 2
Oracle WebLogic Server 11g R1 version 10.3.3 (Oracle Fusion Middleware)
Oracle Solaris 10 9/10
HP Mercury LoadRunner 9.5

Benchmark Description

Avitek Medical Records (or MedRec) is an Oracle WebLogic Server 11g sample application suite that demonstrates all aspects of the J2EE platform. MedRec showcases the use of each J2EE component, and illustrates best practice design patterns for component interaction and client development. Oracle WebLogic server 11g is a key component of Oracle Fusion Middleware 11g.

The MedRec application provides a framework for patients, doctors, and administrators to manage patient data using a variety of different clients. Patient data includes:

  • Patient profile information: A patient's name, address, social security number, and log-in information.

  • Patient medical records: Details about a patient's visit with a physician, such as the patient's vital signs and symptoms as well as the physician's diagnosis and prescriptions.

MedRec comprises two main Java EE applications supporting different user scenarios:

medrecEar – Patients log in to the web application (patientWebApp) to register or edit their profile. Patients can also view medical records from their prior visits. Administrators use the web application (adminWebApp) to approve or deny new patient profile requests. medrecEar also provides all of the controller and business logic used by the MedRec application suite, as well as the Web Service used by different clients.

physicianEar – Physicians and nurses login to the web application (physicianWebApp) to search and access patient profiles, create and review medical records, and prescribe medicine to patients. The physician application is designed to communicate using the Web Service provided in the medrecEar.

The medrecEar and physicianEar applications are deployed to an Oracle WebLogic Server 11g instance called MedRecServer. The physicianEar application communicates with the controller components of medrecEar using Web Services.

The workload injected into the MedRec applications measures the average transactions per second for the following sequence:

  1. A client opens the MedRec start page, http://{host}:7011/Start.jsp
  2. A patient completes the registration process
  3. An administrator logs in, approves the patient profile, and logs out
  4. A physician connects to the online system and logs in
  5. The physician searches for a patient and looks up the patient's visit information
  6. The physician logs out
  7. The patient logs in and reviews the profile
  8. The patient makes changes to the profile and updates the information
  9. The patient logs out

Each of the above steps constitutes a single transaction.
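Since each pass through the nine steps yields nine transactions, the average transactions-per-second figure reduces to a count divided by elapsed time. A minimal sketch of that computation (the per-step latencies below are hypothetical, not measured data):

```python
# Hypothetical per-step latencies (seconds) for one pass through the
# nine-step sequence; each step counts as one transaction.
step_latencies = [0.41, 0.38, 0.52, 0.29, 0.61, 0.22, 0.35, 0.48, 0.20]

def avg_tps(latencies):
    """Average transactions per second over one sequential pass."""
    return len(latencies) / sum(latencies)

print(round(avg_tps(step_latencies), 2))  # → 2.6
```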

Key Points and Best Practices

Please see the Oracle documentation on the Oracle Technical Network for tuning your Oracle WebLogic Server 11g deployment.

See Also

Disclosure Statement

Copyright 2011, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 3/22/2011.

Thursday Feb 17, 2011

SPARC T3-1 takes JD Edwards "Day In the Life" benchmark lead, beats IBM Power7 by 25%

Oracle's SPARC T3-1 server running the application, together with Oracle's SPARC Enterprise M3000 server running the database, achieved a record result of 5000 users with an average transaction response time of 0.523 seconds for the online component of the "Day in the Life" JD Edwards EnterpriseOne benchmark.

  • The "Day in the Life" benchmark tests the Oracle JD Edwards EnterpriseOne applications, running Oracle Fusion Middleware WebLogic Server 11g R1, Oracle Fusion Middleware Web Tier Utilities 11g HTTP server and JD Edwards EnterpriseOne 9.0.1 in Oracle Solaris Containers, together with the Oracle Database 11g Release 2.

  • The SPARC T3-1 server is 25% faster and has better response time than the IBM P750 POWER7 system, when executing the JD Edwards EnterpriseOne 9.0.1 Day in the Life test, online component.

  • The SPARC T3-1 server had 25% better space/performance than the IBM P750 POWER7 server.

  • The SPARC T3-1 server is 5x faster than the x86-based IBM x3650 M2 server system, when executing the JD Edwards EnterpriseOne 9.0.1 Day in the Life test, online component.

  • The SPARC T3-1 server had 2.5x better space/performance than the x86-based IBM x3650 M2 server.

  • The SPARC T3-1 server consolidated the application/web tier of the JD Edwards EnterpriseOne 9.0.1 application using Oracle Solaris Containers. Containers provide flexibility, easier maintenance and better CPU utilization of the server leaving processing capacity for additional growth.

  • The SPARC Enterprise M3000 server provides enterprise class RAS features for customers deploying the Oracle 11g Release 2 database software.

  • To obtain this leading result, a number of Oracle advanced technology and features were used: Oracle Solaris 10, Oracle Solaris Containers, Oracle Java Hotspot Server VM, Oracle Fusion Middleware WebLogic Server 11g R1, Oracle Fusion Middleware Web Tier Utilities 11g, Oracle Database 11g Release 2, the SPARC T3 and the SPARC64 VII based servers.

Performance Landscape

JD Edwards EnterpriseOne DIL Online Component Performance Chart

System                               Memory  OS          #users  JDE      Rack   Response
                                     (GB)                        Version  Units  Time (sec)
SPARC T3-1, 1 x 1.65 GHz SPARC T3    128     Solaris 10  5000    9.0.1    2U     0.523
*IBM Power 750, 1 x 3.55 GHz POWER7  120     IBM i 7.1   4000    9.0      4U     0.61
IBM Power 570, 4 x 4.2 GHz POWER6    128     IBM i 6.1   2400    8.12     4U     1.129
IBM x3650 M2, 2 x 2.93 GHz X5570     64      OVM         1000    9.0      2U     0.29

* From http://www-03.ibm.com/systems/i/advantages/oracle/; IBM used WebSphere.

Configuration Summary

Hardware Configuration:

1 x SPARC T3-1 server
1 x 1.65 GHz SPARC T3
128 GB memory
16 x 300 GB 10000 RPM SAS
1 x 1 GbE NIC
1 x SPARC Enterprise M3000
1 x 2.75 GHz SPARC64 VII
64 GB memory
1 x 1 GbE NIC
2 x StorageTek 2540/2501

Software Configuration:

JD Edwards EnterpriseOne 9.0.1 with Tools 8.98.3.3
Oracle Database 11g Release 2
Oracle Fusion Middleware 11g WebLogic server 11g R1 version 10.3.2
Oracle Fusion Middleware Web Tier Utilities 11g
Oracle Solaris 10 9/10
Mercury LoadRunner 9.10 with Oracle DIL kit for JD Edwards EnterpriseOne 9.0 update 1

Benchmark Description

Oracle's JD Edwards EnterpriseOne is an integrated applications suite of Enterprise Resource Planning software.

  • Oracle offers 70 JD Edwards EnterpriseOne application modules to support a diverse set of business operations.
  • Oracle's Day-in-the-Life (DIL) kit is a suite of scripts that exercises the most common transactions of JD Edwards EnterpriseOne applications, including business processes such as payroll, sales order, purchase order, work order, and other manufacturing processes such as ship confirmation. These are labeled by industry acronyms such as SCM, CRM, HCM, SRM and FMS.
  • The DIL kit's scripts execute transactions typical of a mid-sized manufacturing company.
  • The workload consists of online transactions. It does not include the batch processing job components.
  • LoadRunner is used to run the workload and collect the users' transaction response times as the number of users increases from 500 to 5000.
  • The key metric used to evaluate performance is the transaction response time reported by LoadRunner.
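Per the bullets above, LoadRunner ramps the user count and reports the transaction response time at each load level. A hedged sketch of how such a report might be checked against a response-time target (the ramp values and the sub-second threshold are assumptions, not benchmark data):

```python
# Hypothetical average response times (seconds) reported at each step
# of the user ramp; not measured benchmark data.
ramp = {500: 0.21, 1000: 0.25, 2000: 0.31, 3000: 0.38, 4000: 0.45, 5000: 0.52}

def meets_target(ramp, users, threshold):
    """True if the average response time at the given load is under threshold."""
    return ramp[users] < threshold

# An assumed sub-second response-time target at the full 5000-user load.
print(meets_target(ramp, 5000, 1.0))  # → True
```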

Key Points and Best Practices

Two JD Edwards EnterpriseOne servers and two Oracle Fusion Middleware WebLogic Server 11g R1 instances, coupled with two Oracle Fusion Middleware 11g Web Tier HTTP Server instances, were hosted in four separate Oracle Solaris Containers on the SPARC T3-1 server to demonstrate consolidation of multiple application and web servers.

  • Each Oracle Solaris Container was bound to a separate processor set: 40 virtual processors were allocated to each EnterpriseOne server container, 16 to each WebServer container, and 16 to the default set. This was done to improve performance by using the physical memory closest to the processors, thereby reducing memory access latency and processor cross calls. The default processor set was used for network and disk interrupt handling.

  • The applications were executed in the FX scheduling class to improve performance by reducing the frequency of context switches.

  • A WebLogic vertical cluster was configured on each WebServer container, with seven managed instances each, to load balance users' requests and to provide an infrastructure that scales to a high number of users with ease of deployment and high availability.

  • The database server was run in an Oracle Solaris Container hosted on the Oracle's SPARC Enterprise M3000 server.

See Also

Disclosure Statement

Copyright 2011, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 2/16/2011.

Friday Sep 24, 2010

SPARC T3-2 sets World Record on SPECjvm2008 Benchmark

World Record SPECjvm2008 Result

Oracle's SPARC T3-2 server equipped with two SPARC T3 processors running at 1.65 GHz delivered a World Record result of 320.52 SPECjvm2008 Peak ops/m on the SPECjvm2008 benchmark.
  • The SPARC T3-2 server demonstrates better performance than servers equipped with 2 Intel Xeon processors.

  • This result was produced using Oracle Solaris 10 and Oracle JDK 6 Update 21 Performance Release.

  • There are no SPECjvm2008 results published by IBM on POWER7 based systems.

  • The SPARC T3-2 server demonstrates Oracle's position of leadership in Java-based computing by publishing world record results for the SPECjvm2008 benchmark.

Performance Landscape

Complete benchmark results are at the SPECjvm2008 website.

SPECjvm2008 Performance Chart
(ordered by performance)
System           Processors                Base    Peak
SPARC T3-2       2 x 1.65 GHz SPARC T3     -       320.52
Sun Blade X6270  2 x 2.93 GHz Intel X5570  317.13  -
Sun Fire X4450   4 x 2.66 GHz Intel X7450  283.79  -
Sun Fire X4450   4 x 2.93 GHz Intel X7350  260.08  -

base: SPECjvm2008 Base ops/m (bigger is better)
peak: SPECjvm2008 Peak ops/m (bigger is better)

Results and Configuration Summary

Hardware Configuration:

SPARC T3-2 server
2 x 1.65 GHz SPARC T3 processors
256 GB memory

Software Configuration:

Oracle Solaris 10 9/10
Java Platform, Standard Edition, JDK 6 Update 21 Performance Release

Benchmark Description

SPECjvm2008 (Java Virtual Machine Benchmark) is a benchmark suite for measuring the performance of a Java Runtime Environment (JRE), containing several real life applications and benchmarks focusing on core java functionality. The suite focuses on the performance of the JRE executing a single application; it reflects the performance of the hardware processor and memory subsystem, but has low dependence on file I/O and includes no network I/O across machines.

The SPECjvm2008 workload mimics a variety of common general purpose application computations. These characteristics reflect the intent that this benchmark will be applicable to measuring basic Java performance on a wide variety of both client and server systems.

SPEC also finds user experience of Java important, and the suite, therefore, includes startup benchmarks and has a required run category called base, which must be run without any tuning of the JVM to improve the out of the box performance.

SPECjvm2008 benchmark highlights:

  • Leverages real life applications (like derby, sunflow, and javac) and area-focused benchmarks (like xml, serialization, crypto, and scimark).
  • Also measures the performance of the operating system and hardware in the context of executing the JRE.
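The composite ops/m figure aggregates the individual sub-benchmark scores; SPEC uses geometric means for this aggregation so that no single workload dominates the result. A sketch with hypothetical sub-scores (the real composite is computed hierarchically over benchmark categories, so this is a simplification):

```python
import math

# Hypothetical per-benchmark scores in ops/m (not published figures).
scores = {"derby": 250.0, "sunflow": 180.0, "javac": 320.0,
          "xml": 410.0, "crypto": 500.0, "scimark": 150.0}

def geomean(values):
    """Geometric mean: the n-th root of the product of n values."""
    values = list(values)
    return math.exp(sum(math.log(v) for v in values) / len(values))

print(round(geomean(scores.values()), 1))
```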

Key Points and Best Practices

  • Enhancements to the JVM had a major impact on performance

See Also

Disclosure Statement

SPEC and SPECjvm are registered trademarks of Standard Performance Evaluation Corporation. Results from www.spec.org and this report as of 9/16/2010. SPARC T3-2 320.52 SPECjvm2008 Peak ops/m, Sun Blade X6270 317.13 SPECjvm2008 Base ops/m.

Monday Sep 20, 2010

SPARC T3-4 Sets World Record Single Server Result on SPECjEnterprise2010 Benchmark

World Record Single Application Server System Performance

Oracle produced a single application server world record SPECjEnterprise2010 benchmark result using Oracle's SPARC T3-4 server for the application server and Oracle's SPARC T3-2 server for the database server.
  • A SPARC T3-4 server paired with a SPARC T3-2 server delivered a result of 9456.28 SPECjEnterprise2010 EjOPS for the SPECjEnterprise benchmark.

  • The SPARC T3-4 server running at 1.65 GHz demonstrated 32% better performance compared to the IBM Power 750 system result of 7172.93 SPECjEnterprise2010 EjOPS which used four POWER7 chips running at 3.55 GHz.

  • The 4-socket SPARC T3-4 server was 32% faster than the 4-socket IBM Power 750 system, showing that per-core comparisons are a poor predictor of overall system performance.

  • The SPARC T3-4 server has 5% better computational density than the IBM Power 750 system.

  • The SPARC T3-4 server running SPARC T3 processors at 1.65 GHz demonstrated 84% better performance compared to the IBM x3850 X5 system result of 5140.53 SPECjEnterprise2010 EjOPS using four Intel Xeon chips at 2.26 GHz.

  • The SPARC T3-4 server has 47% better computational density than the IBM x3850 X5 system.

  • This world record result was achieved using Oracle Weblogic 10.3.3 application server and Oracle Database 11g R2.

  • Oracle Fusion Middleware provides a family of complete, integrated, hot-pluggable and best-of-breed products known for enabling enterprise customers to create and run agile and intelligent business applications. The Oracle WebLogic Server's ongoing, record-setting Java application server performance demonstrates why so many customers rely on Oracle Fusion Middleware as their foundation for innovation.

  • To obtain this leading result a number of Oracle technologies were used: Oracle Solaris 10, Oracle Solaris Containers, Oracle Java Hotspot VM, Oracle Weblogic, Oracle Database 11gR2, SPARC T3-4 server, and SPARC T3-2 server.

  • The SPARC T3-4 server demonstrated average response times of less than 1 second for all SPECjEnterprise2010 transactions, and 90% of all individual transactions completed in less than 1 second.

  • The two T-series systems occupied a total of 16 RU of space. This is less than half of the 37 RU of space used in the IBM Power 750 system result of 7172.93 SPECjEnterprise2010 EjOPS.

  • The SPARC T3-4 server result only used 61% of floor space compared to the IBM x3850 X5 system result of 5140.53 SPECjEnterprise2010 EjOPS which requires 26 RU of space.
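
The report's 90% response-time criterion can be checked directly from raw response times. A minimal sketch (the sample times below are hypothetical, not benchmark data):

```python
# Hypothetical raw transaction response times in seconds (not benchmark data).
times = [0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.95, 0.98, 1.4]

def fraction_under(times, threshold):
    """Fraction of transactions completing under the given threshold."""
    return sum(t < threshold for t in times) / len(times)

# The criterion: 90% of transaction times under 1 second.
print(fraction_under(times, 1.0) >= 0.9)  # → True
```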

Performance Landscape

Complete benchmark results are at the SPEC website, SPECjEnterprise2010 Results.

SPECjEnterprise2010 Performance Chart
as of 9/20/2010
Submitter  EjOPS*   Application Server                 Database Server
Oracle     9456.28  1 x Oracle SPARC T3-4              1 x Oracle SPARC T3-2
                    4 x 1.65 GHz SPARC T3              2 x 1.65 GHz SPARC T3
                    Oracle WebLogic 10.3.3             Oracle Database 11g (11.2.0.1)
IBM        7172.93  1 x IBM Power 750 Express          1 x IBM BladeCenter PS702
                    4 x 3.55 GHz POWER7                2 x 3.0 GHz POWER7
                    WebSphere Application Server V7.0  IBM DB2 Universal Database 9.7
IBM        5140.53  1 x IBM x3850 X5                   1 x IBM x3850 X5
                    4 x 2.26 GHz Intel X7560           4 x 2.26 GHz Intel X7560
                    WebSphere Application Server V7.0  IBM DB2 Universal Database 9.7

* SPECjEnterprise2010 EjOPS; bigger is better.

Results and Configuration Summary

Application Server:

1 x Oracle SPARC T3-4 server
4 x 1.65 GHz SPARC T3 processors
256 GB memory
4 x 10GbE NIC
Oracle Solaris 10 9/10
Oracle Solaris Containers
Oracle WebLogic 10.3.3 Application Server - Standard Edition
Oracle Fusion Middleware
Oracle Java SE, JDK 6 Update 21

Database Server:

1x Oracle SPARC T3-2
2 x 1.65 GHz SPARC T3 processors
256 GB memory
2 x 10GbE NIC
2 x Sun Storage 6180 Array
Oracle Solaris 10 9/10
Oracle Database Enterprise Edition Release 11.2.0.1

Benchmark Description

The SPECjEnterprise2010™ benchmark is a full system benchmark which allows performance measurement and characterization of Java EE 5.0 servers and supporting infrastructure such as JVM, Database, CPU, disk and servers.

The workload consists of an end-to-end web-based order processing domain, an RMI and Web Services driven manufacturing domain, and a supply chain model utilizing document-based Web Services. The application is a collection of Java classes, Java Servlets, JavaServer Pages, Enterprise JavaBeans, Java Persistence entities (POJOs) and Message-Driven Beans.

SPECjEnterprise2010 is the third generation of the SPEC organization's J2EE end-to-end industry standard benchmark application. The new SPECjEnterprise2010 benchmark has been re-designed and developed to cover the JEE 5.0 specification's significantly expanded and simplified programming model, highlighting the major features used by developers in the industry today. This provides a real world workload driving the Application Server's implementation of the Java EE specification to its maximum potential and allowing maximum stressing of the underlying hardware and software systems.

SPEC has paid particular attention to making this benchmark as easy as possible to install and run. This has been achieved by utilizing simplification features of the Java EE 5.0 platform, such as annotations and sensible defaulting, and by the use of the open-source Faban facility for developing and running the benchmark driver.

SPECjEnterprise2010's new design spans Java EE 5.0, including the new EJB 3.0 and WSEE component architectures, Message-Driven Beans, and Java EE transactions.

Key Points and Best Practices

  • Eight Oracle WebLogic server instances on the SPARC T3-4 server were hosted in 8 separate Oracle Solaris Containers to demonstrate consolidation of multiple application servers.

  • Each Oracle Solaris container was bound to a separate processor set, each containing 7 cores. This was done to improve performance by using the physical memory closest to the processors, thereby, reducing memory access latency. The default processor set was used for network and disk interrupt handling.

  • The Oracle WebLogic application servers were executed in the FX scheduling class to improve performance by reducing the frequency of context switches.

  • The Oracle database processes were run in 2 processor sets using the Oracle Solaris psrset utility and executed in the FX scheduling class. These were done to improve performance by reducing memory access latency and by reducing context switches.

  • The Oracle Log Writer process was run in a separate processor set containing 1 core and run in the RT scheduling class. This was done to ensure that the Log Writer had the most efficient use of CPU resources.

See Also

Disclosure Statement

SPEC is a registered trademark and SPECjEnterprise is a trademark of Standard Performance Evaluation Corporation. Results from www.spec.org as of 9/20/2010. SPARC T3-4 9456.28 SPECjEnterprise2010 EjOPS. IBM Power 750 Express 7,172.93 SPECjEnterprise2010 EjOPS. IBM System x3850 X5 5,140.53 SPECjEnterprise2010 EjOPS.

IBM Power 750 Express (4RU each).
IBM BladeCenter H Chassis (9RU each).
IBM System x3850 X5 (4RU each).
IBM DS4800 Disk System Model 82 (4RU each).
IBM DS4000 EXP810 (3RU each).

http://www-03.ibm.com/systems/power/hardware/750/index.html
http://www-03.ibm.com/systems/x/hardware/enterprise/x3850x5/index.html
http://www-03.ibm.com/systems/bladecenter/hardware/chassis/bladeh/index.html
http://www-900.ibm.com/storage/cn/disk/ds4000/ds4800/TSD01054USEN.pdf
http://www-947.ibm.com/support/entry/portal/docdisplay?lndocid=MIGR-59552&brandind=5000028
http://www-03.ibm.com/systems/storage/disk/ds4000/exp810/

Tuesday Jun 29, 2010

Sun Blade X6270 M2 Sets World Record on SPECjbb2005 Benchmark

Oracle's Sun Blade X6270 M2, equipped with two 3.33 GHz Intel Xeon X5680 processors, obtained a result of 931637 SPECjbb2005 bops, 465819 SPECjbb2005 bops/JVM on the SPECjbb2005 benchmark. This is a world record result for 2-socket servers with Intel Xeon 5600 series x86 processors.

  • This result was obtained on the Sun Blade X6270 M2 using Microsoft Windows 2008 R2 and Java HotSpot(TM) 64-Bit Server VM on Windows, version 1.6.0_21 Performance Release.

Performance Landscape

SPECjbb2005 Performance Chart (ordered by performance)

bops: SPECjbb2005 Business Operations per Second (bigger is better)

System              Processor             JVM                    bops    bops/JVM
Sun Blade X6270 M2  Intel X5680 3.33 GHz  Java HotSpot 1.6.0_21  931637  465819
Cisco UCS B200 M2   Intel X5680 3.33 GHz  IBM J9 VM 2.4          931076  155179
Fujitsu TX300 S6    Intel X5680 3.33 GHz  IBM J9 VM 2.4          928393  154732
IBM x3500 M3        Intel X5680 3.33 GHz  IBM J9 VM 2.4          916251  152709

Complete benchmark results may be found at the SPEC benchmark website http://www.spec.org.

Results and Configuration Summary

Hardware Configuration:

Sun Blade X6270 M2
2 x Intel Xeon X5680 3.33 GHz processors
48 GB of memory

Software Configuration:

Windows Server 2008 R2 Enterprise Edition
Java HotSpot(TM) 64-Bit Server VM on Windows, version 1.6.0_21 Performance Release

Benchmark Description

SPECjbb2005 (Java Business Benchmark) measures the performance of a Java implemented application tier (server-side Java). The benchmark is based on the order processing in a wholesale supplier application. The performance of the user tier and the database tier are not measured in this test. The metrics given are number of SPECjbb2005 bops (Business Operations per Second) and SPECjbb2005 bops/JVM (bops per JVM instance).
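
The two reported metrics are related by simple arithmetic: total bops is the sum over JVM instances, and bops/JVM divides that total by the instance count. A sketch with hypothetical per-JVM throughputs (not the published figures):

```python
# Hypothetical per-JVM throughputs (bops) from a two-instance run;
# not the published figures.
per_jvm = [465820, 465818]

total_bops = sum(per_jvm)                 # reported as SPECjbb2005 bops
bops_per_jvm = total_bops / len(per_jvm)  # reported as SPECjbb2005 bops/JVM

print(total_bops, bops_per_jvm)  # → 931638 465819.0
```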

Key Points and Best Practices

  • Enhancements to the JVM had a major impact on performance.

See Also

Disclosure Statement

SPEC, SPECjbb reg tm of Standard Performance Evaluation Corporation. Results as of 6/28/2010 on www.spec.org. Sun Blade X6270 M2(2 chips, 12 cores) 931637 SPECjbb2005 bops, 465819 SPECjbb2005 bops/JVM submitted for review. Cisco UCS B200 M2(2 chips, 12 cores) 931076 SPECjbb2005 bops, 155179 SPECjbb2005 bops/JVM. Fujitsu TX300 S6(2 chips, 12 cores) 928393 SPECjbb2005 bops, 154732 SPECjbb2005 bops/JVM. IBM x3500 M3(2 chips, 12 cores) 916251 SPECjbb2005 bops, 152709 SPECjbb2005 bops/JVM.

Monday Jun 28, 2010

Sun Fire X4270 M2 Sets World Record on SPECjbb2005 Benchmark

Oracle's Sun Fire X4270 M2, equipped with two 3.33 GHz Intel Xeon X5680 processors, obtained a result of 812,358 SPECjbb2005 bops, 812,358 SPECjbb2005 bops/JVM on the SPECjbb2005 benchmark. This is the best x86-based system result using a single JVM.
  • This result was obtained on the Sun Fire X4270 M2 using Oracle Solaris 10 10/09 and Java HotSpot(TM) 64-Bit Server VM on Solaris, version 1.6.0_21 Performance Release.

Performance Landscape

SPECjbb2005 Performance Chart (ordered by performance)

bops: SPECjbb2005 Business Operations per Second (bigger is better)

System             Processor             JVM                    bops     bops/JVM
Sun Fire X4270 M2  Intel X5680 3.33 GHz  Java HotSpot 1.6.0_21  812,358  812,358
HP DL380 G6        Intel X5570 2.93 GHz  Java HotSpot 1.6.0_14  509,962  509,962
Sun Blade X6270    Intel X5570 2.93 GHz  Java HotSpot 1.6.0_14  503,675  503,675

Complete benchmark results may be found at the SPEC benchmark website http://www.spec.org.

Results and Configuration Summary

Hardware Configuration:

Sun Fire X4270 M2
2 x Intel Xeon X5680, 3.33 GHz processors
48 GB of memory

Software Configuration:

Oracle Solaris 10 10/09
Java HotSpot(TM) 64-Bit Server VM on Solaris, version 1.6.0_21 Performance Release

Benchmark Description

SPECjbb2005 (Java Business Benchmark) measures the performance of a Java implemented application tier (server-side Java). The benchmark is based on the order processing in a wholesale supplier application. The performance of the user tier and the database tier are not measured in this test. The metrics given are number of SPECjbb2005 bops (Business Operations per Second) and SPECjbb2005 bops/JVM (bops per JVM instance).

Key Points and Best Practices

  • Enhancements to the JVM had a major impact on performance

See Also

Disclosure Statement

SPEC, SPECjbb reg tm of Standard Performance Evaluation Corporation. Results as of 6/28/2010 on www.spec.org. Sun Fire X4270 M2 812358 SPECjbb2005 bops, 812358 SPECjbb2005 bops/JVM submitted for review; HP DL380 G6, 509,962 SPECjbb2005 bops, 509,962 SPECjbb2005 bops/JVM; Sun Blade X6270, 503,675 SPECjbb2005 bops, 503,675 SPECjbb2005 bops/JVM.

Sun Fire X4800 Sets World Record on SPECjbb2005 Benchmark

Oracle's Sun Fire X4800, equipped with eight 2.26 GHz Intel Xeon X7560 processors, obtained a result of 3369694 SPECjbb2005 bops, 421212 SPECjbb2005 bops/JVM on the SPECjbb2005 benchmark. This is the best result for 8-socket servers with Intel Xeon 7500 series x86 processors.
  • This result was obtained on the Sun Fire X4800 using Oracle Solaris 10 10/09 and Java HotSpot(TM) 64-Bit Server VM on Solaris, version 1.6.0_21 Performance Release.

Performance Landscape

SPECjbb2005 Performance Chart (ordered by performance)

bops: SPECjbb2005 Business Operations per Second (bigger is better)

System          Processor             JVM                       bops     bops/JVM
Sun Fire X4800  Intel X7560 2.26 GHz  Java HotSpot 1.6.0_21     3369694  421212
NEC 5800        Intel X7560 2.26 GHz  Oracle JRockit 6 P28.0.0  3343714  417964
Fujitsu 1800E   Intel X7560 2.26 GHz  Oracle JRockit 6 P28.0.0  3321826  415228

Complete benchmark results may be found at the SPEC benchmark website http://www.spec.org.

Results and Configuration Summary

Hardware Configuration:

Sun Fire X4800
8 x Intel Xeon X7560, 2.26 GHz processors
512 GB of memory

Software Configuration:

Oracle Solaris 10 10/09
Java HotSpot(TM) 64-Bit Server VM on Solaris, version 1.6.0_21 Performance Release

Benchmark Description

SPECjbb2005 (Java Business Benchmark) measures the performance of a Java implemented application tier (server-side Java). The benchmark is based on the order processing in a wholesale supplier application. The performance of the user tier and the database tier are not measured in this test. The metrics given are number of SPECjbb2005 bops (Business Operations per Second) and SPECjbb2005 bops/JVM (bops per JVM instance).

Key Points and Best Practices

  • Enhancements to the JVM had a major impact on performance

See Also

Disclosure Statement

SPEC, SPECjbb reg tm of Standard Performance Evaluation Corporation. Results as of 6/28/2010 on www.spec.org. Sun Fire X4800(8 chips, 64 cores) 3369694 SPECjbb2005 bops, 421211 SPECjbb2005 bops/JVM submitted for review. NEC 5800(8 chips, 64 cores) 3343714 SPECjbb2005 bops, 417964 SPECjbb2005 bops/JVM. Fujitsu 1800E(8 chips, 64 cores) 3321826 SPECjbb2005 bops, 415228 SPECjbb2005 bops/JVM.

Thursday Nov 19, 2009

SPECmail2009: New World Record on Sun SPARC Enterprise T5240 (1.6 GHz) with Sun Storage 7310 and ZFS

The Sun SPARC Enterprise T5240 server running the Sun Java Messaging Server 7.2 achieved a world record SPECmail2009 result using the Sun Storage 7310 Unified Storage System and the ZFS file system. Sun's OpenStorage platforms enable another world record.

  • World record SPECmail2009 benchmark using the Sun SPARC Enterprise T5240 server (two 1.6GHz UltraSPARC T2 Plus), Sun Communications Suite 7, Solaris 10, and the Sun Storage 7310 Unified Storage System achieved 14,500 SPECmail_Ent2009 users at 69,857 Sessions/Hour.

  • This SPECmail2009 benchmark result clearly demonstrates that the Sun Messaging Server 7.2, Solaris 10 and ZFS solution can support a large, enterprise level IMAP mail server environment as a low cost 'Sun on Sun' solution, delivering the best performance and maximizing data integrity and availability of Sun Open Storage and ZFS.

  • The Sun SPARC Enterprise T5240 server supported 2.4 times more users, with a 2.4 times better sessions/hour rate, than the Apple Xserv3,1 solution on the SPECmail2009 benchmark.

  • There are no IBM Power6 results on this benchmark.

  • The configuration using Sun OpenStorage outperformed all previous results, which used traditional direct-attached storage with a significantly higher number of disk devices.
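
The 2.4x comparison against the Apple system follows directly from the published numbers:

```python
# Published results: T5240 at 14,500 users / 69,857 sessions per hour;
# Apple Xserv3,1 at 6,000 users / 28,887 sessions per hour.
users_ratio = 14500 / 6000
sessions_ratio = 69857 / 28887

print(round(users_ratio, 1), round(sessions_ratio, 1))  # → 2.4 2.4
```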

SPECmail2009 Performance Landscape (ordered by performance)

System                            Users   Sessions/hour  Disks   OS          Messaging Server
Sun SPARC Enterprise T5240        14,500  69,857         58 NAS  Solaris 10  CommSuite 7.2 (Sun JMS 7.2)
  2 x 1.6 GHz UltraSPARC T2 Plus
Sun SPARC Enterprise T5240        12,000  57,758         80 DAS  Solaris 10  CommSuite 5 (Sun JMS 6.3)
  2 x 1.6 GHz UltraSPARC T2 Plus
Sun Fire X4275                     8,000  38,348         44 NAS  Solaris 10  Sun JMS 6.2
  2 x 2.93 GHz Xeon X5570
Apple Xserv3,1                     6,000  28,887         82 DAS  MacOS 10.6  Dovecot 1.1.14 apple 0.5
  2 x 2.93 GHz Xeon X5570
Sun SPARC Enterprise T5220         3,600  17,316         52 DAS  Solaris 10  Sun JMS 6.2
  1 x 1.4 GHz UltraSPARC T2

Complete benchmark results may be found at the SPEC benchmark website http://www.spec.org

Users - SPECmail_Ent2009 Users
Sessions/hour - SPECmail2009 Sessions/hour
NAS - Network Attached Storage
DAS - Direct Attached Storage

Results and Configuration Summary

Hardware Configuration:

    Sun SPARC Enterprise T5240
      2 x 1.6 GHz UltraSPARC T2 Plus processors
      128 GB memory
      2 x 146GB, 10K RPM SAS disks, 4 x 32GB SSDs

External Storage:

    2 x Sun Storage 7310 Unified Storage System, each with
      32 GB of memory
      24 x 1 TB 7200 RPM SATA Drives

Software Configuration:

    Solaris 10
    ZFS
    Sun Java Communications Suite 7 Update 2
      Sun Java System Messaging Server 7.2
      Directory Server 6.3

Benchmark Description

The SPECmail2009 benchmark measures the ability of corporate e-mail systems to meet today's demanding e-mail users over fast corporate local area networks (LAN). The SPECmail2009 benchmark simulates corporate mail server workloads that range from 250 to 10,000 or more users, using industry standard SMTP and IMAP4 protocols. This e-mail server benchmark creates client workloads based on a 40,000 user corporation, and uses folder and message MIME structures that include both traditional office documents and a variety of rich media content. The benchmark also adds support for encrypted network connections using industry standard SSL v3.0 and TLS 1.0 technology. SPECmail2009 replaces all versions of SPECmail2008, first released in August 2008. The results from the two benchmarks are not comparable.

Software on one or more client machines generates a benchmark load for a System Under Test (SUT) and measures the SUT response times. A SUT can be a mail server running on a single system or a cluster of systems.

A SPECmail2009 'run' simulates a 100% load level associated with the specific number of users, as defined in the configuration file. The mail server must maintain a specific Quality of Service (QoS) at the 100% load level to produce a valid benchmark result. If the mail server does maintain the specified QoS at the 100% load level, the performance of the mail server is reported as SPECmail_Ent2009 SMTP and IMAP Users at SPECmail2009 Sessions per hour. The SPECmail_Ent2009 users at SPECmail2009 Sessions per Hour metric reflects the unique workload combination for a SPEC IMAP4 user.
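
The reported metric pairs a user count with a sessions/hour rate, so the implied per-user session rate follows by division. Using the published T5240 figures:

```python
def sessions_per_user_hour(sessions_per_hour, users):
    """Implied average number of sessions each user generates per hour."""
    return sessions_per_hour / users

# Published T5240 result: 14,500 users at 69,857 sessions/hour.
print(round(sessions_per_user_hour(69857, 14500), 2))  # → 4.82
```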

Key Points and Best Practices

  • Each Sun Storage 7310 Unified Storage System was configured with one J4400 JBOD array: 22 x 1 TB SATA drives were combined into a mirrored device, and 4 shared volumes were built under the mirrored device. In total, 8 mirrored volumes from the 2 x Sun Storage 7310 systems were mounted on the system under test (SUT) for the messaging mail indexes and mail message file systems, using the NFSv4 protocol. Four SSDs were used as the SUT's internal disks, each configured as a ZFS file system; these ZFS directories held the messaging server queue, store metadata, and LDAP data. The SSDs substantially reduced the store metadata and queue latencies.

  • Each Sun Storage 7310 Unified Storage System was connected to the SUT via a dual 10-Gigabit Ethernet Fiber XFP card.

  • The Sun Storage 7310 Unified Storage System software version is 2009.08.11,1-0.

  • The clients used these Java options: java -d64 -Xms4096m -Xmx4096m -XX:+AggressiveHeap

  • Substantial performance improvement and scalability were observed with Sun Communications Suite 7 Update 2, Java Messaging Server 7.2 and Directory Server 6.2.

  • See the SPEC Report for all OS, network and messaging server tunings.

See Also

Disclosure Statement

SPEC, SPECmail reg tm of Standard Performance Evaluation Corporation. Results as of 10/22/09 on www.spec.org. SPECmail2009: Sun SPARC Enterprise T5240, SPECmail_Ent2009 14,500 users at 69,857 SPECmail2009 Sessions/hour. Apple Xserv3,1, SPECmail_Ent2009 6,000 users at 28,887 SPECmail2009 Sessions/hour.

Tuesday Oct 13, 2009

SPECweb2005 on Sun SPARC Enterprise T5440 World Record using Solaris Containers and Sun Storage F5100 Flash

The Sun SPARC Enterprise T5440 server, with 1.6 GHz UltraSPARC T2 Plus processors, Solaris Containers, Sun OpenStorage flash technology, and Sun Java System Web Server 7.0 Update 5, achieved a world record SPECweb2005 result.
  • Sun has obtained a world record SPECweb2005 result of 100,209 on the Sun SPARC Enterprise T5440, running Solaris 10 10/09, Sun Java System Web Server 7.0 Update 5, and the Java HotSpot™ Server VM.

  • This result demonstrates performance leadership of the Sun SPARC Enterprise T5440 server and its scalability, by using Solaris Containers to consolidate multiple web serving environments, and Sun OpenStorage Flash technology to store large datasets for fast data retrieval.

  • The Sun SPARC Enterprise T5440 delivers 21% greater SPECweb2005 performance than the HP DL370 G6 with 3.2GHz Xeon W5580 processors.

  • The Sun SPARC Enterprise T5440 delivers 40% greater SPECweb2005 performance than the HP DL585 G5 with four 3.1 GHz Opteron 8393 SE processors.

  • The Sun SPARC Enterprise T5440 delivers 2x the SPECweb2005 performance of the HP DL580 G5 with four 2.66 GHz Xeon X7460 processors.

  • There are no IBM POWER6 results on the SPECweb2005 benchmark.

  • This benchmark result clearly demonstrates that the Sun SPARC Enterprise T5440 running Solaris 10 10/09 and Sun Java System Web Server 7.0 Update 5 can support thousands of concurrent web server sessions, making it an industry leader in web serving.

Performance Landscape

Server       Processor       SPECweb2005  Banking*  Ecomm*   Support*  Webserver       OS
Sun T5440    4x 1.6 T2 Plus  100,209      176,500   133,000  95,000    Java WebServer  Solaris
HP DL370 G6  2x 3.2 W5580    83,073       117,120   142,080  76,352    Rock            RedHat Linux
HP DL585 G5  4x 3.11 O8393   71,629       117,504   123,072  56,320    Rock            RedHat Linux
HP DL580 G5  4x 2.66 X7460   50,013       97,632    69,600   40,800    Rock            RedHat Linux

* Banking - SPECweb2005-Banking
  Ecomm   - SPECweb2005-Ecommerce
  Support - SPECweb2005-Support

Results and Configuration Summary

Hardware Configuration:

  1 Sun SPARC Enterprise T5440 with

  • 4 x UltraSPARC T2 Plus processors, 8 cores, 64 threads each, 1.6 GHz
  • 254 GB memory
  • 6 x 4Gb PCI Express 8-Port Host Adapter (SG-XPCIE8SAS-E-Z)
  • 1 x Sun Storage F5100 Flash Array (TA5100RASA4-80AA)
  • 1 x Sun Storage F5100 Flash Array (TA5100RASA4-40AA)

Server Software Configuration:

  • Solaris 10 10/09
  • JAVA System Web Server 7.0 Update 5
  • Java Hotspot™ Server VM

Network configuration:

  • 1 x Arista DCS-7124s 24-port 10GbE switch
  • 1 x Cisco 2970 series (WS-C2970G-24TS-E) switch for the three 1 GbE networks

Back-end Simulator:

  1 Sun Fire X4270 with

  • 2 x 2.93 GHz Intel X5570 Quad core
  • 48GB memory
  • Solaris 10 10/09
  • JSWS 7.0 Update 5
  • Java Hotspot™ Server VM

Clients:

  8 Sun Blade™ T6320

  • 1 x 1.417 GHz UltraSPARC-T2
  • 64 GB memory
  • Solaris 10 5/09
  • Java Hotspot™ Server VM

  8 Sun Blade™ 6270

  • 2 x 2.93 GHz Intel X5570 Quad core
  • 36 GB memory
  • Solaris 10 5/09
  • Java Hotspot™ Server VM

Benchmark Description

SPECweb2005, successor to SPECweb99 and SPECweb99_SSL, is an industry standard benchmark for evaluating Web Server performance developed by SPEC. The benchmark simulates multiple user sessions accessing a Web Server and generating static and dynamic HTTP requests. The major features of SPECweb2005 are:

  • Measures simultaneous user sessions
  • Dynamic content: currently PHP and JSP implementations
  • Page images requested using 2 parallel HTTP connections
  • Multiple, standardized workloads: Banking (HTTPS), E-commerce (HTTP and HTTPS), and Support (HTTP)
  • Simulates browser caching effects
  • File accesses more accurately simulate today's disk access patterns

Key Points and Best Practices

  • The server was divided into four Solaris Containers and a single web server instance was executed in each container.
  • Four processor sets were created (with varying numbers of threads depending on the workload) to run the web server in. This was done to reduce memory access latency using the physical memory closest to the processor.  All interrupts were run on the remaining threads.
  • Each web server is executed in the FX scheduling class to improve performance by reducing the frequency of context switches.
  • Two Sun Storage F5100 Flash Arrays (holding the target file set and logs) were shared by the four containers  for fast data retrieval.   
  • Use of Solaris Containers highlights the consolidation of multiple web serving environments on a single server.
  • Use of the Sun Ext I/O Expansion unit and Sun Storage F5100 Flash Arrays highlight the expandability of the server.
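The container, processor-set, and FX-class tuning above can be outlined with standard Solaris commands. This is an illustrative sketch, not the submitted configuration: the zone name, CPU ids, set id, and process name are hypothetical, and the SPEC report has the actual tunings.

```shell
# Hypothetical sketch of the tuning steps described above (Solaris).
# Create a processor set from the hardware threads closest to one chip's
# memory; psrset prints the new set id (assume 1 here).
psrset -c 0 1 2 3 4 5 6 7

# Keep interrupts off this set so they run on the remaining threads.
psrset -f 1

# Bind the web server processes of one container (zone "web1") to the set,
# then move them into the fixed-priority (FX) scheduling class.
psrset -b 1 $(pgrep -z web1 webservd)
priocntl -s -c FX -i pid $(pgrep -z web1 webservd)
```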

    Disclosure Statement

    Sun SPARC Enterprise T5440 (32 cores, 4 chips) 100,209 SPECweb2005, was submitted to SPEC for review on October 13, 2009. HP ProLiant DL370 G6 (8 cores, 2 chips) 83,073 SPECweb2005. HP ProLiant DL585 G5 (16 cores, 4 chips) 71,629 SPECweb2005. HP ProLiant DL580 G5 (24 cores, 4 chips) 50,013 SPECweb2005. SPEC, SPECweb reg tm of Standard Performance Evaluation Corporation. Results from www.spec.org as of Oct 10, 2009.

    Thursday Aug 27, 2009

    Sun SPARC Enterprise T5240 with 1.6GHz UltraSPARC T2 Plus Beats 4-Chip IBM Power 570 POWER6 System on SPECjbb2005

    Significance of Results

    A Sun SPARC Enterprise T5240 server equipped with two UltraSPARC T2 Plus processors at 1.6GHz delivered a result of 422782 SPECjbb2005 bops, 26424 SPECjbb2005 bops/JVM. The Sun SPARC Enterprise T5240 consumed an average of 875 Watts of power during the execution of the benchmark.

    • The Sun SPARC Enterprise T5240 server, running two 1.6 GHz UltraSPARC T2 Plus processors, delivered 5% better performance than an IBM Power 570 with four 4.7 GHz POWER6 processors, as measured by the SPECjbb2005 benchmark.

    • The Sun SPARC Enterprise T5240 server equipped with two UltraSPARC T2 Plus processors at 1.6GHz demonstrated 10% better performance than the Sun SPARC Enterprise T5240 server equipped with two UltraSPARC T2 Plus processors at 1.4GHz.
    • One Sun SPARC Enterprise T5240 (two 1.6 GHz UltraSPARC T2 Plus chips, 2RU) has 2.3 times the power/performance of the IBM Power 570 (8RU) that used four 4.7 GHz POWER6 chips.
    • The Sun SPARC Enterprise T5240 used OpenSolaris 2009.06 and the Sun JDK 1.6.0_14 Performance Release to obtain this result.
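    The power/performance bullet above can be checked with simple arithmetic. The T5240 figures (422782 bops at an average 875 W) are from this post; the IBM wattage below is derived from the stated 2.3x ratio rather than measured here (IBM power was estimated as 80% of maximum input power).

```python
# Perf/watt arithmetic behind the 2.3x power/performance claim above.
# The IBM side is *derived* from the claimed ratio, not a measured value.
t5240_bops_per_watt = 422782 / 875        # ~483 bops/W (measured 875 W)
implied_ibm_bops_per_watt = t5240_bops_per_watt / 2.3   # ~210 bops/W
implied_ibm_watts = 402923 / implied_ibm_bops_per_watt  # ~1918 W
print(round(t5240_bops_per_watt), round(implied_ibm_watts))  # -> 483 1918
```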

    Performance Landscape

    SPECjbb2005 Performance Chart (ordered by performance), select results presented.

    bops : SPECjbb2005 Business Operations per Second (bigger is better)

    System                      Chips  Cores  Threads  GHz  Type                bops    bops/JVM
    Sun SPARC Enterprise T5240  2      16     128      1.6  UltraSPARC T2 Plus  422782  26424
    IBM Power 570               4      8      16       4.7  POWER6              402923  100731
    Sun SPARC Enterprise T5240  2      16     128      1.4  UltraSPARC T2 Plus  384934  24058

    Complete benchmark results may be found at the SPEC benchmark website http://www.spec.org.

    Results and Configuration Summary

    Hardware Configuration:

      Sun SPARC Enterprise T5240
        2 x 1.6 GHz UltraSPARC T2 Plus processors
        64 GB

    Software Configuration:

      OpenSolaris 2009.06
      Java HotSpot(TM) 32-Bit Server, Version 1.6.0_14 Performance Release

    Benchmark Description

    SPECjbb2005 (Java Business Benchmark) measures the performance of a Java implemented application tier (server-side Java). The benchmark is based on the order processing in a wholesale supplier application. The performance of the user tier and the database tier are not measured in this test. The metrics given are number of SPECjbb2005 bops (Business Operations per Second) and SPECjbb2005 bops/JVM (bops per JVM instance).
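    The relationship between the two reported metrics is plain division: for this T5240 result, 16 JVM instances were run (one per core, per the key points below), so the per-JVM figure is the aggregate divided by 16.

```python
# The per-JVM metric is the aggregate bops divided by the number of JVM
# instances -- 16 here, one JVM bound to each of the T5240's 16 cores.
total_bops = 422782
jvms = 16
print(round(total_bops / jvms))   # -> 26424, the reported bops/JVM
```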

    Key Points and Best Practices

    • Each JVM executed in the FX scheduling class to improve performance by reducing the frequency of context switches.
    • Each JVM was bound to a separate processor containing 1 core to reduce memory access latency using the physical memory closest to the processor.
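    As a hedged illustration of the two points above (the CPU ids, heap sizes, and property file name are assumptions, not the submitted settings), one JVM per core might be launched like this on Solaris:

```shell
# Illustrative only -- not the submitted SPECjbb2005 configuration.
# Create a processor set over one core's 8 hardware threads; psrset
# prints the new set id (assume 1 here).
psrset -c 0 1 2 3 4 5 6 7

# Run one JVM inside that set, then move it into the FX scheduling class
# to reduce context switches.
psrset -e 1 java -Xms3g -Xmx3g spec.jbb.JBBmain -propfile SPECjbb.props &
priocntl -s -c FX -i pid $!
```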

    See Also

    Disclosure Statement

    SPEC, SPECjbb reg tm of Standard Performance Evaluation Corporation. Results as of 8/25/2009 on http://www.spec.org.
    Sun SPARC T5240 (2 chips, 16 cores) 422782 SPECjbb2005 bops, 26424 SPECjbb2005 bops/JVM; Sun SPARC T5240 (2 chips, 16 cores) 384934 SPECjbb2005 bops, 24058 SPECjbb2005 bops/JVM; IBM Power 570 (4 chips, 8 cores) 402923 SPECjbb2005 bops, 100731 SPECjbb2005 bops/JVM.

    Sun watts were measured on the system during the test.

    IBM p 570 4P (2 building blocks) power specifications calculated as 80% of maximum input power reported 7/8/09 in 'Facts and Features Report': ftp://ftp.software.ibm.com/common/ssi/pm/br/n/psb01628usen/PSB01628USEN.PDF

    Wednesday Aug 26, 2009

    Sun SPARC Enterprise T5220 with 1.6GHz UltraSPARC T2 Sets Single Chip World Record on SPECjbb2005

    Significance of Results

    A Sun SPARC Enterprise T5220 server equipped with one UltraSPARC T2 processor at 1.6GHz delivered a World Record single-chip result of 231464 SPECjbb2005 bops, 28933 SPECjbb2005 bops/JVM. The Sun SPARC Enterprise T5220 consumed an average of 520 Watts of power during the execution of this benchmark.

    • The Sun SPARC Enterprise T5220 server (one 1.6 GHz UltraSPARC T2 chip) demonstrated 3% better performance than the Fujitsu TX100 result of 223691 SPECjbb2005 bops, which used one 3.16 GHz Xeon X3380 processor.
    • The Sun SPARC Enterprise T5220 (one 1.6 GHz UltraSPARC T2 chip) demonstrated 8% better performance than the IBM x3200 result of 214578 SPECjbb2005 bops, which used one 3.16 GHz Xeon X3380 processor.
    • The Sun SPARC Enterprise T5220 server (one 1.6 GHz UltraSPARC T2 chip) demonstrated 10% better performance than the Fujitsu RX100 result of 211144 SPECjbb2005 bops, which used one 3.16 GHz Xeon X3380 processor.
    • The Sun SPARC Enterprise T5220 server (one 1.6 GHz UltraSPARC T2 chip) demonstrated 19% better performance than the IBM x3350 result of 194256 SPECjbb2005 bops, which used one 3 GHz Xeon X3370 processor.
    • The Sun SPARC Enterprise T5220 server (one 1.6 GHz UltraSPARC T2 chip) demonstrated 2.6x the performance of the IBM p570 result of 88089 SPECjbb2005 bops, which used one 4.7 GHz POWER6 processor.
    • One Sun SPARC Enterprise T5220 (one 1.6 GHz UltraSPARC T2 chip, 2RU) has 2.1 times the power/performance of the IBM Power 570 (4RU) that used two 4.7 GHz POWER6 chips.
    • The Sun SPARC Enterprise T5220 used OpenSolaris 2009.06 and the Sun JDK 1.6.0_14 Performance Release to obtain this result.

    Performance Landscape

    SPECjbb2005 Performance Chart (ordered by performance)

    bops : SPECjbb2005 Business Operations per Second (bigger is better)

    System                      Chips  Cores  Threads  GHz   Type           bops    bops/JVM
    Sun SPARC Enterprise T5220  1      8      64       1.6   UltraSPARC T2  231464  28933
    Sun Blade T6320             1      8      64       1.6   UltraSPARC T2  229576  28697
    Fujitsu TX100               1      4      4        3.16  Intel Xeon     223691  111846
    IBM x3200 M2                1      4      4        3.16  Intel Xeon     214578  107289
    Fujitsu RX100               1      4      4        3.16  Intel Xeon     211144  105572
    IBM Power 570               2      4      8        4.7   POWER6         205917  102959
    IBM x3350                   1      4      4        3.0   Intel Xeon     194256  97128
    Sun SPARC Enterprise T5220  1      8      64       1.4   UltraSPARC T2  192055  24007
    IBM Power 570               1      2      4        4.7   POWER6         88089   88089

    Complete benchmark results may be found at the SPEC benchmark website http://www.spec.org.

    Results and Configuration Summary

    Hardware Configuration:

      Sun SPARC Enterprise T5220
        1x 1.6 GHz UltraSPARC T2 processor
        64 GB

    Software Configuration:

      OpenSolaris 2009.06
      Java HotSpot(TM) 32-Bit Server, Version 1.6.0_14 Performance Release

    Benchmark Description

    SPECjbb2005 (Java Business Benchmark) measures the performance of a Java implemented application tier (server-side Java). The benchmark is based on the order processing in a wholesale supplier application. The performance of the user tier and the database tier are not measured in this test. The metrics given are number of SPECjbb2005 bops (Business Operations per Second) and SPECjbb2005 bops/JVM (bops per JVM instance).

    Key Points and Best Practices

    • Each JVM executed in the FX scheduling class to improve performance by reducing the frequency of context switches.
    • Each JVM was bound to a separate processor containing 1 core to reduce memory access latency using the physical memory closest to the processor.

    See Also

    Disclosure Statement

    SPEC, SPECjbb reg tm of Standard Performance Evaluation Corporation. Results as of 8/25/2009 on http://www.spec.org.
    Sun SPARC T5220 231464 SPECjbb2005 bops, 28933 SPECjbb2005 bops/JVM Submitted to SPEC for review; IBM p 570 88089 SPECjbb2005 bops, 88089 SPECjbb2005 bops/JVM; Fujitsu TX100 223691 SPECjbb2005 bops, 111846 SPECjbb2005 bops/JVM; IBM x3350 194256 SPECjbb2005 bops, 97128 SPECjbb2005 bops/JVM; Sun SPARC Enterprise T5120 192055 SPECjbb2005 bops, 24007 SPECjbb2005 bops/JVM.

    Sun watts were measured on the system during the test.

    IBM p 570 2P (1 building blocks) power specifications calculated as 80% of maximum input power reported 7/8/09 in "Facts and Features Report": ftp://ftp.software.ibm.com/common/ssi/pm/br/n/psb01628usen/PSB01628USEN.PDF

    Wednesday Aug 12, 2009

    SPECmail2009 on Sun SPARC Enterprise T5240 and Sun Java System Messaging Server 6.3

    Significance of Results

    The Sun SPARC Enterprise T5240 server running Sun Java System Messaging Server 6.3 achieved World Record SPECmail2009 results using ZFS.

    • A Sun SPARC Enterprise T5240 server powered by two 1.6 GHz UltraSPARC T2 Plus processors running the Sun Java Communications Suite 5 software along with the Solaris 10 Operating System and using six Sun StorageTek 2540 arrays achieved a new World Record 12000 SPECmail_Ent2009 IMAP4 users at 57,758 Sessions/hour for SPECmail2009.
    • The Sun SPARC Enterprise T5240 server achieved twice the number of users and twice the sessions/hour rate of the Apple Xserv3,1 solution equipped with Intel Nehalem processors.
    • The Sun result was obtained using ~10% fewer disk spindles with the Sun StorageTek 2540 RAID controller direct-attached storage solution versus Apple's direct-attached storage.
    • This benchmark result demonstrates that the Sun SPARC Enterprise T5240 server together with Sun Java Communication Suite 5 component Sun Java System Messaging Server 6.3, Solaris 10 and ZFS on Sun StorageTek 2540 arrays supports a large, enterprise level IMAP mail server environment. This solution is reliable, low cost, and low power, delivering the best performance and maximizing the data integrity with Sun's ZFS file systems.

    Performance Landscape

    SPECmail2009 (ordered by performance)

    System                      Processor           GHz   Ch, Co, Th  SPECmail_Ent2009 Users  SPECmail2009 Sessions/hour
    Sun SPARC Enterprise T5240  UltraSPARC T2 Plus  1.6   2, 16, 128  12,000                  57,758
    Sun Fire X4275              Xeon X5570          2.93  2, 8, 16    8,000                   38,348
    Apple Xserv3,1              Xeon X5570          2.93  2, 8, 16    6,000                   28,887
    Sun SPARC Enterprise T5220  UltraSPARC T2       1.4   1, 8, 64    3,600                   17,316

    Notes:

      Number of SPECmail_Ent2009 users (bigger is better)
      SPECmail2009 Sessions/hour (bigger is better)
      Ch, Co, Th: Chips, Cores, Threads

    Complete benchmark results may be found at the SPEC benchmark website http://www.spec.org

    Results and Configuration Summary

    Hardware Configuration:

      Sun SPARC Enterprise T5240

        2 x 1.6 GHz UltraSPARC T2 Plus processors
        128 GB
        8 x 146GB, 10K RPM SAS disks

      6 x Sun StorageTek 2540 Arrays,

        4 arrays with 12 x 146GB 15K RPM SAS disks
        2 arrays with 12 x 73GB 15K RPM SAS disks

      2 x Sun Fire X4600 benchmark manager, load generator and mail sink

        8 x AMD Opteron 8356 2.7 GHz QC processors
        64 GB
        2 x 73GB 10K RPM SAS disks

      Sun Fire X4240 load generator

        2 x AMD Opteron 2384 2.7 GHz DC processors
        16 GB
        2 x 73GB 10K RPM SAS disks

    Software Configuration:

      Solaris 10
      ZFS
      Sun Java Communication Suite 5
      Sun Java System Messaging Server 6.3

    Benchmark Description

    The SPECmail2009 benchmark measures the ability of corporate e-mail systems to meet today's demanding e-mail users over fast corporate local area networks (LAN). The SPECmail2009 benchmark simulates corporate mail server workloads that range from 250 to 10,000 or more users, using industry standard SMTP and IMAP4 protocols. This e-mail server benchmark creates client workloads based on a 40,000 user corporation, and uses folder and message MIME structures that include both traditional office documents and a variety of rich media content. The benchmark also adds support for encrypted network connections using industry standard SSL v3.0 and TLS 1.0 technology. SPECmail2009 replaces all versions of SPECmail2008, first released in August 2008. The results from the two benchmarks are not comparable.

    Software on one or more client machines generates a benchmark load for a System Under Test (SUT) and measures the SUT response times. A SUT can be a mail server running on a single system or a cluster of systems.

    A SPECmail2009 'run' simulates a 100% load level associated with the specific number of users, as defined in the configuration file. The mail server must maintain a specific Quality of Service (QoS) at the 100% load level to produce a valid benchmark result. If the mail server does maintain the specified QoS at the 100% load level, the performance of the mail server is reported as SPECmail_Ent2009 SMTP and IMAP Users at SPECmail2009 Sessions per hour. The SPECmail_Ent2009 users at SPECmail2009 Sessions per Hour metric reflects the unique workload combination for a SPEC IMAP4 user.

    Key Points and Best Practices

    • Each Sun StorageTek 2540 array was configured with 6 hardware RAID1 volumes. A total of 36 RAID1 volumes were configured with 24 of size 146GB and 12 of size 73GB. Four ZPOOLs of (6x146GB RAID1 volumes) were mounted as the four primary message stores and ZFS file systems. Four ZPOOLs of (8x73GB RAID1 volumes) were mounted as the four primary message indexes. The hardware RAID1 volumes were created with 64K stripe size without read ahead turned on. The 7x146GB internal drives were used to create four ZPOOLs and ZFS file systems for the LDAP, store metadata, queue and the mailserver log.
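    A hedged sketch of that storage layout follows; the device names are hypothetical, and the mirroring is done by the array controllers, not by ZFS.

```shell
# Each StorageTek 2540 exports hardware-RAID1 volumes as LUNs; ZFS then
# stripes six of them into one pool for a primary message store.
# Device names below are made up for illustration.
zpool create store1 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0
zfs create store1/messages
# Hypothetical tuning: align the ZFS record size with the 64K array stripe.
zfs set recordsize=64k store1/messages
```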

    • The clients used these Java options: java -d64 -Xms4096m -Xmx4096m -XX:+AggressiveHeap

    • See the SPEC Report for all OS, network and messaging server tunings.

    See Also

    Disclosure Statement

    SPEC, SPECmail reg tm of Standard Performance Evaluation Corporation. Results as of 08/07/2009 on www.spec.org. SPECmail2009: Sun SPARC Enterprise T5240 (16 cores, 2 chips) SPECmail_Ent2009 12000 users at 57,758 SPECmail2009 Sessions/hour. Apple Xserv3,1 (8 cores, 2 chips) SPECmail_Ent2009 6000 users at 28,887 SPECmail2009 Sessions/hour.

    Tuesday Jul 21, 2009

    Sun Blade T6320 World Record SPECjbb2005 performance

    Significance of Results

    The Sun Blade T6320 server module equipped with one UltraSPARC T2 processor running at 1.6 GHz delivered a World Record single-chip result while running the SPECjbb2005 benchmark.

    • The Sun Blade T6320 server module powered by one 1.6 GHz UltraSPARC T2 processor delivered a result of 229576 SPECjbb2005 bops, 28697 SPECjbb2005 bops/JVM when running the SPECjbb2005 benchmark.
    • The Sun Blade T6320 server module (with one 1.6 GHz UltraSPARC T2 processor) demonstrated 2.6x the performance of the IBM System p 570 with one 4.7 GHz POWER6 processor.
    • The Sun Blade T6320 server module (with one 1.6 GHz UltraSPARC T2 processor) demonstrated 3% better performance than the Fujitsu TX100 result which used one 3.16 GHz Intel Xeon X3380 processor.
    • The Sun Blade T6320 server module (with one 1.6 GHz UltraSPARC T2 processor) demonstrated 7% better performance than the IBM x3200 result which used one 3.16 GHz Xeon X3380 processor.
    • The Sun Blade T6320 server module running the 1.6 GHz UltraSPARC T2 processor delivered 20% better performance than a Sun SPARC Enterprise T5120 with the 1.4 GHz UltraSPARC T2 processor.
    • The Sun Blade T6320 used the OpenSolaris 2009.06 operating system and the Java HotSpot(TM) 32-Bit Server, Version 1.6.0_14 Performance Release JVM to obtain this leading result.

    Performance Landscape

    SPECjbb2005 Performance Chart (ordered by performance)

    bops: SPECjbb2005 Business Operations per Second (bigger is better)

    System           Chips  Cores  Threads  GHz   Type           SPECjbb2005 bops  SPECjbb2005 bops/JVM
    Sun Blade T6320  1      8      64       1.6   UltraSPARC T2  229576            28697
    Fujitsu TX100    1      4      4        3.16  Intel Xeon     223691            111846
    IBM x3200 M2     1      4      4        3.16  Intel Xeon     214578            107289
    Fujitsu RX100    1      4      4        3.16  Intel Xeon     211144            105572
    IBM x3350        1      4      4        3.0   Intel Xeon     194256            97128
    Sun SE T5120     1      8      64       1.4   UltraSPARC T2  192055            24007
    IBM p 570        1      2      4        4.7   POWER6         88089             88089

    Complete benchmark results may be found at the SPEC benchmark website http://www.spec.org.

    Results and Configuration Summary

    Hardware Configuration:

      Sun Blade T6320
        1 x 1.6 GHz UltraSPARC T2 processor
        64 GB

    Software Configuration:

      OpenSolaris 2009.06
      Java HotSpot(TM) 32-Bit Server, Version 1.6.0_14 Performance Release

    Benchmark Description

    SPECjbb2005 (Java Business Benchmark) measures the performance of a Java implemented application tier (server-side Java). The benchmark is based on the order processing in a wholesale supplier application. The performance of the user tier and the database tier are not measured in this test. The metrics given are number of SPECjbb2005 bops (Business Operations per Second) and SPECjbb2005 bops/JVM (bops per JVM instance).

    Key Points and Best Practices

    • Enhancements to the JVM had a major impact on performance.
    • Each JVM executed in the FX scheduling class to improve performance by reducing the frequency of context switches.
    • Each JVM bound to a separate processor containing 1 core to reduce memory access latency using the physical memory closest to the processor.

    See Also

    Disclosure Statement

    SPEC, SPECjbb reg tm of Standard Performance Evaluation Corporation. Results as of 7/17/2009 on http://www.spec.org. SPECjbb2005, Sun Blade T6320 229576 SPECjbb2005 bops, 28697 SPECjbb2005 bops/JVM; IBM p 570 88089 SPECjbb2005 bops, 88089 SPECjbb2005 bops/JVM; Fujitsu TX100 223691 SPECjbb2005 bops, 111846 SPECjbb2005 bops/JVM; IBM x3350 194256 SPECjbb2005 bops, 97128 SPECjbb2005 bops/JVM; Sun SPARC Enterprise T5120 192055 SPECjbb2005 bops, 24007 SPECjbb2005 bops/JVM.

    Monday Jul 20, 2009

    Sun T5440 SPECjbb2005 Beats IBM POWER6 Chip-to-Chip

    A Sun SPARC Enterprise T5440 server equipped with four UltraSPARC T2 Plus processors running at 1.6GHz, delivered a result of 841380 SPECjbb2005 bops, 26293 SPECjbb2005 bops/JVM when running the SPECjbb2005 benchmark.

    • One Sun SPARC Enterprise T5440 (four 1.6 GHz UltraSPARC T2 Plus chips, 4RU) demonstrated 5% better performance than the IBM Power 570 (16RU) result of 798752 SPECjbb2005 bops using eight 4.7 GHz POWER6 chips. The IBM system requires twice as many processor chips and four times the rack space of the T5440.
    • One Sun SPARC Enterprise T5440 (four 1.6 GHz UltraSPARC T2 Plus chips, 4RU) has 2.3 times better power/performance than the IBM Power 570 (16RU) that used eight 4.7 GHz POWER6 chips.
    • Sun's 1.6 GHz UltraSPARC T2 Plus processor delivers over 2.3x the power/performance of the 4.7 GHz IBM POWER6 processor.
    • One Sun SPARC Enterprise T5440 (four 1.6GHz UltraSPARC T2 Plus chips) demonstrated 21% better performance when compared to the Sun SPARC Enterprise T5440 result of 692736 SPECjbb2005 bops using four 1.4GHz UltraSPARC T2 Plus chips.
    • The Sun SPARC Enterprise T5440 used OpenSolaris 2009.06 and the Sun JDK 1.6.0_14 Performance Release to obtain this result.

    Performance Landscape

    SPECjbb2005 Performance Chart (ordered by performance)

    bops : SPECjbb2005 Business Operations per Second (bigger is better)

    System                      Chips  Cores  Threads  GHz  Type         bops    bops/JVM
    HP DL585 G6                 4      24     24       2.8  AMD Opteron  937207  234302
    Sun SPARC Enterprise T5440  4      32     256      1.6  US-T2 Plus   841380  26293
    IBM Power 570               8      16     32       4.7  POWER6       798752  99844
    Sun SPARC Enterprise T5440  4      32     256      1.4  US-T2 Plus   692736  21648

    Complete benchmark results may be found at the SPEC benchmark website http://www.spec.org.

    Results and Configuration Summary

    Hardware Configuration:

      Sun SPARC Enterprise T5440
        4 x 1.6 GHz UltraSPARC T2 Plus processors
        256 GB

    Software Configuration:

      OpenSolaris 2009.06
      Java HotSpot(TM) 32-Bit Server, Version 1.6.0_14 Performance Release

    Benchmark Description

    SPECjbb2005 (Java Business Benchmark) measures the performance of a Java implemented application tier (server-side Java). The benchmark is based on the order processing in a wholesale supplier application. The performance of the user tier and the database tier are not measured in this test. The metrics given are number of SPECjbb2005 bops (Business Operations per Second) and SPECjbb2005 bops/JVM (bops per JVM instance).

    Key Points and Best Practices

    • Enhancements to the JVM had a major impact on performance.
    • Each JVM executed in the FX scheduling class to improve performance by reducing the frequency of context switches.
    • Each JVM bound to a separate processor containing 1 core to reduce memory access latency using the physical memory closest to the processor.

    See Also

    Disclosure Statement:

    SPECjbb2005 Sun SPARC Enterprise T5440 (4 chips, 32 cores) 841380 SPECjbb2005 bops, 26293 SPECjbb2005 bops/JVM. Results submitted to SPEC. HP DL585 G6 (4 chips, 24 cores) 937207 SPECjbb2005 bops, 234302 SPECjbb2005 bops/JVM. IBM Power 570 (8 chips, 16 cores) 798752 SPECjbb2005 bops, 99844 SPECjbb2005 bops/JVM. Sun SPARC Enterprise T5440 (4 chips, 32 cores) 692736 SPECjbb2005 bops, 21648 SPECjbb2005 bops/JVM. SPEC, SPECjbb reg tm of Standard Performance Evaluation Corporation. Results from www.spec.org as of 7/20/09.

    Sun watts were measured on the system during the test.

    IBM p 570 8P (4 building blocks) power specifications calculated as 80% of maximum input power reported 7/8/09 in “Facts and Features Report”: ftp://ftp.software.ibm.com/common/ssi/pm/br/n/psb01628usen/PSB01628USEN.PDF

    About

    BestPerf is the source of Oracle performance expertise. In this blog, Oracle's Strategic Applications Engineering group explores Oracle's performance results and shares best practices learned from working on Enterprise-wide Applications.
