Thursday Nov 19, 2015

SPECvirt_2013: SPARC T7-2 World Record Performance for Two- and Four-Chip Systems

Oracle's SPARC T7-2 server delivered a world record SPECvirt_sc2013 result for systems with two to four chips.

  • The SPARC T7-2 server produced a result of 3198 @ 179 VMs SPECvirt_sc2013.

  • The two-chip SPARC T7-2 server beat the best four-chip x86 Intel E7-8890 v3 server (HP ProLiant DL580 Gen9), demonstrating that the SPARC M7 processor is 2.1 times faster than the Intel Xeon Processor E7-8890 v3 (chip-to-chip comparison).

  • The two-chip SPARC T7-2 server beat the best two-chip x86 Intel E5-2699 v3 server results by nearly 2 times (Huawei FusionServer RH2288H V3, HP ProLiant DL360 Gen9).

  • The two-chip SPARC T7-2 server delivered over 2.3 times the performance of the four-chip IBM Power System S824 server solution which used 3.5 GHz POWER8 six core chips.

  • The SPARC T7-2 server, running the Oracle Solaris 11.3 operating system, uses built-in virtualization products such as Oracle Solaris Zones, which provide a low-overhead, flexible, scalable and manageable virtualization environment.

  • The SPARC T7-2 server result used Oracle VM Server for SPARC 3.3 and Oracle Solaris Zones providing a flexible, scalable and manageable virtualization environment.

Performance Landscape

Complete benchmark results are at the SPEC website, SPECvirt_sc2013 Results. The following table highlights the leading two- to four-chip results for the benchmark; bigger is better.

SPECvirt_sc2013
Leading Two to Four-Chip Results
System | Processor | Chips | Result @ VMs | Virtualization Software
SPARC T7-2 | SPARC M7 (4.13 GHz, 32 cores) | 2 | 3198 @ 179 | Oracle VM Server for SPARC 3.3, Oracle Solaris Zones
HP ProLiant DL580 Gen9 | Intel E7-8890 v3 (2.5 GHz, 18 cores) | 4 | 3020 @ 168 | Red Hat Enterprise Linux 7.1 KVM
Lenovo System x3850 X6 | Intel E7-8890 v3 (2.5 GHz, 18 cores) | 4 | 2655 @ 147 | Red Hat Enterprise Linux 6.6 KVM
Huawei FusionServer RH2288H V3 | Intel E5-2699 v3 (2.3 GHz, 18 cores) | 2 | 1616 @ 95 | Huawei FusionSphere V1R5C10
HP ProLiant DL360 Gen9 | Intel E5-2699 v3 (2.3 GHz, 18 cores) | 2 | 1614 @ 95 | Red Hat Enterprise Linux 7.1 KVM
IBM Power S824 | POWER8 (3.5 GHz, 6 cores) | 4 | 1370 @ 79 | PowerVM Enterprise Edition 2.2.3

Configuration Summary

System Under Test Highlights:

Hardware:
1 x SPARC T7-2 server, with
2 x 4.13 GHz SPARC M7
1 TB memory
2 Sun Dual Port 10GBase-T Adapter
2 Sun Storage Dual 16 Gb Fibre Channel PCIe Universal HBA

Software:
Oracle Solaris 11.3
Oracle VM Server for SPARC 3.3 (LDom)
Oracle Solaris Zones
Oracle iPlanet Web Server 7.0.20
Oracle PHP 5.3.29
Dovecot v2.2.18
Oracle WebLogic Server Standard Edition Release 10.3.6
Oracle Database 12c Enterprise Edition (12.1.0.2.0)
Java HotSpot(TM) 64-Bit Server VM on Solaris, version 1.7.0_85-b15

Storage:
3 x Oracle Server X5-2L, with
2 x Intel Xeon Processor E5-2630 v3 8-core 2.4 GHz
32 GB memory
4 x Oracle Flash Accelerator F160 PCIe Card
Oracle Solaris 11.3

1 x Oracle Server X5-2L, with
2 x Intel Xeon Processor E5-2630 v3 8-core 2.4 GHz
32 GB memory
4 x Oracle Flash Accelerator F160 PCIe Card
4x 400 GB SSDs
Oracle Solaris 11.3

Benchmark Description

SPECvirt_sc2013 is SPEC's updated benchmark addressing performance evaluation of datacenter servers used in virtualized server consolidation. SPECvirt_sc2013 measures the end-to-end performance of all system components including the hardware, virtualization platform, and the virtualized guest operating system and application software. It utilizes several SPEC workloads representing applications that are common targets of virtualization and server consolidation. The workloads were made to match a typical server consolidation scenario of CPU resource requirements, memory, disk I/O, and network utilization for each workload. These workloads are modified versions of SPECweb2005, SPECjAppServer2004, SPECmail2008, and SPEC CPU2006. The client-side SPECvirt_sc2013 harness controls the workloads. Scaling is achieved by running additional sets of virtual machines, called "tiles", until overall throughput reaches a peak.

Key Points and Best Practices

  • The SPARC T7-2 server, running Oracle Solaris 11.3, uses the built-in virtualization products Oracle VM Server for SPARC and Oracle Solaris Zones, which provide a low-overhead, flexible, scalable and manageable virtualization environment.

  • In order to provide a high level of data integrity and availability, all of the benchmark data sets were stored on mirrored (RAID1) storage.

  • Using Oracle VM Server for SPARC to bind each SPARC M7 processor with its local memory optimized memory use in this virtual environment (see the sketch after this list).

  • This improved result used a fractional tile to fully saturate the system.
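
The domain layout details are in the full disclosure report; purely as an illustration, the following Oracle VM Server for SPARC (ldm) sketch shows the kind of commands used to carve out a domain whose cores and memory come from a single SPARC M7 socket. The domain name, core count and memory size here are placeholders, not the benchmark's actual settings.

  ldm add-domain ldom1
  ldm set-core 32 ldom1        # whole cores from one SPARC M7 socket
  ldm add-memory 512G ldom1    # the LDoms manager allocates memory local to those cores where it can
  ldm bind-domain ldom1
  ldm start-domain ldom1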

See Also

Disclosure Statement

SPEC and the benchmark name SPECvirt_sc are registered trademarks of the Standard Performance Evaluation Corporation. Results from www.spec.org as of 11/19/2015. SPARC T7-2, SPECvirt_sc2013 3198@179 VMs; HP ProLiant DL580 Gen9, SPECvirt_sc2013 3020@168 VMs; Lenovo x3850 X6, SPECvirt_sc2013 2655@147 VMs; Huawei FusionServer RH2288H V3, SPECvirt_sc2013 1616@95 VMs; HP ProLiant DL360 Gen9, SPECvirt_sc2013 1614@95 VMs; IBM Power S824, SPECvirt_sc2013 1370@79 VMs.

Friday Nov 13, 2015

SPECjbb2015: SPARC T7-1 World Record for 1 Chip Result

Updated November 30, 2015 to point to published results and add latest, best x86 two-chip result.

Oracle's SPARC T7-1 server, using Oracle Solaris and Oracle JDK, produced world record one-chip SPECjbb2015 benchmark (MultiJVM metric) results, beating all previous one- and two-chip results in the process. This benchmark was designed by the industry to showcase Java performance in the enterprise. Performance is expressed in terms of two metrics: max-jOPS, the maximum throughput, and critical-jOPS, the throughput sustained under service level agreement (SLA) response-time constraints.
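
Roughly speaking, max-jOPS is the highest throughput the system sustains before requests begin to fail, while critical-jOPS (as we read the metric definition) is the geometric mean of the throughputs achievable under response-time SLAs of 10 ms, 25 ms, 50 ms, 75 ms and 100 ms:

  critical-jOPS = (jOPS@10ms x jOPS@25ms x jOPS@50ms x jOPS@75ms x jOPS@100ms)^(1/5)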

  • The SPARC T7-1 server achieved 120,603 SPECjbb2015-MultiJVM max-jOPS and 60,280 SPECjbb2015-MultiJVM critical-jOPS on the SPECjbb2015 benchmark.

  • The one-chip SPARC T7-1 server delivered 2.5 times more max-jOPS performance per chip than the best two-chip result which was run on the Cisco UCS C220 M4 server using Intel v3 processors. The SPARC T7-1 server also produced 4.3 times more critical-jOPS performance per chip compared to the Cisco UCS C220 M4. The Cisco result enabled the COD BIOS option.

  • The SPARC T7-1 server delivered 2.7 times more max-jOPS performance per chip than the IBM Power S812LC using POWER8 chips. The SPARC T7-1 server also produced 4.6 times more critical-jOPS performance per chip compared to the IBM server. The SPARC M7 processor also delivered 1.45 times more critical-jOPS performance per core than IBM POWER8 processor.

  • The one-chip SPARC T7-1 server delivered 3 times more max-jOPS performance per chip than the two-chip result on the Lenovo Flex System x240 M5 using Intel v3 processors. The SPARC T7-1 server also produced 2.8 times more critical-jOPS performance per chip compared to the Lenovo. The Lenovo result did not enable the COD BIOS option.

  • The SPARC T5-2 server achieved 80,889 SPECjbb2015-MultiJVM max-jOPS and 37,422 SPECjbb2015-MultiJVM critical-jOPS on the SPECjbb2015 benchmark.

  • The one-chip SPARC T7-1 server demonstrated a 3 times max-jOPS performance improvement per chip compared to the previous generation two-chip SPARC T5-2 server.

From SPEC's press release: "The SPECjbb2015 benchmark is based on the usage model of a worldwide supermarket company with an IT infrastructure that handles a mix of point-of-sale requests, online purchases, and data-mining operations. It exercises Java 7 and higher features, using the latest data formats (XML), communication using compression, and secure messaging."

The Cluster on Die (COD) mode is a BIOS setting that effectively splits the chip in half, making the operating system think it has twice as many chips as it does (in this case, four 9-core chips). Intel has said that COD is appropriate only for highly NUMA-optimized workloads. Dell has shown that bandwidth to the other half of a chip split by COD is 3.7x lower.

Performance Landscape

One- and two-chip results of SPECjbb2015 MultiJVM from www.spec.org as of November 30, 2015.

SPECjbb2015
One- and Two-Chip Results
System (Processors) | max-jOPS | critical-jOPS | OS | JDK | Notes
SPARC T7-1 (1 x SPARC M7, 4.13 GHz, 32 cores) | 120,603 | 60,280 | Oracle Solaris 11.3 | 8u66 | -
Cisco UCS C220 M4 (2 x Intel E5-2699 v3, 2.3 GHz, 18 cores each) | 97,551 | 28,318 | Red Hat 6.5 | 8u60 | COD
Dell PowerEdge R730 (2 x Intel E5-2699 v3, 2.3 GHz, 18 cores each) | 94,903 | 29,033 | SUSE 12 | 8u60 | COD
Cisco UCS C220 M4 (2 x Intel E5-2699 v3, 2.3 GHz, 18 cores each) | 92,463 | 31,654 | Red Hat 6.5 | 8u60 | COD
Lenovo Flex System x240 M5 (2 x Intel E5-2699 v3, 2.3 GHz, 18 cores each) | 80,889 | 43,654 | Red Hat 6.5 | 8u60 | -
SPARC T5-2 (2 x SPARC T5, 3.6 GHz, 16 cores each) | 80,889 | 37,422 | Oracle Solaris 11.2 | 8u66 | -
Oracle Server X5-2L (2 x Intel E5-2699 v3, 2.3 GHz, 18 cores each) | 76,773 | 26,458 | Oracle Solaris 11.2 | 8u60 | -
Sun Server X4-2 (2 x Intel E5-2697 v2, 2.7 GHz, 12 cores each) | 52,482 | 19,614 | Oracle Solaris 11.1 | 8u60 | -
HP ProLiant DL120 Gen9 (1 x Intel Xeon E5-2699 v3, 2.3 GHz, 18 cores) | 47,334 | 9,876 | Red Hat 7.1 | 8u51 | -
IBM Power S812LC (1 x POWER8, 2.92 GHz, 10 cores) | 44,883 | 13,032 | Ubuntu 14.04.3 | J9 VM | -

* Note COD: the result uses the non-default BIOS setting Cluster on Die (COD), which splits each chip in two. This requires specific NUMA optimization, because memory traffic to the other half of the chip can see a 3.7x decrease in bandwidth.

Configuration Summary

Systems Under Test:

SPARC T7-1
1 x SPARC M7 processor (4.13 GHz)
512 GB memory (16 x 32 GB DIMMs)
Oracle Solaris 11.3 (11.3.1.5.0)
Java HotSpot 64-Bit Server VM, version 1.8.0_66

SPARC T5-2
2 x SPARC T5 processors (3.6 GHz)
512 GB memory (32 x 16 GB DIMMs)
Oracle Solaris 11.2
Java HotSpot 64-Bit Server VM, version 1.8.0_66

Benchmark Description

The benchmark description, as found at the SPEC website.

The SPECjbb2015 benchmark has been developed from the ground up to measure performance based on the latest Java application features. It is relevant to all audiences who are interested in Java server performance, including JVM vendors, hardware developers, Java application developers, researchers and members of the academic community.

Features include:

  • A usage model based on a world-wide supermarket company with an IT infrastructure that handles a mix of point-of-sale requests, online purchases and data-mining operations.
  • Both a pure throughput metric and a metric that measures critical throughput under service level agreements (SLAs) specifying response times ranging from 10ms to 100ms.
  • Support for multiple run configurations, enabling users to analyze and overcome bottlenecks at multiple layers of the system stack, including hardware, OS, JVM and application layers.
  • Exercising new Java 7 features and other important performance elements, including the latest data formats (XML), communication using compression, and messaging with security.
  • Support for virtualization and cloud environments.

Key Points and Best Practices

  • For the SPARC T5-2 server results, processor sets were used to isolate the different JVMs used during the test.
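
A minimal sketch of this kind of isolation with Solaris processor sets follows; the CPU range, set ID and process ID are placeholders, not the values used in the submission.

  psrset -c 0-63        # create a processor set from CPUs 0-63; the command prints the new set ID (e.g. 1)
  psrset -b 1 12345     # bind an already-running JVM (pid 12345) to set 1
  psrset -e 1 java ...  # or launch a JVM directly inside the set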

See Also

Disclosure Statement

SPEC and the benchmark name SPECjbb are registered trademarks of Standard Performance Evaluation Corporation (SPEC). Results from http://www.spec.org as of 11/30/2015. SPARC T7-1 120,603 SPECjbb2015-MultiJVM max-jOPS, 60,280 SPECjbb2015-MultiJVM critical-jOPS; Cisco UCS C220 M4 97,551 SPECjbb2015-MultiJVM max-jOPS, 28,318 SPECjbb2015-MultiJVM critical-jOPS; Dell PowerEdge R730 94,903 SPECjbb2015-MultiJVM max-jOPS, 29,033 SPECjbb2015-MultiJVM critical-jOPS; Cisco UCS C220 M4 92,463 SPECjbb2015-MultiJVM max-jOPS, 31,654 SPECjbb2015-MultiJVM critical-jOPS; Lenovo Flex System x240 M5 80,889 SPECjbb2015-MultiJVM max-jOPS, 43,654 SPECjbb2015-MultiJVM critical-jOPS; SPARC T5-2 80,889 SPECjbb2015-MultiJVM max-jOPS, 37,422 SPECjbb2015-MultiJVM critical-jOPS; Oracle Server X5-2L 76,773 SPECjbb2015-MultiJVM max-jOPS, 26,458 SPECjbb2015-MultiJVM critical-jOPS; Sun Server X4-2 52,482 SPECjbb2015-MultiJVM max-jOPS, 19,614 SPECjbb2015-MultiJVM critical-jOPS; HP ProLiant DL120 Gen9 47,334 SPECjbb2015-MultiJVM max-jOPS, 9,876 SPECjbb2015-MultiJVM critical-jOPS; IBM Power S812LC 44,883 SPECjbb2015-MultiJVM max-jOPS, 13,032 SPECjbb2015-MultiJVM critical-jOPS.

Monday Oct 26, 2015

SPECjEnterprise2010: SPARC T7-1 World Record with Single Application Server Using 1 to 4 Chips

Oracle's SPARC T7-1 servers have set a world record for the SPECjEnterprise2010 benchmark for solutions using a single application server with one to four chips. The result of 25,818.85 SPECjEnterprise2010 EjOPS used two SPARC T7-1 servers, one server for the application tier and the other server for the database tier.

  • The SPARC T7-1 servers obtained a result of 25,093.06 SPECjEnterprise2010 EjOPS using encrypted data. This secured result used Oracle Advanced Security Transparent Data Encryption (TDE) for the application database tablespaces with the AES-256-CFB cipher. The network connection between the application server and the database server was also encrypted using the secure JDBC.

  • The SPARC T7-1 server solution delivered 34% more performance compared to the two-chip IBM x3650 M5 server result of 19,282.14 SPECjEnterprise2010 EjOPS.

  • The SPARC T7-1 server solution delivered 14% more performance compared to the four-chip IBM Power System S824 server result of 22,543.34 SPECjEnterprise2010 EjOPS.

  • The SPARC T7-1 server based results demonstrated 20% more performance compared to the Oracle Server X5-2 system result of 21,504.30 SPECjEnterprise2010 EjOPS. Oracle holds the top x86 two-chip application server SPECjEnterprise2010 result.

  • The application server used Oracle Fusion Middleware components including the Oracle WebLogic 12.1 application server and Java HotSpot(TM) 64-Bit Server VM on Solaris, version 1.8.0_60. The database server was configured with Oracle Database 12c Release 1.

  • For the secure result, the application data was encrypted in the Oracle database using the Oracle Advanced Security Transparent Data Encryption (TDE) feature. Hardware accelerated cryptography support in the SPARC M7 processor for the AES-256-CFB cipher was used to provide data security (a generic SQL sketch follows this list).

  • The performance reduction for the secure SPARC T7-1 server configuration with encryption was less than 3% compared to the peak (unsecure) result.

  • This result demonstrated less than 1 second average response times for all SPECjEnterprise2010 transactions and represents Java EE 5.0 transactions generated by over 210,000 users.
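
The actual database setup is in the full disclosure report; purely as an illustration, the following sqlplus session shows the usual shape of enabling TDE tablespace encryption in Oracle Database 12c. The keystore path, password, tablespace name and size are made up, and supporting configuration (such as the keystore location in sqlnet.ora) is omitted.

  $ sqlplus / as sysdba
  SQL> ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '/u01/app/oracle/wallet' IDENTIFIED BY "WalletPwd_1";
  SQL> ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "WalletPwd_1";
  SQL> ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY "WalletPwd_1" WITH BACKUP;
  SQL> CREATE TABLESPACE app_data DATAFILE '+DATA' SIZE 100G ENCRYPTION USING 'AES256' DEFAULT STORAGE (ENCRYPT);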

Performance Landscape

Select single application server results. Complete benchmark results are at the SPEC website, SPECjEnterprise2010 Results.

SPECjEnterprise2010 Performance Chart
10/25/2015
Submitter | EjOPS* | Java EE Server | DB Server | Notes
Oracle | 25,818.85 | 1 x SPARC T7-1 (1 x 4.13 GHz SPARC M7), Oracle WebLogic 12c (12.1.3) | 1 x SPARC T7-1 (1 x 4.13 GHz SPARC M7), Oracle Database 12c (12.1.0.2) | -
Oracle | 25,093.06 | 1 x SPARC T7-1 (1 x 4.13 GHz SPARC M7), Oracle WebLogic 12c (12.1.3), Network Data Encryption for JDBC | 1 x SPARC T7-1 (1 x 4.13 GHz SPARC M7), Oracle Database 12c (12.1.0.2), Transparent Data Encryption | Secure
IBM | 22,543.34 | 1 x IBM Power S824 (4 x 3.5 GHz POWER8), WebSphere Application Server V8.5 | 1 x IBM Power S824 (4 x 3.5 GHz POWER8), IBM DB2 10.5 FP3 | -
Oracle | 21,504.30 | 1 x Oracle Server X5-2 (2 x 2.3 GHz Intel Xeon E5-2699 v3), Oracle WebLogic 12c (12.1.3) | 1 x Oracle Server X5-2 (2 x 2.3 GHz Intel Xeon E5-2699 v3), Oracle Database 12c (12.1.0.2) | COD
IBM | 19,282.14 | 1 x System x3650 M5 (2 x 2.6 GHz Intel Xeon E5-2697 v3), WebSphere Application Server V8.5 | 1 x System x3850 X6 (4 x 2.8 GHz Intel Xeon E7-4890 v2), IBM DB2 10.5 FP5 | -

* SPECjEnterprise2010 EjOPS (bigger is better)

The Cluster on Die (COD) mode is a BIOS setting that effectively splits the chip in half, making the operating system think it has twice as many chips as it does (in this case, four 9-core chips). Intel has stated that COD is appropriate only for highly NUMA-optimized workloads. Dell has shown that bandwidth to the other half of a chip split by COD is 3.7x lower.

Configuration Summary

Application Server:

1 x SPARC T7-1 server, with
1 x SPARC M7 processor (4.13 GHz)
256 GB memory (16 x 16 GB)
2 x 600 GB SAS HDD
2 x 400 GB SAS SSD
3 x Sun Dual Port 10 GbE PCIe 2.0 Networking card with Intel 82599 10 GbE Controller
Oracle Solaris 11.3 (11.3.0.0.30)
Oracle WebLogic Server 12c (12.1.3)
Java HotSpot(TM) 64-Bit Server VM on Solaris, version 1.8.0_60

Database Server:

1 x SPARC T7-1 server, with
1 x SPARC M7 processor (4.13 GHz)
512 GB memory (16 x 32 GB)
2 x 600 GB SAS HDD
1 x Sun Dual Port 10 GbE PCIe 2.0 Networking card with Intel 82599 10 GbE Controller
1 x Sun Storage 16 Gb Fibre Channel Universal HBA
Oracle Solaris 11.3 (11.3.0.0.30)
Oracle Database 12c (12.1.0.2)

Storage Servers:

1 x Oracle Server X5-2L (8-Drive), with
2 x Intel Xeon Processor E5-2699 v3 (2.3 GHz)
32 GB memory
1 x Sun Storage 16 Gb Fibre Channel Universal HBA
4 x 1.6 TB NVMe SSD
2 x 600 GB SAS HDD
Oracle Solaris 11.3 (11.3.0.0.30)
1 x Oracle Server X5-2L (24-Drive), with
2 x Intel Xeon Processor E5-2699 v3 (2.3 GHz)
32 GB memory
1 x Sun Storage 16 Gb Fibre Channel Universal HBA
14 x 600 GB SAS HDD
Oracle Solaris 11.3 (11.3.0.0.30)

1 x Brocade 6510 16 Gb FC switch

Benchmark Description

SPECjEnterprise2010 is the third generation of the SPEC organization's J2EE end-to-end industry standard benchmark application. The SPECjEnterprise2010 benchmark has been designed and developed to cover the Java EE 5 specification's significantly expanded and simplified programming model, highlighting the major features used by developers in the industry today. This provides a real world workload driving the Application Server's implementation of the Java EE specification to its maximum potential and allowing maximum stressing of the underlying hardware and software systems, including:

  • The web zone, servlets, and web services
  • The EJB zone
  • JPA 1.0 Persistence Model
  • JMS and Message Driven Beans
  • Transaction management
  • Database connectivity
Moreover, SPECjEnterprise2010 also heavily exercises all parts of the underlying infrastructure that make up the application environment, including hardware, JVM software, database software, JDBC drivers, and the system network.

The primary metric of the SPECjEnterprise2010 benchmark is jEnterprise Operations Per Second (SPECjEnterprise2010 EjOPS). The primary metric for the SPECjEnterprise2010 benchmark is calculated by adding the metrics of the Dealership Management Application in the Dealer Domain and the Manufacturing Application in the Manufacturing Domain. There is NO price/performance metric in this benchmark.

Key Points and Best Practices

  • Four Oracle WebLogic server instances on the SPARC T7-1 server were hosted in 4 separate Oracle Solaris Zones.
  • The Oracle WebLogic application servers were executed in the FX scheduling class to improve performance by reducing the frequency of context switches.
  • The Oracle log writer process was run in the RT scheduling class.
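
A minimal sketch of moving processes into these scheduling classes with priocntl follows; the process IDs and script name are placeholders, and the published report contains the actual tuning.

  priocntl -s -c FX -i pid 12345          # place a running WebLogic JVM (pid 12345) in the fixed-priority FX class
  priocntl -s -c RT -i pid 23456          # place the database log writer (pid 23456) in the real-time RT class
  priocntl -e -c FX ./startWebLogic.sh    # alternatively, launch a process directly in the FX class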

See Also

Disclosure Statement

SPEC and the benchmark name SPECjEnterprise are registered trademarks of the Standard Performance Evaluation Corporation. Results from www.spec.org as of 10/25/2015. SPARC T7-1, 25,818.85 SPECjEnterprise2010 EjOPS (unsecure); SPARC T7-1, 25,093.06 SPECjEnterprise2010 EjOPS (secure); Oracle Server X5-2, 21,504.30 SPECjEnterprise2010 EjOPS (unsecure); IBM Power S824, 22,543.34 SPECjEnterprise2010 EjOPS (unsecure); IBM x3650 M5, 19,282.14 SPECjEnterprise2010 EjOPS (unsecure).

SPECvirt_sc2013: SPARC T7-2 World Record for 2 and 4 Chip Systems

Oracle has had a new result accepted by SPEC as of November 19, 2015. This new result may be found here.

Oracle's SPARC T7-2 server delivered a world record SPECvirt_sc2013 result for systems with two to four chips.

  • The SPARC T7-2 server produced a result of 3026 @ 168 VMs SPECvirt_sc2013.

  • The two-chip SPARC T7-2 server beat the best two-chip x86 Intel E5-2699 v3 server results by nearly 1.9 times (Huawei FusionServer RH2288H V3, HP ProLiant DL360 Gen9).

  • The two-chip SPARC T7-2 server delivered nearly 2.2 times the performance of the four-chip IBM Power System S824 server solution which used 3.5 GHz POWER8 six core chips.

  • The SPARC T7-2 server, running the Oracle Solaris 11.3 operating system, uses built-in virtualization products such as Oracle Solaris Zones, which provide a low-overhead, flexible, scalable and manageable virtualization environment.

  • The SPARC T7-2 server result used Oracle VM Server for SPARC 3.3 and Oracle Solaris Zones providing a flexible, scalable and manageable virtualization environment.

Performance Landscape

Complete benchmark results are at the SPEC website, SPECvirt_sc2013 Results. The following table highlights the leading two- to four-chip results for the benchmark; bigger is better.

SPECvirt_sc2013
Leading Two to Four-Chip Results
System | Processor | Chips | Result @ VMs | Virtualization Software
SPARC T7-2 | SPARC M7 (4.13 GHz, 32 cores) | 2 | 3026 @ 168 | Oracle VM Server for SPARC 3.3, Oracle Solaris Zones
HP DL580 Gen9 | Intel E7-8890 v3 (2.5 GHz, 18 cores) | 4 | 3020 @ 168 | Red Hat Enterprise Linux 7.1 KVM
Lenovo System x3850 X6 | Intel E7-8890 v3 (2.5 GHz, 18 cores) | 4 | 2655 @ 147 | Red Hat Enterprise Linux 6.6 KVM
Huawei FusionServer RH2288H V3 | Intel E5-2699 v3 (2.3 GHz, 18 cores) | 2 | 1616 @ 95 | Huawei FusionSphere V1R5C10
HP DL360 Gen9 | Intel E5-2699 v3 (2.3 GHz, 18 cores) | 2 | 1614 @ 95 | Red Hat Enterprise Linux 7.1 KVM
IBM Power S824 | POWER8 (3.5 GHz, 6 cores) | 4 | 1370 @ 79 | PowerVM Enterprise Edition 2.2.3

Configuration Summary

System Under Test Highlights:

Hardware:
1 x SPARC T7-2 server, with
2 x 4.13 GHz SPARC M7
1 TB memory
2 Sun Dual Port 10GBase-T Adapter
2 Sun Storage Dual 16 Gb Fibre Channel PCIe Universal HBA

Software:
Oracle Solaris 11.3
Oracle VM Server for SPARC 3.3 (LDom)
Oracle Solaris Zones
Oracle iPlanet Web Server 7.0.20
Oracle PHP 5.3.29
Dovecot v2.2.18
Oracle WebLogic Server Standard Edition Release 10.3.6
Oracle Database 12c Enterprise Edition (12.1.0.2.0)
Java HotSpot(TM) 64-Bit Server VM on Solaris, version 1.7.0_85-b15

Storage:
3 x Oracle Server X5-2L, with
2 x Intel Xeon Processor E5-2630 v3 8-core 2.4 GHz
32 GB memory
4 x Oracle Flash Accelerator F160 PCIe Card
Oracle Solaris 11.3

1 x Oracle Server X5-2L, with
2 x Intel Xeon Processor E5-2630 v3 8-core 2.4 GHz
32 GB memory
4 x Oracle Flash Accelerator F160 PCIe Card
4x 400 GB SSDs
Oracle Solaris 11.3

Benchmark Description

SPECvirt_sc2013 is SPEC's updated benchmark addressing performance evaluation of datacenter servers used in virtualized server consolidation. SPECvirt_sc2013 measures the end-to-end performance of all system components including the hardware, virtualization platform, and the virtualized guest operating system and application software. It utilizes several SPEC workloads representing applications that are common targets of virtualization and server consolidation. The workloads were made to match a typical server consolidation scenario of CPU resource requirements, memory, disk I/O, and network utilization for each workload. These workloads are modified versions of SPECweb2005, SPECjAppServer2004, SPECmail2008, and SPEC CPU2006. The client-side SPECvirt_sc2013 harness controls the workloads. Scaling is achieved by running additional sets of virtual machines, called "tiles", until overall throughput reaches a peak.

Key Points and Best Practices

  • The SPARC T7-2 server, running Oracle Solaris 11.3, uses the built-in virtualization products Oracle VM Server for SPARC and Oracle Solaris Zones, which provide a low-overhead, flexible, scalable and manageable virtualization environment.

  • In order to provide a high level of data integrity and availability, all of the benchmark data sets were stored on mirrored (RAID1) storage.

  • Using Oracle VM Server for SPARC to bind the SPARC M7 processor with its local memory optimized system memory use in this virtual environment.

See Also

Disclosure Statement

SPEC and the benchmark name SPECvirt_sc are registered trademarks of the Standard Performance Evaluation Corporation. Results from www.spec.org as of 10/25/2015. SPARC T7-2, SPECvirt_sc2013 3026@168 VMs; HP DL580 Gen9, SPECvirt_sc2013 3020@168 VMs; Lenovo x3850 X6, SPECvirt_sc2013 2655@147 VMs; Huawei FusionServer RH2288H V3, SPECvirt_sc2013 1616@95 VMs; HP ProLiant DL360 Gen9, SPECvirt_sc2013 1614@95 VMs; IBM Power S824, SPECvirt_sc2013 1370@79 VMs.

Oracle Internet Directory: SPARC T7-2 World Record

Oracle's SPARC T7-2 server running Oracle Internet Directory (OID, Oracle's LDAP Directory Server) on Oracle Solaris 11 on a virtualized processor configuration achieved a record result on the Oracle Internet Directory benchmark.

  • The SPARC T7-2 server, virtualized to use a single processor, achieved world record performance running Oracle Internet Directory benchmark with 50M users.

  • The SPARC T7-2 server and Oracle Internet Directory using Oracle Database 12c running on Oracle Solaris 11 achieved a record result of 1.18M LDAP searches/sec with an average latency of 0.85 msec with 1000 clients.

  • The SPARC T7-2 server demonstrated 25% better throughput and 23% better latency for LDAP searches compared to a similarly configured SPARC T5 server benchmark environment.

  • Oracle Internet Directory achieved near-linear scalability on the virtualized single-processor domain on the SPARC T7-2 server, from 79K LDAP searches/sec with 2 cores to 1.18M LDAP searches/sec with 32 cores.

  • Oracle Internet Directory and the virtualized single processor domain on the SPARC T7-2 server achieved up to 22,408 LDAP modify/sec with an average latency of 2.23 msec for 50 clients.

Performance Landscape

A virtualized single SPARC M7 processor in a SPARC T7-2 server was used for the test results presented below. The SPARC T7-2 server and SPARC T5-2 server results were run as part of this benchmark effort. The remaining results were part of a previous benchmark effort.

Oracle Internet Directory Tests
System | Chips/Cores | Search ops/sec | Search lat (msec) | Modify ops/sec | Modify lat (msec) | Add ops/sec | Add lat (msec)
SPARC T7-2 | 1/32 | 1,177,947 | 0.85 | 22,400 | 2.2 | 1,436 | 11.1
SPARC T5-2 | 2/32 | 944,624 | 1.05 | 16,700 | 2.9 | 1,000 | 15.95
SPARC T4-4 | 4/32 | 682,000 | 1.46 | 12,000 | 4.0 | 835 | 19.0

Scaling runs were also made on the virtualized single processor domain on the SPARC T7-2 server.

Scaling of Search Tests – SPARC T7-2, One Processor
Cores | Clients | ops/sec | Latency (msec)
32 | 1000 | 1,177,947 | 0.85
24 | 1000 | 863,343 | 1.15
16 | 500 | 615,563 | 0.81
8 | 500 | 280,029 | 1.78
4 | 100 | 156,114 | 0.64
2 | 100 | 79,300 | 1.26
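
As a rough check on the near-linear scaling claim: perfect scaling from the 2-core result (79,300 searches/sec) would project 16 x 79,300 = 1,268,800 searches/sec at 32 cores; the measured 1,177,947 searches/sec is about 93% of that projection.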

Configuration Summary

System Under Test:

SPARC T7-2
2 x SPARC M7 processors, 4.13 GHz
512 GB memory
6 x 600 GB internal disks
1 x Sun Storage ZS3-2 (used for database and log files)
Flash storage (used for redo logs)
Oracle Solaris 11.3
Oracle Internet Directory 11g Release 1 PS7 (11.1.1.7.0)
Oracle Database 12c Enterprise Edition 12.1.0.2 (64-bit)

Benchmark Description

Oracle Internet Directory (OID) is Oracle's LDAPv3 Directory Server. The throughput for five key operations is measured: Search, Compare, Modify, Mix and Add.

LDAP Search Operations Test

This test scenario involved concurrent clients binding once to OID and then performing repeated LDAP Search operations. The salient characteristics of this test scenario are as follows:

  • SLAMD SearchRate job was used.
  • The BaseDN of the search is the root of the DIT, the scope is SUBTREE, the search filter is of the form UID=<value>, and DN and UID are the requested attributes.
  • Each LDAP search operation matches a single entry.
  • A total of 1000 concurrent clients were used, distributed across two client nodes.
  • Each client binds to OID once and performs repeated LDAP Search operations, each search looking up a unique entry such that no client looks up the same entry twice, no two clients look up the same entry, and all entries are searched in random order.
  • In one run of the test, random entries from the 50 Million entries are looked up in as many LDAP Search operations.
  • Test job was run for 60 minutes.
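
For illustration only (the benchmark drives this load through the SLAMD SearchRate job, not a command-line tool), a single equivalent search looks roughly like the following; the host, port, credentials, suffix and UID value are placeholders.

  ldapsearch -h oid-host -p 3060 -D "cn=orcladmin" -w secret123 \
    -b "dc=example,dc=com" -s sub "(uid=user0012345)" dn uid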

LDAP Compare Operations Test

This test scenario involved concurrent clients binding once to OID and then performing repeated LDAP Compare operations on the userpassword attribute. The salient characteristics of this test scenario are as follows:

  • SLAMD CompareRate job was used.
  • Each LDAP Compare operation matches the userpassword of a user.
  • A total of 1000 concurrent clients were used, distributed across two client nodes.
  • Each client binds to OID once and performs repeated LDAP compare operations.
  • In one run of the test, random entries from the 50 Million entries are compared in as many LDAP compare operations.
  • Test job was run for 60 minutes.

LDAP Modify Operations Test

This test scenario consisted of concurrent clients binding once to OID and then performing repeated LDAP Modify operations. The salient characteristics of this test scenario are as follows:

  • SLAMD LDAP modrate job was used.
  • A total of 50 concurrent LDAP clients were used.
  • Each client updates a unique entry each time and a total of 50 Million entries are updated.
  • Test job was run for 60 minutes.
  • Value length was set to 11.
  • The attribute being modified is not indexed.
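
Again for illustration only, one modify operation of the kind the SLAMD modrate job repeats could look like the sketch below; the DN, attribute and connection details are placeholders, and the 11-character value matches the value length noted above.

  # mod.ldif (modifies one non-indexed attribute with an 11-character value)
  dn: uid=user0012345,dc=example,dc=com
  changetype: modify
  replace: description
  description: abcdefghijk

  ldapmodify -h oid-host -p 3060 -D "cn=orcladmin" -w secret123 -f mod.ldif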

LDAP Mixed Load Test

The test scenario involved both the LDAP search and LDAP modify clients enumerated above.

  • The ratio involved 60% LDAP search clients, 30% LDAP bind and 10% LDAP modify clients.
  • A total of 1000 concurrent LDAP clients were used and were distributed on 2 client nodes.
  • Test job was run for 60 minutes.

LDAP Add Load Test

The test scenario involved concurrent clients adding new entries as follows.

  • The standard SLAMD add rate job was used.
  • A total of 500,000 entries were added.
  • A total of 16 concurrent LDAP clients were used.
  • SLAMD adds inetOrgPerson objectclass entries with 21 attributes (including operational attributes).

See Also

Disclosure Statement

Copyright 2015, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 25 October 2015.

Oracle FLEXCUBE Universal Banking: SPARC T7-1 World Record

Oracle's SPARC T7-1 servers running Oracle FLEXCUBE Universal Banking Release 12 along with Oracle Database 12c Enterprise Edition with Oracle Real Application Clusters on Oracle Solaris 11 produced record results for two processor solutions.

  • Two SPARC T7-1 servers each running Oracle FLEXCUBE Universal Banking Release 12 (v 12.0.1) and Oracle Real Application Clusters 12c database on Oracle Solaris 11 achieved record End of Year batch processing of 25 million accounts with 200 branches in 4 hrs 34 minutes (total of two processors).

  • A single SPARC T7-1 server running Oracle FLEXCUBE Universal Banking Release 12 processing 100 branches was able to complete the workload in similar time as the two node 200 branches End of Year workload, demonstrating good scaling of the application.

  • The customer-representative workload covered all 25 million accounts, including savings accounts, current accounts, loans and TD accounts, created on the basis of 25 million Customer IDs across 200 branches.

  • Oracle's SPARC M7 and T7 servers running Oracle Solaris 11, with built-in Silicon Secured Memory, and Oracle Database 12c can benefit global retail and corporate financial institutions running Oracle FLEXCUBE Universal Banking Release 12. The co-engineered Oracle software and hardware unlock agile capabilities demanded by modern business environments.

  • The SPARC T7-1 system and Oracle Solaris are able to provide a combination of uniquely essential characteristics that resonate with core values for a modern financial services institution.

  • The SPARC M7 processor based systems are capable of delivering higher performance and lower total cost of ownership (TCO) than older SPARC infrastructure, without introducing the unseen tax and risk of migrating applications away from older SPARC systems.

Performance Landscape

Oracle FLEXCUBE Universal Banking Release 12
End of Year Batch Processing
System | Branches | Time (minutes)
2 x SPARC T7-1 | 200 | 274
1 x SPARC T7-1 | 100 | 268

Configuration Summary

Systems Under Test:

2 x SPARC T7-1 each with
1 x SPARC M7 processor, 4.13 GHz
256 GB memory
Oracle Solaris 11.3 (11.3.0.27.0)
Oracle Database 12c (RAC/ASM 12.1.0.2 BP7)
Oracle FLEXCUBE Universal Banking Release 12

Storage Configuration:

Oracle ZFS Storage ZS4-4 appliance

Benchmark Description

The Oracle FLEXCUBE Universal Banking Release 12 benchmark models an actual customer bank with End of Cycle transaction batch jobs which typically execute during non-banking hours. This benchmark includes accrual for savings and term deposit accounts, interest capitalization for savings accounts, interest payout for term deposit accounts and consumer loan processing.

This benchmark helps banks refine their infrastructure requirements for the volumes and scale of operations for business expansion. The end of cycle can be year, month or day, with year having the most processing followed by month and then day.

See Also

Disclosure Statement

Copyright 2015, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 25 October 2015.

PeopleSoft Human Capital Management 9.1 FP2: SPARC M7-8 World Record

This result demonstrates how Oracle's SPARC M7-8 server using Oracle VM Server for SPARC (LDoms) provides mission critical enterprise virtualization.

  • The virtualized two-chip, 1 TB LDom of the SPARC M7-8 server set a world record two-chip PeopleSoft Human Capital Management (HCM) 9.1 FP2 benchmark result, supporting 35,000 HR Self-Service online users with response times under one second, while simultaneously running the Payroll batch workload.

  • The virtualized two-chip LDom of the SPARC M7-8 server demonstrated 4 times better Search and 6 times better Save average response times running nearly double the number of online users along with payroll batch, compared to the ten-chip x86 solution from Cisco.

  • Using only a single chip in the virtualized two-chip LDom on the SPARC M7-8 server, the batch-only run demonstrated 1.8 times better throughput (payments/hour) compared to a four-chip Cisco UCS B460 M4 server.

  • Using only a single chip in the virtualized two-chip LDom on the SPARC M7-8 server, the batch-only run demonstrated 2.3 times better throughput (payments/hour) compared to a nine-chip IBM zEnterprise z196 server (EC 2817-709, 9-way, 8943 MIPS).

  • This record result demonstrates that a two SPARC M7 processor LDom (in SPARC M7-8), can run the same number of online users as a dynamic domain (PDom) of eight SPARC M6 processors (in SPARC M6-32), with better online response times, batch elapsed times and batch throughput (payments/hour).

  • The SPARC M7-8 server provides enterprise applications high availability and security, where each application is executed on its own environment independent of the others.

Performance Landscape

The first table presents the combined results, running both the PeopleSoft HR Self-Service Online and Payroll Batch tests concurrently.

PeopleSoft HR Self-Service Online And Payroll Batch Using Oracle Database 11g
System (Processors) | Chips Used | Users | Search/Save | Batch Elapsed Time | Batch Pay/Hr
SPARC M7-8 (SPARC M7), LDom1 | 2 | 35,000 | 0.67 sec / 0.42 sec | 22.71 min | 1,322,272
SPARC M7-8 (SPARC M7), LDom2 | 2 | 35,000 | 0.85 sec / 0.50 sec | 22.96 min | 1,307,875
SPARC M6-32 (SPARC M6) | 8 | 35,000 | 1.80 sec / 1.12 sec | 29.2 min | 1,029,440
Cisco 1 x B460 M4 + 3 x B200 M3 (Intel E7-4890 v2, Intel E5-2697 v2) | 10 | 18,000 | 2.70 sec / 2.60 sec | 21.70 min | 1,383,816

The following results are from running only the PeopleSoft HR Self-Service Online test.

PeopleSoft HR Self-Service Online Using Oracle Database 11g
System (Processors) | Chips Used | Users | Search/Save Avg Response Times
SPARC M7-8 (SPARC M7), LDom1 | 2 | 40,000 | 0.55 sec / 0.33 sec
SPARC M7-8 (SPARC M7), LDom2 | 2 | 40,000 | 0.56 sec / 0.32 sec
SPARC M6-32 (SPARC M6) | 8 | 40,000 | 2.73 sec / 1.33 sec
Cisco 1 x B460 M4 + 3 x B200 M3 (Intel E7-4890 v2, Intel E5-2697 v2) | 10 | 20,000 | 0.35 sec / 0.17 sec

The following results are from running only the PeopleSoft Payroll Batch test. For the SPARC M7-8 server results, only one of the processors was used per LDom. This was accomplished using processor sets to further restrict the test to a single SPARC M7 processor.

PeopleSoft Payroll Batch Using Oracle Database 11g
System (Processors) | Chips Used | Batch Elapsed Time | Batch Pay/Hr
SPARC M7-8 (SPARC M7), LDom1 | 1 | 13.06 min | 2,299,296
SPARC M7-8 (SPARC M7), LDom2 | 1 | 12.85 min | 2,336,872
SPARC M6-32 (SPARC M6) | 2 | 18.27 min | 1,643,612
Cisco UCS B460 M4 (Intel E7-4890 v2) | 4 | 23.02 min | 1,304,655
IBM z196 zEnterprise (5.2 GHz, 8943 MIPS) | 9 | 30.50 min | 984,551

Configuration Summary

System Under Test:

SPARC M7-8 server with
8 x SPARC M7 processor (4.13 GHz)
4 TB memory
Virtualized as two Oracle VM Server for SPARC (LDom) each with
2 x SPARC M7 processor (4.13 GHz)
1 TB memory

Storage Configuration:

2 x Oracle ZFS Storage ZS3-2 appliance (DB Data) each with
40 x 300 GB 10K RPM SAS-2 HDD,
8 x Write Flash Accelerator SSD and
2 x Read Flash Accelerator SSD 1.6TB SAS
2 x Oracle Server X5-2L (DB redo logs & App object cache) each with
2 x Intel Xeon Processor E5-2630 v3
32 GB memory
4 x 1.6 TB NVMe SSD

Software Configuration:

Oracle Solaris 11.3
Oracle Database 11g Release 2 (11.2.0.3.0)
PeopleSoft Human Capital Management 9.1 FP2
PeopleSoft PeopleTools 8.52.03
Oracle Java SE 6u32
Oracle Tuxedo, Version 10.3.0.0, 64-bit, Patch Level 043
Oracle WebLogic Server 11g (10.3.5)

Benchmark Description

The PeopleSoft Human Capital Management benchmark simulates thousands of online employees, managers and Human Resource administrators executing transactions typical of a Human Resources Self Service application for the Enterprise. Typical transactions are: viewing paychecks, promoting and hiring employees, updating employee profiles, etc. The database tier uses a database instance of about 500 GB in size, containing information for 500,480 employees. The application tier for this test includes web and application server instances, specifically Oracle WebLogic Server 11g, PeopleSoft Human Capital Management 9.1 FP2 and Oracle Java SE 6u32.

Key Points and Best Practices

In the HR online plus Payroll batch run, each LDom had one Oracle Solaris Zone of 7 cores containing the Web tier, two Oracle Solaris Zones of 16 cores each containing the Application tier, and one Oracle Solaris Zone of 23 cores containing the Database tier; two cores were dedicated to network and disk interrupt handling.

In the HR online only run, each LDom had one Oracle Solaris Zone of 12 cores containing the Web tier, two Oracle Solaris Zones of 18 cores each containing the Application tier, and one Oracle Solaris Zone of 14 cores containing the Database tier; two cores were dedicated to network and disk interrupt handling.

In the Payroll batch only run, each LDom had one Oracle Solaris Zone of 31 cores containing the Database tier; one core was dedicated to disk interrupt handling.
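
A minimal sketch of pinning a zone to dedicated CPUs on Oracle Solaris follows; the zone name, path and CPU count are placeholders. Note that the dedicated-cpu resource counts hardware threads, so a 23-core SPARC M7 allocation corresponds to 23 x 8 = 184 threads.

  zonecfg -z dbzone "create; set zonepath=/zones/dbzone; add dedicated-cpu; set ncpus=184; end"
  zoneadm -z dbzone install
  zoneadm -z dbzone boot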

All database data files, recovery files and Oracle Clusterware files for the PeopleSoft test were created with the Oracle Automatic Storage Management (Oracle ASM) volume manager for the added benefit of the ease of management provided by Oracle ASM integrated storage management solution.

In the application tier on each LDom, 5 PeopleSoft application domains with 350 application servers (70 per domain) were hosted in two separate Oracle Solaris Zones for a total of 10 domains with 700 application server processes.

All PeopleSoft Application processes and the 32 Web Server JVM instances were executed in the Oracle Solaris FX scheduler class.

See Also

Disclosure Statement

Copyright 2015, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 10/25/2015.

Oracle E-Business Payroll Batch Extra-Large: SPARC T7-1 World Record

Oracle's SPARC T7-1 server set a world record running the Oracle E-Business Suite 12.1.3 Standard Extra-Large (250,000 Employees) Payroll (Batch) workload.

  • The SPARC T7-1 server produced a world record result of 1,527,494 employee records processed per hour (9.82 min elapsed time) on the Oracle E-Business Suite R12 (12.1.3) Extra-Large Payroll (Batch) benchmark.

  • The SPARC T7-1 server equipped with one 4.13 GHz SPARC M7 processor, demonstrated 36% better hourly employee throughput compared to a two-chip Cisco UCS B200 M4 (Intel Xeon E5-2697 v3).

  • The SPARC T7-1 server equipped with one 4.13 GHz SPARC M7 processor, demonstrated 40% better hourly employee throughput compared to two-chip IBM S824 (POWER8 using 12 cores total).

Performance Landscape

This is the world record result for the Payroll Extra-Large model using Oracle E-Business 12.1.3 workload.

Batch Workload: Payroll Extra-Large Model
System | Processor | Employees/Hr | Elapsed Time
SPARC T7-1 | 1 x SPARC M7 (4.13 GHz) | 1,527,494 | 9.82 minutes
Cisco UCS B200 M4 | 2 x Intel Xeon Processor E5-2697 v3 | 1,125,281 | 13.33 minutes
IBM S824 | 2 x POWER8 (3.52 GHz) | 1,090,909 | 13.75 minutes
Cisco UCS B200 M3 | 2 x Intel Xeon Processor E5-2697 v2 | 1,017,639 | 14.74 minutes
Cisco UCS B200 M3 | 2 x Intel Xeon Processor E5-2690 | 839,865 | 17.86 minutes
Sun Server X3-2L | 2 x Intel Xeon Processor E5-2690 | 789,473 | 19.00 minutes

Configuration Summary

Hardware Configuration:

SPARC T7-1 server
1 x SPARC M7 processor (4.13 GHz)
256 GB memory (16 x 16 GB)
Oracle ZFS Storage ZS3-2 appliance (DB Data storage) with
40 x 900 GB 10K RPM SAS-2 HDD,
8 x Write Flash Accelerator SSD and
2 x Read Flash Accelerator SSD 1.6 TB SAS
Oracle Flash Accelerator F160 PCIe Card (1.6 TB NVMe for DB Log storage)

Software Configuration:

Oracle Solaris 11.3
Oracle E-Business Suite R12 (12.1.3)
Oracle Database 11g (11.2.0.3.0)

Benchmark Description

The Oracle E-Business Suite Standard R12 Benchmark combines online transaction execution by simulated users with concurrent batch processing to model a typical scenario for a global enterprise. This benchmark ran one Batch component, Payroll, in the Extra-Large size.

Results can be published in four sizes and use one or more online/batch modules

  • X-large: Maximum online users running all business flows between 10,000 and 20,000; 750,000 order to cash lines per hour and 250,000 payroll checks per hour.
    • Order to Cash Online — 2400 users
      • The percentage across the 5 transactions in Order Management module is:
        • Insert Manual Invoice — 16.66%
        • Insert Order — 32.33%
        • Order Pick Release — 16.66%
        • Ship Confirm — 16.66%
        • Order Summary Report — 16.66%
    • HR Self-Service — 4000 users
    • Customer Support Flow — 8000 users
    • Procure to Pay — 2000 users
  • Large: 10,000 online users; 100,000 order to cash lines per hour and 100,000 payroll checks per hour.
  • Medium: up to 3000 online users; 50,000 order to cash lines per hour and 10,000 payroll checks per hour.
  • Small: up to 1000 online users; 10,000 order to cash lines per hour and 5,000 payroll checks per hour.

Key Points and Best Practices

  • All system optimizations are in the published report which is referenced in the See Also section below.

See Also

Disclosure Statement

Oracle E-Business X-Large Payroll Batch workload, SPARC T7-1, 4.13 GHz, 1 chip, 32 cores, 256 threads, 256 GB memory, elapsed time 9.82 minutes, 1,527,494 hourly employee throughput, Oracle Solaris 11.3, Oracle E-Business Suite 12.1.3, Oracle Database 11g Release 2, Results as of 10/25/2015.

Oracle E-Business Suite Applications R12.1.3 (OLTP X-Large): SPARC M7-8 World Record

Oracle's SPARC M7-8 server, using a four-chip Oracle VM Server for SPARC (LDom) virtualized server, produced a world record 20,000 users running the Oracle E-Business OLTP X-Large benchmark. The benchmark runs five Oracle E-Business online workloads concurrently: Customer Service, iProcurement, Order Management, Human Resources Self-Service, and Financials.

  • The virtualized four-chip LDom on the SPARC M7-8 was able to handle more users than the previous best result which used eight processors of Oracle's SPARC M6-32 server.

  • The SPARC M7-8 server using Oracle VM Server for SPARC provides enterprise applications high availability, where each application is executed on its own environment, insulated and independent of the others.

Performance Landscape

Oracle E-Business (3-tier) OLTP X-Large Benchmark
System | Chips | Total Online Users | Weighted Average Response Time (sec) | 90th Percentile Response Time (sec)
SPARC M7-8 | 4 | 20,000 | 0.70 | 1.13
SPARC M6-32 | 8 | 18,500 | 0.61 | 1.16

Breakdown of the total number of users by component.

Users per Component
Component | SPARC M7-8 | SPARC M6-32
Total Online Users | 20,000 users | 18,500 users
HR Self-Service | 5,000 users | 4,000 users
Order-to-Cash | 2,500 users | 2,300 users
iProcurement | 2,700 users | 2,400 users
Customer Service | 7,000 users | 7,000 users
Financial | 2,800 users | 2,800 users

Configuration Summary

System Under Test:

SPARC M7-8 server
8 x SPARC M7 processors (4.13 GHz)
4 TB memory
2 x 600 GB SAS-2 HDD
using a Logical Domain with
4 x SPARC M7 processors (4.13 GHz)
2 TB memory
2 x Sun Storage Dual 16Gb Fibre Channel PCIe Universal HBA
2 x Sun Dual Port 10GBase-T Adapter
Oracle Solaris 11.3
Oracle E-Business Suite 12.1.3
Oracle Database 11g Release 2

Storage Configuration:

4 x Oracle ZFS Storage ZS3-2 appliances each with
2 x Read Flash Accelerator SSD
1 x Storage Drive Enclosure DE2-24P containing:
20 x 900 GB 10K RPM SAS-2 HDD
4 x Write Flash Accelerator SSD
1 x Sun Storage Dual 8Gb FC PCIe HBA
Used for Database files, Zones OS, EBS Mid-Tier Apps software stack
and db-tier Oracle Server
2 x Sun Server X4-2L server with
2 x Intel Xeon Processor E5-2650 v2
128 GB memory
1 x Sun Storage 6Gb SAS PCIe RAID HBA
4 x 400 GB SSD
14 x 600 GB HDD
Used for Redo log files, db backup storage.

Benchmark Description

The Oracle E-Business OLTP X-Large benchmark simulates thousands of online users executing transactions typical of an internal Enterprise Resource Processing, simultaneously executing five application modules: Customer Service, Human Resources Self Service, iProcurement, Order Management and Financial.

Each database tier uses a database instance of about 600 GB in size, supporting thousands of application users, accessing hundreds of objects (tables, indexes, SQL stored procedures, etc.).

Key Points and Best Practices

This test demonstrates virtualization technologies running concurrently various Oracle multi-tier business critical applications and databases on four SPARC M7 processors contained in a single SPARC M7-8 server supporting thousands of users executing a high volume of complex transactions with constrained (<1 sec) weighted average response time.

The Oracle E-Business LDom is further configured using Oracle Solaris Zones.

This result of 20,000 users was achieved by load balancing the Oracle E-Business Suite Applications 12.1.3 five online workloads across two Oracle Solaris processor sets and redirecting all network interrupts to a dedicated third processor set.

Each applications processor set (set-1 and set-2) was running concurrently two Oracle E-Business Suite Application servers and two database servers instances, each within its own Oracle Solaris Zone (4 x Zones per set).

Each application server network interface (to a client) was configured to map with the locality group associated to the CPUs processing the related workload, to guarantee memory locality of network structures and application servers hardware resources.

All external storage was connected with at least two paths to the host multipath-capable fibre channel controller ports and Oracle Solaris I/O multipathing feature was enabled.
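
One way to approximate the layout described above on Oracle Solaris is sketched below; the CPU ranges and script name are placeholders, and this is only an illustration of the general mechanism rather than the published tuning. Marking the two application processor sets' CPUs as no-intr pushes device interrupts onto the remaining CPUs, which then serve as the dedicated interrupt-handling set.

  psrset -c 0-255                    # application processor set-1 (prints a set ID, e.g. 1)
  psrset -c 256-511                  # application processor set-2 (e.g. set ID 2)
  psradm -i $(seq 0 511)             # disable interrupt handling on those CPUs
  psrset -e 1 ./start_apps_tier.sh   # launch an application server instance inside set-1 (placeholder script)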

See Also

Disclosure Statement

Oracle E-Business Suite R12 extra-large multiple-online module benchmark, SPARC M7-8, SPARC M7, 4.13 GHz, 4 chips, 128 cores, 1024 threads, 2 TB memory, 20,000 online users, average response time 0.70 sec, 90th percentile response time 1.13 sec, Oracle Solaris 11.3, Oracle Solaris Zones, Oracle VM Server for SPARC, Oracle E-Business Suite 12.1.3, Oracle Database 11g Release 2, Results as of 10/25/2015.

Oracle E-Business Order-To-Cash Batch Large: SPARC T7-1 World Record

Oracle's SPARC T7-1 server set a world record running the Oracle E-Business Suite 12.1.3 Standard Large (100,000 Order/Inventory Lines) Order-To-Cash (Batch) workload.

  • The SPARC T7-1 server produced a world record hourly order line throughput of 273,973 per hour (21.90 min elapsed time) on the Oracle E-Business Suite R12 (12.1.3) Large Order-To-Cash (Batch) benchmark using a SPARC T7-1 server for the database and application tiers running Oracle Database 11g on Oracle Solaris 11.

  • The SPARC T7-1 server demonstrated 12% better hourly order line throughput compared to a two-chip Cisco UCS B200 M4 (Intel Xeon Processor E5-2697 v3).

Performance Landscape

Results for the Oracle E-Business 12.1.3 Order-To-Cash Batch Large model workload.

Batch Workload: Order-To-Cash Large Model
System | Processor | Order Lines/Hr | Elapsed Time (min)
SPARC T7-1 | 1 x SPARC M7 processor | 273,973 | 21.90
Cisco UCS B200 M4 | 2 x Intel Xeon Processor E5-2697 v3 | 243,803 | 24.61
Cisco UCS B200 M3 | 2 x Intel Xeon Processor E5-2690 | 232,739 | 25.78

Configuration Summary

Hardware Configuration:

SPARC T7-1 server with
1 x SPARC M7 processor (4.13 GHz)
256 GB memory (16 x 16 GB)
Oracle ZFS Storage ZS3-2 appliance (DB Data storage) with
40 x 900 GB 10K RPM SAS-2 HDD,
8 x Write Flash Accelerator SSD and
2 x Read Flash Accelerator SSD 1.6TB SAS
Oracle Flash Accelerator F160 PCIe Card (1.6 TB NVMe for DB Log storage)

Software Configuration:

Oracle Solaris 11.3
Oracle E-Business Suite R12 (12.1.3)
Oracle Database 11g (11.2.0.3.0)

Benchmark Description

The Oracle E-Business Suite Standard R12 Benchmark combines online transaction execution by simulated users with concurrent batch processing to model a typical scenario for a global enterprise. This benchmark ran one Batch component, Order-To-Cash, in the Large size.

Results can be published in four sizes and use one or more online/batch modules

  • X-large: Maximum online users running all business flows between 10,000 and 20,000; 750,000 order to cash lines per hour and 250,000 payroll checks per hour.
    • Order to Cash Online — 2400 users
      • The percentage across the 5 transactions in Order Management module is:
        • Insert Manual Invoice — 16.66%
        • Insert Order — 32.33%
        • Order Pick Release — 16.66%
        • Ship Confirm — 16.66%
        • Order Summary Report — 16.66%
    • HR Self-Service — 4000 users
    • Customer Support Flow — 8000 users
    • Procure to Pay — 2000 users
  • Large: 10,000 online users; 100,000 order to cash lines per hour and 100,000 payroll checks per hour.
  • Medium: up to 3000 online users; 50,000 order to cash lines per hour and 10,000 payroll checks per hour.
  • Small: up to 1000 online users; 10,000 order to cash lines per hour and 5,000 payroll checks per hour.

Key Points and Best Practices

  • All system optimizations are in the published report; see the link in the See Also section below.

See Also

Disclosure Statement

Oracle E-Business Large Order-To-Cash Batch workload, SPARC T7-1, 4.13 GHz, 1 chip, 32 cores, 256 threads, 256 GB memory, elapsed time 21.90 minutes, 273,973 hourly order line throughput, Oracle Solaris 11.3, Oracle E-Business Suite 12.1.3, Oracle Database 11g Release 2, Results as of 10/25/2015.

SAP Two-Tier Standard Sales and Distribution SD Benchmark: SPARC T7-2 World Record 2 Processors

Oracle's SPARC T7-2 server produced a world record result for two-processor systems on the SAP two-tier Sales and Distribution (SD) Standard Application Benchmark using SAP Enhancement Package 5 for SAP ERP 6.0 (2 chips / 64 cores / 512 threads).

  • The SPARC T7-2 server achieved 30,800 SAP SD benchmark users running the two-tier SAP Sales and Distribution (SD) Standard Application Benchmark using SAP Enhancement Package 5 for SAP ERP 6.0.

  • The SPARC T7-2 server achieved 1.9 times more users than the Dell PowerEdge R730 server result.

  • The SPARC T7-2 server achieved 1.5 times more users than the IBM Power System S824 server result.

  • The SPARC T7-2 server achieved 1.9 times more users than the HP ProLiant DL380 Gen9 server result.

  • The SPARC T7-2 server result was run with Oracle Solaris 11 and used Oracle Database 12c.

Performance Landscape

The table below lists SAP SD two-tier results in decreasing performance order for the leading two-processor systems, plus the four-processor IBM Power System S824, all using SAP Enhancement Package 5 for SAP ERP 6.0 (the current version of the benchmark as of May 2012).

SAP SD Two-Tier Benchmark
System | Processor | OS | Database | Users | Resp Time (sec) | Version | Cert#
SPARC T7-2 | 2 x SPARC M7 (32 cores each) | Oracle Solaris 11 | Oracle Database 12c | 30,800 | 0.96 | EHP5 | 2015050
IBM Power S824 | 4 x POWER8 (6 cores each) | AIX 7 | DB2 10.5 | 21,212 | 0.98 | EHP5 | 2014016
Dell PowerEdge R730 | 2 x Intel E5-2699 v3 (18 cores each) | Red Hat Enterprise Linux 7 | SAP ASE 16 | 16,500 | 0.99 | EHP5 | 2014033
HP ProLiant DL380 Gen9 | 2 x Intel E5-2699 v3 (18 cores each) | Red Hat Enterprise Linux 6.5 | SAP ASE 16 | 16,101 | 0.99 | EHP5 | 2014032

Version – Version of SAP, EHP5 refers to SAP ERP 6.0 Enhancement Package 5 for SAP ERP 6.0

The number of cores shown is per chip; to get system totals, multiply by the number of chips.

Complete benchmark results may be found at the SAP benchmark website http://www.sap.com/benchmark.

Configuration Summary and Results

Database/Application Server:

1 x SPARC T7-2 server with
2 x SPARC M7 processors (4.13 GHz, total of 2 processors / 64 cores / 512 threads)
1 TB memory
Oracle Solaris 11.3
Oracle Database 12c

Database Storage:
3 x Sun Server X3-2L each with
2 x Intel Xeon Processors E5-2609 (2.4 GHz)
16 GB memory
4 x Sun Flash Accelerator F40 PCIe Card
12 x 3 TB SAS disks
Oracle Solaris 11

REDO log Storage:
1 x Pillar FS-1 Flash Storage System, with
2 x FS1-2 Controller (Netra X3-2)
2 x FS1-2 Pilot (X4-2)
4 x DE2-24P Disk enclosure
96 x 300 GB 10000 RPM SAS Disk Drive Assembly

Certified Results (published by SAP)

Number of SAP SD benchmark users: 30,800
Average dialog response time: 0.96 seconds
Throughput:
  Fully processed order line items per hour: 3,372,000
  Dialog steps per hour: 10,116,000
  SAPS: 168,600
Average database request time (dialog/update): 0.022 sec / 0.047 sec
SAP Certification: 2015050
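
For reference, SAP defines 100 SAPS as 2,000 fully business-processed order line items per hour, so the certified throughput above corresponds to 3,372,000 / 2,000 x 100 = 168,600 SAPS, matching the reported SAPS figure.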

Benchmark Description

The SAP Standard Application SD (Sales and Distribution) Benchmark is an ERP business test that is indicative of full business workloads of complete order processing and invoice processing, and demonstrates the ability to run both the application and database software on a single system. The SAP Standard Application SD Benchmark represents the critical tasks performed in real-world ERP business environments.

SAP is one of the premier world-wide ERP application providers, and maintains a suite of benchmark tests to demonstrate the performance of competitive systems on the various SAP products.

See Also

Disclosure Statement

Two-tier SAP Sales and Distribution (SD) standard application benchmarks, SAP Enhancement Package 5 for SAP ERP 6.0 as of 10/23/15:

SPARC T7-2 (2 processors, 64 cores, 512 threads) 30,800 SAP SD users, 2 x 4.13 GHz SPARC M7, 1 TB memory, Oracle Database 12c, Oracle Solaris 11, Cert# 2015050.
IBM Power System S824 (4 processors, 24 cores, 192 threads) 21,212 SAP SD users, 4 x 3.52 GHz POWER8, 512 GB memory, DB2 10.5, AIX 7, Cert#2014016
Dell PowerEdge R730 (2 processors, 36 cores, 72 threads) 16,500 SAP SD users, 2 x 2.3 GHz Intel Xeon Processor E5-2699 v3, 256 GB memory, SAP ASE 16, RHEL 7, Cert#2014033
HP ProLiant DL380 Gen9 (2 processors, 36 cores, 72 threads) 16,101 SAP SD users, 2 x 2.3 GHz Intel Xeon Processor E5-2699 v3, 256 GB memory, SAP ASE 16, RHEL 6.5, Cert#2014032

SAP and R/3 are registered trademarks of SAP AG in Germany and other countries. More information at www.sap.com/benchmark.

Friday Apr 03, 2015

Oracle Server X5-2 Produces World Record 2-Chip Single Application Server SPECjEnterprise2010 Result

Two Oracle Server X5-2 systems, using the Intel Xeon E5-2699 v3 processor, produced a World Record x86 two-chip single application server SPECjEnterprise2010 benchmark result of 21,504.30 SPECjEnterprise2010 EjOPS. One Oracle Server X5-2 ran the application tier and the second Oracle Server X5-2 was used for the database tier.

  • The Oracle Server X5-2 system demonstrated 11% better performance when compared to the IBM X3650 M5 server result of 19,282.14 SPECjEnterprise2010 EjOPS.

  • The Oracle Server X5-2 system demonstrated 1.9x better performance when compared to the previous generation Sun Server X4-2 server result of 11,259.88 SPECjEnterprise2010 EjOPS.

  • This result used Oracle WebLogic Server 12c, Java HotSpot(TM) 64-Bit Server VM 1.8.0_40, Oracle Database 12c, and Oracle Linux.

Performance Landscape

Complete benchmark results are at the SPEC website, SPECjEnterprise2010 Results. The table below shows the top single application server, two-chip x86 results.

SPECjEnterprise2010 Performance Chart
as of 4/1/2015
Submitter EjOPS* Application Server Database Server
Oracle 21,504.30 1x Oracle Server X5-2
2x 2.3 GHz Intel Xeon E5-2699 v3
Oracle WebLogic 12c (12.1.3)
1x Oracle Server X5-2
2x 2.3 GHz Intel Xeon E5-2699 v3
Oracle Database 12c (12.1.0.2)
IBM 19,282.14 1x IBM X3650 M5
2x 2.6 GHz Intel Xeon E5-2697 v3
WebSphere Application Server V8.5
1x IBM X3850 X6
4x 2.8 GHz Intel Xeon E7-4890 v2
IBM DB2 10.5
Oracle 11,259.88 1x Sun Server X4-2
2x 2.7 GHz Intel Xeon E5-2697 v2
Oracle WebLogic 12c (12.1.2)
1x Sun Server X4-2L
2x 2.7 GHz Intel Xeon E5-2697 v2
Oracle Database 12c (12.1.0.1)

* SPECjEnterprise2010 EjOPS, bigger is better.

Configuration Summary

Application Server:

1 x Oracle Server X5-2
2 x 2.3 GHz Intel Xeon E5-2699 v3 processors
256 GB memory
3 x 10 GbE NIC
Oracle Linux 6 Update 5 (kernel-2.6.39-400.243.1.el6uek.x86_64)
Oracle WebLogic Server 12c (12.1.3)
Java HotSpot(TM) 64-Bit Server VM on Linux, version 1.8.0_40 (Java SE 8 Update 40)
BIOS SW 1.2

Database Server:

1 x Oracle Server X5-2
2 x 2.3 GHz Intel Xeon E5-2699 v3 processors
512 GB memory
2 x 10 GbE NIC
1 x 16 Gb FC HBA
2 x Oracle Server X5-2L Storage
Oracle Linux 6 Update 5 (kernel-3.8.13-16.2.1.el6uek.x86_64)
Oracle Database 12c Enterprise Edition Release 12.1.0.2

Benchmark Description

SPECjEnterprise2010 is the third generation of the SPEC organization's J2EE end-to-end industry standard benchmark application. The SPECjEnterprise2010 benchmark has been designed and developed to cover the Java EE 5 specification's significantly expanded and simplified programming model, highlighting the major features used by developers in the industry today. This provides a real world workload driving the Application Server's implementation of the Java EE specification to its maximum potential and allowing maximum stressing of the underlying hardware and software systems.

The workload consists of an end-to-end web-based order processing domain, an RMI and Web Services driven manufacturing domain, and a supply chain model utilizing document-based Web Services. The application is a collection of Java classes, Java Servlets, Java Server Pages, Enterprise Java Beans, Java Persistence entities (POJOs) and Message Driven Beans.

The SPECjEnterprise2010 benchmark heavily exercises all parts of the underlying infrastructure that make up the application environment, including hardware, JVM software, database software, JDBC drivers, and the system network.

The primary metric of the SPECjEnterprise2010 benchmark is jEnterprise Operations Per Second ("SPECjEnterprise2010 EjOPS"). This metric is calculated by adding the metrics of the Dealership Management Application in the Dealer Domain and the Manufacturing Application in the Manufacturing Domain. There is no price/performance metric in this benchmark.

Key Points and Best Practices

  • Four Oracle WebLogic server instances were started using numactl, binding two instances per chip (see the illustrative sketch after this list).
  • Four Oracle database listener processes were started, two processes bound per processor.
  • Additional tuning information is in the report at http://spec.org.
  • COD (Cluster on Die) is enabled in the BIOS on the application server.
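
The numactl binding noted above can be illustrated with a short sketch. This is not taken from the published submission; the start script path and instance names are hypothetical, and it assumes that with Cluster-on-Die enabled each of the two chips exposes two NUMA nodes, so two WebLogic instances per chip amounts to one instance per node.

    # Illustrative sketch: start one WebLogic JVM per NUMA node with numactl.
    # Paths, instance names and the node count are assumptions, not the
    # published configuration.
    import subprocess

    NUM_NODES = 4   # 2 chips x 2 Cluster-on-Die nodes per chip (assumption)
    START_SCRIPT = "/opt/wls/domains/specj/bin/startManagedWebLogic.sh"  # hypothetical

    procs = []
    for node in range(NUM_NODES):
        cmd = [
            "numactl",
            f"--cpunodebind={node}",   # run this JVM only on the node's cores
            f"--membind={node}",       # allocate its heap from local memory
            START_SCRIPT, f"appserver{node + 1}",
        ]
        procs.append(subprocess.Popen(cmd))

    for p in procs:
        p.wait()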

See Also

Disclosure Statement

SPEC and the benchmark name SPECjEnterprise are registered trademarks of the Standard Performance Evaluation Corporation. Oracle Server X5-2, 21,504.30 SPECjEnterprise2010 EjOPS; IBM System X3650 M5, 19,282.14 SPECjEnterprise2010 EjOPS. Sun Server X4-2, 11,259.88 SPECjEnterprise2010 EjOPS; Results from www.spec.org as of 4/1/2015.

Thursday Mar 27, 2014

SPARC M6-32 Produces SAP SD Two-Tier Benchmark World Record for 32-Processor Systems

Oracle's SPARC M6-32 server produced a world record result for 32-processors on the SAP two-tier Sales and Distribution (SD) Standard Application Benchmark using SAP Enhancement Package 5 for SAP ERP 6.0 (32 chips / 384 cores / 3072 threads).

  • SPARC M6-32 server achieved 140,000 SAP SD benchmark users with a low average dialog response time of 0.58 seconds running the SAP two-tier Sales and Distribution (SD) Standard Application Benchmark using SAP Enhancement package 5 for SAP ERP 6.0.

  • The SPARC M6-32 delivered 2.5 times more users than the IBM Power 780 result using SAP Enhancement Package 5 for SAP ERP 6.0. The IBM result also had 1.7 times worse average dialog response time compared to the SPARC M6-32 server result.

  • The SPARC M6-32 delivered 3.0 times more users than the Fujitsu PRIMEQUEST 2800E (with Intel Xeon E7-8890 v2 processors) result. The Fujitsu result also had 1.7 times worse average dialog response time compared to the SPARC M6-32 server result.

  • The SPARC M6-32 server solution was run with Oracle Solaris 11 and used Oracle Database 11g.

Performance Landscape

SAP-SD 2-Tier Performance Table (in decreasing performance order), including results using SAP Enhancement Package 4 for SAP ERP 6.0 (the old version of the benchmark, obsolete at the end of April 2012) and SAP Enhancement Package 5 for SAP ERP 6.0 (the current version of the benchmark as of May 2012).

System
Processor
Ch / Co / Th — Memory
OS
Database
Users Resp Time
(sec)
Version Cert#
Fujitsu SPARC M10-4S
SPARC64 X @3.0 GHz
40 / 640 / 1280 — 10 TB
Solaris 11
Oracle 11g
153,000 0.87 EHP5 2013014
SPARC M6-32 Server
SPARC M6 @3.6 GHz
32 / 384 / 3072 — 16 TB
Solaris 11
Oracle 11g
140,000 0.58 EHP5 2014008
IBM Power 795
POWER7 @4 GHz
32 / 256 / 1024 — 4 TB
AIX 7.1
DB2 9.7
126,063 0.98 EHP4 2010046
IBM Power 780
POWER7+ @3.72 GHz
12 / 96 / 384 — 1536 GB
AIX 7.1
DB2 10
57,024 0.98 EHP5 2012033
Fujitsu PRIMEQUEST 2800E
Intel Xeon E7-8890 v2 @2.8 GHz
8 / 120 / 240 — 1024 GB
Windows Server 2012 SE
SQL Server 2012
47,500 0.97 EHP5 2014003
IBM Power 760
POWER7+ @3.41 GHz
8 / 48 / 192 — 1024 GB
AIX 7.1
DB2 10
25,488 0.99 EHP5 2013004

Version – Version of SAP; EHP5 refers to SAP Enhancement Package 5 for SAP ERP 6.0 and EHP4 refers to SAP Enhancement Package 4 for SAP ERP 6.0

Ch / Co / Th – Total chips, cores and threads

Complete benchmark results may be found at the SAP benchmark website http://www.sap.com/benchmark.

Configuration Summary and Results

Hardware Configuration:

1 x SPARC M6-32 server with
32 x 3.6 GHz SPARC M6 processors (total of 32 processors / 384 cores / 3072 threads)
16 TB memory
6 x Sun Server X3-2L each with
2 x Intel Xeon E5-2609 2.4 GHz Processors
16 GB Memory
4 x Flash Accelerator F40
12 x 3 TB SAS disks
2 x Sun Server X3-2L each with
2 x Intel Xeon E5-2609 2.4 GHz Processors
16 GB Memory
1 x 8-Port 6Gbps SAS-2 RAID PCI Express HBA
12 x 3 TB SAS disks

Software Configuration:

Oracle Solaris 11
SAP Enhancement Package 5 for SAP ERP 6.0
Oracle Database 11g Release 2

Certified Results (published by SAP)

Number of SAP SD benchmark users: 140,000
Average dialog response time: 0.58 seconds
Throughput:
  Fully processed order line items per hour: 15,878,670
  Dialog steps per hour: 47,636,000
  SAPS: 793,930
Average database request time (dialog/update): 0.020 sec / 0.041 sec
SAP Certification: 2014008

Benchmark Description

The SAP Standard Application SD (Sales and Distribution) Benchmark is an ERP business test that is indicative of full business workloads of complete order processing and invoice processing, and demonstrates the ability to run both the application and database software on a single system. The SAP Standard Application SD Benchmark represents the critical tasks performed in real-world ERP business environments.

SAP is one of the premier world-wide ERP application providers, and maintains a suite of benchmark tests to demonstrate the performance of competitive systems on the various SAP products.

See Also

Disclosure Statement

Two-tier SAP Sales and Distribution (SD) standard application benchmarks, SAP Enhancement Package 5 for SAP ERP 6.0 as of 3/26/14:

SPARC M6-32 (32 processors, 384 cores, 3072 threads) 140,000 SAP SD users, 32 x 3.6 GHz SPARC M6, 16 TB memory, Oracle Database 11g, Oracle Solaris 11, Cert# 2014008. Fujitsu SPARC M10-4S (40 processors, 640 cores, 1280 threads) 153,000 SAP SD users, 40 x 3.0 GHz SPARC64 X, 10 TB memory, Oracle Database 11g, Oracle Solaris 11, Cert# 2013014. IBM Power 780 (12 processors, 96 cores, 384 threads) 57,024 SAP SD users, 12 x 3.72 GHz IBM POWER7+, 1536 GB memory, DB2 10, AIX 7.1, Cert#2012033. Fujitsu PRIMEQUEST 2800E (8 processors, 120 cores, 240 threads) 47,500 SAP SD users, 8 x 2.8 GHz Intel Xeon Processor E7-8890 v2, 1024 GB memory, SQL Server 2012, Windows Server 2012 Standard Edition, Cert# 2014003. IBM Power 760 (8 processors, 48 cores, 192 threads) 25,488 SAP SD users, 8 x 3.41 GHz IBM POWER7+, 1024 GB memory, DB2 10, AIX 7.1, Cert#2013004.

Two-tier SAP Sales and Distribution (SD) standard application benchmarks, SAP Enhancement Package 4 for SAP ERP 6.0 as of 3/26/14:

IBM Power 795 (32 processors, 256 cores, 1024 threads) 126,063 SAP SD users, 32 x 4 GHz IBM POWER7, 4 TB memory, DB2 9.7, AIX 7.1, Cert#2010046.

SAP, R/3, reg TM of SAP AG in Germany and other countries. More info www.sap.com/benchmark

Wednesday Mar 05, 2014

SPARC T5-2 Delivers World Record 2-Socket SPECvirt_sc2010 Benchmark

Oracle's SPARC T5-2 server delivered a world record two-chip SPECvirt_sc2010 result of 4270 @ 264 VMs, establishing the performance superiority of SPARC T5 processors in virtualized environments with Oracle Solaris 11, which includes Oracle VM Server for SPARC and Oracle Solaris Zones as standard virtualization products.

  • The SPARC T5-2 server has 2.3x better performance than an HP BL620c G7 blade server (with two Westmere EX processors) which used VMware ESX 4.1 U1 virtualization software (best SPECvirt_sc2010 result on two-chip servers using VMware software).

  • The SPARC T5-2 server has 1.6x better performance than an IBM Flex System x240 server (with two Sandy Bridge processors) which used Kernel-based Virtual Machines (KVM).

  • This is the first SPECvirt_sc2010 result using Oracle production level software: Oracle Solaris 11.1, Oracle WebLogic Server 10.3.6, Oracle Database 11g Enterprise Edition, Oracle iPlanet Web Server 7 and Oracle Java Development Kit 7 (JDK). The only exception is the Dovecot mail server.

Performance Landscape

Complete benchmark results are at the SPEC website, SPECvirt_sc2010 Results. The following table highlights the leading two-chip results for the benchmark, bigger is better.

SPECvirt_sc2010
Leading Two-Chip Results
System Processor Result @ VMs Virtualization Software
SPARC T5-2 2 x SPARC T5, 3.6 GHz 4270 @ 264 Oracle VM Server for SPARC 3.0
Oracle Solaris Zones
IBM Flex System x240 2 x Intel E5-2690, 2.9 GHz 2741 @ 168 Red Hat Enterprise Linux 6.4 KVM
HP ProLiant BL620c G7 2 x Intel E7-2870, 2.4 GHz 1878 @ 120 VMware ESX 4.1 U1

Configuration Summary

System Under Test Highlights:

1 x SPARC T5-2 server, with
2 x 3.6 GHz SPARC T5 processors
1 TB memory
Oracle Solaris 11.1
Oracle VM Server for SPARC 3.0
Oracle iPlanet Web Server 7.0.15
Oracle PHP 5.3.14
Dovecot 2.1.17
Oracle WebLogic Server 11g (10.3.6)
Oracle Database 11g (11.2.0.3)
Java HotSpot(TM) 64-Bit Server VM on Solaris, version 1.7.0_51

Benchmark Description

The SPECvirt_sc2010 benchmark is SPEC's first benchmark addressing performance of virtualized systems. It measures the end-to-end performance of all system components that make up a virtualized environment.

The benchmark utilizes several previous SPEC benchmarks representing tasks commonly run in virtualized environments. The workloads included are derived from SPECweb2005, SPECjAppServer2004 and SPECmail2008. Scaling of the benchmark is achieved by running additional sets of virtual machines until overall throughput reaches a peak. The benchmark includes quality-of-service criteria that must be met for a successful run.

Key Points and Best Practices

  • The SPARC T5-2 server, running Oracle Solaris 11.1, utilizes embedded virtualization products such as Oracle VM Server for SPARC and Oracle Solaris Zones, which provide a low-overhead, flexible, scalable and manageable virtualization environment.

  • In order to provide a high level of data integrity and availability, all the benchmark data sets are stored on mirrored (RAID1) storage.

See Also

Disclosure Statement

SPEC and the benchmark name SPECvirt_sc are registered trademarks of the Standard Performance Evaluation Corporation. Results from www.spec.org as of 3/5/2014. SPARC T5-2, SPECvirt_sc2010 4270 @ 264 VMs; IBM Flex System x240, SPECvirt_sc2010 2741 @ 168 VMs; HP Proliant BL620c G7, SPECvirt_sc2010 1878 @ 120 VMs.

Tuesday Feb 18, 2014

SPARC T5-2 Produces SPECjbb2013-MultiJVM World Record for 2-Chip Systems

From www.spec.org

Defects Identified in SPECjbb®2013

December 9, 2014 - SPEC has identified a defect in its SPECjbb®2013 benchmark suite. SPEC has suspended sales of the benchmark software and is no longer accepting new submissions of SPECjbb®2013 results for publication on SPEC's website. Current SPECjbb®2013 licensees will receive a free copy of the new version of the benchmark when it becomes available.

SPEC is advising SPECjbb®2013 licensees and users of the SPECjbb®2013 metrics that the recently discovered defect impacts the comparability of results. This defect can significantly impact the amount of work done during the measurement period, resulting in an inflated SPECjbb®2013 metric. SPEC recommends that users not utilize these results for system comparisons without a full understanding of the impact of these defects on each benchmark result.

Additional information is available here.

The SPECjbb2013 benchmark shows modern Java application performance. Oracle's SPARC T5-2 set a two-chip world record, which is 1.8x faster than the best two-chip x86-based server. Using Oracle Solaris and Oracle Java, Oracle delivered this two-chip world record result on the MultiJVM SPECjbb2013 metric.

  • The SPARC T5-2 server achieved 114,492 SPECjbb2013-MultiJVM max-jOPS and 43,963 SPECjbb2013-MultiJVM critical-jOPS on the SPECjbb2013 benchmark. This result is a two-chip world record.

  • The SPARC T5-2 server running SPECjbb2013 is 1.8x faster than the Cisco UCS C240 M3 server (2.7 GHz Intel Xeon E5-2697 v2) based on both the SPECjbb2013-MultiJVM max-jOPS and SPECjbb2013-MultiJVM critical-jOPS metrics.

  • The SPARC T5-2 server running SPECjbb2013 is 2x faster than the HP ProLiant ML350p Gen8 server (2.7 GHz Intel Xeon E5-2697 v2) based on SPECjbb2013-MultiJVM max-jOPS and 1.3x faster based on SPECjbb2013-MultiJVM critical-jOPS.

  • The new Oracle results were obtained using Oracle Solaris 11 along with Oracle Java SE 8 on the SPARC T5-2 server.

  • The SPARC T5-2 server running SPECjbb2013 on a per chip basis is 1.3x faster than the NEC Express5800/A040b server (2.8 GHz Intel Xeon E7-4890 v2) based on both the SPECjbb2013-MultiJVM max-jOPS and SPECjbb2013-MultiJVM critical-jOPS metrics.

  • There are no IBM POWER7 or POWER7+ based server results on the SPECjbb2013 benchmark. IBM has published results for POWER7+ based servers on SPECjbb2005, which was retired by SPEC in 2013.

Performance Landscape

Results of SPECjbb2013 from www.spec.org as of March 6, 2014. These are the leading 2-chip SPECjbb2013 MultiJVM results.

SPECjbb2013 - 2-Chip MultiJVM Results
System Processor SPECjbb2013-MultiJVM JDK
max-jOPS critical-jOPS
SPARC T5-2 2xSPARC T5, 3.6 GHz 114,492 43,963 Oracle Java SE 8
Cisco UCS C240 M3 2xIntel E5-2697 v2, 2.7 GHz 63,079 23,797 Oracle Java SE 7u45
HP ProLiant ML350p Gen8 2xIntel E5-2697 v2, 2.7 GHz 62,393 24,310 Oracle Java SE 7u45
IBM System x3650 M4 BD 2xIntel E5-2695 v2, 2.4 GHz 59,124 22,275 IBM SDK V7 SR6 (*)
HP ProLiant ML350p Gen8 2xIntel E5-2697 v2, 2.7 GHz 57,594 32,103 Oracle Java SE 7u40
HP ProLiant BL460c Gen8 2xIntel E5-2697 v2, 2.7 GHz 56,367 30,078 Oracle Java SE 7u40
Sun Server X4-2, DDR3-1600 2xIntel E5-2697 v2, 2.7 GHz 52,664 20,553 Oracle Java SE 7u40
HP ProLiant DL360e Gen8 2xIntel E5-2470 v2, 2.4 GHz 48,772 17,915 Oracle Java SE 7u40

* IBM SDK V7 SR6 – IBM SDK, Java Technology Edition, Version 7, Service Refresh 6

The following table compares the SPARC T5 processor to the Intel E7 v2 processor.

SPECjbb2013 - Results Using JDK 8
Per Chip Comparison
System SPECjbb2013-MultiJVM SPECjbb2013-MultiJVM/Chip JDK
max-jOPS critical-jOPS max-jOPS critical-jOPS
SPARC T5-2
2xSPARC T5, 3.6 GHz
114,492 43,963 57,246 21,981 Oracle Java SE 8
NEC Express5800/A040b
4xIntel E7-4890 v2, 2.8 GHz
177,753 65,529 44,438 16,382 Oracle Java SE 8

SPARC per Chip Advantage 1.29x 1.34x
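
The per-chip columns above are simply the published totals divided by each system's chip count; the small check below reproduces the normalization and the quoted per-chip advantage.

    # Per-chip normalization of the published SPECjbb2013-MultiJVM results.
    t5_max, t5_crit, t5_chips = 114_492, 43_963, 2        # SPARC T5-2
    nec_max, nec_crit, nec_chips = 177_753, 65_529, 4     # NEC Express5800/A040b

    max_advantage = (t5_max / t5_chips) / (nec_max / nec_chips)      # max-jOPS per chip
    crit_advantage = (t5_crit / t5_chips) / (nec_crit / nec_chips)   # critical-jOPS per chip

    print(round(max_advantage, 2), round(crit_advantage, 2))   # 1.29 1.34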

Configuration Summary

System Under Test:

SPARC T5-2 server
2 x SPARC T5, 3.60 GHz
512 GB memory (32 x 16 GB dimms)
Oracle Solaris 11.1
Oracle Java SE 8

Benchmark Description

The SPECjbb2013 benchmark has been developed from the ground up to measure performance based on the latest Java application features. It is relevant to all audiences who are interested in Java server performance, including JVM vendors, hardware developers, Java application developers, researchers and members of the academic community.

From SPEC's press release, "SPECjbb2013 replaces SPECjbb2005. The new benchmark has been developed from the ground up to measure performance based on the latest Java application features. It is expected to be used widely by all those interested in Java server performance, including JVM vendors, hardware developers, Java application developers, researchers and members of the academic community."

SPECjbb2013 features include:

  • A usage model based on a world-wide supermarket company with an IT infrastructure that handles a mix of point-of-sale requests, online purchases and data-mining operations.
  • Both a pure throughput metric and a metric that measures critical throughput under service-level agreements (SLAs) specifying response times ranging from 10ms to 500ms.
  • Support for multiple run configurations, enabling users to analyze and overcome bottlenecks at multiple layers of the system stack, including hardware, OS, JVM and application layers.
  • Exercising new Java 7 features and other important performance elements, including the latest data formats (XML), communication using compression, and messaging with security.
  • Support for virtualization and cloud environments.

See Also

Disclosure Statement

SPEC and the benchmark name SPECjbb are registered trademarks of Standard Performance Evaluation Corporation (SPEC). Results as of 3/6/2014, see http://www.spec.org for more information.  SPARC T5-2 114,492 SPECjbb2013-MultiJVM max-jOPS, 43,963 SPECjbb2013-MultiJVM critical-jOPS; NEC Express5800/A040b 177,753 SPECjbb2013-MultiJVM max-jOPS, 65,529 SPECjbb2013-MultiJVM critical-jOPS; Cisco UCS c240 M3 63,079 SPECjbb2013-MultiJVM max-jOPS, 23,797 SPECjbb2013-MultiJVM critical-jOPS; HP ProLiant ML350p Gen8 62,393 SPECjbb2013-MultiJVM max-jOPS, 24,310 SPECjbb2013-MultiJVM critical-jOPS; IBM System X3650 M4 BD 59,124 SPECjbb2013-MultiJVM max-jOPS, 22,275 SPECjbb2013-MultiJVM critical-jOPS; HP ProLiant ML350p Gen8 57,594 SPECjbb2013-MultiJVM max-jOPS, 32,103 SPECjbb2013-MultiJVM critical-jOPS; HP ProLiant BL460c Gen8 56,367 SPECjbb2013-MultiJVM max-jOPS, 30,078 SPECjbb2013-MultiJVM critical-jOPS; Sun Server X4-2 52,664 SPECjbb2013-MultiJVM max-jOPS, 20,553 SPECjbb2013-MultiJVM critical-jOPS; HP ProLiant DL360e Gen8 48,772 SPECjbb2013-MultiJVM max-jOPS, 17,915 SPECjbb2013-MultiJVM critical-jOPS.

Friday Feb 14, 2014

SPARC M6-32 Delivers Oracle E-Business and PeopleSoft World Record Benchmarks, Linear Data Warehouse Scaling in a Virtualized Configuration

This result demonstrates how the combination of Oracle virtualization technologies for SPARC and Oracle's SPARC M6-32 server allow the deployment and concurrent high performance execution of multiple Oracle applications and databases sized for the Enterprise.

  • In an 8-chip Dynamic Domain (also known as PDom), the SPARC M6-32 server set an Oracle E-Business Suite 12.1.3 X-Large benchmark world record with 14,660 online users running five simultaneous E-Business modules.

  • In a second 8-chip Dynamic Domain, the SPARC M6-32 server set a PeopleSoft HCM 9.1 HR Self-Service online world record, supporting 35,000 users while simultaneously running a batch workload in 29.17 minutes. This was done with a database of 600,480 employees. Two other separate tests were run, one supporting 40,000 online users only and another running a batch-only workload that completed in 18.27 minutes.

  • In a third Dynamic Domain with 16 chips on the SPARC M6-32 server, a data warehouse test was run that showed near-linear scaling.

  • On the SPARC M6-32 server, several critical application instances were virtualized: an Oracle E-Business application and database, Oracle's PeopleSoft application and database, and a Decision Support database instance using Oracle Database 12c.

  • In this Enterprise Virtualization benchmark a SPARC M6-32 server utilized all levels of Oracle Virtualization features available for SPARC servers. The 32-chip SPARC M6 based server was divided into three separate Dynamic Domains (also known as PDoms), available only on the SPARC Enterprise M-Series systems, which are completely electrically isolated and independent hardware partitions. Each PDom was subsequently split into multiple hypervisor-based Oracle VM for SPARC partitions (also known as LDoms), each one running its own Oracle Solaris kernel and managing its own CPUs and I/O resources. The hardware resources allocated to each Oracle VM for SPARC partition were then organized into various Oracle Solaris Zones, to further refine application tier isolation and resource management. The three PDoms were dedicated to the enterprise applications as follows:

    • Oracle E-Business PDom: Oracle E-Business 12.1.3 Suite World Record Extra-Large benchmark, exercising five Online Modules: Customer Service, Human Resources Self Service, iProcurement, Order Management and Financial, with 14,660 users and an average user response time under 2 seconds.

    • PeopleSoft PDom: PeopleSoft Human Capital Management (HCM) 9.1 FP2 World Record Benchmark, using PeopleTools 8.52 and Oracle Database 11g Release 2, with 35,000 users, at an average user Search Time of 1.46 seconds and Save Time of 0.93 seconds. An online run with 40,000 users had an average user Search Time of 2.17 seconds and Save Time of 1.39 seconds, and a Payroll batch run completed in 29.17 minutes elapsed time for more than 500,000 employees.

    • Decision Support PDom: An Oracle Database 12c instance executing a Decision Support workload on about 30 billion rows of data and achieving linear scalability, i.e. on the 16 chips comprising the PDom, the workload ran 16x faster than on a single chip. Specifically, the 16-chip PDom processed about 320M rows/sec whereas a single chip could process about 20M rows/sec.

  • The SPARC M6-32 server is ideally suited for large-memory utilization. In this virtualized environment, three critical applications made use of 16 TB of physical memory. Each of the Oracle VM Server for SPARC environments utilized from 4 to 8 TB of memory, more than the limits of other virtualization solutions.

  • SPARC M6-32 Server Virtualization Layout Highlights

    • The Oracle E-Business application instances were run in a dedicated Dynamic Domain consisting of 8 SPARC M6 processors and 4 TB of memory. The PDom was split into four symmetric Oracle VM Server for SPARC (LDoms) environments of 2 chips and 1 TB of memory each, two dedicated to the Application Server tier and the other two to the Database Server tier. Each Logical Domain was subsequently divided into two Oracle Solaris Zones, for a total of eight, one for each E-Business Application server and one for each Oracle Database 11g instance.

    • The PeopleSoft application was run in a dedicated Dynamic Domain (PDom) consisting of 8 SPARC M6 processors and 4 TB of memory. The PDom was split into two Oracle VM Server for SPARC (LDoms) environments one of 6 chips and 3 TB of memory, reserved for the Web and Application Server tiers, and a second one of 2 chips and 1 TB of memory, reserved for the Database tier. Two PeopleSoft Application Servers, a Web Server instance, and a single Oracle Database 11g instance were each executed in their respective and exclusive Oracle Solaris Zone.

    • The Oracle Database 12c Decision Support workload was run in a Dynamic Domain consisting of 16 SPARC M6 processors and 8 TB of memory.

  • All the Oracle Applications and Database instances were running concurrently at a high level of performance in a virtualized environment. Running three Enterprise level application environments on a single SPARC M6-32 server offers centralized administration, simplified physical layout, high availability and security features (as each PDom and LDom runs its own Oracle Solaris operating system copy physically and logically isolated from the other environments), enabling the coexistence of multiple versions of Oracle Solaris and application software on a single physical server.

  • Dynamic Domains and Oracle VM Server for SPARC guests were configured with independent direct I/O domains, allowing for fast and isolated I/O paths, providing secure and high performance I/O access.

Performance Landscape

Oracle E-Business Test using Oracle Database 11g
SPARC M6-32 PDom, 8 SPARC M6 Processors, 4 TB Memory
Total Online Users: 14,660
Weighted Average Response Time (sec): 0.81
90th Percentile Response Time (sec): 0.88
Multiple Online Modules X-Large Configuration (HR Self-Service, Order Management, iProcurement, Customer Service, Financial)

PeopleSoft HR Self-Service Online Plus Payroll Batch using Oracle Database 11g
SPARC M6-32 PDom, 8 SPARC M6 Processors, 4 TB Memory
Online (HR Self-Service) Plus Payroll Batch run:
  Online Users: 35,000
  Average User Search / Save Time (sec): 1.46 / 0.93
  Transactions per Second: 116
  Payroll Batch Elapsed Time (min): 29.17

HR Self-Service Only run:
  Online Users: 40,000
  Average User Search / Save Time (sec): 2.17 / 1.39
  Transactions per Second: 132

Payroll Batch Only run:
  Elapsed Time (min): 18.27

Oracle Database 12c Decision Support Query Test
SPARC M6-32 PDom, 16 SPARC M6 Processors, 8 TB Memory
Parallelism (Chips Used), Rows Processing Rate (rows/s), Scaling Normalized to 1 Chip
16 319,981,734 15.9
8 162,545,303 8.1
4 80,943,271 4.0
2 40,458,329 2.0
1 20,086,829 1.0
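
The "Scaling Normalized to 1 Chip" column is just the processing rate at each chip count divided by the single-chip rate, as the quick recomputation below shows.

    # Recomputing the scaling column from the published processing rates.
    rates = {1: 20_086_829, 2: 40_458_329, 4: 80_943_271,
             8: 162_545_303, 16: 319_981_734}   # rows/s per chip count

    base = rates[1]
    for chips in sorted(rates):
        print(chips, round(rates[chips] / base, 1))
    # 1 1.0 / 2 2.0 / 4 4.0 / 8 8.1 / 16 15.9 -- essentially linear scaling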

Configuration Summary

System Under Test:

SPARC M6-32 server with
32 x SPARC M6 processors (3.6 GHz)
16 TB memory

Storage Configuration:

6 x Sun Storage 2540-M2 each with
8 x Expansion Trays (each tray equipped with 12 x 300 GB SAS drives)
7 x Sun Server X3-2L each with
2 x Intel Xeon E5-2609 2.4 GHz Processors
16 GB Memory
4 x Sun Flash Accelerator F40 PCIe 400 GB cards
Oracle Solaris 11.1 (COMSTAR)
1 x Sun Server X3-2L with
2 x Intel Xeon E5-2609 2.4 GHz Processors
16 GB Memory
12 x 3 TB SAS disks
Oracle Solaris 11.1 (COMSTAR)

Software Configuration:

Oracle Solaris 11.1 (11.1.10.5.0), Oracle E-Business
Oracle Solaris 11.1 (11.1.10.5.0), PeopleSoft
Oracle Solaris 11.1 (11.1.9.5.0), Decision Support
Oracle Database 11g Release 2, Oracle E-Business and PeopleSoft
Oracle Database 12c Release 1, Decision Support
Oracle E-Business Suite 12.1.3
PeopleSoft Human Capital Management 9.1 FP2
PeopleSoft PeopleTools 8.52.03
Oracle Java SE 6u32
Oracle Tuxedo, Version 10.3.0.0, 64-bit, Patch Level 043
Oracle WebLogic Server 11g (10.3.4)

Oracle Dynamic Domains (PDoms) resources:


Oracle E-Business PeopleSoft Oracle DSS
Processors 8 8 16
Memory 4 TB 4 TB 8 TB
Oracle Solaris 11.1 (11.1.10.5.0) 11.1 (11.1.10.5.0) 11.1 (11.1.9.5.0)
Oracle Database 11g 11g 12c
Oracle VM for SPARC /
Oracle Solaris Zones
4 LDom / 8 Zones 2 LDom / 4 Zones None
Storage 7 x Sun Server X3-2L 1 x Sun Server X3-2L
(12 x 3 TB SAS )
2 x Sun Storage 2540-M2 / 2501 pairs
4 x Sun Storage 2540-M2/2501 pairs

Benchmark Description

This benchmark consists of three different applications running concurrently. It shows that large, enterprise workloads can be run on a single system and without performance impact between application environments.

The three workloads are:

  • Oracle E-Business Suite Online

    • This test simulates thousands of online users executing transactions typical of an internal Enterprise Resource Planning deployment, including 5 application modules: Customer Service, Human Resources Self Service, Procurement, Order Management and Financial.

    • Each database tier uses a database instance of about 600 GB in size, supporting thousands of application users accessing hundreds of objects (tables, indexes, SQL stored procedures, etc.).

    • The application tier includes multiple web and application server instances, specifically Apache Web Server, Oracle Application Server 10g and Oracle Java SE 6u32.

  • PeopleSoft Human Capital Management

    • This test simulates thousands of online employees, managers and Human Resource administrators executing transactions typical of a Human Resources Self Service application for the Enterprise. Typical transactions are: viewing paychecks, promoting and hiring employees, updating employee profiles, etc.

    • The database tier uses a database instance of about 500 GB in size, containing information for 500,480 employees.

    • The application tier for this test includes web and application server instances, specifically Oracle WebLogic Server 11g, PeopleSoft Human Capital Management 9.1 and Oracle Java SE 6u32.

  • Decision Support Workload using the Oracle Database.

    • The query processes 30 billion rows stored in the Oracle Database, making heavy use of Oracle parallel query processing features. It performs multiple aggregations and summaries by reading and processing all the rows of the database.

Key Points and Best Practices

Oracle E-Business Environment

The Oracle E-Business Suite setup consisted of 4 Oracle E-Business environments running 5 online Oracle E-Business modules simultaneously.

The Oracle E-Business environments were deployed on 4 Oracle VM for SPARC, respectively 2 for the Application tier and 2 for the Database tier. Each LDom included 2 SPARC M6 processor chips. The Application LDom was further split into 2 Oracle Solaris Zones, each one containing one Oracle E-Business Application instance. Similarly, on the Database tier, each LDom was further divided into 2 Oracle Solaris Zones, each containing an Oracle Database instance. Applications on the same LDom shared a 10 GbE network link to connect to the Database tier LDom. Each Application in a Zone was connected to its own dedicated Database Zone. The communication between the two Zones was implemented via Oracle Solaris 11 virtual network, which provides high performance, low latency transfers at memory speed using large frames (9000 bytes vs typical 1500 bytes frames).
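
The zone-to-zone virtual network described above can be sketched as follows. This is illustrative only, not the published configuration: the etherstub and VNIC names are hypothetical, and it assumes an etherstub-backed virtual switch with its MTU raised to 9000 bytes, with one VNIC handed to the application zone and one to the database zone.

    # Illustrative sketch of an Oracle Solaris 11 virtual network between two
    # zones on the same LDom, using 9000-byte frames.  Link names are made up.
    import subprocess

    def run(*cmd):
        subprocess.run(cmd, check=True)

    run("dladm", "create-etherstub", "ebsstub0")               # in-memory virtual switch
    run("dladm", "set-linkprop", "-p", "mtu=9000", "ebsstub0") # large frames
    run("dladm", "create-vnic", "-l", "ebsstub0", "appvnic0")  # for the application zone
    run("dladm", "create-vnic", "-l", "ebsstub0", "dbvnic0")   # for the database zone
    # Each VNIC would then be assigned to its zone with zonecfg (add net / set physical=...).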

The Oracle E-Business setup made use of the Oracle Database Shared Server feature in order to limit memory utilization, as well as the number of database server processes. The Oracle Database configuration and optimization was substantially out-of-the-box, except for properly sizing the Oracle Database memory areas (System Global Area and Program Global Area).

In the Oracle E-Business Application LDom handling the Customer Service and HR Self Service modules, 28 Forms servers and 8 OC4J application servers were hosted in each of the two separate Oracle Solaris Zones, for a total of 56 Forms servers and 16 application servers.

All the Oracle Database server processes and the listener processes were executed in the Oracle Solaris FX scheduler class.

PeopleSoft Environment

The PeopleSoft Application Oracle VM for SPARC had one Oracle Solaris Zone of 12 cores containing the web tier and two Oracle Solaris Zones of 57 cores total containing the Application tier. The Database tier was contained in an Oracle VM for SPARC consisting of one Oracle Solaris Zone of 24 cores. One core, in the Application Oracle VM, was dedicated to network and disk interrupt handling.

All database data files, recovery files and Oracle Clusterware files for the PeopleSoft test were created with the Oracle Automatic Storage Management (Oracle ASM) volume manager for the added benefit of the ease of management provided by Oracle ASM integrated storage management solution.

In the application tier, 5 PeopleSoft domains with 350 application servers (70 per each domain) were hosted in the two separate Oracle Solaris Zones for a total of 10 domains with 700 application server processes.

All PeopleSoft Application processes and Web Server JVM instances were executed in the Oracle Solaris FX scheduler class.

Oracle Decision Support Environment

The decision support workload showed how the combination of a large memory (8 TB) and a large number of processors (16 chips comprising 1536 virtual CPUs), together with the Oracle parallel query facility, can linearly increase the performance of certain decision support queries as the number of CPUs increases.

The large memory was used to cache the entire 30 billion row Oracle table in memory. There are a number of ways to accomplish this. The method deployed in this test was to allocate sufficient memory for Oracle's "keep cache" and direct the table to the "keep cache."

To demonstrate scalability, it was necessary to ensure that the number of Oracle parallel servers was always equal to the number of available virtual CPUs. This was accomplished by the combination of providing a degree of parallelism hint to the query and setting both parallel_max_servers and parallel_min_servers to the number of virtual CPUs.

The number of virtual CPUs for each stage of the scalability test was adjusted using the psradm command available in Oracle Solaris.
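
One way to script a scalability stage along these lines is sketched below. It is not the procedure used for the published result: processor IDs, connection details, the table name and the hint are all assumptions, and the per-chip vCPU count (96) simply follows from "16 chips comprising 1536 virtual CPUs". Any Oracle SQL interface would do; python-oracledb is used here only for illustration.

    # Illustrative sketch: size the online vCPUs with psradm, then match the
    # Oracle parallel server count and the degree-of-parallelism hint to it.
    import subprocess
    import oracledb   # python-oracledb (hypothetical connection details below)

    VCPUS_PER_CHIP = 96                 # 1536 vCPUs / 16 chips
    TOTAL_VCPUS = 16 * VCPUS_PER_CHIP

    def set_online_vcpus(n):
        """Bring vCPUs 0..n-1 online and take the rest offline."""
        online = [str(i) for i in range(n)]
        offline = [str(i) for i in range(n, TOTAL_VCPUS)]
        subprocess.run(["psradm", "-n"] + online, check=True)
        if offline:
            subprocess.run(["psradm", "-f"] + offline, check=True)

    def run_stage(chips):
        vcpus = chips * VCPUS_PER_CHIP
        set_online_vcpus(vcpus)
        with oracledb.connect(user="dss", password="secret", dsn="m6db") as conn:
            cur = conn.cursor()
            # Keep the number of parallel servers equal to the available vCPUs.
            cur.execute(f"ALTER SYSTEM SET parallel_max_servers = {vcpus}")
            cur.execute(f"ALTER SYSTEM SET parallel_min_servers = {vcpus}")
            # Degree-of-parallelism hint matching the vCPU count (table name is hypothetical).
            cur.execute(f"SELECT /*+ PARALLEL(f, {vcpus}) */ COUNT(*) FROM fact_table f")
            print(chips, "chips:", cur.fetchone()[0], "rows")

    for chips in (1, 2, 4, 8, 16):
        run_stage(chips)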

See Also

Disclosure Statement

Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. PeopleSoft results as of 02/14/2014. Other results as of 09/22/2013.

Oracle E-Business Suite R12 extra-large multiple-online module benchmark, SPARC M6-32, SPARC M6, 3.6 GHz, 8 chips, 96 cores, 768 threads, 4 TB memory, 14,660 online users, average response time 0.81 sec, 90th percentile response time 0.88 sec, Oracle Solaris 11.1, Oracle Solaris Zones, Oracle VM for SPARC, Oracle E-Business Suite 12.1.3, Oracle Database 11g Release 2, Results as of 9/22/2013.

Thursday Jan 23, 2014

SPARC T5-2 Delivers World Record 2-Socket Application Server for SPECjEnterprise2010 Benchmark

Oracle's SPARC T5-2 servers have set the world record for the SPECjEnterprise2010 benchmark using two-socket application servers with a result of 17,033.54 SPECjEnterprise2010 EjOPS. The result used two SPARC T5-2 servers, one server for the application tier and the other server for the database tier.

  • The SPARC T5-2 server delivered 29% more performance compared to the 2-socket IBM PowerLinux server result of 13,161.07 SPECjEnterprise2010 EjOPS.

  • The two SPARC T5-2 servers have 1.2x better price performance than the two IBM PowerLinux 7R2 POWER7+ processor-based servers (based on hardware plus software configuration costs for both tiers). The price performance of the SPARC T5-2 server is $35.99 per EjOPS compared to the IBM PowerLinux 7R2 at $44.75 per EjOPS (see the worked calculation after this list).

  • The SPARC T5-2 server demonstrated 1.5x more performance compared to Oracle's x86-based 2-socket Sun Server X4-2 system (Ivy Bridge) result of 11,259.88 SPECjEnterprise2010 EjOPS. Oracle holds the top x86 2-socket application server SPECjEnterprise2010 result.

  • This SPARC T5-2 server result represents the best performance per socket for a single system in the application tier of 8,516.77 SPECjEnterprise2010 EjOPS per socket.

  • The application server used Oracle Fusion Middleware components including the Oracle WebLogic 12.1 application server and Java HotSpot(TM) 64-Bit Server VM on Solaris, version 1.7.0_45. The database server was configured with Oracle Database 12c Release 1.

  • This result demonstrated less than 1 second average response times for all SPECjEnterprise2010 transactions and represents Java EE 5.0 transactions generated by 139,000 users.
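
The price/performance figures quoted above follow directly from dividing each configuration's hardware-plus-software list price (see the disclosure statement at the end of this entry) by its EjOPS result; a quick worked calculation:

    # Price/performance = configuration list price / SPECjEnterprise2010 EjOPS.
    sparc_price, sparc_ejops = 613_052, 17_033.54   # SPARC T5-2 configuration
    ibm_price, ibm_ejops = 588_970, 13_161.07       # IBM PowerLinux 7R2 configuration

    sparc_pp = sparc_price / sparc_ejops   # ~35.99 $/EjOPS
    ibm_pp = ibm_price / ibm_ejops         # ~44.75 $/EjOPS

    print(round(sparc_pp, 2), round(ibm_pp, 2), round(ibm_pp / sparc_pp, 2))
    # 35.99 44.75 1.24 -- roughly the "1.2x better price performance" claim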

Performance Landscape

Select 2-socket single application server results. Complete benchmark results are at the SPEC website, SPECjEnterprise2010 Results.

SPECjEnterprise2010 Performance Chart
1/22/2014
Submitter EjOPS* Java EE Server DB Server
Oracle 17,033.54 1 x SPARC T5-2
2 x 3.6 GHz SPARC T5
Oracle WebLogic 12c (12.1.2)
1 x SPARC T5-2
2 x 3.6 GHz SPARC T5
Oracle Database 12c (12.1.0.1)
IBM 13,161.07 1x IBM PowerLinux 7R2
2 x 4.2 GHz POWER 7+
WebSphere Application Server V8.5
1x IBM PowerLinux 7R2
2 x 4.2 GHz POWER 7+
IBM DB2 10.1 FP2
Oracle 11,259.88 1x Sun Server X4-2
2 x 2.7 GHz Intel Xeon E5-2697 v2
Oracle WebLogic 12c (12.1.2)
1x Sun Server X4-2L
2 x 2.7 GHz Intel Xeon E5-2697 v2
Oracle Database 12c (12.1.0.1)

* SPECjEnterprise2010 EjOPS (bigger is better)

Configuration Summary

Application Server:

1 x SPARC T5-2 server, with
2 x 3.6 GHz SPARC T5 processors
512 GB memory
2 x 10 GbE dual-port NIC
Oracle Solaris 11.1 (11.1.13.6.0)
Oracle WebLogic Server 12c (12.1.2)
Java HotSpot(TM) 64-Bit Server VM on Solaris, version 1.7.0_45

Database Server:

1 x SPARC T5-2 server, with
2 x 3.6 GHz SPARC T5 processors
512 GB memory
1 x 10 GbE dual-port NIC
2 x 8 Gb FC HBA
Oracle Solaris 11.1 (11.1.13.6.0)
Oracle Database 12c (12.1.0.1)

Storage Servers:

2 x Sun Server X4-2L (24-Drive), with
2 x 2.6 GHz Intel Xeon
64 GB memory
1 x 8 Gb FC HBA
4 x Sun Flash Accelerator F80 PCI-E Cards
Oracle Solaris 11.1

Benchmark Description

SPECjEnterprise2010 is the third generation of the SPEC organization's J2EE end-to-end industry standard benchmark application. The new SPECjEnterprise2010 benchmark has been re-designed and developed to cover the Java EE 5 specification's significantly expanded and simplified programming model, highlighting the major features used by developers in the industry today. This provides a real world workload driving the Application Server's implementation of the Java EE specification to its maximum potential and allowing maximum stressing of the underlying hardware and software systems, including:

  • The web zone, servlets, and web services
  • The EJB zone
  • JPA 1.0 Persistence Model
  • JMS and Message Driven Beans
  • Transaction management
  • Database connectivity
Moreover, SPECjEnterprise2010 also heavily exercises all parts of the underlying infrastructure that make up the application environment, including hardware, JVM software, database software, JDBC drivers, and the system network.

The primary metric of the SPECjEnterprise2010 benchmark is jEnterprise Operations Per Second (SPECjEnterprise2010 EjOPS). The primary metric for the SPECjEnterprise2010 benchmark is calculated by adding the metrics of the Dealership Management Application in the Dealer Domain and the Manufacturing Application in the Manufacturing Domain. There is NO price/performance metric in this benchmark.

Key Points and Best Practices

  • Two Oracle WebLogic server instances on the SPARC T5-2 server were hosted in 2 separate Oracle Solaris Zones.
  • The Oracle WebLogic application servers were executed in the FX scheduling class to improve performance by reducing the frequency of context switches (see the illustrative sketch after this list).
  • The Oracle log writer process was run in the RT scheduling class.
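
The scheduling-class placement noted above can be sketched with the Solaris priocntl command. This is illustrative rather than the published procedure; the PIDs are placeholders and it assumes the WebLogic JVM and Oracle log writer processes have already been identified.

    # Illustrative sketch: move running processes into the Solaris FX and RT
    # scheduling classes with priocntl.  PIDs below are placeholders.
    import subprocess

    weblogic_pids = [12345, 12346]   # hypothetical WebLogic JVM process IDs
    lgwr_pid = 23456                 # hypothetical Oracle log writer (ora_lgwr) PID

    for pid in weblogic_pids:
        # Fixed-priority class: fewer context switches for the application JVMs.
        subprocess.run(["priocntl", "-s", "-c", "FX", "-i", "pid", str(pid)], check=True)

    # Real-time class for the log writer so redo writes are scheduled promptly.
    subprocess.run(["priocntl", "-s", "-c", "RT", "-i", "pid", str(lgwr_pid)], check=True)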

See Also

Disclosure Statement

SPEC and the benchmark name SPECjEnterprise are registered trademarks of the Standard Performance Evaluation Corporation. Results from www.spec.org as of 1/22/2014. SPARC T5-2, 17,033.54 SPECjEnterprise2010 EjOPS; IBM PowerLinux 7R2, 13,161.07 SPECjEnterprise2010 EjOPS; Sun Server X4-2, 11,259.88 SPECjEnterprise2010 EjOPS.

The SPARC T5-2 configuration cost is the total application and database server hardware plus software. List price is $613,052 from http://www.oracle.com as of 1/22/2014. The IBM PowerLinux 7R2 configuration total hardware plus software list price is $588,970 based on public pricing from http://www.ibm.com as of 1/22/2014. Pricing does not include database storage hardware for IBM or Oracle.

Thursday Sep 26, 2013

SPARC M6-32 Delivers Oracle E-Business and PeopleSoft World Record Benchmarks, Linear Data Warehouse Scaling in a Virtualized Configuration

This result has been superseded. Please see the latest result.

 This result demonstrates how the combination of Oracle virtualization technologies for SPARC and Oracle's SPARC M6-32 server allow the deployment and concurrent high performance execution of multiple Oracle applications and databases sized for the Enterprise.

  • In an 8-chip Dynamic Domain (also known as PDom), the SPARC M6-32 server set an Oracle E-Business Suite 12.1.3 X-Large benchmark world record with 14,660 online users running five simultaneous E-Business modules.

  • In a second 8-chip Dynamic Domain, the SPARC M6-32 server set a PeopleSoft HCM 9.1 HR Self-Service online world record, supporting 34,000 users while simultaneously running a batch workload in 29.7 minutes. This was done with a database of 600,480 employees. In a separate test, a batch-only workload was run in 21.2 minutes.

  • In a third Dynamic Domain with 16 chips on the SPARC M6-32 server, a data warehouse test was run that showed near-linear scaling.

  • On the SPARC M6-32 server, several critical application instances were virtualized: an Oracle E-Business application and database, Oracle's PeopleSoft application and database, and a Decision Support database instance using Oracle Database 12c.

  • In this Enterprise Virtualization benchmark a SPARC M6-32 server utilized all levels of Oracle Virtualization features available for SPARC servers. The 32-chip SPARC M6 based server was divided into three separate Dynamic Domains (also known as PDoms), available only on the SPARC Enterprise M-Series systems, which are completely electrically isolated and independent hardware partitions. Each PDom was subsequently split into multiple hypervisor-based Oracle VM for SPARC partitions (also known as LDoms), each one running its own Oracle Solaris kernel and managing its own CPUs and I/O resources. The hardware resources allocated to each Oracle VM for SPARC partition were then organized into various Oracle Solaris Zones, to further refine application tier isolation and resource management. The three PDoms were dedicated to the enterprise applications as follows:

    • Oracle E-Business PDom: Oracle E-Business 12.1.3 Suite World Record Extra-Large benchmark, exercising five Online Modules: Customer Service, Human Resources Self Service, iProcurement, Order Management and Financial, with 14,660 users and an average user response time under 2 seconds.

    • PeopleSoft PDom: PeopleSoft Human Capital Management (HCM) 9.1 FP2 World Record Benchmark, using PeopleTools 8.52 and an Oracle Database 11g Release 2, with 34,000 users, at an average user Search Time of 1.11 seconds and Save Time of 0.77 seconds, and a Payroll batch run completed in 29.7 minutes elapsed time for more than 500,000 employees.

    • Decision Support PDom: An Oracle Database 12c instance executing a Decision Support workload on about 30 billion rows of data and achieving linear scalability, i.e. on the 16 chips comprising the PDom, the workload ran 16x faster than on a single chip. Specifically, the 16-chip PDom processed about 320M rows/sec whereas a single chip could process about 20M rows/sec.

  • The SPARC M6-32 server is ideally suited for large-memory utilization. In this virtualized environment, three critical applications made use of 16 TB of physical memory. Each of the Oracle VM Server for SPARC environments utilized from 4 to 8 TB of memory, more than the limits of other virtualization solutions.

  • SPARC M6-32 Server Virtualization Layout Highlights

    • The Oracle E-Business application instances were run in a dedicated Dynamic Domain consisting of 8 SPARC M6 processors and 4 TB of memory. The PDom was split into four symmetric Oracle VM Server for SPARC (LDoms) environments of 2 chips and 1 TB of memory each, two dedicated to the Application Server tier and the other two to the Database Server tier. Each Logical Domain was subsequently divided into two Oracle Solaris Zones, for a total of eight, one for each E-Business Application server and one for each Oracle Database 11g instance.

    • The PeopleSoft application was run in a dedicated Dynamic Domain (PDom) consisting of 8 SPARC M6 processors and 4 TB of memory. The PDom was split into two Oracle VM Server for SPARC (LDoms) environments one of 6 chips and 3 TB of memory, reserved for the Web and Application Server tiers, and a second one of 2 chips and 1 TB of memory, reserved for the Database tier. Two PeopleSoft Application Servers, a Web Server instance, and a single Oracle Database 11g instance were each executed in their respective and exclusive Oracle Solaris Zone.

    • The Oracle Database 12c Decision Support workload was run in a Dynamic Domain consisting of 16 SPARC M6 processors and 8 TB of memory.

  • All the Oracle Applications and Database instances were running concurrently at a high level of performance in a virtualized environment. Running three Enterprise level application environments on a single SPARC M6-32 server offers centralized administration, simplified physical layout, high availability and security features (as each PDom and LDom runs its own Oracle Solaris operating system copy physically and logically isolated from the other environments), enabling the coexistence of multiple versions of Oracle Solaris and application software on a single physical server.

  • Dynamic Domains and Oracle VM Server for SPARC guests were configured with independent direct I/O domains, allowing for fast and isolated I/O paths, providing secure and high performance I/O access.

Performance Landscape

Oracle E-Business Test using Oracle Database 11g
SPARC M6-32 PDom, 8 SPARC M6 Processors, 4 TB Memory
Total Online Users: 14,660
Weighted Average Response Time (sec): 0.81
90th Percentile Response Time (sec): 0.88
Multiple Online Modules X-Large Configuration (HR Self-Service, Order Management, iProcurement, Customer Service, Financial)

PeopleSoft HR Self-Service Online Plus Payroll Batch using Oracle Database 11g
SPARC M6-32 PDom, 8 SPARC M6 Processors, 4 TB Memory
Online (HR Self-Service) Plus Payroll Batch run:
  Online Users: 34,000
  Average User Search / Save Time (sec): 1.11 / 0.77
  Transactions per Second: 113
  Payroll Batch Elapsed Time (min): 29.7

Payroll Batch Only run:
  Elapsed Time (min): 21.17

Oracle Database 12c Decision Support Query Test
SPARC M6-32 PDom, 16 SPARC M6 Processors, 8 TB Memory
Parallelism (Chips Used), Rows Processing Rate (rows/s), Scaling Normalized to 1 Chip
16 319,981,734 15.9
8 162,545,303 8.1
4 80,943,271 4.0
2 40,458,329 2.0
1 20,086,829 1.0

Configuration Summary

System Under Test:

SPARC M6-32 server with
32 x SPARC M6 processors (3.6 GHz)
16 TB memory

Storage Configuration:

6 x Sun Storage 2540-M2 each with
8 x Expansion Trays (each tray equipped with 12 x 300 GB SAS drives)
7 x Sun Server X3-2L each with
2 x Intel Xeon E5-2609 2.4 GHz Processors
16 GB Memory
4 x Sun Flash Accelerator F40 PCIe 400 GB cards
Oracle Solaris 11.1 (COMSTAR)
1 x Sun Server X3-2L with
2 x Intel Xeon E5-2609 2.4 GHz Processors
16 GB Memory
12 x 3 TB SAS disks
Oracle Solaris 11.1 (COMSTAR)

Software Configuration:

Oracle Solaris 11.1 (11.1.10.5.0), Oracle E-Business
Oracle Solaris 11.1 (11.1.10.5.0), PeopleSoft
Oracle Solaris 11.1 (11.1.9.5.0), Decision Support
Oracle Database 11g Release 2, Oracle E-Business and PeopleSoft
Oracle Database 12c Release 1, Decision Support
Oracle E-Business Suite 12.1.3
PeopleSoft Human Capital Management 9.1 FP2
PeopleSoft PeopleTools 8.52.03
Oracle Java SE 6u32
Oracle Tuxedo, Version 10.3.0.0, 64-bit, Patch Level 043
Oracle WebLogic Server 11g (10.3.4)

Oracle Dynamic Domains (PDoms) resources:


Oracle E-Business PeopleSoft Oracle DSS
Processors 8 8 16
Memory 4 TB 4 TB 8 TB
Oracle Solaris 11.1 (11.1.10.5.0) 11.1 (11.1.10.5.0) 11.1 (11.1.9.5.0)
Oracle Database 11g 11g 12c
Oracle VM for SPARC /
Oracle Solaris Zones
4 LDom / 8 Zones 2 LDom / 4 Zones None
Storage 7 x Sun Server X3-2L 1 x Sun Server X3-2L
(12 x 3 TB SAS )
2 x Sun Storage 2540-M2 / 2501 pairs
4 x Sun Storage 2540-M2/2501 pairs

Benchmark Description

This benchmark consists of three different applications running concurrently. It shows that large, enterprise workloads can be run on a single system and without performance impact between application environments.

The three workloads are:

  • Oracle E-Business Suite Online

    • This test simulates thousands of online users executing transactions typical of an internal Enterprise Resource Planning deployment, including 5 application modules: Customer Service, Human Resources Self Service, Procurement, Order Management and Financial.

    • Each database tier uses a database instance of about 600 GB in size, supporting thousands of application users accessing hundreds of objects (tables, indexes, SQL stored procedures, etc.).

    • The application tier includes multiple web and application server instances, specifically Apache Web Server, Oracle Application Server 10g and Oracle Java SE 6u32.

  • PeopleSoft Human Capital Management

    • This test simulates thousands of online employees, managers and Human Resource administrators executing transactions typical of a Human Resources Self Service application for the Enterprise. Typical transactions are: viewing paychecks, promoting and hiring employees, updating employee profiles, etc.

    • The database tier uses a database instance of about 500 GB in size, containing information for 500,480 employees.

    • The application tier for this test includes web and application server instances, specifically Oracle WebLogic Server 11g, PeopleSoft Human Capital Management 9.1 and Oracle Java SE 6u32.

  • Decision Support Workload using the Oracle Database.

    • The query processes 30 billion rows stored in the Oracle Database, making heavy use of Oracle parallel query processing features. It performs multiple aggregations and summaries by reading and processing all the rows of the database.

Key Points and Best Practices

Oracle E-Business Environment

The Oracle E-Business Suite setup consisted of 4 Oracle E-Business environments running 5 online Oracle E-Business modules simultaneously. The Oracle E-Business environments were deployed on 4 Oracle VM for SPARC, respectively 2 for the Application tier and 2 for the Database tier. Each LDom included 2 SPARC M6 processor chips. The Application LDom was further split into 2 Oracle Solaris Zones, each one containing one Oracle E-Business Application instance. Similarly, on the Database tier, each LDom was further divided into 2 Oracle Solaris Zones, each containing an Oracle Database instance. Applications on the same LDom shared a 10 GbE network link to connect to the Database tier LDom. Each Application in a Zone was connected to its own dedicated Database Zone. The communication between the two Zones was implemented via Oracle Solaris 11 virtual network, which provides high performance, low latency transfers at memory speed using large frames (9000 bytes vs typical 1500 bytes frames).

The Oracle E-Business setup made use of the Oracle Database Shared Server feature in order to limit memory utilization, as well as the number of database server processes. The Oracle Database configuration and optimization was substantially out-of-the-box, except for properly sizing the Oracle Database memory areas (System Global Area and Program Global Area).

In the Oracle E-Business Application LDom handling the Customer Service and HR Self Service modules, 28 Forms servers and 8 OC4J application servers were hosted in each of the two separate Oracle Solaris Zones, for a total of 56 Forms servers and 16 application servers.

All the Oracle Database server processes and the listener processes were executed in the Oracle Solaris FX scheduler class.

PeopleSoft Environment

The PeopleSoft Application Oracle VM for SPARC had one Oracle Solaris Zone of 12 cores containing the web tier and two Oracle Solaris Zones of 28 cores each containing the Application tier. The Database tier was contained in an Oracle VM for SPARC consisting of one Oracle Solaris Zone of 24 cores. One and a half cores, in the Application Oracle VM, were dedicated to network and disk interrupt handling.

All database data files, recovery files and Oracle Clusterware files for the PeopleSoft test were created with the Oracle Automatic Storage Management (Oracle ASM) volume manager for the added benefit of the ease of management provided by Oracle ASM integrated storage management solution.

In the application tier, 5 PeopleSoft domains with 350 application servers (70 per each domain) were hosted in the two separate Oracle Solaris Zones for a total of 10 domains with 700 application server processes.

All PeopleSoft Application processes and Web Server JVM instances were executed in the Oracle Solaris FX scheduler class.

Oracle Decision Support Environment

The decision support workload showed how the combination of a large memory (8 TB) and a large number of processors (16 chips comprising 1536 virtual CPUs), together with the Oracle parallel query facility, can linearly increase the performance of certain decision support queries as the number of CPUs increases.

The large memory was used to cache the entire 30 billion row Oracle table in memory. There are a number of ways to accomplish this. The method deployed in this test was to allocate sufficient memory for Oracle's "keep cache" and direct the table to the "keep cache."

To demonstrate scalability, it was necessary to ensure that the number of Oracle parallel servers was always equal to the number of available virtual CPUs. This was accomplished by the combination of providing a degree of parallelism hint to the query and setting both parallel_max_servers and parallel_min_servers to the number of virtual CPUs.

The number of virtual CPUs for each stage of the scalability test was adjusted using the psradm command available in Oracle Solaris.

See Also

Disclosure Statement

Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 09/22/2013.

Oracle E-Business Suite R12 extra-large multiple-online module benchmark, SPARC M6-32, SPARC M6, 3.6 GHz, 8 chips, 96 cores, 768 threads, 4 TB memory, 14,660 online users, average response time 0.81 sec, 90th percentile response time 0.88 sec, Oracle Solaris 11.1, Oracle Solaris Zones, Oracle VM for SPARC, Oracle E-Business Suite 12.1.3, Oracle Database 11g Release 2, Results as of 9/20/2013.

SPARC T5-8 Delivers World Record Single Server SPECjEnterprise2010 Benchmark, Utilizes Virtualized Environment

Oracle produced a world record single-server SPECjEnterprise2010 benchmark result of 36,571.36 SPECjEnterprise2010 EjOPS using one of Oracle's SPARC T5-8 servers for both the application and the database tier. Oracle VM Server for SPARC was used to virtualize the system to achieve this result.

  • The 8-chip SPARC T5 processor based server is 3.3x faster than the 8-chip IBM Power 780 server (POWER7+ processor based).

  • The SPARC T5-8 has 4.4x better price performance than the IBM Power 780, a POWER7+ processor based server (based on hardware plus software configuration costs). The price performance of the SPARC T5-8 server is $40.68 compared to the IBM Power 780 at $177.41. The IBM Power 780, POWER7+ based system has 1.2x better performance per core, but this did not reduce the total software and hardware cost to the customer. As shown by this comparison, performance-per-core is a poor predictor of characteristics relevant to customers. The SPARC T5-8 virtualized price performance was also less than the low-end IBM PowerLinux 7R2 at $62.26.

  • The SPARC T5-8 server ran the Oracle Solaris 11.1 operating system and used Oracle VM Server for SPARC to consolidate ten Oracle WebLogic application server instances and one database server instance to achieve this result.

  • This result demonstrated sub-second average response times for all SPECjEnterprise2010 transactions and represents JEE 5.0 transactions generated by 299,000 users.

  • The SPARC T5-8 server requires only 8 rack units, the same as the space of the IBM Power 780. In this configuration IBM has a hardware core density of 4 cores per rack unit which contrasts with the 16 cores per rack unit for the SPARC T5-8 server. This again demonstrates why performance-per-core is a poor predictor of characteristics relevant to customers.

  • The application server used Oracle Fusion Middleware components including the Oracle WebLogic 12.1 application server and Java HotSpot(TM) 64-Bit Server VM on Solaris, version 1.7.0_25. The database server was configured with Oracle Database 12c Release 1.

  • The SPARC T5-8 server is 2.8x faster than the non-virtualized IBM POWER7+ based result (one server for the application and one server for the database), in which the IBM PowerLinux 7R2 achieved 13,161.07 SPECjEnterprise2010 EjOPS.

Performance Landscape

SPECjEnterprise2010 Performance Chart
Only Three Virtualized Results (App+DB on 1 Server) as of 9/23/2013
Submitter EjOPS* Chips per Server (App / DB) Java EE Server & DB Server
Oracle 36,571.36 5 3 1 x SPARC T5-8
8 chips, 128 cores, 3.6 GHz SPARC T5
Oracle WebLogic 12c (12.1.2)
Oracle Database 12c (12.1.0.1)
Oracle 27,843.57 4 4 1 x SPARC T5-8
8 chips, 128 cores, 3.6 GHz SPARC T5
Oracle WebLogic 12c (12.1.1)
Oracle Database 11g (11.2.0.3)
IBM 10,902.30 4 4 1 x IBM Power 780
8 chips, 32 cores, 4.42 GHz POWER7+
WebSphere Application Server V8.5
IBM DB2 Universal Database 10.1

* SPECjEnterprise2010 EjOPS (bigger is better)

Complete benchmark results are at the SPEC website, SPECjEnterprise2010 Results.

Configuration Summary

Oracle Summary

Application and Database Server:

1 x SPARC T5-8 server, with
8 x 3.6 GHz SPARC T5 processors
2 TB memory
9 x 10 GbE dual-port NIC
6 x 8 Gb dual-port HBA
Oracle Solaris 11.1 SRU 10.5
Oracle VM Server for SPARC
Oracle WebLogic Server 12c (12.1.2)
Java HotSpot(TM) 64-Bit Server VM on Solaris, version 1.7.0_25
Oracle Database 12c (12.1.0.1)

Storage Servers:

6 x Sun Server X3-2L (12-Drive), with
2 x 2.4 GHz Intel Xeon
16 GB memory
1 x 8 Gb FC HBA
4 x Sun Flash Accelerator F40 PCI-E Card
Oracle Solaris 11.1

2 x Sun Storage 2540-M2 Array
12 x 600 GB 15K RPM SAS HDD

Switch Hardware:

1 x Sun Network 10 GbE 72-port Top of Rack (ToR) Switch

IBM Summary

Application and Database Server:

1 x IBM Power 780 server, with
8 x 4.42 GHz POWER7+ processors
768 GB memory
6 x 10 GbE dual-port NIC
3 x 8 Gb four-port HBA
IBM AIX V7.1 TL2
IBM WebSphere Application Server V8.5
IBM J9 VM (build 2.6, JRE 1.7.0 IBM J9 AIX ppc-32)
IBM DB2 10.1
IBM InfoSphere Optim pureQuery Runtime v3.1.1

Storage:

2 x DS5324 Disk System with
48 x 146 GB 15K E-DDM Disks

1 x v7000 Disk Controller with
16 x 400 GB SSD Disks

Benchmark Description

SPECjEnterprise2010 is the third generation of the SPEC organization's J2EE end-to-end industry standard benchmark application. The new SPECjEnterprise2010 benchmark has been re-designed and developed to cover the Java EE 5 specification's significantly expanded and simplified programming model, highlighting the major features used by developers in the industry today. This provides a real world workload driving the Application Server's implementation of the Java EE specification to its maximum potential and allowing maximum stressing of the underlying hardware and software systems:
  • The web zone, servlets, and web services
  • The EJB zone
  • JPA 1.0 Persistence Model
  • JMS and Message Driven Beans
  • Transaction management
  • Database connectivity
Moreover, SPECjEnterprise2010 also heavily exercises all parts of the underlying infrastructure that make up the application environment, including hardware, JVM software, database software, JDBC drivers, and the system network.

The primary metric of the SPECjEnterprise2010 benchmark is jEnterprise Operations Per Second (SPECjEnterprise2010 EjOPS). This metric is calculated by adding the metrics of the Dealership Management Application in the Dealer Domain and the Manufacturing Application in the Manufacturing Domain. There is no price/performance metric in this benchmark.

Key Points and Best Practices

  • Ten Oracle WebLogic server instances on the SPARC T5-8 server were hosted in 10 separate Oracle Solaris Zones within a separate guest domain on 80 cores (5 cpu chips).
  • The database ran in a separate guest domain consisting of 47 cores (3 cpu chips). One core was reserved for the primary domain.
  • The Oracle WebLogic application servers were executed in the FX scheduling class to improve performance by reducing the frequency of context switches.
  • The Oracle log writer process was run in the FX scheduling class at processor priority 60 to use the Critical Thread feature (a brief priocntl sketch follows this list).
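
The FX-class assignments above can be made with the Solaris priocntl(1) command. The short sketch below is illustrative rather than the exact commands used for this submission; the process-lookup patterns are assumptions.

# Move a running WebLogic JVM into the fixed-priority (FX) scheduling class
priocntl -s -c FX -i pid $(pgrep -f weblogic.Server | head -1)

# Run the Oracle log writer in FX at user priority 60 (priority limit 60)
# to engage the Critical Thread feature
priocntl -s -c FX -m 60 -p 60 -i pid $(pgrep -f ora_lgwr)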

See Also

Disclosure Statement

SPEC and the benchmark name SPECjEnterprise are registered trademarks of the Standard Performance Evaluation Corporation. Results from www.spec.org as of 9/23/2013. SPARC T5-8, 36,571.36 SPECjEnterprise2010 EjOPS (using Oracle VM for SPARC and 5+3 split); SPARC T5-8, 27,843.57 SPECjEnterprise2010 EjOPS (using Oracle Zones and 4+4 split); IBM Power 780, 10,902.30 SPECjEnterprise2010 EjOPS; IBM PowerLinux 7R2, 13,161.07 SPECjEnterprise2010 EjOPS. SPARC T5-8 server total hardware plus software list price is $1,487,792 from http://www.oracle.com as of 9/20/2013. IBM Power 780 server total hardware plus software cost of $1,934,162 based on public pricing from http://www.ibm.com as of 5/22/2013. IBM PowerLinux 7R2 server total hardware plus software cost of $819,451 based on whywebsphere.com/2013/04/29/weblogic-12c-on-oracle-sparc-t5-8-delivers-half-the-transactions-per-core-at-double-the-cost-of-the-websphere-on-ibm-power7/ retrieved 9/20/2013.

Wednesday Sep 25, 2013

Sun Server X4-2 Delivers Single App Server, 2-Chip x86 World Record SPECjEnterprise2010

Oracle's Sun Server X4-2 and Sun Server X4-2L servers, using the Intel Xeon E5-2697 v2 processor, produced a world record x86 two-chip single application server SPECjEnterprise2010 benchmark result of 11,259.88 SPECjEnterprise2010 EjOPS. The Sun Server X4-2 ran the application tier and the Sun Server X4-2L was used for the database tier.

  • The 2-socket Sun Server X4-2 demonstrated 16% better performance when compared to the 2-socket IBM X3650 M4 server result of 9,696.43 SPECjEnterprise2010 EjOPS.

  • This result used Oracle WebLogic Server 12c, Java HotSpot(TM) 64-Bit Server 1.7.0_40, Oracle Database 12c, and Oracle Linux.

Performance Landscape

Complete benchmark results are at the SPEC website, SPECjEnterprise2010 Results. The table below shows the top single application server, two-chip x86 results.

SPECjEnterprise2010 Performance Chart
as of 9/22/2013
Submitter EjOPS* Application Server Database Server
Oracle 11,259.88 1x Sun Server X4-2
2x 2.7 GHz Intel Xeon E5-2697 v2
Oracle WebLogic 12c (12.1.2)
1x Sun Server X4-2L
2x 2.7 GHz Intel Xeon E5-2697 v2
Oracle Database 12c (12.1.0.1)
IBM 9,696.43 1x IBM X3650 M4
2x 2.9 GHz Intel Xeon E5-2690
WebSphere Application Server V8.5
1x IBM X3650 M4
2x 2.9 GHz Intel Xeon E5-2690
IBM DB2 10.1
Oracle 8,310.19 1x Sun Server X3-2
2x 2.9 GHz Intel Xeon E5-2690
Oracle WebLogic 11g (10.3.6)
1x Sun Server X3-2L
2x 2.9 GHz Intel Xeon E5-2690
Oracle Database 11g (11.2.0.3)

* SPECjEnterprise2010 EjOPS, bigger is better.

Configuration Summary

Application Server:

1 x Sun Server X4-2
2 x 2.7 GHz Intel Xeon processor E5-2697 v2
256 GB memory
4 x 10 GbE NIC
Oracle Linux 5 Update 9 (kernel-2.6.39-400.124.1.el5uek)
Oracle WebLogic Server 12c (12.1.2)
Java HotSpot(TM) 64-Bit Server VM on Linux, version 1.7.0_40 (Java SE 7 Update 40)

Database Server:

1 x Sun Server X4-2L
2 x 2.7 GHz Intel Xeon E5-2697 v2
256 GB memory
1 x 10 GbE NIC
2 x FC HBA
3 x Sun StorageTek 2540 M2
Oracle Linux 5 Update 9 (kernel-2.6.39-400.124.1.el5uek)
Oracle Database 12c Enterprise Edition Release 12.1.0.1

Benchmark Description

SPECjEnterprise2010 is the third generation of the SPEC organization's J2EE end-to-end industry standard benchmark application. The SPECjEnterprise2010 benchmark has been designed and developed to cover the Java EE 5 specification's significantly expanded and simplified programming model, highlighting the major features used by developers in the industry today. This provides a real world workload driving the Application Server's implementation of the Java EE specification to its maximum potential and allowing maximum stressing of the underlying hardware and software systems.

The workload consists of an end-to-end web-based order processing domain, an RMI- and Web Services-driven manufacturing domain, and a supply chain model utilizing document-based Web Services. The application is a collection of Java classes, Java Servlets, JavaServer Pages, Enterprise JavaBeans, Java Persistence entities (POJOs), and Message Driven Beans.

The SPECjEnterprise2010 benchmark heavily exercises all parts of the underlying infrastructure that make up the application environment, including hardware, JVM software, database software, JDBC drivers, and the system network.

The primary metric of the SPECjEnterprise2010 benchmark is jEnterprise Operations Per Second ("SPECjEnterprise2010 EjOPS"). This metric is calculated by adding the metrics of the Dealership Management Application in the Dealer Domain and the Manufacturing Application in the Manufacturing Domain. There is no price/performance metric in this benchmark.

Key Points and Best Practices

  • Four Oracle WebLogic server instances were started using numactl, binding two instances per chip (see the sketch after this list).
  • Two Oracle database listener processes were started, each bound to a separate chip.
  • Additional tuning information is in the report at http://spec.org.
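
As referenced above, the sketch below illustrates one way the per-chip binding could be expressed with numactl on Oracle Linux, treating each chip as one NUMA node. The domain path, managed-server names, admin URL, and listener names are hypothetical; the submission's actual start scripts and names may differ.

# Two WebLogic managed servers per chip on the application server
numactl --cpunodebind=0 --membind=0 $DOMAIN_HOME/bin/startManagedWebLogic.sh app1 t3://admin:7001 &
numactl --cpunodebind=0 --membind=0 $DOMAIN_HOME/bin/startManagedWebLogic.sh app2 t3://admin:7001 &
numactl --cpunodebind=1 --membind=1 $DOMAIN_HOME/bin/startManagedWebLogic.sh app3 t3://admin:7001 &
numactl --cpunodebind=1 --membind=1 $DOMAIN_HOME/bin/startManagedWebLogic.sh app4 t3://admin:7001 &

# One database listener per chip on the database server
numactl --cpunodebind=0 --membind=0 lsnrctl start LISTENER0
numactl --cpunodebind=1 --membind=1 lsnrctl start LISTENER1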

See Also

Disclosure Statement

SPEC and the benchmark name SPECjEnterprise are registered trademarks of the Standard Performance Evaluation Corporation. Sun Server X4-2, 11,259.88 SPECjEnterprise2010 EjOPS; Sun Server X3-2, 8,310.19 SPECjEnterprise2010 EjOPS; IBM System X3650 M4, 9,696.43 SPECjEnterprise2010 EjOPS. Results from www.spec.org as of 9/22/2013.

Monday Sep 23, 2013

SPARC T5-2 Delivers Best 2-Chip MultiJVM SPECjbb2013 Result

From www.spec.org

Defects Identified in SPECjbb®2013

December 9, 2014 - SPEC has identified a defect in its SPECjbb®2013 benchmark suite. SPEC has suspended sales of the benchmark software and is no longer accepting new submissions of SPECjbb®2013 results for publication on SPEC's website. Current SPECjbb®2013 licensees will receive a free copy of the new version of the benchmark when it becomes available.

SPEC is advising SPECjbb®2013 licensees and users of the SPECjbb®2013 metrics that the recently discovered defect impacts the comparability of results. This defect can significantly impact the amount of work done during the measurement period, resulting in an inflated SPECjbb®2013 metric. SPEC recommends that users not utilize these results for system comparisons without a full understanding of the impact of these defects on each benchmark result.

Additional information is available here.

SPECjbb2013 is a new benchmark designed to show modern Java server performance. Oracle's SPARC T5-2 set a world record as the fastest two-chip system, beating just-introduced two-chip x86-based servers. Oracle, using Oracle Solaris and Oracle JDK, delivered this two-chip world record result on the MultiJVM SPECjbb2013 metric. SPECjbb2013 is the replacement for SPECjbb2005 (SPECjbb2005 will soon be retired by SPEC).

  • Oracle's SPARC T5-2 server achieved 81,084 SPECjbb2013-MultiJVM max-jOPS and 39,129 SPECjbb2013-MultiJVM critical-jOPS on the SPECjbb2013 benchmark. This result is a two chip world record.

  • There are no IBM POWER7 or POWER7+ based server results on the SPECjbb2013 benchmark. IBM has published results for POWER7+ based servers on SPECjbb2005, which will soon be retired by SPEC.

  • The 2-chip SPARC T5-2 server running SPECjbb2013 is 30% faster than the 2-chip Cisco UCS B200 M3 server (2.7 GHz E5-2697 v2 Ivy Bridge-based) based on SPECjbb2013-MultiJVM max-jOPS.

  • The 2-chip SPARC T5-2 server running SPECjbb2013 is 66% faster than the 2-chip Cisco UCS B200 M3 server (2.7 GHz E5-2697 v2 Ivy Bridge-based) based on SPECjbb2013-MultiJVM critical-jOPS.

  • These results were obtained using Oracle Solaris 11 along with Java Platform, Standard Edition, JDK 7 Update 40 on the SPARC T5-2 server.

From SPEC's press release, "SPECjbb2013 replaces SPECjbb2005. The new benchmark has been developed from the ground up to measure performance based on the latest Java application features. It is expected to be used widely by all those interested in Java server performance, including JVM vendors, hardware developers, Java application developers, researchers and members of the academic community."

Performance Landscape

Results of SPECjbb2013 from www.spec.org as of September 22, 2013 and this report.

SPECjbb2013
System Processor SPECjbb2013-MultiJVM JDK
type # max-jOPS critical-jOPS
SPARC T5-2 SPARC T5, 3.6 GHz 2 81,084 39,129 Oracle JDK 7u40
Cisco UCS B200 M3, DDR3-1866 Intel E5-2697 v2, 2.7 GHz 2 62,393 23,505 Oracle JDK 7u40
Sun Server X4-2, DDR3-1600 Intel E5-2697 v2, 2.7 GHz 2 52,664 20,553 Oracle JDK 7u40
Cisco UCS C220 M3 Intel E5-2690, 2.9 GHz 2 41,954 16,545 Oracle JDK 7u11

The above table represents all of the published results on www.spec.org. SPEC allows for self publication of SPECjbb2013 results. See below for locations where full reports were made available.

Configuration Summary

System Under Test:

SPARC T5-2 server
2 x SPARC T5, 3.60 GHz
512 GB memory (32 x 16 GB dimms)
Oracle Solaris 11.1
Oracle JDK 7 Update 40

Benchmark Description

The SPECjbb2013 benchmark has been developed from the ground up to measure performance based on the latest Java application features. It is relevant to all audiences who are interested in Java server performance, including JVM vendors, hardware developers, Java application developers, researchers and members of the academic community.

SPECjbb2013 replaces SPECjbb2005. New features include:

  • A usage model based on a world-wide supermarket company with an IT infrastructure that handles a mix of point-of-sale requests, online purchases and data-mining operations.
  • Both a pure throughput metric and a metric that measures critical throughput under service-level agreements (SLAs) specifying response times ranging from 10ms to 500ms.
  • Support for multiple run configurations, enabling users to analyze and overcome bottlenecks at multiple layers of the system stack, including hardware, OS, JVM and application layers.
  • Exercising new Java 7 features and other important performance elements, including the latest data formats (XML), communication using compression, and messaging with security.
  • Support for virtualization and cloud environments.

See Also

Disclosure Statement

SPEC and the benchmark name SPECjbb are registered trademarks of Standard Performance Evaluation Corporation (SPEC). Results as of 9/23/2013, see http://www.spec.org for more information. SPARC T5-2 81,084 SPECjbb2013-MultiJVM max-jOPS, 39,129 SPECjbb2013-MultiJVM critical-jOPS, result from https://blogs.oracle.com/BestPerf/resource/jbb2013/sparct5-922.pdf Cisco UCS B200 M3 62,393 SPECjbb2013-MultiJVM max-jOPS, 23,505 SPECjbb2013-MultiJVM critical-jOPS, result from http://www.cisco.com/en/US/prod/collateral/ps10265/le_41704_pb_specjbb2013b200.pdf; Sun Server X4-2 52,664 SPECjbb2013-MultiJVM max-jOPS, 20,553 SPECjbb2013-MultiJVM critical-jOPS, result from https://blogs.oracle.com/BestPerf/entry/20130918_x4_2_specjbb2013; Cisco UCS C220 M3 41,954 SPECjbb2013-MultiJVM max-jOPS, 16,545 SPECjbb2013-MultiJVM critical-jOPS result from www.spec.org.

Wednesday Sep 18, 2013

Sun Server X4-2 Performance Running SPECjbb2013 MultiJVM Benchmark

From www.spec.org

Defects Identified in SPECjbb®2013

December 9, 2014 - SPEC has identified a defect in its SPECjbb®2013 benchmark suite. SPEC has suspended sales of the benchmark software and is no longer accepting new submissions of SPECjbb®2013 results for publication on SPEC's website. Current SPECjbb®2013 licensees will receive a free copy of the new version of the benchmark when it becomes available.

SPEC is advising SPECjbb®2013 licensees and users of the SPECjbb®2013 metrics that the recently discovered defect impacts the comparability of results. This defect can significantly impact the amount of work done during the measurement period, resulting in an inflated SPECjbb®2013 metric. SPEC recommends that users not utilize these results for system comparisons without a full understanding of the impact of these defects on each benchmark result.

Additional information is available here.

Oracle's Sun Server X4-2 system, using Oracle Solaris and Oracle JDK, produced a SPECjbb2013 benchmark (MultiJVM metric) result. This benchmark was designed by the industry to showcase Java server performance.

  • The Sun Server X4-2 system is 24% faster than the fastest published two-socket Intel Xeon E5-2600 (Sandy Bridge) based system, the Dell PowerEdge R720, on SPECjbb2013-MultiJVM max-jOPS.

  • The Sun Server X4-2 is 22% faster than the fastest published two-socket Intel Xeon E5-2600 (Sandy Bridge) based system, the Dell PowerEdge R720, on SPECjbb2013-MultiJVM critical-jOPS.

  • The Sun Server X4-2 delivers 70% of the published SPARC T5-2 SPECjbb2013-MultiJVM max-jOPS.

  • The Sun Server X4-2 delivers 88% of the published SPARC T5-2 SPECjbb2013-MultiJVM critical-jOPS.

  • The combination of Oracle Solaris 11.1 and Oracle JDK 7 update 40 delivered a result of 52,664 SPECjbb2013-MultiJVM max-jOPS and 20,553 SPECjbb2013-MultiJVM critical-jOPS on the SPECjbb2013 benchmark.

From SPEC's press release, "SPECjbb2013 replaces SPECjbb2005. The new benchmark has been developed from the ground up to measure performance based on the latest Java application features. It is expected to be used widely by all those interested in Java server performance, including JVM vendors, hardware developers, Java application developers, researchers and members of the academic community."

Performance Landscape

Top two-socket results of SPECjbb2013 MultiJVM as of October 8, 2013.

SPECjbb2013
System Processor DDR3 SPECjbb2013-MultiJVM OS JDK
max-jOPS critical-jOPS
SPARC T5-2 2 x 3.6 GHz SPARC T5 1600 75,658 23,334 Solaris 11.1 7u17
Cisco UCS B200 M3 2 x 2.7 GHz Intel E5-2697 v2 1866 62,393 23,505 RHEL 6.4 7u40
Sun Server X4-2 2 x 2.7 GHz Intel E5-2697 v2 1600 52,664 20,553 Solaris 11.1 7u40
Dell PowerEdge R720 2 x 2.9 GHz Intel Xeon E5-2690 1600 42,431 16,779 RHEL 6.4 7u21

The above table includes published results from www.spec.org.

Configuration Summary

System Under Test:

Sun Server X4-2
2 x Intel E5-2697 v2, 2.7 GHz
Hyper-Threading enabled
Turbo Boost enabled
128 GB memory (16 x 8 GB dimms)
Oracle Solaris 11.1 (11.1.4.2.0)
Oracle JDK 7u40

Benchmark Description

The SPECjbb2013 benchmark has been developed from the ground up to measure performance based on the latest Java application features. It is relevant to all audiences who are interested in Java server performance, including JVM vendors, hardware developers, Java application developers, researchers and members of the academic community.

SPECjbb2013 replaces SPECjbb2005. New features include:

  • A usage model based on a world-wide supermarket company with an IT infrastructure that handles a mix of point-of-sale requests, online purchases and data-mining operations.
  • Both a pure throughput metric and a metric that measures critical throughput under service-level agreements (SLAs) specifying response times ranging from 10ms to 500ms.
  • Support for multiple run configurations, enabling users to analyze and overcome bottlenecks at multiple layers of the system stack, including hardware, OS, JVM and application layers.
  • Exercising new Java 7 features and other important performance elements, including the latest data formats (XML), communication using compression, and messaging with security.
  • Support for virtualization and cloud environments.

See Also

Disclosure Statement

SPEC and the benchmark name SPECjbb are registered trademarks of Standard Performance Evaluation Corporation (SPEC). Results from http://www.spec.org as of 10/8/2013. SPARC T5-2, 75,658 SPECjbb2013-MultiJVM max-jOPS, 23,334 SPECjbb2013-MultiJVM critical-jOPS; Cisco UCS B200 M3 62,393 SPECjbb2013-MultiJVM max-jOPS, 23,505 SPECjbb2013-MultiJVM critical-jOPS; Dell PowerEdge R720 42,431 SPECjbb2013-MultiJVM max-jOPS, 16,779 SPECjbb2013-MultiJVM critical-jOPS; Sun Server X4-2 52,664 SPECjbb2013-MultiJVM max-jOPS, 20,553 SPECjbb2013-MultiJVM critical-jOPS.

Wednesday May 01, 2013

SPARC T5-8 Delivers Best Single System SPECjEnterprise2010 Benchmark, Beats IBM

Oracle produced a world record single-server SPECjEnterprise2010 benchmark result of 27,843.57 SPECjEnterprise2010 EjOPS using one of Oracle's SPARC T5-8 servers for both the application and the database tier. This result directly compares the 8-chip SPARC T5-8 server (8 SPARC T5 processors) to the 8-chip IBM Power 780 server (8 POWER7+ processors).

  • The 8-chip SPARC T5 processor based server is 2.6x faster than the 8-chip IBM POWER7+ processor based server.

  • Both Oracle and IBM used virtualization to provide 4-chips for application and 4-chips for database.

  • The server cost/performance for the SPARC T5 processor based server was 6.9x better than the server cost/performance of the IBM POWER7+ processor based server. The cost/performance of the SPARC T5-8 server is $10.72 compared to the IBM Power 780 at $73.83.

  • The total configuration cost/performance (hardware+software) for the SPARC T5 processor based server was 3.6x better than the IBM POWER7+ processor based server. The cost/performance of the SPARC T5-8 server is $56.21 compared to the IBM Power 780 at $199.42. The IBM system had 1.6x better performance per core, but this did not reduce the total software and hardware cost to the customer. As shown by this comparison, performance-per-core is a poor predictor of characteristics relevant to customers.

  • The total IBM hardware plus software cost was $2,174,152 versus the total Oracle hardware plus software cost of $1,565,092. At this price IBM could only provide 768 GB of memory while Oracle was able to deliver 2 TB in the SPARC T5-8 server.

  • The SPARC T5-8 server requires only 8 rack units, the same as the space of the IBM Power 780. In this configuration IBM has a hardware core density of 4 cores per rack unit which contrasts with the 16 cores per rack unit for the SPARC T5-8 server. This again demonstrates why performance-per-core is a poor predictor of characteristics relevant to customers.

  • The virtualized SPARC T5 processor based server ran the application tier servers on 4 chips using Oracle Solaris Zones and the database tier in a 4-chip Oracle Solaris Zone. The virtualized IBM POWER7+ processor based server ran the application in a 4-chip LPAR and the database in a 4-chip LPAR.

  • The SPARC T5-8 server ran the Oracle Solaris 11.1 operating system and used Oracle Solaris Zones to consolidate eight Oracle WebLogic application server instances and one database server instance to achieve this result. The IBM system used LPARS and AIX V7.1.

  • This result demonstrated less than 1 second average response times for all SPECjEnterprise2010 transactions and represents JEE 5.0 transactions generated by 227,500 users.

  • The application server used Oracle Fusion Middleware components including the Oracle WebLogic 12.1 application server and Java HotSpot(TM) 64-Bit Server VM on Solaris, version 1.7.0_15. The database server was configured with Oracle Database 11g Release 2.

  • IBM has a non-virtualized result (one server for the application and one server for the database): the IBM PowerLinux 7R2 achieved 13,161.07 SPECjEnterprise2010 EjOPS, which means it was 2.1x slower than the SPARC T5-8 server. The total configuration cost/performance (hardware plus software) for the SPARC T5 processor based server was 11% better than that of the IBM PowerLinux 7R2. The cost/performance of the SPARC T5-8 server is $56.21 compared to the IBM PowerLinux 7R2 at $62.26. As shown by this comparison, performance-per-core is a poor predictor of characteristics relevant to customers.

Performance Landscape

Complete benchmark results are at the SPEC website, SPECjEnterprise2010 Results.

SPECjEnterprise2010 Performance Chart
Only Two Virtualized Results (App+DB on 1 Server) as of 5/1/2013
Submitter EjOPS* Java EE Server & DB Server
Oracle 27,843.57 1 x SPARC T5-8
8 chips, 128 cores, 3.6 GHz SPARC T5
Oracle WebLogic 12c (12.1.1)
Oracle Database 11g (11.2.0.3)
IBM 10,902.30 1 x IBM Power 780
8 chips, 32 cores, 4.42 GHz POWER7+
WebSphere Application Server V8.5
IBM DB2 Universal Database 10.1

* SPECjEnterprise2010 EjOPS (bigger is better)

Configuration Summary

Oracle Summary

Application and Database Server:

1 x SPARC T5-8 server, with
8 x 3.6 GHz SPARC T5 processors
2 TB memory
5 x 10 GbE dual-port NIC
6 x 8 Gb dual-port HBA
Oracle Solaris 11.1 SRU 4.5
Oracle WebLogic Server 12c (12.1.1)
Java HotSpot(TM) 64-Bit Server VM on Solaris, version 1.7.0_15
Oracle Database 11g (11.2.0.3)

Storage Servers:

6 x Sun Server X3-2L (12-Drive), with
2 x 2.4 GHz Intel Xeon
16 GB memory
1 x 8 Gb FC HBA
4 x Sun Flash Accelerator F40 PCI-E Card
Oracle Solaris 11.1

2 x Sun Storage 2540-M2 Array
12 x 600 GB 15K RPM SAS HDD

Switch Hardware:

1 x Sun Network 10 GbE 72-port Top of Rack (ToR) Switch

IBM Summary

Application and Database Server:

1 x IBM Power 780 server, with
8 x 4.42 GHz POWER7+ processors
768 GB memory
6 x 10 GbE dual-port NIC
3 x 8 Gb four-port HBA
IBM AIX V7.1 TL2
IBM WebSphere Application Server V8.5
IBM J9 VM (build 2.6, JRE 1.7.0 IBM J9 AIX ppc-32)
IBM DB2 10.1
IBM InfoSphere Optim pureQuery Runtime v3.1.1

Storage:

2 x DS5324 Disk System with
48 x 146GB 15K E-DDM Disks

1 x v7000 Disk Controller with
16 x 400GB SSD Disks

Benchmark Description

SPECjEnterprise2010 is the third generation of the SPEC organization's J2EE end-to-end industry standard benchmark application. The new SPECjEnterprise2010 benchmark has been re-designed and developed to cover the Java EE 5 specification's significantly expanded and simplified programming model, highlighting the major features used by developers in the industry today. This provides a real world workload driving the Application Server's implementation of the Java EE specification to its maximum potential and allowing maximum stressing of the underlying hardware and software systems:
  • The web zone, servlets, and web services
  • The EJB zone
  • JPA 1.0 Persistence Model
  • JMS and Message Driven Beans
  • Transaction management
  • Database connectivity
Moreover, SPECjEnterprise2010 also heavily exercises all parts of the underlying infrastructure that make up the application environment, including hardware, JVM software, database software, JDBC drivers, and the system network.

The primary metric of the SPECjEnterprise2010 benchmark is jEnterprise Operations Per Second (SPECjEnterprise2010 EjOPS). This metric is calculated by adding the metrics of the Dealership Management Application in the Dealer Domain and the Manufacturing Application in the Manufacturing Domain. There is no price/performance metric in this benchmark.

Key Points and Best Practices

  • Eight Oracle WebLogic server instances on the SPARC T5-8 server were hosted in 8 separate Oracle Solaris Zones to demonstrate consolidation of multiple application servers. The 8 zones were bound to 4 resource pools using 64 cores (4 CPU chips).
  • The database ran in a separate Oracle Solaris Zone bound to a resource pool consisting of 64 cores (4 CPU chips). The database shadow processes were run in the FX scheduling class and bound to one of four CPU chips using the plgrp command. A resource-pool sketch follows this list.
  • The Oracle WebLogic application servers were executed in the FX scheduling class to improve performance by reducing the frequency of context switches.
  • The Oracle log writer process was run in the FX scheduling class at processor priority 60 to use the Critical Thread feature.
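
The zone-to-resource-pool binding mentioned above can be expressed with the Solaris resource pools facility. The sketch below is a minimal illustration; the pool, pset, and zone names and the 512-strand (64-core) size are assumptions rather than the benchmark's published configuration.

# Enable the resource pools facility and save the current configuration
pooladm -e
pooladm -s

# Create a 512-strand (64-core) processor set, a pool that uses it, and activate it
poolcfg -c 'create pset db_pset (uint pset.min = 512; uint pset.max = 512)'
poolcfg -c 'create pool db_pool'
poolcfg -c 'associate pool db_pool (pset db_pset)'
pooladm -c

# Bind the database zone to the pool so all of its processes run on that processor set
zonecfg -z dbzone set pool=db_pool
zoneadm -z dbzone reboot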

See Also

Disclosure Statement

SPEC and the benchmark name SPECjEnterprise are registered trademarks of the Standard Performance Evaluation Corporation. Results from www.spec.org as of 5/1/2013. SPARC T5-8, 27,843.57 SPECjEnterprise2010 EjOPS; IBM Power 780, 10,902.30 SPECjEnterprise2010 EjOPS; IBM PowerLinux 7R2, 13,161.07 SPECjEnterprise2010 EjOPS. Oracle server only hardware list price is $298,494 and total hardware plus software list price is $1,565,092 from http://www.oracle.com as of  5/22/2013. IBM server only hardware list price is $804,931 and total hardware plus software cost of $2,174,152 based on public pricing from http://www.ibm.com as of 5/22/2013. IBM PowerLinux 7R2 server total hardware plus software cost of $819,451 based on public pricing from http://www.ibm.com as of 5/22/2013.

Tuesday Mar 26, 2013

SPARC T5-8 Delivers SPECjEnterprise2010 Benchmark World Record Performance

Oracle produced a world record SPECjEnterprise2010 benchmark result of 57,422.17 SPECjEnterprise2010 EjOPS using Oracle's SPARC T5-8 server in the application tier and another SPARC T5-8 server for the database tier.

  • The SPARC T5-8 server demonstrated 3.4x better performance compared to an 8-socket IBM Power 780 server result of 16,646.34 SPECjEnterprise2010 EjOPS. The SPARC T5-8 is 3.7x less expensive for the application server hardware list cost compared to the IBM configuration.

  • The SPARC T5 processor demonstrated 1.7x better performance per core compared to the POWER7 processor used in the IBM Power 780 SPECjEnterprise2010 result.

  • The SPARC T5-8 server demonstrated 2.2x better performance compared to the Cisco UCS B440 M2 Blade Server result of 26,118.67 SPECjEnterprise2010 EjOPS.

  • The SPARC T5-8 servers used in the application and database tiers ran the Oracle Solaris 11.1 operating system.

  • The SPARC T5-8 server for the application tier used Oracle Solaris Zones to consolidate sixteen Oracle WebLogic Server instances to achieve this result.

  • This result demonstrated less than 1 second response time for all SPECjEnterprise2010 transactions, while demonstrating a sustained load of Java EE 5 transactions equivalent to 468,000 users.

  • The SPARC T5-8 application server used Oracle Fusion Middleware components including the Oracle WebLogic 12.1 application server and Oracle JDK 7 Update 15. The SPARC T5-8 database server was configured with Oracle Database 11g Release 2.

  • This result used six Sun Server X3-2L systems each configured with 4 x 400 GB Sun Flash Accelerator F40 PCIe Card devices as storage servers for the database files.

  • This result represents the best performance/socket for a single system in the application tier of 7,177.77 SPECjEnterprise2010 EjOPS per socket.

  • A single SPARC T5-8 server in the application tier producing 57,422.17 SPECjEnterprise2010 EjOPS can replace a total of 4x SPARC T4-4 servers that obtained 40,104.86 SPECjEnterprise2010 EjOPS. A single SPARC T5-8 server in the application tier producing 57,422.17 SPECjEnterprise2010 EjOPS can replace 6x SPARC T3-4 servers where each SPARC T3-4 server obtained 9,456.28 SPECjEnterprise2010 EjOPS.

  • Oracle Fusion Middleware provides a family of complete, integrated, hot pluggable and best-of-breed products known for enabling enterprise customers to create and run agile and intelligent business applications. Oracle WebLogic Server's on-going, record-setting Java application server performance demonstrates why so many customers rely on Oracle Fusion Middleware as their foundation for innovation.

Performance Landscape

Complete benchmark results are at the SPEC website, SPECjEnterprise2010 Results.

SPECjEnterprise2010 Performance Chart
as of 3/26/2013
Submitter EjOPS* Java EE Server DB Server
Oracle 57,422.17 1 x SPARC T5-8
8 chips, 128 cores, 3.6 GHz SPARC T5
Oracle WebLogic 12c (12.1.1)
1 x SPARC T5-8
8 chips, 128 cores, 3.6 GHz SPARC T5
Oracle Database 11g (11.2.0.3)
Oracle 40,104.86 4 x SPARC T4-4
4 chips, 32 cores, 3.0 GHz SPARC T4
Oracle WebLogic 11g (10.3.5)
2 x SPARC T4-4
4 chips, 32 cores, 3.0 GHz SPARC T4
Oracle Database 11g (11.2.0.2)
Oracle 27,150.05 1x Sun Server X2-8
8x 2.4 GHz Intel Xeon E7-8870
Oracle WebLogic 12c
1x Sun Server X2-4
4x 2.4 GHz Intel Xeon E7-4870
Oracle Database 11g (11.2.0.2)
Cisco 26,118.67 2 x Cisco UCS B440 M2
4 chips, 40 cores, 2.4 GHz Xeon E7-4870
Oracle WebLogic 11g (10.3.5)
1 x Cisco UCS C460 M2
4 chips, 40 cores, 2.4 GHz Xeon E7-4870
Oracle Database 11g (11.2.0.2)
IBM 16,646.34 1 x IBM Power 780
8 chips, 64 cores, 3.86 GHz POWER7
WebSphere Application Server V7.0
1 x IBM Power 750 Express
4 chips, 32 cores, 3.55 GHz POWER7
IBM DB2 Universal Database 9.7

* SPECjEnterprise2010 EjOPS (bigger is better)

Configuration Summary

Application Server:

1 x SPARC T5-8 server, with
8 x 3.6 GHz SPARC T5 processors
2 TB memory
8 x 10 GbE dual-port NIC
Oracle Solaris 11.1 SRU 4.5
Oracle WebLogic Server 12c (12.1.1)
Oracle JDK 7 Update 15

Database Server:

1 x SPARC T5-8 server, with
8 x 3.6 GHz SPARC T5 processors
2 TB memory
5 x 10 GbE dual-port NIC
6 x 8 Gb FC dual-port HBA
Oracle Solaris 11.1 SRU 4.5
Oracle Database 11g Enterprise Edition Release 11.2.0.3

Storage Servers:

6 x Sun Server X3-2L (12-Drive), with
2 x 2.4 GHz Intel Xeon
16 GB memory
1 x 8 Gb FC HBA
4 x Sun Flash Accelerator F40 PCI-E Card
Oracle Solaris 11.1

2 x Sun Storage 2540-M2 Array
12 x 600 GB 15K RPM SAS HDD

Switch Hardware:

1 x Sun Network 10 GbE 72-port Top of Rack (ToR) Switch

Benchmark Description

SPECjEnterprise2010 is the third generation of the SPEC organization's J2EE end-to-end industry standard benchmark application. The new SPECjEnterprise2010 benchmark has been re-designed and developed to cover the Java EE 5 specification's significantly expanded and simplified programming model, highlighting the major features used by developers in the industry today. This provides a real world workload driving the Application Server's implementation of the Java EE specification to its maximum potential and allowing maximum stressing of the underlying hardware and software systems:
  • The web zone, servlets, and web services
  • The EJB zone
  • JPA 1.0 Persistence Model
  • JMS and Message Driven Beans
  • Transaction management
  • Database connectivity
Moreover, SPECjEnterprise2010 also heavily exercises all parts of the underlying infrastructure that make up the application environment, including hardware, JVM software, database software, JDBC drivers, and the system network.

The primary metric of the SPECjEnterprise2010 benchmark is jEnterprise Operations Per Second (SPECjEnterprise2010 EjOPS). This metric is calculated by adding the metrics of the Dealership Management Application in the Dealer Domain and the Manufacturing Application in the Manufacturing Domain. There is no price/performance metric in this benchmark.

Key Points and Best Practices

  • Sixteen Oracle WebLogic server instances on the SPARC T5-8 server were hosted in 16 separate Oracle Solaris Zones to demonstrate consolidation of multiple application servers.
  • Each Oracle Solaris Zone was bound to a separate processor set, each containing a total of 58 hardware strands. This was done to improve performance by using the physical memory closest to the processors to reduce memory access latency. The default set was used for network and disk interrupt handling.
  • The Oracle WebLogic application servers were executed in the FX scheduling class to improve performance by reducing the frequency of context switches.
  • The Oracle database processes were run in 8 processor sets using psrset(1M) and executed in the FX scheduling class. This improved performance by reducing memory access latency and reducing context switches.
  • The Oracle log writer process was run in a separate processor set containing a single core and run in the RT scheduling class. This ensured that the log writer had the most efficient use of CPU resources (see the sketch after this list).
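
A minimal sketch of the log writer isolation described in the last item is shown below; the CPU IDs correspond to one SPARC T5 core (8 hardware strands) and are illustrative, as is the pgrep pattern used to find the log writer process.

# Create a processor set holding one core's eight strands; psrset prints the new set ID (e.g. 1)
psrset -c 1016 1017 1018 1019 1020 1021 1022 1023

# Bind the Oracle log writer to that set and move it to the real-time (RT) scheduling class
psrset -b 1 $(pgrep -f ora_lgwr)
priocntl -s -c RT -i pid $(pgrep -f ora_lgwr)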

See Also

Disclosure Statement

SPEC and the benchmark name SPECjEnterprise are registered trademarks of the Standard Performance Evaluation Corporation. Results from www.spec.org as of 3/26/2013. SPARC T5-8, 57,422.17 SPECjEnterprise2010 EjOPS; SPARC T4-4, 40,104.86 SPECjEnterprise2010 EjOPS; Sun Server X2-8, 27,150.05 SPECjEnterprise2010 EjOPS; Cisco UCS B440 M2, 26,118.67 SPECjEnterprise2010 EjOPS; IBM Power 780, 16,646.34 SPECjEnterprise2010 EjOPS. SPARC T3-4 9456.28 SPECjEnterprise2010 EjOPS.

SPARC T5-8 (SPARC T5-8 Server base package, 8xSPARC T5 16-core processors, 128x16GB-1066 DIMMS, 2x600GB 10K RPM 2.5" SAS-2 HDD, 4x Power Cables) List Price $268,742. IBM Power 780 (IBM Power 780:9179 Model MHB, 8x3.8GHz 16-core, 64x one processor activation, 4xCEC Enclosure with IBM Bezel, I/O Backplane and System Midplane, 16x 0/32GB DDR3 Memory (4x8GB) DIMMS-1066MHz Power7 CoD Memory, 12x Activation of 1 GB DDR3 Power7 Memory, 5x Activation of 100GB DDR3 Power7 Memory, 1x Disk/Media Backplane, 2x 146.8GB SAS 15K RPM 2.5" HDD (AIX/Linux only), 4x AC Power Supply 1725W) List Price $992,023. Source: Oracle.com and IBM.com, collected 03/18/2013.

SPARC T5-2 Achieves SPECjbb2013 Benchmark World Record Result

From www.spec.org

Defects Identified in SPECjbb®2013

December 9, 2014 - SPEC has identified a defect in its SPECjbb®2013 benchmark suite. SPEC has suspended sales of the benchmark software and is no longer accepting new submissions of SPECjbb®2013 results for publication on SPEC's website. Current SPECjbb®2013 licensees will receive a free copy of the new version of the benchmark when it becomes available.

SPEC is advising SPECjbb®2013 licensees and users of the SPECjbb®2013 metrics that the recently discovered defect impacts the comparability of results. This defect can significantly impact the amount of work done during the measurement period, resulting in an inflated SPECjbb®2013 metric. SPEC recommends that users not utilize these results for system comparisons without a full understanding of the impact of these defects on each benchmark result.

Additional information is available here.

Oracle, using Oracle Solaris and Oracle JDK, delivered a two socket server world record result on the SPECjbb2013 benchmark, Multi-JVM metric. This benchmark was designed by the industry to showcase Java server performance. SPECjbb2013 is the replacement for SPECjbb2005 (SPECjbb2005 will soon be retired by SPEC).

  • Oracle's SPARC T5-2 server achieved 75,658 SPECjbb2013-MultiJVM max-jOPS and 23,334 SPECjbb2013-MultiJVM critical-jOPS on the SPECjbb2013 benchmark. This result is a two chip world record. (Oracle has submitted this result for review by SPEC.)

  • There are no IBM POWER7 or POWER7+ based server results on the SPECjbb2013 benchmark. IBM has published results for POWER7+ based servers on SPECjbb2005, which will soon be retired by SPEC.

  • The 2-chip SPARC T5-2 server is 1.9x faster than the 2-chip HP ProLiant ML350p Gen8 server (2.9 GHz E5-2690 Sandy Bridge-based) based on SPECjbb2013-MultiJVM max-jOPS.

  • The 2-chip SPARC T5-2 server is 15% faster than the 4-chip HP ProLiant DL560p server (2.7 GHz E5-4650 Sandy Bridge-based) based on SPECjbb2013-MultiJVM max-jOPS.

  • The 2-chip SPARC T5-2 server is 6.1x faster than the 1-chip HP ProLiant ML310e Gen8 (3.6 GHz E3-1280v2 Ivy Bridge-based) based on SPECjbb2013-MultiJVM max-jOPS.

  • The Sun Server X3-2 system running Oracle Solaris 11 is 5% faster than the HP ProLiant ML350p Gen8 server running Windows Server 2008 based on SPECjbb2013-MultiJVM max-jOPS.

  • Oracle's SPARC T4-2 server achieved 34,804 SPECjbb2013-MultiJVM max-jOPS and 10,101 SPECjbb2013-MultiJVM critical-jOPS on the SPECjbb2013 benchmark.
    (Oracle has submitted this result for review by SPEC.)

  • Oracle's Sun Server X3-2 system achieved 41,954 SPECjbb2013-MultiJVM max-jOPS and 13,305 SPECjbb2013-MultiJVM critical-jOPS on the SPECjbb2013 benchmark. (Oracle has submitted this result for review by SPEC.)

  • Oracle's Sun Server X2-4 system achieved 65,211 SPECjbb2013-MultiJVM max-jOPS and 22,057 SPECjbb2013-MultiJVM critical-jOPS on the SPECjbb2013 benchmark. (Oracle has submitted this result for review by SPEC.)

  • SPECjbb2013 demonstrates better performance on Oracle hardware and software, engineered to work together, than alternatives from HP.

  • These results were obtained using Oracle Solaris 11 along with Java Platform, Standard Edition, JDK 7 Update 17 on the SPARC T5-2 server, SPARC T4-2 server, Sun Server X3-2 and Sun Server X2-4.

From SPEC's press release, "SPECjbb2013 replaces SPECjbb2005. The new benchmark has been developed from the ground up to measure performance based on the latest Java application features. It is expected to be used widely by all those interested in Java server performance, including JVM vendors, hardware developers, Java application developers, researchers and members of the academic community."

Performance Landscape

Results of SPECjbb2013 from www.spec.org as of March 26, 2013 and this report.

SPECjbb2013
System Processor SPECjbb2013-MultiJVM OS JDK
max-jOPS critical-jOPS
SPARC T5-2 2 x SPARC T5 75,658 23,334 Oracle Solaris 11.1 Oracle JDK 7u17
HP DL560p Gen8 4 x Intel E5-4650 66,007 16,577 Windows 2008 R2 Oracle JDK 7u15
Sun Server X2-4 4 x Intel E7-4870 65,211 22,057 Oracle Solaris 11.1 Oracle JDK 7u17
Sun Server X3-2 2 x Intel E5-2690 41,954 13,305 Oracle Solaris 11.1 Oracle JDK 7u17
HP ML350p Gen8 2 x Intel E5-2690 40,047 12,308 Windows 2008 R2 Oracle JDK 7u15
SPARC T4-2 2 x SPARC T4 34,804 10,101 Oracle Solaris 11.1 Oracle JDK 7u17
Supermicro X8DTN+ 2 x Intel X5690 20,977 6,188 RHEL 6.3 Oracle JDK 7u11
HP ML310e Gen8 1 x Intel E3-1280v2 12,315 2,908 Windows 2008 R2 Oracle JDK 7u15
Intel R1304BT 1 x Intel 1260L 6,198 1,722 Windows 2008 R2 Oracle JDK 7u11

The above table represents all of the published results on www.spec.org. SPEC allows for self publication of SPECjbb2013 results.

Configuration Summary

Systems Under Test:

SPARC T5-2 server
2 x SPARC T5, 3.60 GHz
512 GB memory (32 x 16 GB dimms)
Oracle Solaris 11.1
Oracle JDK 7 Update 17

Sun Server X2-4
4 x Intel Xeon E7-4870, 2.40 GHz
Hyper-Threading enabled
Turbo Boost enabled
128 GB memory (32 x 4 GB dimms)
Oracle Solaris 11.1
Oracle JDK 7 Update 17

Sun Server X3-2
2 x Intel E5-2690, 2.90 GHz
Hyper-Threading enabled
Turbo Boost enabled
128 GB memory (32 x 4 GB dimms)
Oracle Solaris 11.1
Oracle JDK 7 Update 17

SPARC T4-2 server
2 x SPARC T4, 2.85 GHz
256 GB memory (32 x 8 GB dimms)
Oracle Solaris 11.1
Oracle JDK 7 Update 17

Benchmark Description

The SPECjbb2013 benchmark has been developed from the ground up to measure performance based on the latest Java application features. It is relevant to all audiences who are interested in Java server performance, including JVM vendors, hardware developers, Java application developers, researchers and members of the academic community.

SPECjbb2013 replaces SPECjbb2005. New features include:

  • A usage model based on a world-wide supermarket company with an IT infrastructure that handles a mix of point-of-sale requests, online purchases and data-mining operations.
  • Both a pure throughput metric and a metric that measures critical throughput under service-level agreements (SLAs) specifying response times ranging from 10ms to 500ms.
  • Support for multiple run configurations, enabling users to analyze and overcome bottlenecks at multiple layers of the system stack, including hardware, OS, JVM and application layers.
  • Exercising new Java 7 features and other important performance elements, including the latest data formats (XML), communication using compression, and messaging with security.
  • Support for virtualization and cloud environments.

See Also

Disclosure Statement

SPEC and the benchmark name SPECjbb are registered trademarks of Standard Performance Evaluation Corporation (SPEC). Results as of 3/26/2013, see http://www.spec.org for more information. SPARC T5-2 75,658 SPECjbb2013-MultiJVM max-jOPS, 23,334 SPECjbb2013-MultiJVM critical-jOPS. Sun Server X2-4 65,211 SPECjbb2013-MultiJVM max-jOPS, 22,057 SPECjbb2013-MultiJVM critical-jOPS. Sun Server X3-2 41,954 SPECjbb2013-MultiJVM max-jOPS, 13,305 SPECjbb2013-MultiJVM critical-jOPS. SPARC T4-2 34,804 SPECjbb2013-MultiJVM max-jOPS, 10,101 SPECjbb2013-MultiJVM critical-jOPS. HP ProLiant DL560p Gen8 66,007 SPECjbb2013-MultiJVM max-jOPS, 16,577 SPECjbb2013-MultiJVM critical-jOPS. HP ProLiant ML350p Gen8 40,047 SPECjbb2013-MultiJVM max-jOPS, 12,308 SPECjbb2013-MultiJVM critical-jOPS. Supermicro X8DTN+ 20,977 SPECjbb2013-MultiJVM max-jOPS, 6,188 SPECjbb2013-MultiJVM critical-jOPS. HP ProLiant ML310e Gen8 12,315 SPECjbb2013-MultiJVM max-jOPS, 2,908 SPECjbb2013-MultiJVM critical-jOPS. Intel R1304BT 6,198 SPECjbb2013-MultiJVM max-jOPS, 1,722 SPECjbb2013-MultiJVM critical-jOPS.

SPARC T5-8 Realizes SAP SD Two-Tier Benchmark World Record for 8 Chip Systems

Oracle's SPARC T5-8 server produced a world record result for systems with 8 processors on the two-tier SAP Sales and Distribution (SD) Standard Application Benchmark.

  • The SPARC T5-8 server achieved 40,000 users running the two-tier SAP Sales and Distribution (SD) Standard Application Benchmark using SAP Enhancement package 5 for SAP ERP 6.0.

  • The SPARC T5-8 server is 57% faster than the 8-chip IBM Power 760 running SAP Enhancement Package 5 for SAP ERP 6.0.

  • The SPARC T5-8 server delivers 5% more SAP users per chip than the 12-chip IBM Power 780 running SAP Enhancement Package 5 for SAP ERP 6.0.

  • The SPARC T5-8 server solution was run with Oracle Solaris 11 and used Oracle Database 11g.

Performance Landscape

SAP-SD 2-Tier Performance Table (in decreasing performance order). SAP ERP 6.0 Enhancement Pack 5 for SAP ERP 6.0 results (New version of the benchmark as of May 2012).

System OS
Database
Users SAPS SAP
ERP/ECC
Release
Date
SPARC T5-8 Server
8x SPARC T5 @3.6 GHz, 2 TB
Solaris 11
Oracle 11g
40,000 220,950 EHP5 for SAP
ERP 6.0
25-Mar-13
IBM Power 760
8xPOWER7+ @3.41 GHz, 1024 GB
AIX 7.1
DB2 10
25,488 139,220 EHP5 for SAP
ERP 6.0
5-Feb-13

SAP ERP 6.0 Enhancement Pack 4 for SAP ERP 6.0 Results
(Old version of the benchmark, obsolete at the end of April, 2012)

System OS
Database
Users SAPS SAP
ERP/ECC
Release
Date
IBM Power 795
32xPOWER7 @4 GHz, 4 TB
AIX 7.1
DB2 9.7
126,063 688,630 EHP4 for SAP
ERP 6.0
15-Nov-10
SPARC Enterprise Server M9000
64xSPARC64 VII @2.88 GHz, 1152 GB
Solaris 10
Oracle 10g
32,000 175,600 EHP4 for SAP
ERP 6.0
18-Nov-09

Complete benchmark results may be found at the SAP benchmark website http://www.sap.com/benchmark.

Configuration Summary and Results

Hardware Configuration:

1 x SPARC T5-8 server with
8 x 3.6 GHz SPARC T5 processors (total of 8 processors / 128 cores / 1024 threads)
2 TB memory
1 x Sun ZFS Storage 7420 appliance with
72 x 600 GB 15K RPM 3.5" SAS-2 disk
32 x 32 GB memory
1 x Sun Fire X4270 M2 server configured as a COMSTAR device with
10 x 2 TB 7.2K 3.5" SAS disk
18 x 8 GB memory

Software Configuration:

Oracle Solaris 11
SAP enhancement package 5 for SAP ERP 6.0
Oracle Database 11g Release 2

Certified Results (published by SAP)

Performance:
40,000 benchmark users
SAP Certification:
2013008

Benchmark Description

The SAP Standard Application SD (Sales and Distribution) Benchmark is a two-tier ERP business test that is indicative of full business workloads of complete order processing and invoice processing, and demonstrates the ability to run both the application and database software on a single system. The SAP Standard Application SD Benchmark represents the critical tasks performed in real-world ERP business environments.

SAP is one of the premier world-wide ERP application providers, and maintains a suite of benchmark tests to demonstrate the performance of competitive systems on the various SAP products.

See Also

Disclosure Statement

Two-tier SAP Sales and Distribution (SD) Standard Application benchmarks SAP Enhancement package 5 for SAP ERP 6.0 as of 3/26/13:

SPARC T5-8 (8 processors, 128 cores, 1024 threads) 40,000 SAP SD users, 8 x 3.6 GHz SPARC T5, 2 TB memory, Oracle Database 11g, Oracle Solaris 11, Cert# 2013008. IBM Power 760 (8 processors, 48 cores, 192 threads) 25,488 SAP SD users, 8 x 3.41 GHz IBM POWER7+, 1024 GB memory, DB2 10, AIX 7.1, Cert#2013004.

Two-tier SAP Sales and Distribution (SD) Standard Application benchmarks SAP Enhancement package 4 for SAP ERP 6.0 as of 4/30/12:

IBM Power 795 (32 processors, 256 cores, 1024 threads) 126,063 SAP SD users, 32 x 4 GHz IBM POWER7, 4 TB memory, DB2 9.7, AIX7.1, Cert#2010046. SPARC Enterprise Server M9000 (64 processors, 256 cores, 512 threads) 32,000 SAP SD users, 64 x 2.88 GHz SPARC64 VII, 1152 GB memory, Oracle Database 10g, Oracle Solaris 10, Cert# 2009046.

SAP, R/3, reg TM of SAP AG in Germany and other countries. More info www.sap.com/benchmark

SPARC M5-32 Produces SAP SD Two-Tier Benchmark World Record for SAP Enhancement Package 5 for SAP ERP 6.0

Oracle's SPARC M5-32 server produced a world record result on the two-tier SAP Sales and Distribution (SD) Standard Application Benchmark using SAP Enhancement package 5 for SAP ERP 6.0.

  • The SPARC M5-32 server achieved 85,050 users running the two-tier SAP Sales and Distribution (SD) Standard Application Benchmark using SAP Enhancement package 5 for SAP ERP 6.0.

  • The SPARC M5-32 solution was run with Oracle Solaris 11 and used the Oracle Database 11g.

Performance Landscape

SAP-SD 2-Tier Performance Table (in decreasing performance order). SAP ERP 6.0 Enhancement Pack 5 for SAP ERP 6.0 results (new version of the benchmark as of May, 2012).

System OS
Database
Users SAPS SAP
ERP/ECC
Release
Date
SPARC M5-32 Server
32x SPARC M5 @3.6 GHz, 4 TB
Solaris 11
Oracle 11g
85,050 472,600 EHP5 for SAP
ERP 6.0
25-Mar-13
IBM Power 780
12xPOWER7+ @3.72 GHz, 1536 GB
AIX 7.1
DB2 10
57,024 311,720 EHP5 for SAP
ERP 6.0
3-Oct-12
IBM Power 760
8xPOWER7+ @3.41 GHz, 1024 GB
AIX 7.1
DB2 10
25,488 139,220 EHP5 for SAP
ERP 6.0
5-Feb-13

SAP ERP 6.0 Enhancement Pack 4 for SAP ERP 6.0 Results
(Old version of the benchmark, obsolete at the end of April, 2012)

System OS
Database
Users SAPS SAP
ERP/ECC
Release
Date
IBM Power 795
32xPOWER7 @4 GHz, 4 TB
AIX 7.1
DB2 9.7
126,063 688,630 EHP4 for SAP
ERP 6.0
15-Nov-10
SPARC Enterprise Server M9000
64xSPARC64 VII @2.88 GHz, 1152 GB
Solaris 10
Oracle 10g
32,000 175,600 EHP4 for SAP
ERP 6.0
18-Nov-09

Complete benchmark results may be found at the SAP benchmark website http://www.sap.com/benchmark.

Configuration Summary and Results

Hardware Configuration:

1 x SPARC M5-32 server with
32 x 3.6 GHz SPARC M5 processors (total of 32 processors / 192 cores / 1536 threads)
4 TB memory
1 x Sun Storage 2540-M2 (12 x 300 GB 15K RPM 3.5" SAS-2 disk & 2 GB cache)
Flash Storage

Software Configuration:

Oracle Solaris 11
SAP enhancement package 5 for SAP ERP 6.0
Oracle Database 11g Release 2

Certified Results (published by SAP)

Performance: 85,050 benchmark users
SAP Certification: 2013009

Benchmark Description

The SAP Standard Application SD (Sales and Distribution) Benchmark is a two-tier ERP business test that is indicative of full business workloads of complete order processing and invoice processing, and demonstrates the ability to run both the application and database software on a single system. The SAP Standard Application SD Benchmark represents the critical tasks performed in real-world ERP business environments.

SAP is one of the premier world-wide ERP application providers, and maintains a suite of benchmark tests to demonstrate the performance of competitive systems on the various SAP products.

See Also

Disclosure Statement

Two-tier SAP Sales and Distribution (SD) standard application benchmarks, SAP Enhancement package 5 for SAP ERP 6.0 as of 3/26/13:

SPARC M5-32 (32 processors, 192 cores, 1536 threads) 85,050 SAP SD users, 32 x 3.6 GHz SPARC M5, 4 TB memory, Oracle Database 11g, Oracle Solaris 11, Cert# 2013009. IBM Power 780 (12 processors, 96 cores, 384 threads) 57,024 SAP SD users, 12 x 3.72 GHz IBM POWER7+, 1536 GB memory, DB210, AIX7.1, Cert#2012033. IBM Power 760 (8 processors, 48 cores, 192 threads) 25,488 SAP SD users, 8 x 3.41 GHz IBM POWER7+, 1024 GB memory, DB2 10, AIX 7.1, Cert#2013004.

Two-tier SAP Sales and Distribution (SD) standard application benchmarks, SAP Enhancement package 4 for SAP ERP 6.0 as of 3/26/13:

IBM Power 795 (32 processors, 256 cores, 1024 threads) 126,063 SAP SD users, 32 x 4 GHz IBM POWER7, 4 TB memory, DB2 9.7, AIX7.1, Cert#2010046. SPARC Enterprise Server M9000 (64 processors, 256 cores, 512 threads) 32,000 SAP SD users, 64 x 2.88 GHz SPARC64 VII, 1152 GB memory, Oracle Database 10g, Oracle Solaris 10, Cert# 2009046.

SAP, R/3, reg TM of SAP AG in Germany and other countries. More info www.sap.com/benchmark

SPARC T5-2 Obtains Oracle Internet Directory Benchmark World Record Performance

Oracle's SPARC T5-2 server running Oracle Internet Directory (OID, Oracle's LDAP Directory Server) on Oracle Solaris 11 achieved a record result for LDAP searches/second with 1000 clients.

  • The SPARC T5-2 server running Oracle Internet Directory on Oracle Solaris 11 achieved a result of 944,624 LDAP searches/sec with an average latency of 1.05 ms with 1000 clients.

  • The SPARC T5-2 server running Oracle Internet Directory demonstrated 2.7x better throughput and 39% better latency than a similarly configured OID and SPARC T4 benchmark environment.

  • The SPARC T5-2 server running Oracle Internet Directory demonstrates 39% better throughput and latency for LDAP searches in a core-to-core comparison against an x86 system configured with two Intel Xeon X5675 processors.

  • Oracle Internet Directory achieved near-linear scaling on the SPARC T5-2 server, from 68,399 LDAP searches/sec with 2 cores to 944,624 LDAP searches/sec with 32 cores.

  • Oracle Internet Directory and the SPARC T5-2 server achieved up to 12,453 LDAP modify operations/sec with an average latency of 3.9 msec for 50 clients.

Performance Landscape

Oracle Internet Directory Tests
System c/c/th Search Modify Add
ops/sec lat (msec) ops/sec lat (msec) ops/sec lat (msec)
SPARC T5-2 2/32/256 944,624 1.05 12,453 3.9 888 17.9
SPARC T4-4 4/32/256 682,000 1.46 12,000 4.0 835 19.0

In order to compare the SPARC T5-2 to a 12-core x86 system, only 1 processor and 12 cores were used in the SPARC T5-2.

Oracle Internet Directory Tests – Comparing Against x86
System c/c/th Search Compare Authentication
ops/sec lat (msec) ops/sec lat (msec) ops/sec lat (msec)
SPARC T5-2 1/12/96 417,000 1.19 274,185 1.82 149,623 3.30
x86 2 x Intel X5675 2/12/24 299,000 1.66 202,433 2.46 119,198 4.19

Scaling runs were also made on the SPARC T5-2 server.

Scaling of Search Tests – SPARC T5-2
Cores Clients ops/sec Latency (msec)
32 1000 944,624 1.05
24 1000 823,741 1.21
16 500 560,709 0.88
8 500 270,601 1.84
4 100 145,879 0.68
2 100 68,399 1.46

Configuration Summary

System Under Test:

SPARC T5-2
2 x SPARC T5 processors, 3.6 GHz
512 GB memory
4 x 300 GB internal disks
Flash Storage (used for database and log files)
1 x Sun Storage 2540-M2 (used for redo logs)
Oracle Solaris 11.1
Oracle Internet Directory 11g Release 1 PS6 (11.1.1.7.0)
Oracle Database 11g Enterprise Edition 11.2.0.3 (64-bit)

Benchmark Description

Oracle Internet Directory (OID) is Oracle's LDAPv3 Directory Server. The throughput of five key operations is measured: Search, Compare, Modify, Mix, and Add.

LDAP Search Operations Test

This test scenario involved concurrent clients binding once to OID and then performing repeated LDAP Search operations. The salient characteristics of this test scenario are as follows:

  • SLAMD SearchRate job was used.
  • The BaseDN of the search is the root of the DIT, the scope is SUBTREE, the search filter is of the form UID=<value>, and DN and UID are the requested attributes (see the example after this list).
  • Each LDAP search operation matches a single entry.
  • The total number of concurrent clients was 1000, distributed across two client nodes.
  • Each client binds to OID once and performs repeated LDAP Search operations; each search looks up a unique entry, such that no client looks up the same entry twice, no two clients look up the same entry, and all entries are searched in random order.
  • In one run of the test, random entries from the 50 million entries are looked up, one per LDAP Search operation.
  • Test job was run for 60 minutes.
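
For reference, a single search of the kind each SLAMD client issues can be reproduced with a standalone ldapsearch command, as in the hypothetical example below; the host, port, credentials, suffix, and UID value are placeholders, not the benchmark's actual values.

# Subtree search from the root of the DIT, matching a single UID and returning the uid attribute
ldapsearch -h oid-host.example.com -p 3060 \
  -D "cn=orcladmin" -w password \
  -b "dc=example,dc=com" -s sub "(uid=user0012345)" uid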

LDAP Compare Operations Test

This test scenario involved concurrent clients binding once to OID and then performing repeated LDAP Compare operations on the userpassword attribute. The salient characteristics of this test scenario are as follows:

  • SLAMD CompareRate job was used.
  • Each LDAP compare operation matches the userpassword attribute of a user entry.
  • The total number of concurrent clients was 1000, distributed across two client nodes.
  • Each client binds to OID once and performs repeated LDAP compare operations.
  • In one run of the test, random entries from the 50 Million entries are compared in as many LDAP compare operations.
  • Test job was run for 60 minutes.
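
Again only as an illustration of the operation (the benchmark used the SLAMD CompareRate job), a minimal LDAP Compare against the userpassword attribute might look like this in Python ldap3; all names and values are placeholders.

```python
# Illustrative only: an LDAP Compare on the userpassword attribute (Python ldap3).
from ldap3 import Server, Connection

conn = Connection(Server("oid.example.com", port=3060),
                  user="cn=orcladmin", password="secret", auto_bind=True)

dn = "uid=user0000001,ou=people,dc=example,dc=com"     # placeholder entry
print(conn.compare(dn, "userpassword", "Welcome1"))    # True if the value matches
conn.unbind()
```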

LDAP Modify Operations Test

This test scenario consisted of concurrent clients binding once to OID and then performing repeated LDAP Modify operations. The salient characteristics of this test scenario are as follows (a minimal client sketch follows the list):

  • The SLAMD LDAP ModRate job was used.
  • A total of 50 concurrent LDAP clients were used.
  • Each client updates a unique entry each time, and a total of 50 million entries are updated.
  • Test job was run for 60 minutes.
  • Value length was set to 11.
  • The attribute being modified is not indexed.
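
For illustration (the benchmark drove this with the SLAMD LDAP ModRate job), a single LDAP Modify replacing a non-indexed attribute with an 11-character value could be expressed as follows in Python ldap3; the entry DN and attribute are placeholders.

```python
# Illustrative only: an LDAP Modify replacing a non-indexed attribute
# with an 11-character value, as described in the test above (Python ldap3).
from ldap3 import Server, Connection, MODIFY_REPLACE

conn = Connection(Server("oid.example.com", port=3060),
                  user="cn=orcladmin", password="secret", auto_bind=True)

dn = "uid=user0000001,ou=people,dc=example,dc=com"                      # placeholder entry
conn.modify(dn, {"description": [(MODIFY_REPLACE, ["abcdefghijk"])]})   # 11-character value
print(conn.result)
conn.unbind()
```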

LDAP Mixed Load Test

The test scenario involved both the LDAP search and LDAP modify clients enumerated above (the client mix is sketched after the list).

  • The mix consisted of 60% LDAP search clients, 30% LDAP bind clients, and 10% LDAP modify clients.
  • A total of 1000 concurrent LDAP clients were used, distributed across two client nodes.
  • Test job was run for 60 minutes.
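
The client mix above works out to the following counts; the sketch assumes an even split across the two client nodes, which is an assumption rather than a documented detail.

```python
# Client distribution for the mixed load test: 1000 clients, 60/30/10 mix,
# spread across two client nodes (assumes an even split per node).
total_clients, nodes = 1000, 2
mix = {"search": 0.60, "bind": 0.30, "modify": 0.10}

for workload, share in mix.items():
    clients = int(total_clients * share)
    print(f"{workload:>6}: {clients} clients ({clients // nodes} per node)")
```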

LDAP Add Load Test

The test scenario involved concurrent clients adding new entries, as follows (a minimal client sketch follows the list).

  • The SLAMD standard AddRate job was used.
  • A total of 500,000 entries were added.
  • A total of 16 concurrent LDAP clients were used.
  • SLAMD adds inetOrgPerson objectclass entries with 21 attributes (including operational attributes).
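
As with the other operation types, here is a minimal illustration of a single LDAP Add of an inetOrgPerson entry in Python ldap3, abbreviated to a few attributes rather than the 21 used in the benchmark; all values are placeholders.

```python
# Illustrative only: adding an inetOrgPerson entry (Python ldap3),
# abbreviated to a handful of attributes.
from ldap3 import Server, Connection

conn = Connection(Server("oid.example.com", port=3060),
                  user="cn=orcladmin", password="secret", auto_bind=True)

dn = "uid=newuser0000001,ou=people,dc=example,dc=com"     # placeholder entry
conn.add(dn, "inetOrgPerson",
         {"cn": "New User", "sn": "User", "uid": "newuser0000001"})
print(conn.result)
conn.unbind()
```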

See Also

Disclosure Statement

Copyright 2013, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 26 March 2013.

SPARC T5-2 Scores Oracle FLEXCUBE Universal Banking Benchmark World Record Performance

Oracle's SPARC T5-2 server running Oracle FLEXCUBE Universal Banking Release 12 along with Oracle Database 11g Release 2 on Oracle Solaris 11 produced record results.

  • A SPARC T5-2 server running Oracle FLEXCUBE Universal Banking Release 12 and Oracle Real Application Clusters (RAC) Database 11g Release 2 processed 25 million accounts for the End of Month workload in 150 minutes with an average CPU utilization of 55%, and in 196 minutes utilizing 20 cores with an average CPU utilization of 85%.

  • A SPARC T5-2 server running Oracle FLEXCUBE Universal Banking Release 12 and Oracle Real Application Clusters (RAC) Database 11g Release 2 processed 25 million accounts in 56 minutes for the End of Day workload utilizing just 20 cores.

  • A SPARC T5-2 server running Oracle FLEXCUBE Universal Banking Release 12 achieved twice the throughput compared to a SPARC T4-4 server (which has twice the number of processors) for End of Month batch processing.

  • A SPARC T5-2 server running Oracle FLEXCUBE Universal Banking Release 12 achieved a record result, processing 10.14 million accounts in 28 minutes for the End of Day workload with an average CPU utilization of 72% on a single server.

  • These results demonstrate how SPARC T5 processor based systems along with Oracle Solaris 11 can benefit global, private, and corporate financial institutions running Oracle FLEXCUBE Universal Banking. The co-engineered Oracle software and SPARC T5 processor based systems provide the agility demanded by modern business environments.

  • The SPARC T5-2 system, along with Oracle Solaris, provides a combination of characteristics essential to a modern financial services institution.

  • SPARC T5 processor based systems are capable of delivering higher performance and lower total cost of ownership (TCO) than older SPARC infrastructure, without introducing the hidden cost and risk of migrating applications away from older SPARC systems.

Performance Landscape

Oracle FLEXCUBE Universal Banking Release 12
End of Month Batch Processing
System        Customer Accounts    Time in Minutes    Notes
SPARC T5-2    25M                  150.66             RAC (two systems)
SPARC T5-2    10.14M               101.92             single instance
SPARC T4-4    10.14M               108.77             single instance
SPARC T4-4    5M                   106.18             single instance, two chips

Oracle FLEXCUBE Universal Banking Release 12
End of Day Batch Processing
System        Customer Accounts    Time in Minutes    Notes
SPARC T5-2    25M                  56.05              RAC (two systems)
SPARC T5-2    10.14M               27.87              single instance

Configuration Summary

SPARC T5 Configuration:

1 x SPARC T5-2 with
2 x SPARC T5 processors, 3.6 GHz
512 GB memory
1 x SPARC T5-2 with
2 x SPARC T5 processors, 3.6 GHz
256 GB memory
Oracle Solaris 11 11/11
Oracle Database 11g Release 2 (RAC/ASM 11.2.0.3.0)
Oracle FLEXCUBE Universal Banking Release 12.0.1

SPARC T4 Configuration:

2 x SPARC T4-4, each with
4 x SPARC T4 processors, 3.0 GHz
512 GB memory
Oracle Solaris 11 11/11
Oracle Database 11g Release 2 (RAC/ASM 11.2.0.3.0)
Oracle FLEXCUBE Universal Banking Release 12.0.1

Storage Configuration:

3 x Sun Storage 6180 Array with
16 x 300 GB disks, 15K RPM (total of 48)
4 x Sun Storage CSM200 Expansion Trays, each with
16 x 73 GB disks, 15K RPM (total of 64)
Configured as RAID0, ASM external redundancy
Tests were run with a single-instance database (single node) and with ASM on two nodes
ASM configuration identical on both machines
Oracle Database 11g Release 2 ASM 11.2.0.3.0 64bit (19 TB)

Benchmark Description

The Oracle FLEXCUBE Universal Banking Release 12 benchmark models an actual customer bank with End of Cycle transaction batch jobs which typically execute during non-banking hours. This benchmark includes end of day accrual for savings and term deposit accounts, interest capitalization for saving accounts, and interest pay out for term deposit accounts. The results of the benchmark are certified by Oracle and a white paper is published.

End of cycle batch tests are conducted to measure the throughput capabilities of the system. They help banks determine the end-of-cycle processing window required for back-office processing. The End of Day (EOD) batch test includes the following:

  • Mark End of Transaction Input
  • Value Dated Balance update
  • Interest and Charges (IC) Batch
  • Mark End of Financial Input
  • Mark End of Day
  • Date Change
  • Mark Transaction Input

The End of Month (EOM) batch test includes additional steps. These batches typically execute during non-banking hours. The goal is to ensure that the system is able to complete the batch operations for the planned volumes within 60 minutes for End of Day (EOD) and within 180 minutes for End of Month (EOM), including interest and charges liquidation.
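
To put the batch times above into throughput terms, the sketch below derives accounts processed per minute from the two result tables; this is an informal derived figure for readability, not an official FLEXCUBE metric.

```python
# Accounts processed per minute, derived from the End of Month / End of Day
# result tables above (informal derived figure, not an official metric).
results = {
    "EOM, SPARC T5-2 RAC, 25M accounts":       (25_000_000, 150.66),
    "EOM, SPARC T5-2 single, 10.14M accounts": (10_140_000, 101.92),
    "EOD, SPARC T5-2 RAC, 25M accounts":       (25_000_000, 56.05),
    "EOD, SPARC T5-2 single, 10.14M accounts": (10_140_000, 27.87),
}
for label, (accounts, minutes) in results.items():
    print(f"{label}: {accounts / minutes:,.0f} accounts/minute")
```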

 

See Also

Disclosure Statement

Copyright 2013, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 26 March 2013.

Friday Feb 22, 2013

Oracle Produces World Record SPECjbb2013 Result with Oracle Solaris and Oracle JDK

From www.spec.org

Defects Identified in SPECjbb®2013

December 9, 2014 - SPEC has identified a defect in its SPECjbb®2013 benchmark suite. SPEC has suspended sales of the benchmark software and is no longer accepting new submissions of SPECjbb®2013 results for publication on SPEC's website. Current SPECjbb®2013 licensees will receive a free copy of the new version of the benchmark when it becomes available.

SPEC is advising SPECjbb®2013 licensees and users of the SPECjbb®2013 metrics that the recently discovered defect impacts the comparability of results. This defect can significantly impact the amount of work done during the measurement period, resulting in an inflated SPECjbb®2013 metric. SPEC recommends that users not utilize these results for system comparisons without a full understanding of the impact of these defects on each benchmark result.

Additional information is available here.

Oracle, using Oracle Solaris and Oracle JDK, delivered a world record result on the SPECjbb2013 benchmark (Composite metric). This benchmark was designed by the industry to showcase Java server performance. SPECjbb2013 is the replacement for SPECjbb2005 (SPECjbb2005 will soon be retired by SPEC).

  • The Oracle Solaris result is 1.8x faster on the SPECjbb2013-Composite max-jOPS metric than the Red Hat Enterprise Linux result.

  • The Oracle Solaris result is 2.2x faster on the SPECjbb2013-Composite critical-jOPS metric than the Red Hat Enterprise Linux result.

  • The combination of Oracle Solaris 11.1 and Oracle JDK 7 update 15 delivered a result of 37,007 SPECjbb2013-Composite max-jOPS and 13,812 SPECjbb2013-Composite critical-jOPS on the SPECjbb2013 benchmark.
    (Oracle has submitted this result for review by SPEC and it is currently under review.)

From SPEC's press release, "SPECjbb2013 replaces SPECjbb2005. The new benchmark has been developed from the ground up to measure performance based on the latest Java application features. It is expected to be used widely by all those interested in Java server performance, including JVM vendors, hardware developers, Java application developers, researchers and members of the academic community."

Performance Landscape

Results of SPECjbb2013 from www.spec.org as of February 22, 2013 and this report.

SPECjbb2013 – Composite Results
System               Processor            max-jOPS    critical-jOPS    OS                 JDK
Sun Server X2-4      4 x Intel E7-4870    37,007      13,812           Solaris 11.1       Oracle JDK 7u15
Supermicro X8DTN+    2 x Intel E5690      20,977      6,188            RHEL 6.3           Oracle JDK 7u11
Intel R1304BT        1 x Intel 1260L      6,198       1,722            Windows 2008 R2    Oracle JDK 7u11

The above table represents all of the published results on www.spec.org. SPEC allows for self-publication of SPECjbb2013 results. AnandTech has taken advantage of this and has some results on its website, which were run on Intel Xeon E5-2660, AMD Opteron 6380, and AMD Opteron 6376 systems. This information can be viewed at www.anandtech.com. Unfortunately, AnandTech did not follow SPEC's Fair Use requirements in disclosing information about their runs, so it is not possible to include those results in the table above.
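
The 1.8x and 2.2x comparisons earlier in this entry follow directly from the Composite scores in the first table; a quick check:

```python
# Ratio check against the SPECjbb2013-Composite table above.
solaris_max, solaris_critical = 37_007, 13_812   # Sun Server X2-4, Oracle Solaris 11.1
rhel_max, rhel_critical = 20_977, 6_188          # Supermicro X8DTN+, RHEL 6.3

print(f"max-jOPS ratio:      {solaris_max / rhel_max:.1f}x")            # ~1.8x
print(f"critical-jOPS ratio: {solaris_critical / rhel_critical:.1f}x")  # ~2.2x
```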

SPECjbb2013 – MultiJVM Results
System                     Processor              max-jOPS    critical-jOPS    OS                     JDK
HP ProLiant DL560p Gen8    4 x Intel E5-4650      66,007      16,577           Windows Server 2008    Oracle JDK 7u15
HP ProLiant ML350p Gen8    2 x Intel E5-2690      40,047      12,308           Windows Server 2008    Oracle JDK 7u15
HP ProLiant ML310e Gen8    1 x Intel E3-1280v2    12,315      2,908            Windows 2008 R2        Oracle JDK 7u15

Configuration Summary

System Under Test:

Sun Server X2-4
4 x Intel Xeon E7-4870, 2.40 GHz
Hyper-Threading enabled
Turbo Boost enabled
128 GB memory (32 x 4 GB DIMMs)
Oracle Solaris 11.1
Oracle JDK 7 update 15

Benchmark Description

The SPECjbb2013 benchmark has been developed from the ground up to measure performance based on the latest Java application features. It is relevant to all audiences who are interested in Java server performance, including JVM vendors, hardware developers, Java application developers, researchers and members of the academic community.

SPECjbb2013 replaces SPECjbb2005. New features include:

  • A usage model based on a world-wide supermarket company with an IT infrastructure that handles a mix of point-of-sale requests, online purchases and data-mining operations.
  • Both a pure throughput metric (max-jOPS) and a metric that measures critical throughput under service-level agreements (SLAs) specifying response times ranging from 10ms to 500ms (critical-jOPS); a toy illustration of the SLA idea follows this list.
  • Support for multiple run configurations, enabling users to analyze and overcome bottlenecks at multiple layers of the system stack, including hardware, OS, JVM and application layers.
  • Exercising new Java 7 features and other important performance elements, including the latest data formats (XML), communication using compression, and messaging with security.
  • Support for virtualization and cloud environments.
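
As a toy illustration of the SLA idea behind critical-jOPS: for each SLA, find the highest measured throughput whose response time still meets it. This is a conceptual sketch with made-up sample data, not the metric definition from the SPECjbb2013 run rules.

```python
# Conceptual illustration only, with made-up data: highest throughput level
# whose response time still satisfies each SLA. Not the SPEC formula.
samples = [(10_000, 5), (20_000, 12), (30_000, 45), (40_000, 180), (45_000, 700)]
slas_ms = [10, 50, 100, 500]      # the benchmark's SLAs span 10 ms to 500 ms

for sla in slas_ms:
    meeting = [jops for jops, response_ms in samples if response_ms <= sla]
    print(f"SLA {sla:>3} ms: {max(meeting):>6,} jOPS sustainable within the SLA")
```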

See Also

Disclosure Statement

SPEC and the benchmark name SPECjbb are registered trademarks of Standard Performance Evaluation Corporation (SPEC). Results as of 2/22/2013, see http://www.spec.org for more information. Sun Server X2-4 37007 SPECjbb2013-Composite max-jOPS, 13812 SPECjbb2013-Composite critical-jOPS.

About

BestPerf is the source of Oracle performance expertise. In this blog, Oracle's Strategic Applications Engineering group explores Oracle's performance results and shares best practices learned from working on Enterprise-wide Applications.
