Monday Oct 26, 2015

Oracle Communications ASAP – Telco Subscriber Activation: SPARC T7-2 World Record

Oracle's SPARC T7-2 server delivered world record results on Oracle Communications ASAP. The SPARC T7-2 server ran Oracle Solaris 11 with Oracle Database 12c, Oracle WebLogic Server 11g, and Oracle Communications ASAP 7.2.

  • Running Oracle Communications ASAP, the SPARC T7-2 server delivered a world record result of 3,018 ASDLs/sec (atomic network activation actions).

  • Oracle's SPARC M7 processor delivered over 2.5 times the throughput per ASDL cost compared to the previous generation SPARC T5 processor.

  • The SPARC T7-2 server, running a single instance of the Oracle Communications ASAP application with both the application and database tiers consolidated onto a single machine, easily supported a service activation volume of 3,018 ASDLs/sec, representative of a typical mobile operator with more than 100 million subscribers.

  • Oracle Communications ASAP v7.2.0.4 delivered 35% higher throughput on the SPARC T7-2 server when compared to the SPARC T5-4 server.

Performance Landscape

All of the following results were run as part of this benchmark effort.

ASAP 7.2.0.4 Test Results – 16 NEPs
Both tests used one processor for the application tier and one processor for the database tier.

System      ASDLs/sec  CPU Usage  CPU Cost per ASDL  Cost Improvement Ratio
SPARC T7-2  3,018.56   11.4%      1.10               2.6
SPARC T5-4  2,238.97   29.6%      2.15               1.0

CPU Cost per ASDL – computing cost per ASDL (smaller is better)
Cost Improvement Ratio – per-processor improvement of the SPARC T7-2 over the SPARC T5-4

Configuration Summary

Hardware Configuration:

SPARC T7-2 server
2 x SPARC M7 processors (4.13 GHz)
512 GB memory

SPARC T5-4 server
4 x SPARC T5 processors (3.6 GHz)
512 GB memory

Storage Configuration:

Pillar Axiom

Software Configuration:

Oracle Communications ASAP 7.2.0.4.1
Oracle Solaris 11.2
Oracle Database 12c (12.1.0.1.0)
Oracle WebLogic Server 10.3.6.0
Oracle JDK 7 update 75

Benchmark Description

Oracle Communications ASAP provides a convergent service activation platform that automatically activates customer services in a heterogeneous network and IT environment. It supports the activation of consumer and business services in fixed and mobile domains against network and IT applications.

ASAP enables rapid service design and network technology introduction by means of its metadata-driven architecture, design-time configuration environment, and catalog of pre-built activation cartridges to reduce deployment time, cost, and risk. The application has been deployed for mobile (3G, 4G and M2M) services and fixed multi-play (broadband, voice, video, and IT) services in telecommunications, cable and satellite environments as well as for business voice, data, and IT cloud services.

It may be deployed in a fully integrated manner as part of the Oracle Communications Service Fulfillment solution or directly integrated with third-party upstream systems. Market-proven for high-volume performance and scalability, Oracle Communications ASAP is deployed by more than 75 service providers worldwide and activates services for approximately 250 million subscribers globally.

The throughput of ASAP is measured in atomic actions per second (or ASDLs/sec). An atomic action is a single command or operation that can be executed on a network element. Atomic actions are grouped together to form a common service action, where each service action typically relates to an orderable item, such as "GSM voice" or "voice mail" or "GSM data". One or more service actions are invoked by an order management system via an activation work order request.

The workload resembles a typical mobile order to activate a GSM subscriber. A single service action to add a subscriber consists of seven atomic actions where each atomic action executes a command on a network element. Each network element was serviced by a dedicated Network Element Processor (NEP). The ASAP benchmark can vary the number of NEPs, which correlate to the complexity of a Telco operator's environment.
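Given the seven atomic actions per add-subscriber service action described above, the ASDL rate translates directly into a subscriber activation rate. A minimal sketch of that conversion, using the figures from this result:

```python
# Subscriber activation rate implied by the measured ASDL throughput:
# the workload's "add subscriber" service action consists of 7 atomic
# actions (ASDLs), so dividing the ASDL rate by 7 yields subscribers/sec.
asdls_per_sec = 3018.56            # SPARC T7-2 result from the table above
atomic_actions_per_subscriber = 7

subs_per_sec = asdls_per_sec / atomic_actions_per_subscriber
subs_per_hour = subs_per_sec * 3600
print(f"{subs_per_sec:.0f} subscribers/sec, {subs_per_hour:,.0f} subscribers/hour")
```

At this rate, the single consolidated server activates roughly 1.55 million subscribers per hour.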

See Also

Disclosure Statement

Copyright 2015, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 25 October 2015.

Oracle E-Business Payroll Batch Extra-Large: SPARC T7-1 World Record

Oracle's SPARC T7-1 server set a world record running the Oracle E-Business Suite 12.1.3 Standard Extra-Large (250,000 Employees) Payroll (Batch) workload.

  • The SPARC T7-1 server produced a world record result of 1,527,494 employee records processed per hour (9.82 min elapsed time) on the Oracle E-Business Suite R12 (12.1.3) Extra-Large Payroll (Batch) benchmark.

  • The SPARC T7-1 server, equipped with one 4.13 GHz SPARC M7 processor, demonstrated 36% better hourly employee throughput than a two-chip Cisco UCS B200 M4 (Intel Xeon E5-2697 v3).

  • The SPARC T7-1 server, equipped with one 4.13 GHz SPARC M7 processor, demonstrated 40% better hourly employee throughput than a two-chip IBM Power System S824 (POWER8, using 12 cores total).

Performance Landscape

This is the world record result for the Payroll Extra-Large model using the Oracle E-Business Suite 12.1.3 workload.

Batch Workload: Payroll Extra-Large Model
System Processor Employees/Hr Elapsed Time
SPARC T7-1 1 x SPARC M7 (4.13 GHz) 1,527,494 9.82 minutes
Cisco UCS B200 M4 2 x Intel Xeon Processor E5-2697 v3 1,125,281 13.33 minutes
IBM S824 2 x POWER8 (3.52 GHz) 1,090,909 13.75 minutes
Cisco UCS B200 M3 2 x Intel Xeon Processor E5-2697 v2 1,017,639 14.74 minutes
Cisco UCS B200 M3 2 x Intel Xeon Processor E5-2690 839,865 17.86 minutes
Sun Server X3-2L 2 x Intel Xeon Processor E5-2690 789,473 19.00 minutes
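The Employees/Hr column follows directly from the fixed Extra-Large workload size (250,000 employees) and the measured elapsed time; the published figures appear to be truncated, not rounded, to whole numbers. A minimal sketch of the derivation:

```python
# Hourly employee throughput from the fixed Extra-Large payroll workload
# (250,000 employee records) and the measured elapsed time in minutes.
# Published values appear to truncate the fractional part.
EMPLOYEES = 250_000

def employees_per_hour(elapsed_minutes: float) -> int:
    return int(EMPLOYEES * 60 / elapsed_minutes)

print(employees_per_hour(9.82))   # SPARC T7-1: 1527494
print(employees_per_hour(13.33))  # Cisco UCS B200 M4: 1125281
```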

Configuration Summary

Hardware Configuration:

SPARC T7-1 server
1 x SPARC M7 processor (4.13 GHz)
256 GB memory (16 x 16 GB)
Oracle ZFS Storage ZS3-2 appliance (DB Data storage) with
40 x 900 GB 10K RPM SAS-2 HDD,
8 x Write Flash Accelerator SSD and
2 x Read Flash Accelerator SSD 1.6 TB SAS
Oracle Flash Accelerator F160 PCIe Card (1.6 TB NVMe for DB Log storage)

Software Configuration:

Oracle Solaris 11.3
Oracle E-Business Suite R12 (12.1.3)
Oracle Database 11g (11.2.0.3.0)

Benchmark Description

The Oracle E-Business Suite Standard R12 Benchmark combines online transaction execution by simulated users with concurrent batch processing to model a typical scenario for a global enterprise. This benchmark ran one Batch component, Payroll, in the Extra-Large size.

Results can be published in four sizes, using one or more online/batch modules:

  • X-large: Maximum online users running all business flows between 10,000 and 20,000; 750,000 order to cash lines per hour and 250,000 payroll checks per hour.
    • Order to Cash Online — 2400 users
      • The percentage across the 5 transactions in Order Management module is:
        • Insert Manual Invoice — 16.66%
        • Insert Order — 32.33%
        • Order Pick Release — 16.66%
        • Ship Confirm — 16.66%
        • Order Summary Report — 16.66%
    • HR Self-Service — 4000 users
    • Customer Support Flow — 8000 users
    • Procure to Pay — 2000 users
  • Large: 10,000 online users; 100,000 order to cash lines per hour and 100,000 payroll checks per hour.
  • Medium: up to 3000 online users; 50,000 order to cash lines per hour and 10,000 payroll checks per hour.
  • Small: up to 1000 online users; 10,000 order to cash lines per hour and 5,000 payroll checks per hour.

Key Points and Best Practices

  • All system optimizations are in the published report, which is referenced in the See Also section below.

See Also

Disclosure Statement

Oracle E-Business X-Large Payroll Batch workload, SPARC T7-1, 4.13 GHz, 1 chip, 32 cores, 256 threads, 256 GB memory, elapsed time 9.82 minutes, 1,527,494 hourly employee throughput, Oracle Solaris 11.3, Oracle E-Business Suite 12.1.3, Oracle Database 11g Release 2, Results as of 10/25/2015.

Oracle E-Business Suite Applications R12.1.3 (OLTP X-Large): SPARC M7-8 World Record

Oracle's SPARC M7-8 server, using a four-chip Oracle VM Server for SPARC (LDom) virtualized server, produced a world record 20,000 users running the Oracle E-Business OLTP X-Large benchmark. The benchmark runs five Oracle E-Business online workloads concurrently: Customer Service, iProcurement, Order Management, Human Resources Self-Service, and Financials.

  • The virtualized four-chip LDom on the SPARC M7-8 server handled more users than the previous best result, which used eight processors of Oracle's SPARC M6-32 server.

  • The SPARC M7-8 server using Oracle VM Server for SPARC provides enterprise applications with high availability: each application runs in its own environment, insulated from and independent of the others.

Performance Landscape

Oracle E-Business (3-tier) OLTP X-Large Benchmark
System       Chips  Total Online Users  Weighted Average     90th Percentile
                                        Response Time (sec)  Response Time (sec)
SPARC M7-8   4      20,000              0.70                 1.13
SPARC M6-32  8      18,500              0.61                 1.16

Breakdown of the total number of users by component:

Users per Component
Component           SPARC M7-8    SPARC M6-32
Total Online Users  20,000 users  18,500 users
HR Self-Service     5,000 users   4,000 users
Order-to-Cash       2,500 users   2,300 users
iProcurement        2,700 users   2,400 users
Customer Service    7,000 users   7,000 users
Financials          2,800 users   2,800 users

Configuration Summary

System Under Test:

SPARC M7-8 server
8 x SPARC M7 processors (4.13 GHz)
4 TB memory
2 x 600 GB SAS-2 HDD
using a Logical Domain with
4 x SPARC M7 processors (4.13 GHz)
2 TB memory
2 x Sun Storage Dual 16Gb Fibre Channel PCIe Universal HBA
2 x Sun Dual Port 10GBase-T Adapter
Oracle Solaris 11.3
Oracle E-Business Suite 12.1.3
Oracle Database 11g Release 2

Storage Configuration:

4 x Oracle ZFS Storage ZS3-2 appliances each with
2 x Read Flash Accelerator SSD
1 x Storage Drive Enclosure DE2-24P containing:
20 x 900 GB 10K RPM SAS-2 HDD
4 x Write Flash Accelerator SSD
1 x Sun Storage Dual 8Gb FC PCIe HBA
Used for Database files, Zones OS, EBS Mid-Tier Apps software stack
and db-tier Oracle Server
2 x Sun Server X4-2L server with
2 x Intel Xeon Processor E5-2650 v2
128 GB memory
1 x Sun Storage 6Gb SAS PCIe RAID HBA
4 x 400 GB SSD
14 x 600 GB HDD
Used for Redo log files, db backup storage.

Benchmark Description

The Oracle E-Business OLTP X-Large benchmark simulates thousands of online users executing transactions typical of internal Enterprise Resource Planning, simultaneously exercising five application modules: Customer Service, Human Resources Self-Service, iProcurement, Order Management, and Financials.

Each database tier uses a database instance of about 600 GB in size, supporting thousands of application users, accessing hundreds of objects (tables, indexes, SQL stored procedures, etc.).

Key Points and Best Practices

This test demonstrates virtualization technology running various Oracle multi-tier, business-critical applications and databases concurrently on four SPARC M7 processors in a single SPARC M7-8 server, supporting thousands of users executing a high volume of complex transactions with a constrained (<1 sec) weighted average response time.

The Oracle E-Business LDom is further configured using Oracle Solaris Zones.

This result of 20,000 users was achieved by load balancing the Oracle E-Business Suite Applications 12.1.3 five online workloads across two Oracle Solaris processor sets and redirecting all network interrupts to a dedicated third processor set.

Each application processor set (set-1 and set-2) concurrently ran two Oracle E-Business Suite application servers and two database server instances, each within its own Oracle Solaris Zone (4 zones per set).

Each application server's client-facing network interface was mapped to the locality group associated with the CPUs processing the related workload, guaranteeing memory locality for network structures and application server hardware resources.

All external storage was connected with at least two paths to the host's multipath-capable Fibre Channel controller ports, and the Oracle Solaris I/O multipathing feature was enabled.

See Also

Disclosure Statement

Oracle E-Business Suite R12 extra-large multiple-online module benchmark, SPARC M7-8, SPARC M7, 4.13 GHz, 4 chips, 128 cores, 1024 threads, 2 TB memory, 20,000 online users, average response time 0.70 sec, 90th percentile response time 1.13 sec, Oracle Solaris 11.3, Oracle Solaris Zones, Oracle VM Server for SPARC, Oracle E-Business Suite 12.1.3, Oracle Database 11g Release 2, Results as of 10/25/2015.

Oracle E-Business Order-To-Cash Batch Large: SPARC T7-1 World Record

Oracle's SPARC T7-1 server set a world record running the Oracle E-Business Suite 12.1.3 Standard Large (100,000 Order/Inventory Lines) Order-To-Cash (Batch) workload.

  • The SPARC T7-1 server produced a world record order line throughput of 273,973 per hour (21.90 minutes elapsed time) on the Oracle E-Business Suite R12 (12.1.3) Large Order-To-Cash (Batch) benchmark, hosting both the database and application tiers and running Oracle Database 11g on Oracle Solaris 11.

  • The SPARC T7-1 server demonstrated 12% better hourly order line throughput compared to a two-chip Cisco UCS B200 M4 (Intel Xeon Processor E5-2697 v3).

Performance Landscape

Results for the Oracle E-Business 12.1.3 Order-To-Cash Batch Large model workload.

Batch Workload: Order-To-Cash Large Model
System             CPU                                  Order Lines/Hr  Elapsed Time (min)
SPARC T7-1         1 x SPARC M7 processor               273,973         21.90
Cisco UCS B200 M4  2 x Intel Xeon Processor E5-2697 v3  243,803         24.61
Cisco UCS B200 M3  2 x Intel Xeon Processor E5-2690     232,739         25.78
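The quoted 12% advantage over the Cisco UCS B200 M4 can be checked directly from the order-line throughput figures in the table:

```python
# Cross-check of the quoted 12% improvement from the published
# order-line throughput figures.
t7_1 = 273_973        # SPARC T7-1, order lines per hour
b200_m4 = 243_803     # Cisco UCS B200 M4, order lines per hour

improvement = t7_1 / b200_m4 - 1
print(f"{improvement:.0%}")  # 12%
```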

Configuration Summary

Hardware Configuration:

SPARC T7-1 server with
1 x SPARC M7 processor (4.13 GHz)
256 GB memory (16 x 16 GB)
Oracle ZFS Storage ZS3-2 appliance (DB Data storage) with
40 x 900 GB 10K RPM SAS-2 HDD,
8 x Write Flash Accelerator SSD and
2 x Read Flash Accelerator SSD 1.6TB SAS
Oracle Flash Accelerator F160 PCIe Card (1.6 TB NVMe for DB Log storage)

Software Configuration:

Oracle Solaris 11.3
Oracle E-Business Suite R12 (12.1.3)
Oracle Database 11g (11.2.0.3.0)

Benchmark Description

The Oracle E-Business Suite Standard R12 Benchmark combines online transaction execution by simulated users with concurrent batch processing to model a typical scenario for a global enterprise. This benchmark ran one Batch component, Order-To-Cash, in the Large size.

Results can be published in four sizes, using one or more online/batch modules:

  • X-large: Maximum online users running all business flows between 10,000 and 20,000; 750,000 order to cash lines per hour and 250,000 payroll checks per hour.
    • Order to Cash Online — 2400 users
      • The percentage across the 5 transactions in Order Management module is:
        • Insert Manual Invoice — 16.66%
        • Insert Order — 32.33%
        • Order Pick Release — 16.66%
        • Ship Confirm — 16.66%
        • Order Summary Report — 16.66%
    • HR Self-Service — 4000 users
    • Customer Support Flow — 8000 users
    • Procure to Pay — 2000 users
  • Large: 10,000 online users; 100,000 order to cash lines per hour and 100,000 payroll checks per hour.
  • Medium: up to 3000 online users; 50,000 order to cash lines per hour and 10,000 payroll checks per hour.
  • Small: up to 1000 online users; 10,000 order to cash lines per hour and 5,000 payroll checks per hour.

Key Points and Best Practices

  • All system optimizations are in the published report; a link can be found in the See Also section below.

See Also

Disclosure Statement

Oracle E-Business Large Order-To-Cash Batch workload, SPARC T7-1, 4.13 GHz, 1 chip, 32 cores, 256 threads, 256 GB memory, elapsed time 21.90 minutes, 273,973 hourly order line throughput, Oracle Solaris 11.3, Oracle E-Business Suite 12.1.3, Oracle Database 11g Release 2, Results as of 10/25/2015.

PeopleSoft Enterprise Financials 9.2: SPARC T7-2 World Record

Oracle's SPARC T7-2 server achieved world record performance as the first published result on Oracle's PeopleSoft Enterprise Financials 9.2 benchmark. This result was obtained using one Oracle VM Server for SPARC (LDom) virtualized system configured with a single SPARC M7 processor.

  • The single processor LDom on the SPARC T7-2 server achieved world record performance executing 200 million Journal Lines in 18.60 minutes.

  • The single processor LDom on the SPARC T7-2 server processed General Ledger Journal Edit and Post batch jobs at 10,752,688 journal lines/min. This reflects a large customer environment, with a back-end database of nearly 1.0 TB, performing highly demanding journal processing for the Ledger.

Performance Landscape

Results are presented for the PeopleSoft Financials Benchmark 9.2. Results obtained with version 9.2 are not comparable to the previous version, PeopleSoft Financials Benchmark 9.1, due to a significant change in the data model; version 9.2 also supports only batch workloads.

PeopleSoft Financials Benchmark, Version 9.2
Solution Under Test                        Batch Elapsed Time  Journal Lines/min
SPARC T7-2 (using 1 x SPARC M7, 4.13 GHz)  18.60 min           10,752,688
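The journal-lines/min rate follows from the fixed 200 million-line workload and the measured elapsed time, as a quick sketch shows:

```python
# Journal-line throughput derived from the fixed benchmark workload
# (200 million journal lines) and the measured batch elapsed time.
JOURNAL_LINES = 200_000_000
elapsed_minutes = 18.60

lines_per_min = JOURNAL_LINES / elapsed_minutes
print(round(lines_per_min))  # 10752688
```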

Configuration Summary

System:

SPARC T7-2 server with
2 x SPARC M7 processors
1 TB memory
4 x Oracle Flash Accelerator F160 PCIe Card (DB Redo, DB undo and DB Data)
4 x 600 GB internal disks
Oracle Solaris 11.3
Oracle Database 11g (11.2.0.4)
PeopleSoft Financials (9.20.348)
PeopleSoft PeopleTools (8.53.09)
Java HotSpot 64-Bit Server VM (build 1.7.0_45-b18)
Oracle Tuxedo, Version 11.1.1.3.0, 64-bit
Oracle WebLogic Server 11g (10.3.6)

LDom Under Test:

Oracle VM Server for SPARC (LDom) virtualized server (APP & DB Tier)
1 x SPARC M7 processor
512 GB memory

Benchmark Description

The PeopleSoft Enterprise Financials 9.2 benchmark emulates a large enterprise that processes and validates a large number of financial journal transactions before posting the journal entry to the ledger. The validation process certifies that the journal entries are accurate, ensuring that ChartFields values are valid, debits and credits equal out, and inter/intra-units are balanced. Once validated, the entries are processed, ensuring that each journal line posts to the correct target ledger, and then changes the journal status to posted. In this benchmark, the Journal Edit & Post is set up to edit and post both Inter-Unit and Regular multi-currency journals. The benchmark processes 200 million journal lines using AppEngine for edits and Cobol for post processes.

Key Points and Best Practices

  • The PeopleSoft Enterprise Financials 9.2 Batch benchmark ran on a one-chip LDom consisting of 32 cores, each with 8 threads, for a total of 256 virtual processors.

  • The LDom contained two Oracle Solaris Zones: a database tier zone and an application tier zone. The application tier zone consisted of 1 core with 8 virtual processors. The database tier zone consisted of 244 virtual processors from 31 cores. The remaining four virtual processors were dedicated to network and disk interrupt handling.

  • Inside the database tier zone, the database log writer ran on four virtual processors, and eight virtual processors were dedicated to four database writers.

  • There were 160 PeopleSoft Application instance processes running 320 streams of PeopleSoft Financial workload in the Oracle Solaris Fixed Priority FX class.

See Also

Disclosure Statement

Copyright 2015, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 25 October 2015.

SAP Two-Tier Standard Sales and Distribution SD Benchmark: SPARC T7-2 World Record 2 Processors

Oracle's SPARC T7-2 server produced a world record result for two-processor systems on the SAP two-tier Sales and Distribution (SD) Standard Application Benchmark using SAP Enhancement Package 5 for SAP ERP 6.0 (2 chips / 64 cores / 512 threads).

  • The SPARC T7-2 server achieved 30,800 SAP SD benchmark users running the two-tier SAP Sales and Distribution (SD) Standard Application Benchmark using SAP Enhancement Package 5 for SAP ERP 6.0.

  • The SPARC T7-2 server achieved 1.9 times more users than the Dell PowerEdge R730 server result.

  • The SPARC T7-2 server achieved 1.5 times more users than the IBM Power System S824 server result.

  • The SPARC T7-2 server achieved 1.9 times more users than the HP ProLiant DL380 Gen9 server result.

  • The SPARC T7-2 server result was run with Oracle Solaris 11 and used Oracle Database 12c.

Performance Landscape

The table below presents SAP SD two-tier results in decreasing performance order for leading two-processor systems, plus the four-processor IBM Power System S824, all run with SAP Enhancement Package 5 for SAP ERP 6.0 (the current version of the benchmark as of May 2012).

SAP SD Two-Tier Benchmark
System                  Processor                         OS                            Database             Users   Resp Time (sec)  Version  Cert#
SPARC T7-2              2 x SPARC M7 (2x 32core)          Oracle Solaris 11             Oracle Database 12c  30,800  0.96             EHP5     2015050
IBM Power S824          4 x POWER8 (4x 6core)             AIX 7                         DB2 10.5             21,212  0.98             EHP5     2014016
Dell PowerEdge R730     2 x Intel E5-2699 v3 (2x 18core)  Red Hat Enterprise Linux 7    SAP ASE 16           16,500  0.99             EHP5     2014033
HP ProLiant DL380 Gen9  2 x Intel E5-2699 v3 (2x 18core)  Red Hat Enterprise Linux 6.5  SAP ASE 16           16,101  0.99             EHP5     2014032

Version – Version of SAP, EHP5 refers to SAP ERP 6.0 Enhancement Package 5 for SAP ERP 6.0

Core counts shown are per chip; to get system totals, multiply by the number of chips.

Complete benchmark results may be found at the SAP benchmark website http://www.sap.com/benchmark.

Configuration Summary and Results

Database/Application Server:

1 x SPARC T7-2 server with
2 x SPARC M7 processors (4.13 GHz, total of 2 processors / 64 cores / 512 threads)
1 TB memory
Oracle Solaris 11.3
Oracle Database 12c

Database Storage:
3 x Sun Server X3-2L each with
2 x Intel Xeon Processors E5-2609 (2.4 GHz)
16 GB memory
4 x Sun Flash Accelerator F40 PCIe Card
12 x 3 TB SAS disks
Oracle Solaris 11

REDO log Storage:
1 x Pillar FS-1 Flash Storage System, with
2 x FS1-2 Controller (Netra X3-2)
2 x FS1-2 Pilot (X4-2)
4 x DE2-24P Disk enclosure
96 x 300 GB 10000 RPM SAS Disk Drive Assembly

Certified Results (published by SAP)

Number of SAP SD benchmark users: 30,800
Average dialog response time: 0.96 seconds
Throughput:
  Fully processed order line items per hour: 3,372,000
  Dialog steps per hour: 10,116,000
  SAPS: 168,600
Average database request time (dialog/update): 0.022 sec / 0.047 sec
SAP Certification: 2015050
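The certified throughput figures are mutually consistent under the standard SAPS definition (100 SAPS = 2,000 fully processed order line items per hour, equivalent to 6,000 dialog steps per hour). A quick cross-check:

```python
# Cross-check of the certified SAP SD figures against the SAPS definition:
# 100 SAPS = 2,000 fully processed order line items per hour
#          = 6,000 dialog steps per hour.
line_items_per_hour = 3_372_000
dialog_steps_per_hour = 10_116_000

saps_from_line_items = line_items_per_hour / 2_000 * 100
saps_from_dialog_steps = dialog_steps_per_hour / 6_000 * 100
print(saps_from_line_items, saps_from_dialog_steps)  # both 168600.0
```

Both derivations reproduce the certified 168,600 SAPS.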

Benchmark Description

The SAP Standard Application SD (Sales and Distribution) Benchmark is an ERP business test that is indicative of full business workloads of complete order processing and invoice processing, and demonstrates the ability to run both the application and database software on a single system. The SAP Standard Application SD Benchmark represents the critical tasks performed in real-world ERP business environments.

SAP is one of the premier world-wide ERP application providers, and maintains a suite of benchmark tests to demonstrate the performance of competitive systems on the various SAP products.

See Also

Disclosure Statement

Two-tier SAP Sales and Distribution (SD) standard application benchmarks, SAP Enhancement Package 5 for SAP ERP 6.0 as of 10/23/15:

SPARC T7-2 (2 processors, 64 cores, 512 threads) 30,800 SAP SD users, 2 x 4.13 GHz SPARC M7, 1 TB memory, Oracle Database 12c, Oracle Solaris 11, Cert# 2015050.
IBM Power System S824 (4 processors, 24 cores, 192 threads) 21,212 SAP SD users, 4 x 3.52 GHz POWER8, 512 GB memory, DB2 10.5, AIX 7, Cert#2014016
Dell PowerEdge R730 (2 processors, 36 cores, 72 threads) 16,500 SAP SD users, 2 x 2.3 GHz Intel Xeon Processor E5-2699 v3, 256 GB memory, SAP ASE 16, RHEL 7, Cert#2014033
HP ProLiant DL380 Gen9 (2 processors, 36 cores, 72 threads) 16,101 SAP SD users, 2 x 2.3 GHz Intel Xeon Processor E5-2699 v3, 256 GB memory, SAP ASE 16, RHEL 6.5, Cert#2014032

SAP, R/3, reg TM of SAP AG in Germany and other countries. More info www.sap.com/benchmark

SPARC T7-1 Delivers 1-Chip World Records for SPEC CPU2006 Rate Benchmarks

This page was updated on November 19, 2015. The SPARC T7-1 server results have been published at www.spec.org.

Oracle's SPARC T7-1 server delivered world record SPEC CPU2006 rate benchmark results for systems with one chip. This was accomplished with Oracle Solaris 11.3 and Oracle Solaris Studio 12.4 software.

  • The SPARC T7-1 server achieved world record scores of 1200 SPECint_rate2006, 1120 SPECint_rate_base2006, 832 SPECfp_rate2006, and 801 SPECfp_rate_base2006.

  • The SPARC T7-1 server beat the 1 chip Fujitsu CELSIUS C740 with an Intel Xeon Processor E5-2699 v3 by 1.7x on the SPECint_rate2006 benchmark. The SPARC T7-1 server beat the 1 chip NEC Express5800/R120f-1M with an Intel Xeon Processor E5-2699 v3 by 1.8x on the SPECfp_rate2006 benchmark.

  • The SPARC T7-1 server beat the 1 chip IBM Power S812LC server with a POWER8 processor by 1.9 times on the SPECint_rate2006 benchmark and by 1.8 times on the SPECfp_rate2006 benchmark.

  • The SPARC T7-1 server beat the 1 chip Fujitsu SPARC M10-4S with a SPARC64 X+ processor by 2.2x on the SPECint_rate2006 benchmark and by 1.6x on the SPECfp_rate2006 benchmark.

  • The SPARC T7-1 server improved upon the previous generation SPARC platform, which used the SPARC T5 processor, by 2.5x on the SPECint_rate2006 benchmark and by 2.3x on the SPECfp_rate2006 benchmark.

The SPEC CPU2006 benchmarks are derived from the compute-intensive portions of real applications, stressing chip, memory hierarchy, and compilers. The benchmarks are not intended to stress other computer components such as networking, the operating system, or the I/O system. Note that there are many other SPEC benchmarks, including benchmarks that specifically focus on Java computing, enterprise computing, and network file systems.

Performance Landscape

Complete benchmark results are at the SPEC website. The tables below provide the new Oracle results, as well as select results from other vendors.

Presented are single chip SPEC CPU2006 rate results. Only the best results published at www.spec.org per chip type are presented (best Intel, IBM, Fujitsu, Oracle chips).

SPEC CPU2006 Rate Results – One Chip
System Chip Peak Base
  SPECint_rate2006
SPARC T7-1 SPARC M7 (4.13 GHz, 32 cores) 1200 1120
Fujitsu CELSIUS C740 Intel E5-2699 v3 (2.3 GHz, 18 cores) 715 693
IBM Power S812LC POWER8 (2.92 GHz, 10 cores) 642 482
Fujitsu SPARC M10-4S SPARC64 X+ (3.7 GHz, 16 cores) 546 479
SPARC T5-1B SPARC T5 (3.6 GHz, 16 cores) 489 441
IBM Power 710 Express POWER7 (3.55 GHz, 8 cores) 289 255
  SPECfp_rate2006
SPARC T7-1 SPARC M7 (4.13 GHz, 32 cores) 832 801
NEC Express5800/R120f-1M Intel E5-2699 v3 (2.3 GHz, 18 cores) 474 460
IBM Power S812LC POWER8 (2.92 GHz, 10 cores) 468 394
Fujitsu SPARC M10-4S SPARC64 X+ (3.7 GHz, 16 cores) 462 418
SPARC T5-1B SPARC T5 (3.6 GHz, 16 cores) 369 350
IBM Power 710 Express POWER7 (3.55 GHz, 8 cores) 248 229

The following table compares the single-chip SPARC M7 processor based server with the best published two-chip POWER8 processor based server.

SPEC CPU2006 Rate Results
Comparing One SPARC M7 Chip to Two POWER8 Chips
System Chip Peak Base
  SPECint_rate2006
SPARC T7-1 1 x SPARC M7 (4.13 GHz, 32core) 1200 1120
IBM Power S822LC 2 x POWER8 (2.92 GHz, 2x 10core) 1100 853
  SPECfp_rate2006
SPARC T7-1 1 x SPARC M7 (4.13 GHz, 32 cores) 832 801
IBM Power S822LC 2 x POWER8 (2.92 GHz, 2x 10core) 888 745
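The headline speedup ratios quoted in the bullets above can be reproduced from the peak scores in the tables; the quoted figures are simply the ratios rounded to one decimal place:

```python
# Reproducing the quoted speedups from the published peak SPEC CPU2006
# rate scores (see the one-chip table above).
t7_int, t7_fp = 1200, 832        # SPARC T7-1 (SPARC M7)
xeon_int, xeon_fp = 715, 474     # best 1-chip Intel E5-2699 v3 results
t5_int, t5_fp = 489, 369         # SPARC T5-1B (previous generation)

print(round(t7_int / xeon_int, 1))  # 1.7x vs. E5-2699 v3, int
print(round(t7_fp / xeon_fp, 1))    # 1.8x vs. E5-2699 v3, fp
print(round(t7_int / t5_int, 1))    # 2.5x vs. SPARC T5, int
print(round(t7_fp / t5_fp, 1))      # 2.3x vs. SPARC T5, fp
```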

Configuration Summary

System Under Test:

SPARC T7-1
1 x SPARC M7 processor (4.13 GHz)
512 GB memory (16 x 32 GB dimms)
800 GB on 4 x 400 GB SAS SSD (mirrored)
Oracle Solaris 11.3
Oracle Solaris Studio 12.4 with 4/15 Patch Set

Benchmark Description

SPEC CPU2006 is SPEC's most popular benchmark. It measures:

  • Speed — single copy performance of chip, memory, compiler
  • Rate — multiple copy (throughput)

The benchmark is also divided into integer intensive applications and floating point intensive applications:

  • integer: 12 benchmarks derived from applications such as artificial intelligence chess playing, artificial intelligence go playing, quantum computer simulation, perl, gcc, XML processing, and pathfinding
  • floating point: 17 benchmarks derived from applications, including chemistry, physics, genetics, and weather.

It is also divided depending upon the amount of optimization allowed:

  • base: optimization is consistent per compiled language, all benchmarks must be compiled with the same flags per language.
  • peak: specific compiler optimization is allowed per application.

The overall metrics for the benchmark which are commonly used are:

  • SPECint_rate2006, SPECint_rate_base2006: integer, rate
  • SPECfp_rate2006, SPECfp_rate_base2006: floating point, rate
  • SPECint2006, SPECint_base2006: integer, speed
  • SPECfp2006, SPECfp_base2006: floating point, speed

Key Points and Best Practices

  • Jobs were bound using pbind.

See Also

Disclosure Statement

SPEC and the benchmark names SPECfp and SPECint are registered trademarks of the Standard Performance Evaluation Corporation. Results as of November 19, 2015 from www.spec.org.
SPARC T7-1: 1200 SPECint_rate2006, 1120 SPECint_rate_base2006, 832 SPECfp_rate2006, 801 SPECfp_rate_base2006; SPARC T5-1B: 489 SPECint_rate2006, 440 SPECint_rate_base2006, 369 SPECfp_rate2006, 350 SPECfp_rate_base2006; Fujitsu SPARC M10-4S: 546 SPECint_rate2006, 479 SPECint_rate_base2006, 462 SPECfp_rate2006, 418 SPECfp_rate_base2006. IBM Power 710 Express: 289 SPECint_rate2006, 255 SPECint_rate_base2006, 248 SPECfp_rate2006, 229 SPECfp_rate_base2006; Fujitsu CELSIUS C740: 715 SPECint_rate2006, 693 SPECint_rate_base2006; NEC Express5800/R120f-1M: 474 SPECfp_rate2006, 460 SPECfp_rate_base2006; IBM Power S822LC: 1100 SPECint_rate2006, 853 SPECint_rate_base2006, 888 SPECfp_rate2006, 745 SPECfp_rate_base2006; IBM Power S812LC: 642 SPECint_rate2006, 482 SPECint_rate_base2006, 468 SPECfp_rate2006, 394 SPECfp_rate_base2006.

SPARC T7-4 Delivers 4-Chip World Record for SPEC OMP2012

Oracle's SPARC T7-4 server delivered world record performance on the SPEC OMP2012 benchmark for systems with four chips. This was accomplished with Oracle Solaris 11.3 and Oracle Solaris Studio 12.4 software.

  • The SPARC T7-4 server delivered world record for systems with four chips of 27.9 SPECompG_peak2012 and 26.4 SPECompG_base2012.

  • The SPARC T7-4 server beat the four chip HP ProLiant DL580 Gen9 with Intel Xeon Processor E7-8890 v3 by 29% on the SPECompG_peak2012 benchmark.

  • This SPEC OMP2012 benchmark result demonstrates that the SPARC M7 processor performs well on floating-point intensive technical computing and modeling workloads.

Performance Landscape

Complete benchmark results are at the SPEC website, SPEC OMP2012 Results. The table below provides the new Oracle result as well as the previous best four chip results.

SPEC OMP2012 Results
Four Chip Results
System Processor Peak Base
SPARC T7-4 SPARC M7, 4.13 GHz 27.9 26.4
HP ProLiant DL580 Gen9 Intel Xeon E7-8890 v3, 2.5 GHz 21.5 20.4
Cisco UCS C460 M4 Intel Xeon E7-8890 v3, 2.5 GHz -- 20.8

Configuration Summary

Systems Under Test:

SPARC T7-4
4 x 4.13 GHz SPARC M7 processors
2 TB memory (64 x 32 GB dimms)
4 x 600 GB SAS 10,000 RPM HDD (mirrored)
Oracle Solaris 11.3 (11.3.0.30.0)
Oracle Solaris Studio 12.4 with 4/15 Patch Set

Benchmark Description

The following was taken from the SPEC website.

SPEC OMP2012 focuses on compute intensive performance, which means these benchmarks emphasize the performance of:

  • the computer processor (CPU),
  • the memory architecture,
  • the parallel support libraries, and
  • the compilers.

It is important to remember the contribution of the latter three components. SPEC OMP performance intentionally depends on more than just the processor.

SPEC OMP2012 contains a suite that focuses on parallel computing performance using the OpenMP parallelism standard.

The suite can be used to measure along the following vector:

  • Compilation method: Consistent compiler options across all programs of a given language (the base metrics) and, optionally, compiler options tuned to each program (the peak metrics).

SPEC OMP2012 is not intended to stress other computer components such as networking, the operating system, graphics, or the I/O system. Note that there are many other SPEC benchmarks, including benchmarks that specifically focus on graphics, distributed Java computing, webservers, and network file systems.

Key Points and Best Practices

  • Jobs were bound using OMP_PLACES.
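OMP_PLACES is a standard OpenMP environment variable that pins threads to hardware resources so they do not migrate during a run. A minimal sketch of launching a workload with such binding follows; the binary name, place list, and thread count are illustrative assumptions, not the settings used in this benchmark:

```python
import os
import subprocess

# Illustrative only: bind OpenMP threads to processor cores before launching
# a hypothetical OpenMP workload. OMP_PLACES="cores" defines one place per
# core; OMP_PROC_BIND="true" keeps each thread on its assigned place.
env = dict(os.environ,
           OMP_PLACES="cores",
           OMP_PROC_BIND="true",
           OMP_NUM_THREADS="128")

# subprocess.run(["./my_omp_benchmark"], env=env)  # hypothetical binary
print(env["OMP_PLACES"], env["OMP_PROC_BIND"], env["OMP_NUM_THREADS"])
```

Binding matters most on large NUMA systems such as the SPARC T7-4, where an unbound thread scheduled far from its data pays extra memory latency.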

See Also

Disclosure Statement

SPEC and the benchmark name SPEC OMP are registered trademarks of the Standard Performance Evaluation Corporation. Results as of November 11, 2015 from www.spec.org. SPARC T7-4 (4 chips, 128 cores, 1024 threads): 27.9 SPECompG_peak2012, 26.4 SPECompG_base2012; HP ProLiant DL580 Gen9 (4 chips, 72 cores, 144 threads): 21.5 SPECompG_peak2012, 20.4 SPECompG_base2012; Cisco UCS C460 M4 (4 chips, 72 cores, 144 threads): 20.8 SPECompG_base2012.

Virtualized Network Performance: SPARC T7-1

Oracle's SPARC T7-1 server using Oracle VM Server for SPARC exhibits low network latency under virtualization. Network latency and bandwidth were measured using the Netperf benchmark.

  • TCP network latency between two Oracle VM Server for SPARC guests running on separate SPARC T7-1 servers each using SR-IOV is similar to that of two SPARC T7-1 servers without virtualization (native/bare metal).

  • TCP and UDP network latencies between two Oracle VM Server for SPARC guests running on separate SPARC T7-1 servers each using assigned I/O were significantly less than the other two I/O configurations (SR-IOV and paravirtual I/O).

  • TCP and UDP network latencies between two Oracle VM Server for SPARC guests running on separate SPARC T7-1 servers each using SR-IOV were significantly less than when using paravirtual I/O.

Terminology notes:

  • VM – virtual machine
  • guest – encapsulated operating system instance, typically running in a VM.
  • assigned I/O – network hardware driven directly and exclusively by guests
  • paravirtual I/O – network hardware driven by hosts, indirectly by guests via paravirtualized drivers
  • SR-IOV – single root I/O virtualization; virtualized network interfaces provided by network hardware, driven directly by guests.
  • LDom – logical domain (previous name for Oracle VM Server for SPARC)

Performance Landscape

The following tables show the results for TCP and UDP Netperf Latency and Bandwidth tests (single stream). Netperf latency, often called the round-trip time, is measured in microseconds (usec) (smaller is better).

TCP
Networking Method — Netperf Latency (usec) — Bandwidth (Mb/sec)
 MTU=1500 MTU=9000 MTU=1500 MTU=9000
Native/Bare Metal 58 58 9100 9900
assigned I/O 51 51 9400 9900
SR-IOV 58 59 9400 9900
paravirtual I/O 91 91 4800 9800


UDP
Networking Method — Netperf Latency (usec) — Bandwidth (Mb/sec)
 MTU=1500 MTU=9000 MTU=1500 MTU=9000
Native/Bare Metal 57 57 9100 9900
assigned I/O 51 51 9400 9900
SR-IOV 66 63 9400 9900
paravirtual I/O 98 97 4800 9800

Specifically, the Netperf benchmark latency:
  • is the average request/response time computed by inverse of the throughput reported by the program,
  • is measured within the program from 20 sample-runs of 30 seconds each,
  • uses single-in-flight [i.e. non-burst] 1 byte messages,
  • is measured between separate servers connected by 10 GbE,
  • for each test, uses servers connected back-to-back (no network switch) and configured identically: native or guest VM.
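Because Netperf reports a request/response throughput rather than a latency, the round-trip time is derived by inverting that throughput. A minimal sketch of the computation, using hypothetical sample throughputs (not numbers from this benchmark):

```python
def rr_latency_usec(transactions_per_sec):
    """Convert a Netperf request/response throughput (transactions/sec)
    into an average round-trip latency in microseconds.

    With single-in-flight 1-byte messages, each transaction is exactly one
    request/response round trip, so latency is the inverse of throughput.
    """
    return 1e6 / transactions_per_sec

# Hypothetical per-sample throughputs from several Netperf sample runs.
samples = [17100.0, 17240.0, 17380.0]  # transactions/sec (illustrative only)
mean_tps = sum(samples) / len(samples)
print(f"average round-trip latency: {rr_latency_usec(mean_tps):.1f} usec")
```

This inversion is only valid when transactions carry a single message at a time; with burst options enabled, multiple messages overlap per transaction and the reported throughput no longer corresponds to a round-trip time.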

Configuration Summary

System Under Test:

2 x SPARC T7-1 servers, each with
1 x SPARC M7 processor (4.13 GHz)
256 GB memory (16 x 16 GB)
2 x 600 GB 10K RPM SAS-2 HDD
10 GbE (on-board and PCIe network devices)
Oracle Solaris 11.3
Oracle VM Server for SPARC 3.2

Benchmark Description

The Netperf 2.6.0 benchmark was used to evaluate native and virtualized (LDoms) network performance. Netperf is a client/server benchmark that provides a number of independent tests; the omni Request/Response (aka ping-pong) test with the TCP or UDP protocol was used here to obtain the latency measurements, and the TCP stream test was used for bandwidth. Netperf was run between separate servers connected back-to-back (no network switch) by a 10 GbE network interconnect.

To measure the cost of virtualization, for each test the servers were configured identically: either native (without virtualization) or running a guest VM. In the virtualized configurations, each server was set up in identical fashion with one of several representative methods of connecting the guest to the network hardware (assigned I/O, paravirtual I/O, or SR-IOV).

Key Points and Best Practices

  • Oracle VM Server for SPARC requires explicit partitioning of guests into Logical Domains of bound CPUs and memory, typically chosen to be local, and does not provide dynamic load balancing between guests on a host.

  • Oracle VM Server for SPARC guests (LDoms) were assigned 32 virtual CPUs (4 complete processor cores) and 64 GB of memory. The control domain served as the I/O domain (for paravirtualized I/O) and was assigned 4 cores and 64 GB of memory.

  • Each latency average reported was computed from the inverse of the reported throughput (similar to the transaction rate) of a Netperf Request/Response test run using 20 samples (aka iterations) of 30-second measurements of non-concurrent 1-byte messages.

  • To obtain a meaningful average latency from a Netperf Request/Response test, it is important that the transactions consist of single messages, which is Netperf's default. If, for instance, Netperf options for burst and TCP_NODELAY are turned on, multiple messages can overlap in the transactions and the reported transaction rate or throughput cannot be used to compute the latency.

  • All results were obtained with interrupt coalescence (aka interrupt throttling, interrupt blanking) turned on in the physical NIC, and if applicable, for the attachment driver in the guest. Also, interrupt coalescence turned on is the default for all the platforms used here.

  • All the results were obtained with large receive offload (LRO) turned off in the physical NIC, and, if applicable, for the attachment driver in the guest, in order to reduce the network latency between the two guests.

  • The Netperf bandwidth test used 1 MB (1,048,576 byte) send and receive messages.

  • The paravirtual variation of the measurements refers to the use of a paravirtualized network driver in the guest instance. IP traffic consequently is routed across the guest, the virtualization subsystem in the host, a virtual network switch or bridge (depending upon the platform), and the network interface card.

  • The assigned I/O variation of the measurements refers to the use of the card's driver in the guest instance itself. This is possible by exclusively assigning the device to the guest. Device assignment results in less (software) routing for IP traffic and consequently less overhead than using paravirtualized drivers, but virtualization can still impose significant overhead. Note also that NICs used in this way cannot be shared among guests, and such use may preclude certain other VM features like migration. The SPARC T7-1 server has four on-board 10 GbE devices, but all of them are connected to the same PCIe branch, making it impossible to configure them as assigned I/O devices. Using a PCIe 10 GbE NIC allows configuring it as an assigned I/O device.

  • In the context of Oracle VM Server for SPARC and these tests, assigned I/O refers to PCI endpoint device assignment, while paravirtualized I/O refers to virtual I/O using a virtual network device (vnet) in the guest connected to a virtual switch (vsw) through the I/O domain to the physical network device (NIC).

See Also

Disclosure Statement

Copyright 2015, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 25 October 2015.

Sunday Oct 25, 2015

BestPerf Index 19 November 2015

This is an occasionally-generated index of previous entries in the BestPerf blog.


Nov 19, 2015 SPECvirt_2013: SPARC T7-2 World Record Performance for Two- and Four-Chip Systems
Nov 09, 2015 SPECjbb2015: SPARC T7-1 World Record for 1 Chip Result
Oct 26, 2015 Real-Time Enterprise: SPARC T7-1 Faster Than x86 E5 v3
Oct 26, 2015 In-Memory Database: SPARC T7-1 Faster Than x86 E5 v3
Oct 26, 2015 In-Memory Aggregation: SPARC T7-2 Beats 4-Chip x86 E7 v2
Oct 26, 2015 Memory and Bisection Bandwidth: SPARC T7 and M7 Servers Faster Than x86 and POWER8
Oct 26, 2015 Hadoop TeraSort: SPARC T7-4 Top Per-Chip Performance
Oct 26, 2015 Yahoo Cloud Serving Benchmark: SPARC T7-4 With Oracle NoSQL Beats x86 E5 v3 Per Chip
Oct 26, 2015 Graph PageRank: SPARC M7-8 Beats x86 E5 v3 Per Chip
Oct 26, 2015 Graph Breadth First Search Algorithm: SPARC T7-4 Beats 4-Chip x86 E7 v2
Oct 26, 2015 Neural Network Models Using Oracle R Enterprise: SPARC T7-4 Beats 4-Chip x86 E7 v3
Oct 26, 2015 SPECjEnterprise2010: SPARC T7-1 World Record with Single Application Server Using 1 to 4 Chips
Oct 26, 2015 AES Encryption: SPARC T7-2 Beats x86 E5 v3
Oct 26, 2015 SHA Digest Encryption: SPARC T7-2 Beats x86 E5 v3
Oct 26, 2015 SPECvirt_sc2013: SPARC T7-2 World Record for 2 and 4 Chip Systems
Oct 26, 2015 Live Migration: SPARC T7-2 Oracle VM Server for SPARC Performance
Oct 26, 2015 ZFS Encryption: SPARC T7-1 Performance
Oct 26, 2015 Virtualized Storage: SPARC T7-1 Performance
Oct 26, 2015 Oracle Internet Directory: SPARC T7-2 World Record
Oct 26, 2015 Oracle Stream Explorer DDOS Attack: SPARC T7-4 World Record
Oct 26, 2015 Oracle FLEXCUBE Universal Banking: SPARC T7-1 World Record
Oct 26, 2015 PeopleSoft Human Capital Management 9.1 FP2: SPARC M7-8 World Record
Oct 26, 2015 Oracle Communications ASAP – Telco Subscriber Activation: SPARC T7-2 World Record
Oct 26, 2015 Oracle E-Business Payroll Batch Extra-Large: SPARC T7-1 World Record
Oct 26, 2015 Oracle E-Business Suite Applications R12.1.3 (OLTP X-Large): SPARC M7-8 World Record
Oct 26, 2015 Oracle E-Business Order-To-Cash Batch Large: SPARC T7-1 World Record
Oct 26, 2015 PeopleSoft Enterprise Financials 9.2: SPARC T7-2 World Record
Oct 26, 2015 SAP Two-Tier Standard Sales and Distribution SD Benchmark: SPARC T7-2 World Record 2 Processors
Oct 26, 2015 SPARC T7-1 Delivers 1-Chip World Records for SPEC CPU2006 Rate Benchmarks
Oct 26, 2015 SPARC T7-4 Delivers 4-Chip World Record for SPEC OMP2012
Oct 26, 2015 Virtualized Network Performance: SPARC T7-1
Apr 03, 2015 Oracle Server X5-2 Produces World Record 2-Chip Single Application Server SPECjEnterprise2010 Result
Mar 20, 2015 Oracle ZFS Storage ZS4-4 Shows 1.8x Generational Performance Improvement on SPC-2 Benchmark
Jun 25, 2014 Oracle ZFS Storage ZS3-2 Delivers World Record Price-Performance on SPC-2/E
Mar 27, 2014 SPARC M6-32 Produces SAP SD Two-Tier Benchmark World Record for 32-Processor Systems
Mar 05, 2014 SPARC T5-2 Delivers World Record 2-Socket SPECvirt_sc2010 Benchmark
Feb 18, 2014 SPARC T5-2 Produces SPECjbb2013-MultiJVM World Record for 2-Chip Systems
Feb 14, 2014 SPARC M6-32 Delivers Oracle E-Business and PeopleSoft World Record Benchmarks, Linear Data Warehouse Scaling in a Virtualized Configuration
Jan 23, 2014 SPARC T5-2 Delivers World Record 2-Socket Application Server for SPECjEnterprise2010 Benchmark
Nov 25, 2013 World Record Single System TPC-H @10000GB Benchmark on SPARC T5-4
Sep 26, 2013 SPARC M6-32 Delivers Oracle E-Business and PeopleSoft World Record Benchmarks, Linear Data Warehouse Scaling in a Virtualized Configuration
Sep 26, 2013 SPARC T5-8 Delivers World Record Single Server SPECjEnterprise2010 Benchmark, Utilizes Virtualized Environment
Sep 26, 2013 SPARC T5-2 Server Beats x86 Server on Oracle Database Transparent Data Encryption
Sep 25, 2013 SPARC T5 Encryption Performance Tops Intel E5-2600 v2 Processor
Sep 25, 2013 Sun Server X4-2 Delivers Single App Server, 2-Chip x86 World Record SPECjEnterprise2010
Sep 23, 2013 SPARC T5-2 Delivers Best 2-Chip MultiJVM SPECjbb2013 Result
Sep 18, 2013 Sun Server X4-2 Performance Running SPECjbb2013 MultiJVM Benchmark
Sep 10, 2013 Oracle ZFS Storage ZS3-4 Delivers World Record SPC-2 Performance
Sep 10, 2013 Oracle ZFS Storage ZS3-4 Produces Best 2-Node Performance on SPECsfs2008 NFSv3
Sep 10, 2013 Oracle ZFS Storage ZS3-2 Beats Comparable NetApp on SPECsfs2008 NFSv3
Sep 09, 2013 Benchmarks Using Oracle Solaris 11
Jul 01, 2013 Quick Note about Blog Posting from John
Jun 12, 2013 SPARC T5-4 Produces World Record Single Server TPC-H @3000GB Benchmark Result
May 01, 2013 SPARC T5-8 Delivers Best Single System SPECjEnterprise2010 Benchmark, Beats IBM
Mar 29, 2013 SPARC T5 System Performance for Encryption Microbenchmark
Mar 26, 2013 SPARC T5-8 Produces TPC-C Benchmark Single-System World Record Performance
Mar 26, 2013 SPARC T5-8 Delivers SPECjEnterprise2010 Benchmark World Record Performance
Mar 26, 2013 SPARC T5-2 Achieves SPECjbb2013 Benchmark World Record Result
Mar 26, 2013 SPARC T5-8 Realizes SAP SD Two-Tier Benchmark World Record for 8 Chip Systems
Mar 26, 2013 SPARC M5-32 Produces SAP SD Two-Tier Benchmark World Record for SAP Enhancement Package 5 for SAP ERP 6.0
Mar 26, 2013 SPARC T5 Systems Deliver SPEC CPU2006 Rate Benchmark Multiple World Records
Mar 26, 2013 SPARC T5-2 Achieves JD Edwards EnterpriseOne Benchmark World Records
Mar 26, 2013 SPARC T5-2 Scores Siebel CRM Benchmark World Record
Mar 26, 2013 SPARC T5 Systems Produce Oracle TimesTen Benchmark World Record
Mar 26, 2013 SPARC T5-8 Delivers Oracle OLAP World Record Performance
Mar 26, 2013 SPARC T5-2 Achieves ZFS File System Encryption Benchmark World Record
Mar 26, 2013 SPARC T5-2 Obtains Oracle Internet Directory Benchmark World Record Performance
Mar 26, 2013 SPARC T5-2 Scores Oracle FLEXCUBE Universal Banking Benchmark World Record Performance
Mar 26, 2013 SPARC T5-2 Performance Running Oracle Fusion Middleware SOA
Mar 26, 2013 SPARC T5-1B Performance Running Oracle Communications ASAP
Feb 22, 2013 Oracle Produces World Record SPECjbb2013 Result with Oracle Solaris and Oracle JDK
Feb 08, 2013 Improved Oracle Solaris 10 1/13 Secure Copy Performance for High Latency Networks
Nov 08, 2012 SPARC T4-4 Delivers World Record Performance on Oracle OLAP Perf Version 2 Benchmark
Nov 08, 2012 Improved Performance on PeopleSoft Combined Benchmark using SPARC T4-4
Oct 02, 2012 Performance of Oracle Business Intelligence Benchmark on SPARC T4-4
Oct 02, 2012 World Record Oracle E-Business Consolidated Workload on SPARC T4-2
Oct 02, 2012 World Record Siebel PSPP Benchmark on SPARC T4 Servers
Oct 02, 2012 SPARC T4-4 Delivers World Record First Result on PeopleSoft Combined Benchmark
Oct 01, 2012 World Record Batch Rate on Oracle JD Edwards Consolidated Workload with SPARC T4-2
Oct 01, 2012 Oracle TimesTen In-Memory Database Performance on SPARC T4-2
Oct 01, 2012 World Record Performance on PeopleSoft Enterprise Financials Benchmark on SPARC T4-2
Aug 28, 2012 SPARC T4-2 Produces World Record Oracle Essbase Aggregate Storage Benchmark Result
May 01, 2012 SPARC T4 Servers Running Oracle Solaris 11 and Oracle RAC Deliver World Record on PeopleSoft HRMS 9.1
Apr 19, 2012 Sun ZFS Storage 7420 Appliance Delivers 2-Node World Record SPECsfs2008 NFS Benchmark
Apr 15, 2012 Sun ZFS Storage 7420 Appliance Delivers Top High-End Price/Performance Result for SPC-2 Benchmark
Apr 12, 2012 Sun Fire X4270 M3 SAP Enhancement Package 4 for SAP ERP 6.0 (Unicode) Two-Tier Standard Sales and Distribution (SD) Benchmark
Apr 10, 2012 World Record Oracle E-Business Suite 12.1.3 Standard Extra-Large Payroll (Batch) Benchmark on Sun Server X3-2L
Apr 10, 2012 SPEC CPU2006 Results on Oracle's Sun x86 Servers
Apr 10, 2012 SPEC CPU2006 Results on Oracle's Netra Server X3-2
Mar 29, 2012 Sun Server X2-8 (formerly Sun Fire X4800 M2) Delivers World Record TPC-C for x86 Systems
Mar 29, 2012 Sun Server X2-8 (formerly Sun Fire X4800 M2) Posts World Record x86 SPECjEnterprise2010 Result
Feb 27, 2012 Sun ZFS Storage 7320 Appliance 33% Faster Than NetApp FAS3270 on SPECsfs2008
Jan 12, 2012 Netra SPARC T4-2 SPECjvm2008 World Record Performance
Nov 30, 2011 SPARC T4-4 Beats 8-CPU IBM POWER7 on TPC-H @3000GB Benchmark
Nov 09, 2011 SPARC T4-2 Delivers World Record SPECjvm2008 Result with Oracle Solaris 11
Nov 02, 2011 SPARC T4-2 Server Beats 2-Socket 3.46 GHz x86 on Black-Scholes Option Pricing Test
Oct 03, 2011 SPARC T4-4 Servers Set World Record on SPECjEnterprise2010, Beats IBM POWER7, Cisco x86
Oct 03, 2011 SPARC T4-4 Beats IBM POWER7 and HP Itanium on TPC-H @1000GB Benchmark
Oct 03, 2011 Sun ZFS Storage 7420 Appliance Doubles NetApp FAS3270A on SPC-1 Benchmark
Oct 03, 2011 SPARC T4-4 Produces World Record Oracle OLAP Capacity
Sep 30, 2011 SPARC T4-2 Server Beats Intel (Westmere AES-NI) on ZFS Encryption Tests
Sep 30, 2011 SPARC T4 Processor Beats Intel (Westmere AES-NI) on AES Encryption Tests
Sep 29, 2011 SPARC T4 Processor Outperforms IBM POWER7 and Intel (Westmere AES-NI) on OpenSSL AES Encryption Test
Sep 29, 2011 SPARC T4-1 Server Outperforms Intel (Westmere AES-NI) on IPsec Encryption Tests
Sep 29, 2011 SPARC T4-2 Server Beats Intel (Westmere AES-NI) on SSL Network Tests
Sep 28, 2011 SPARC T4 Servers Set World Record on Oracle E-Business Suite R12 X-Large Order to Cash
Sep 28, 2011 SPARC T4-2 Server Beats Intel (Westmere AES-NI) on Oracle Database Tablespace Encryption Queries
Sep 28, 2011 SPARC T4 Servers Set World Record on PeopleSoft HRMS 9.1
Sep 27, 2011 SPARC T4-2 Servers Set World Record on JD Edwards EnterpriseOne Day in the Life Benchmark with Batch, Outperforms IBM POWER7
Sep 27, 2011 SPARC T4 Servers Set World Record on Siebel Loyalty Batch
Sep 27, 2011 SPARC T4-4 Server Sets World Record on PeopleSoft Payroll (N.A.) 9.1, Outperforms IBM Mainframe, HP Itanium
Sep 19, 2011 Halliburton ProMAX® Seismic Processing on Sun Blade X6270 M2 with Sun ZFS Storage 7320
Sep 15, 2011 Sun Fire X4800 M2 Servers (now known as Sun Server X2-8) Produce World Record on SAP SD-Parallel Benchmark
Sep 12, 2011 SPARC Enterprise M9000 Produces World Record SAP ATO Benchmark
Aug 12, 2011 Sun Blade X6270 M2 with Oracle WebLogic World Record 2 Processor SPECjEnterprise 2010 Benchmark
Jul 01, 2011 SPARC T3-1 Record Results Running JD Edwards EnterpriseOne Day in the Life Benchmark with Added Batch Component
Jun 10, 2011 SPARC Enterprise M5000 Delivers First PeopleSoft Payroll 9.1 Benchmark
Jun 03, 2011 SPARC Enterprise M8000 with Oracle 11g Beats IBM POWER7 on TPC-H @1000GB Benchmark
Mar 25, 2011 SPARC Enterprise M9000 with Oracle Database 11g Delivers World Record Single Server TPC-H @3000GB Result
Mar 23, 2011 SPARC T3-1B Doubles Performance on Oracle Fusion Middleware WebLogic Avitek Medical Records Sample Application
Mar 22, 2011 Netra SPARC T3-1 22% Faster Than IBM Running Oracle Communications ASAP
Feb 17, 2011 SPARC T3-1 takes JD Edwards "Day In the Life" benchmark lead, beats IBM Power7 by 25%
Dec 08, 2010 Sun Blade X6275 M2 Cluster with Sun Storage 7410 Performance Running Seismic Processing Reverse Time Migration
Dec 08, 2010 Sun Blade X6275 M2 Delivers Best Fluent (MCAE Application) Performance on Tested Configurations
Dec 07, 2010 Sun Blade X6275 M2 Server Module with Intel X5670 Processors SPEC CPU2006 Results
Dec 02, 2010 World Record TPC-C Result on Oracle's SPARC Supercluster with T3-4 Servers
Dec 02, 2010 World Record SPECweb2005 Result on SPARC T3-2 with Oracle iPlanet Web Server
Dec 02, 2010 World Record Performance on PeopleSoft Enterprise Financials Benchmark run on Sun SPARC Enterprise M4000 and M5000
Oct 26, 2010 3D VTI Reverse Time Migration Scalability On Sun Fire X2270-M2 Cluster with Sun Storage 7210
Oct 11, 2010 Sun SPARC Enterprise M9000 Server Delivers World Record Non-Clustered TPC-H @3000GB Performance
Sep 30, 2010 Consolidation of 30 x86 Servers onto One SPARC T3-2
Sep 29, 2010 SPARC T3-1 Delivers Record Number of Online Users on JD Edwards EnterpriseOne 9.0.1 Day in the Life Test
Sep 28, 2010 SPARC T3 Cryptography Performance Over 1.9x Increase in Throughput over UltraSPARC T2 Plus
Sep 28, 2010 SPARC T3-2 Delivers First Oracle E-Business X-Large Benchmark Self-Service (OLTP) Result
Sep 27, 2010 Sun Fire X2270 M2 Super-Linear Scaling of Hadoop Terasort and CloudBurst Benchmarks
Sep 27, 2010 SPARC T3-1 Shows Capabilities Running Online Auction Benchmark with Oracle Fusion Middleware
Sep 24, 2010 SPARC T3-2 sets World Record on SPECjvm2008 Benchmark
Sep 24, 2010 SPARC T3 Provides High Performance Security for Oracle Weblogic Applications
Sep 23, 2010 Sun Storage F5100 Flash Array with PCI-Express SAS-2 HBAs Achieves Over 17 GB/sec Read
Sep 23, 2010 SPARC T3-1 Performance on PeopleSoft Enterprise Financials 9.0 Benchmark
Sep 22, 2010 Oracle Solaris 10 9/10 ZFS OLTP Performance Improvements
Sep 22, 2010 SPARC T3-1 Supports 13,000 Users on Financial Services and Enterprise Application Integration Running Siebel CRM 8.1.1
Sep 21, 2010 ProMAX Performance and Throughput on Sun Fire X2270 and Sun Storage 7410
Sep 21, 2010 Sun Flash Accelerator F20 PCIe Cards Outperform IBM on SPC-1C
Sep 21, 2010 SPARC T3 Servers Deliver Top Performance on Oracle Communications Order and Service Management
Sep 20, 2010 Schlumberger's ECLIPSE 300 Performance Throughput On Sun Fire X2270 Cluster with Sun Storage 7410
Sep 20, 2010 Sun Fire X4470 4 Node Cluster Delivers World Record SAP SD-Parallel Benchmark Result
Sep 20, 2010 SPARC T3-4 Sets World Record Single Server Result on SPECjEnterprise2010 Benchmark
Aug 25, 2010 Transparent Failover with Solaris MPxIO and Oracle ASM
Aug 23, 2010 Repriced: SPC-1 Sun Storage 6180 Array (8Gb) 1.9x Better Than IBM DS5020 in Price-Performance
Aug 23, 2010 Repriced: SPC-2 (RAID 5 & 6 Results) Sun Storage 6180 Array (8Gb) Outperforms IBM DS5020 by up to 64% in Price-Performance
Jun 29, 2010 Sun Fire X2270 M2 Achieves Leading Single Node Results on ANSYS FLUENT Benchmark
Jun 29, 2010 Sun Fire X2270 M2 Demonstrates Outstanding Single Node Performance on MSC.Nastran Benchmarks
Jun 29, 2010 Sun Fire X2270 M2 Achieves Leading Single Node Results on ABAQUS Benchmark
Jun 29, 2010 Sun Fire X2270 M2 Sets World Record on SPEC OMP2001 Benchmark
Jun 29, 2010 Sun Fire X4170 M2 Sets World Record on SPEC CPU2006 Benchmark
Jun 29, 2010 Sun Blade X6270 M2 Sets World Record on SPECjbb2005 Benchmark
Jun 28, 2010 Sun Fire X4270 M2 Sets World Record on SPECjbb2005 Benchmark
Jun 28, 2010 Sun Fire X4470 Sets World Records on SPEC OMP2001 Benchmarks
Jun 28, 2010 Sun Fire X4470 Sets World Record on SPEC CPU2006 Rate Benchmark
Jun 28, 2010 Sun Fire X4470 2-Node Configuration Sets World Record for SAP SD-Parallel Benchmark
Jun 28, 2010 Sun Fire X4800 Sets World Record on SPECjbb2005 Benchmark
Jun 28, 2010 Sun Fire X4800 Sets World Records on SPEC CPU2006 Rate Benchmarks
Jun 10, 2010 Hyperion Essbase ASO World Record on Sun SPARC Enterprise M5000
Jun 09, 2010 PeopleSoft Payroll 500K Employees on Sun SPARC Enterprise M5000 World Record
Jun 03, 2010 Sun SPARC Enterprise T5440 World Record SPECjAppServer2004
May 11, 2010 Per-core Performance Myth Busting
Apr 14, 2010 Oracle Sun Storage F5100 Flash Array Delivers World Record SPC-1C Performance
Apr 13, 2010 Oracle Sun Flash Accelerator F20 PCIe Card Accelerates Web Caching Performance
Apr 06, 2010 WRF Benchmark: X6275 Beats Power6
Mar 29, 2010 Sun Blade X6275/QDR IB/ Reverse Time Migration
Feb 22, 2010 IBM POWER7 SPECfp_rate2006: Poor Scaling? Or Configuration Confusion?
Jan 25, 2010 Sun/Solaris Leadership in SAP SD Benchmarks and HP claims
Jan 21, 2010 SPARC Enterprise M4000 PeopleSoft NA Payroll 240K Employees Performance (16 Streams)
Dec 16, 2009 Sun Fire X4640 Delivers World Record x86 Result on SPEC OMPL2001
Nov 24, 2009 Sun M9000 Fastest SAP 2-tier SD Benchmark on current SAP EP4 for SAP ERP 6.0 (Unicode)
Nov 20, 2009 Sun Blade X6275 cluster delivers leading results for Fluent truck_111m benchmark
Nov 20, 2009 Sun Blade 6048 and Sun Blade X6275 NAMD Molecular Dynamics Benchmark beats IBM BlueGene/L
Nov 19, 2009 SPECmail2009: New World record on T5240 1.6GHz Sun 7310 and ZFS
Nov 18, 2009 Sun Flash Accelerator F20 PCIe Card Achieves 100K 4K IOPS and 1.1 GB/sec
Nov 04, 2009 New TPC-C World Record Sun/Oracle
Nov 02, 2009 Sun Blade X6275 Cluster Beats SGI Running Fluent Benchmarks
Nov 02, 2009 Sun Ultra 27 Delivers Leading Single Frame Buffer SPECviewperf 10 Results
Oct 28, 2009 SPC-2 Sun Storage 6780 Array RAID 5 & RAID 6 51% better $/performance than IBM DS5300
Oct 24, 2009 Sun C48 & Lustre fast for Seismic Reverse Time Migration using Sun X6275
Oct 24, 2009 Sun F5100 and Seismic Reverse Time Migration with faster Optimal Checkpointing
Oct 23, 2009 Wiki on performance best practices
Oct 20, 2009 Exadata V2 Information
Oct 15, 2009 Oracle Flash Cache - SGA Caching on Sun Storage F5100
Oct 13, 2009 Oracle Hyperion Sun M5000 and Sun Storage 7410
Oct 13, 2009 Sun T5440 Oracle BI EE Sun SPARC Enterprise T5440 World Record
Oct 13, 2009 SPECweb2005 on Sun SPARC Enterprise T5440 World Record using Solaris Containers and Sun Storage F5100 Flash
Oct 13, 2009 Oracle PeopleSoft Payroll (NA) Sun SPARC Enterprise M4000 and Sun Storage F5100 World Record Performance
Oct 13, 2009 SAP 2-tier SD Benchmark on Sun SPARC Enterprise M9000/32 SPARC64 VII
Oct 13, 2009 CP2K Life Sciences, Ab-initio Dynamics - Sun Blade 6048 Chassis with Sun Blade X6275 - Scalability and Throughput with Quad Data Rate InfiniBand
Oct 13, 2009 SAP 2-tier SD-Parallel on Sun Blade X6270 1-node, 2-node and 4-node
Oct 13, 2009 Halliburton ProMAX Oil & Gas Application Fast on Sun 6048/X6275 Cluster
Oct 13, 2009 SPECcpu2006 Results On MSeries Servers With Updated SPARC64 VII Processors
Oct 12, 2009 MCAE ABAQUS faster on Sun F5100 and Sun X4270 - Single Node World Record
Oct 12, 2009 MCAE ANSYS faster on Sun F5100 and Sun X4270
Oct 12, 2009 MCAE MCS/NASTRAN faster on Sun F5100 and Fire X4270
Oct 12, 2009 SPC-2 Sun Storage 6180 Array RAID 5 & RAID 6 Over 70% Better Price Performance than IBM
Oct 12, 2009 SPC-1 Sun Storage 6180 Array Over 70% Better Price Performance than IBM
Oct 12, 2009 Why Sun Storage F5100 is a good option for Peoplesoft NA Payroll Application
Oct 11, 2009 1.6 Million 4K IOPS in 1RU on Sun Storage F5100 Flash Array
Oct 11, 2009 TPC-C World Record Sun - Oracle
Oct 09, 2009 X6275 Cluster Demonstrates Performance and Scalability on WRF 2.5km CONUS Dataset
Oct 02, 2009 Sun X4270 VMware VMmark benchmark achieves excellent result
Sep 22, 2009 Sun X4270 Virtualized for Two-tier SAP ERP 6.0 Enhancement Pack 4 (Unicode) Standard Sales and Distribution (SD) Benchmark
Sep 01, 2009 String Searching - Sun T5240 & T5440 Outperform IBM Cell Broadband Engine
Aug 28, 2009 Sun X4270 World Record SAP-SD 2-Processor Two-tier SAP ERP 6.0 EP 4 (Unicode)
Aug 27, 2009 Sun SPARC Enterprise T5240 with 1.6GHz UltraSPARC T2 Plus Beats 4-Chip IBM Power 570 POWER6 System on SPECjbb2005
Aug 26, 2009 Sun SPARC Enterprise T5220 with 1.6GHz UltraSPARC T2 Sets Single Chip World Record on SPECjbb2005
Aug 12, 2009 SPECmail2009 on Sun SPARC Enterprise T5240 and Sun Java System Messaging Server 6.3
Jul 23, 2009 World Record Performance of Sun CMT Servers
Jul 22, 2009 Why does 1.6 beat 4.7?
Jul 21, 2009 Zeus ZXTM Traffic Manager World Record on Sun T5240
Jul 21, 2009 Sun T5440 Oracle BI EE World Record Performance
Jul 21, 2009 Sun T5440 World Record SAP-SD 4-Processor Two-tier SAP ERP 6.0 EP 4 (Unicode)
Jul 21, 2009 1.6 GHz SPEC CPU2006 - Rate Benchmarks
Jul 21, 2009 Sun Blade T6320 World Record SPECjbb2005 performance
Jul 21, 2009 New SPECjAppServer2004 Performance on the Sun SPARC Enterprise T5440
Jul 20, 2009 Sun T5440 SPECjbb2005 Beats IBM POWER6 Chip-to-Chip
Jul 20, 2009 New CMT results coming soon....
Jul 14, 2009 Vdbench: Sun StorageTek Vdbench, a storage I/O workload generator.
Jul 14, 2009 Storage performance and workload analysis using Swat.
Jul 10, 2009 World Record TPC-H@300GB Price-Performance for Windows on Sun Fire X4600 M2
Jul 06, 2009 Sun Blade 6048 Chassis with Sun Blade X6275: RADIOSS Benchmark Results
Jul 03, 2009 SPECmail2009 on Sun Fire X4275+Sun Storage 7110: Mail Server System Solution
Jun 30, 2009 Sun Blade 6048 and Sun Blade X6275 NAMD Molecular Dynamics Benchmark beats IBM BlueGene/L
Jun 26, 2009 Sun Fire X2270 Cluster Fluent Benchmark Results
Jun 25, 2009 Sun SSD Server Platform Bandwidth and IOPS (Speeds & Feeds)
Jun 24, 2009 I/O analysis using DTrace
Jun 23, 2009 New CPU2006 Records: 3x better integer throughput, 9x better fp throughput
Jun 23, 2009 Sun Blade X6275 results capture Top Places in CPU2006 SPEED Metrics
Jun 19, 2009 Pointers to Java Performance Tuning resources
Jun 19, 2009 SSDs in HPC: Reducing the I/O Bottleneck BluePrint Best Practices
Jun 17, 2009 The Performance Technology group wiki is alive!
Jun 17, 2009 Performance of Sun 7410 and 7310 Unified Storage Array Line
Jun 16, 2009 Sun Fire X2270 MSC/Nastran Vendor_2008 Benchmarks
Jun 15, 2009 Sun Fire X4600 M2 Server Two-tier SAP ERP 6.0 (Unicode) Standard Sales and Distribution (SD) Benchmark
Jun 12, 2009 Correctly comparing SAP-SD Benchmark results
Jun 12, 2009 OpenSolaris Beats Linux on memcached Sun Fire X2270
Jun 11, 2009 SAS Grid Computing 9.2 utilizing the Sun Storage 7410 Unified Storage System
Jun 10, 2009 Using Solaris Resource Management Utilities to Improve Application Performance
Jun 09, 2009 Free Compiler Wins Nehalem Race by 2x
Jun 08, 2009 Variety of benchmark results to be posted on BestPerf
Jun 05, 2009 Interpreting Sun's SPECpower_ssj2008 Publications
Jun 03, 2009 Wide Variety of Topics to be discussed on BestPerf
Jun 03, 2009 Welcome to BestPerf group blog!

Friday Apr 03, 2015

Oracle Server X5-2 Produces World Record 2-Chip Single Application Server SPECjEnterprise2010 Result

Two Oracle Server X5-2 systems, using the Intel Xeon E5-2699 v3 processor, produced a World Record x86 two-chip single application server SPECjEnterprise2010 benchmark result of 21,504.30 SPECjEnterprise2010 EjOPS. One Oracle Server X5-2 ran the application tier and the second Oracle Server X5-2 was used for the database tier.

  • The Oracle Server X5-2 system demonstrated 11% better performance when compared to the IBM X3650 M5 server result of 19,282.14 SPECjEnterprise2010 EjOPS.

  • The Oracle Server X5-2 system demonstrated 1.9x better performance when compared to the previous generation Sun Server X4-2 server result of 11,259.88 SPECjEnterprise2010 EjOPS.

  • This result used Oracle WebLogic Server 12c, Java HotSpot(TM) 64-Bit Server VM 1.8.0_40, Oracle Database 12c, and Oracle Linux.
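The improvement figures in the bullets follow directly from the published EjOPS scores. A quick arithmetic check, using the scores reported in the performance landscape:

```python
# Published SPECjEnterprise2010 EjOPS scores (bigger is better).
x5_2 = 21504.30  # Oracle Server X5-2
ibm  = 19282.14  # IBM X3650 M5
x4_2 = 11259.88  # previous-generation Sun Server X4-2

pct_vs_ibm = (x5_2 / ibm - 1) * 100   # percent improvement over IBM (~11%)
gen_ratio  = x5_2 / x4_2              # generational improvement (~1.9x)

print(f"vs IBM: +{pct_vs_ibm:.0f}%, generational: {gen_ratio:.1f}x")
```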

Performance Landscape

Complete benchmark results are at the SPEC website, SPECjEnterprise2010 Results. The table below shows the top single application server, two-chip x86 results.

SPECjEnterprise2010 Performance Chart
as of 4/1/2015
Submitter EjOPS* Application Server Database Server
Oracle 21,504.30 1x Oracle Server X5-2
2x 2.3 GHz Intel Xeon E5-2699 v3
Oracle WebLogic 12c (12.1.3)
1x Oracle Server X5-2
2x 2.3 GHz Intel Xeon E5-2699 v3
Oracle Database 12c (12.1.0.2)
IBM 19,282.14 1x IBM X3650 M5
2x 2.6 GHz Intel Xeon E5-2697 v3
WebSphere Application Server V8.5
1x IBM X3850 X6
4x 2.8 GHz Intel Xeon E7-4890 v2
IBM DB2 10.5
Oracle 11,259.88 1x Sun Server X4-2
2x 2.7 GHz Intel Xeon E5-2697 v2
Oracle WebLogic 12c (12.1.2)
1x Sun Server X4-2L
2x 2.7 GHz Intel Xeon E5-2697 v2
Oracle Database 12c (12.1.0.1)

* SPECjEnterprise2010 EjOPS, bigger is better.

Configuration Summary

Application Server:

1 x Oracle Server X5-2
2 x 2.3 GHz Intel Xeon E5-2699 v3 processors
256 GB memory
3 x 10 GbE NIC
Oracle Linux 6 Update 5 (kernel-2.6.39-400.243.1.el6uek.x86_64)
Oracle WebLogic Server 12c (12.1.3)
Java HotSpot(TM) 64-Bit Server VM on Linux, version 1.8.0_40 (Java SE 8 Update 40)
BIOS SW 1.2

Database Server:

1 x Oracle Server X5-2
2 x 2.3 GHz Intel Xeon E5-2699 v3 processors
512 GB memory
2 x 10 GbE NIC
1 x 16 Gb FC HBA
2 x Oracle Server X5-2L Storage
Oracle Linux 6 Update 5 (kernel-3.8.13-16.2.1.el6uek.x86_64)
Oracle Database 12c Enterprise Edition Release 12.1.0.2

Benchmark Description

SPECjEnterprise2010 is the third generation of the SPEC organization's J2EE end-to-end industry standard benchmark application. The SPECjEnterprise2010 benchmark has been designed and developed to cover the Java EE 5 specification's significantly expanded and simplified programming model, highlighting the major features used by developers in the industry today. This provides a real world workload driving the Application Server's implementation of the Java EE specification to its maximum potential and allowing maximum stressing of the underlying hardware and software systems.

The workload consists of an end to end web based order processing domain, an RMI and Web Services driven manufacturing domain and a supply chain model utilizing document based Web Services. The application is a collection of Java classes, Java Servlets, Java Server Pages, Enterprise Java Beans, Java Persistence Entities (pojo's) and Message Driven Beans.

The SPECjEnterprise2010 benchmark heavily exercises all parts of the underlying infrastructure that make up the application environment, including hardware, JVM software, database software, JDBC drivers, and the system network.

The primary metric of the SPECjEnterprise2010 benchmark is jEnterprise Operations Per Second ("SPECjEnterprise2010 EjOPS"). This metric is calculated by adding the metrics of the Dealership Management Application in the Dealer Domain and the Manufacturing Application in the Manufacturing Domain. There is no price/performance metric in this benchmark.

Key Points and Best Practices

  • Four Oracle WebLogic server instances were started using numactl, binding two instances per chip.
  • Four Oracle Database listener processes were started, two bound per processor.
  • Additional tuning information is in the report at http://spec.org.
  • COD (Cluster on Die) is enabled in the BIOS on the application server.

See Also

Disclosure Statement

SPEC and the benchmark name SPECjEnterprise are registered trademarks of the Standard Performance Evaluation Corporation. Oracle Server X5-2, 21,504.30 SPECjEnterprise2010 EjOPS; IBM System X3650 M5, 19,282.14 SPECjEnterprise2010 EjOPS. Sun Server X4-2, 11,259.88 SPECjEnterprise2010 EjOPS; Results from www.spec.org as of 4/1/2015.

Friday Mar 20, 2015

Oracle ZFS Storage ZS4-4 Shows 1.8x Generational Performance Improvement on SPC-2 Benchmark

The Oracle ZFS Storage ZS4-4 appliance delivered 1.8x improved performance and 1.3x improved price performance over the previous generation Oracle ZFS Storage ZS3-4 appliance as shown by the SPC-2 benchmark.

  • Running the SPC-2 benchmark, the Oracle ZFS Storage ZS4-4 appliance delivered SPC-2 Price-Performance of $17.09 and an overall score of 31,486.23 SPC-2 MBPS.

  • Oracle ZFS Storage appliances continue their strong price-performance showing, occupying three of the top five spots on the SPC-2 price-performance list.

  • Oracle holds three of the top four performance results on the SPC-2 benchmark for HDD based systems.

  • The Oracle ZFS Storage ZS4-4 appliance has a 7.6x price-performance advantage over the IBM DS8870 and 2x performance advantage as measured by the SPC-2 benchmark.

  • The Oracle ZFS Storage ZS4-4 appliance has a 5.0x performance advantage over the new Fujitsu DX200 S3 as measured by the SPC-2 benchmark.

  • The Oracle ZFS Storage ZS4-4 appliance has a 4.6x price-performance advantage over the Fujitsu ET8700 S2 and 1.9x performance advantage as shown by the SPC-2 benchmark.

  • The Oracle ZFS Storage ZS4-4 appliance has a 4.6x price-performance advantage over the Hitachi Virtual Storage Platform (VSP) and 1.96x performance advantage as measured by the SPC-2 benchmark.

  • The Oracle ZFS Storage ZS4-4 appliance has a 1.6x price-performance advantage over the HP XP7 disk array as shown by the SPC-2 benchmark, even though HP discounted its hardware by 63%.

Performance Landscape

SPC-2 Price-Performance

Below is a table of the top SPC-2 Price-Performance results for HDD storage based systems, presented in increasing price-performance order (as of 03/17/2015). The complete set of results may be found at SPC2 top 10 Price-Performance list.

System   SPC-2 MBPS   $/SPC-2 MBPS   Results Identifier
Oracle ZFS Storage ZS3-2 16,212.66 $12.08 BE00002
Fujitsu Eternus DX200 S3 6,266.50 $15.42 B00071
SGI InfiniteStorage 5600 8,855.70 $15.97 B00065
Oracle ZFS Storage ZS4-4 31,486.23 $17.09 B00072
Oracle ZFS Storage ZS3-4 17,244.22 $22.53 B00067
NEC Storage M700 14,408.89 $25.10 B00066
Sun StorageTek 2530 663.51 $26.48 B00026
HP XP7 storage 43,012.53 $28.30 B00070
Fujitsu ETERNUS DX80 S2 2,685.50 $28.48 B00055
SGI InfiniteStorage 5500-SP 4,064.49 $28.57 B00059
Hitachi Unified Storage VM 11,274.83 $32.64 B00069

SPC-2 MBPS = the Performance Metric
$/SPC-2 MBPS = the Price-Performance Metric
Results Identifier = A unique identification of the result

SPC-2 Performance

The following table lists the top SPC-2 Performance results for HDD storage based systems, presented in decreasing performance order (as of 03/17/2015). The complete set of results may be found at the SPC2 top 10 Performance list.

HDD Based Systems   SPC-2 MBPS   $/SPC-2 MBPS   TSC Price   Results Identifier
HP XP7 storage 43,012.52 $28.30 $1,217,462 B00070
Oracle ZFS Storage ZS4-4 31,486.23 $17.09 $538,050 B00072
Oracle ZFS Storage ZS3-4 17,244.22 $22.53 $388,472 B00067
Oracle ZFS Storage ZS3-2 16,212.66 $12.08 $195,915 BE00002
Fujitsu ETERNUS DX8870 S2 16,038.74 $79.51 $1,275,163 B00063
IBM System Storage DS8870 15,423.66 $131.21 $2,023,742 B00062
IBM SAN VC v6.4 14,581.03 $129.14 $1,883,037 B00061
Hitachi Virtual Storage Platform (VSP) 13,147.87 $95.38 $1,254,093 B00060
HP StorageWorks P9500 XP Storage Array 13,147.87 $88.34 $1,161,504 B00056

SPC-2 MBPS = the Performance Metric
$/SPC-2 MBPS = the Price-Performance Metric
TSC Price = Total Cost of Ownership Metric
Results Identifier = A unique identification of the result

Complete SPC-2 benchmark results may be found at
http://www.storageperformance.org/results/benchmark_results_spc2.
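The price-performance column can be reproduced from the other two columns: $/SPC-2 MBPS is simply the TSC Price divided by the SPC-2 MBPS score (smaller is better). A quick sketch using three rows from the performance table above:

```python
# Sketch: $/SPC-2 MBPS = TSC Price / SPC-2 MBPS (smaller is better).
# Values are taken from the SPC-2 Performance table above.
results = {
    # system: (SPC-2 MBPS, TSC Price in $)
    "Oracle ZFS Storage ZS4-4": (31486.23, 538050),
    "HP XP7 storage": (43012.52, 1217462),
    "IBM System Storage DS8870": (15423.66, 2023742),
}
for system, (mbps, price) in results.items():
    print(f"{system}: ${price / mbps:.2f} per SPC-2 MBPS")
# The ZS4-4's price-performance advantage over the DS8870 quoted above is
# the ratio of the two $/MBPS figures (131.21 / 17.09).
```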

Configuration Summary

Storage Configuration:

Oracle ZFS Storage ZS4-4 storage system in clustered configuration
2 x Oracle ZFS Storage ZS4-4 controllers with
8 x Intel Xeon processors
3 TB memory
24 x Oracle Storage Drive Enclosure DE2-24P, each with
24 x 300 GB 10K RPM SAS-2 drives

Benchmark Description

SPC Benchmark 2 (SPC-2): Consists of three distinct workloads designed to demonstrate the performance of a storage subsystem during the execution of business critical applications that require the large-scale, sequential movement of data. Those applications are characterized predominately by large I/Os organized into one or more concurrent sequential patterns. A description of each of the three SPC-2 workloads is listed below as well as examples of applications characterized by each workload.

  • Large File Processing: Applications in a wide range of fields that require simple sequential processing of one or more large files, such as scientific computing and large-scale financial processing.
  • Large Database Queries: Applications that involve scans or joins of large relational tables, such as those performed for data mining or business intelligence.
  • Video on Demand: Applications that provide individualized video entertainment to a community of subscribers by drawing from a digital film library.

SPC-2 is built to:

  • Provide a level playing field for test sponsors.
  • Produce results that are powerful and yet simple to use.
  • Provide value for engineers as well as IT consumers and solution integrators.
  • Be easy to run, easy to audit/verify, and easy to use to report official results.

See Also

Disclosure Statement

SPC-2 and SPC-2 MBPS are registered trademarks of Storage Performance Council (SPC). Results as of March 17, 2015, for more information see www.storageperformance.org.

Oracle ZFS Storage ZS4-4 - B00072, Oracle ZFS Storage ZS3-2 - BE00002, Oracle ZFS Storage ZS3-4 - B00067, Fujitsu ETERNUS DX80 S2, B00055, Fujitsu ETERNUS DX8870 S2 - B00063, Fujitsu ETERNUS DX200 S3 - B00071, HP StorageWorks P9500 XP Storage Array - B00056, HP XP7 Storage Array - B00070, Hitachi Unified Storage VM - B00069, Hitachi Virtual Storage Platform (VSP) - B00060, IBM SAN VC v6.4 - B00061, IBM System Storage DS8870 - B00062, IBM XIV Storage System Gen3 - BE00001, NEC Storage M700 - B00066, SGI InfiniteStorage 5500-SP - B00059, SGI InfiniteStorage 5600 - B00065, Sun StorageTek 2530 - B00026.

Wednesday Jun 25, 2014

Oracle ZFS Storage ZS3-2 Delivers World Record Price-Performance on SPC-2/E

The Oracle ZFS Storage ZS3-2 appliance delivered a world record Price-Performance result, world record energy result and excellent overall performance for the SPC-2/E benchmark.

  • The Oracle ZFS Storage ZS3-2 appliance delivered the top SPC-2 Price-Performance of $12.08 and it delivered an overall score of 16,212.66 SPC-2 MBPS for the SPC-2/E benchmark.

  • The Oracle ZFS Storage ZS3-2 appliance produced the top Performance-Energy SPC-2/E benchmark result of 3.67 SPC2 MBPS / watt.

  • Oracle holds the top two performance results on the SPC-2 benchmark for HDD based systems.

  • The Oracle ZFS Storage ZS3-2 appliance has an 11x price-performance advantage over the IBM DS8870.

  • The Oracle ZFS Storage ZS3-2 appliance has an 8x price-performance advantage over the Hitachi Virtual Storage Platform (VSP).

  • The Oracle ZFS Storage ZS3-2 appliance has a 7.3x price-performance advantage over the HP P9500 XP disk array.

Performance Landscape

SPC-2 Price-Performance

Below is a table of the top SPC-2 Price-Performance results for HDD storage based systems, presented in increasing price-performance order (as of 06/25/2014). The complete set of results may be found at SPC2 top 10 Price-Performance list.

System   SPC-2 MBPS   $/SPC-2 MBPS   Results Identifier
Oracle ZFS Storage ZS3-2 16,212.66 $12.08 BE00002
SGI InfiniteStorage 5600 8,855.70 $15.97 B00065
Oracle ZFS Storage ZS3-4 17,244.22 $22.53 B00067
NEC Storage M700 14,408.89 $25.10 B00066
Sun StorageTek 2530 663.51 $26.48 B00026
Fujitsu ETERNUS DX80 S2 2,685.50 $28.48 B00055
SGI InfiniteStorage 5500-SP 4,064.49 $28.57 B00059
Hitachi Unified Storage VM 11,274.83 $32.64 B00069

SPC-2 MBPS = the Performance Metric
$/SPC-2 MBPS = the Price-Performance Metric
Results Identifier = A unique identification of the result

SPC-2/E Results

The table below lists all SPC-2/E results. The SPC-2/E benchmark extends the SPC-2 benchmark by additionally measuring power consumption during the SPC-2 benchmark run.

System   SPC-2 MBPS   $/SPC-2 MBPS   TSC Price   SPC2 MBPS/watt   Results Identifier
Oracle ZFS Storage ZS3-2 16,212.66 $12.08 $195,915 3.67 BE00002
IBM XIV Storage System Gen3 7,467.99 $152.34 $1,137,641 0.81 BE00001

SPC-2 MBPS = the Performance Metric
$/SPC-2 MBPS = the Price-Performance Metric
TSC Price = Total Cost of Ownership Metric
SPC2 MBPS / watt = Number of SPC2 MB/second produced per watt consumed. Higher is Better.
Results Identifier = A unique identification of the result
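Since the SPC-2/E metric is throughput per watt, the implied average power draw of each configuration can be recovered by dividing the SPC-2 MBPS score by that ratio. The figures below are approximate because the published per-watt ratios are rounded:

```python
# Sketch: average power ~ SPC-2 MBPS / (SPC-2 MBPS per watt).
# Approximate, since the published per-watt ratios are rounded.
zs3_2_watts = 16212.66 / 3.67   # ~4,420 W for the Oracle ZFS Storage ZS3-2
xiv_watts = 7467.99 / 0.81      # ~9,220 W for the IBM XIV Storage System Gen3
print(f"ZS3-2: ~{zs3_2_watts:,.0f} W, XIV Gen3: ~{xiv_watts:,.0f} W")
print(f"efficiency advantage: {3.67 / 0.81:.1f}x MBPS/watt")  # ~4.5x
```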

SPC-2 Performance

The following table lists the top SPC-2 Performance results for HDD storage based systems, presented in decreasing performance order (as of 06/25/2014). The complete set of results may be found at the SPC2 top 10 Performance list.

System   SPC-2 MBPS   $/SPC-2 MBPS   TSC Price   Results Identifier
Oracle ZFS Storage ZS3-4 17,244.22 $22.53 $388,472 B00067
Oracle ZFS Storage ZS3-2 16,212.66 $12.08 $195,915 BE00002
Fujitsu ETERNUS DX8870 S2 16,038.74 $79.51 $1,275,163 B00063
IBM System Storage DS8870 15,423.66 $131.21 $2,023,742 B00062
IBM SAN VC v6.4 14,581.03 $129.14 $1,883,037 B00061
Hitachi Virtual Storage Platform (VSP) 13,147.87 $95.38 $1,254,093 B00060
HP StorageWorks P9500 XP Storage Array 13,147.87 $88.34 $1,161,504 B00056

SPC-2 MBPS = the Performance Metric
$/SPC-2 MBPS = the Price-Performance Metric
TSC Price = Total Cost of Ownership Metric
Results Identifier = A unique identification of the result

Complete SPC-2 benchmark results may be found at
http://www.storageperformance.org/results/benchmark_results_spc2.

Configuration Summary

Storage Configuration:

Oracle ZFS Storage ZS3-2 storage system in clustered configuration
2 x Oracle ZFS Storage ZS3-2 controllers, each with
4 x 2.1 GHz 8-core Intel Xeon processors
512 GB memory
12 x Sun Disk shelves, each with
24 x 300 GB 10K RPM SAS-2 drives

Benchmark Description

SPC Benchmark 2 (SPC-2): Consists of three distinct workloads designed to demonstrate the performance of a storage subsystem during the execution of business critical applications that require the large-scale, sequential movement of data. Those applications are characterized predominately by large I/Os organized into one or more concurrent sequential patterns. A description of each of the three SPC-2 workloads is listed below as well as examples of applications characterized by each workload.

  • Large File Processing: Applications in a wide range of fields that require simple sequential processing of one or more large files, such as scientific computing and large-scale financial processing.
  • Large Database Queries: Applications that involve scans or joins of large relational tables, such as those performed for data mining or business intelligence.
  • Video on Demand: Applications that provide individualized video entertainment to a community of subscribers by drawing from a digital film library.

SPC-2 is built to:

  • Provide a level playing field for test sponsors.
  • Produce results that are powerful and yet simple to use.
  • Provide value for engineers as well as IT consumers and solution integrators.
  • Be easy to run, easy to audit/verify, and easy to use to report official results.

SPC Benchmark 2/Energy (SPC-2/E): Consists of the complete set of SPC-2 performance measurement and reporting plus the measurement and reporting of energy use. This benchmark extension provides measurement and reporting for complete storage configurations, complementing SPC-2C/E, which focuses on storage component configurations.

See Also

Disclosure Statement

SPC-2 and SPC-2 MBPS are registered trademarks of Storage Performance Council (SPC). Results as of June 25, 2014, for more information see www.storageperformance.org.

Fujitsu ETERNUS DX80 S2, B00055, Fujitsu ETERNUS DX8870 S2 - B00063, HP StorageWorks P9500 XP Storage Array - B00056, Hitachi Unified Storage VM - B00069, Hitachi Virtual Storage Platform (VSP) - B00060, IBM SAN VC v6.4 - B00061, IBM System Storage DS8870 - B00062, IBM XIV Storage System Gen3 - BE00001, NEC Storage M700 - B00066, Oracle ZFS Storage ZS3-2 - BE00002, Oracle ZFS Storage ZS3-4 - B00067, SGI InfiniteStorage 5500-SP - B00059, SGI InfiniteStorage 5600 - B00065, Sun StorageTek 2530 - B00026.

Thursday Mar 27, 2014

SPARC M6-32 Produces SAP SD Two-Tier Benchmark World Record for 32-Processor Systems

Oracle's SPARC M6-32 server produced a world record result for 32-processors on the SAP two-tier Sales and Distribution (SD) Standard Application Benchmark using SAP Enhancement Package 5 for SAP ERP 6.0 (32 chips / 384 cores / 3072 threads).

  • SPARC M6-32 server achieved 140,000 SAP SD benchmark users with a low average dialog response time of 0.58 seconds running the SAP two-tier Sales and Distribution (SD) Standard Application Benchmark using SAP Enhancement package 5 for SAP ERP 6.0.

  • The SPARC M6-32 delivered 2.5 times more users than the IBM Power 780 result using SAP Enhancement Package 5 for SAP ERP 6.0. The IBM result also had 1.7 times worse average dialog response time compared to the SPARC M6-32 server result.

  • The SPARC M6-32 delivered 3.0 times more users than the Fujitsu PRIMEQUEST 2800E (with Intel Xeon E7-8890 v2 processors) result. The Fujitsu result also had 1.7 times worse average dialog response time compared to the SPARC M6-32 server result.

  • The SPARC M6-32 server solution was run with Oracle Solaris 11 and used Oracle Database 11g.

Performance Landscape

SAP-SD two-tier performance table (in decreasing performance order), including results with SAP Enhancement Package 4 for SAP ERP 6.0 (the old version of the benchmark, obsolete as of the end of April 2012) and SAP Enhancement Package 5 for SAP ERP 6.0 (the current version of the benchmark as of May 2012).

System (Processor, Ch / Co / Th, Memory), OS, Database: Users, Resp Time (sec), Version, Cert#

Fujitsu SPARC M10-4S (SPARC64 X @3.0 GHz, 40 / 640 / 1280, 10 TB), Solaris 11, Oracle 11g: 153,000 users, 0.87 sec, EHP5, Cert# 2013014
SPARC M6-32 Server (SPARC M6 @3.6 GHz, 32 / 384 / 3072, 16 TB), Solaris 11, Oracle 11g: 140,000 users, 0.58 sec, EHP5, Cert# 2014008
IBM Power 795 (POWER7 @4 GHz, 32 / 256 / 1024, 4 TB), AIX 7.1, DB2 9.7: 126,063 users, 0.98 sec, EHP4, Cert# 2010046
IBM Power 780 (POWER7+ @3.72 GHz, 12 / 96 / 384, 1536 GB), AIX 7.1, DB2 10: 57,024 users, 0.98 sec, EHP5, Cert# 2012033
Fujitsu PRIMEQUEST 2800E (Intel Xeon E7-8890 v2 @2.8 GHz, 8 / 120 / 240, 1024 GB), Windows Server 2012 SE, SQL Server 2012: 47,500 users, 0.97 sec, EHP5, Cert# 2014003
IBM Power 760 (POWER7+ @3.41 GHz, 8 / 48 / 192, 1024 GB), AIX 7.1, DB2 10: 25,488 users, 0.99 sec, EHP5, Cert# 2013004

Version – Version of SAP, EHP5 refers to SAP ERP 6.0 Enhancement Package 5 for SAP ERP 6.0 and EHP4 refers to SAP ERP 6.0 Enhancement Package 4 for SAP ERP 6.0

Ch / Co / Th – Total chips, cores and threads

Complete benchmark results may be found at the SAP benchmark website http://www.sap.com/benchmark.
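The user-count comparisons in the bullets above can be checked directly against the certified results in the table:

```python
# Sketch: checking the quoted ratios against the certified user counts.
m6_32_users = 140000   # SPARC M6-32
print(f"vs IBM Power 780 (57,024 users):      {m6_32_users / 57024:.2f}x")  # ~2.46x, quoted as 2.5x
print(f"vs Fujitsu PRIMEQUEST (47,500 users): {m6_32_users / 47500:.2f}x")  # ~2.95x, quoted as 3.0x
print(f"response-time ratio (0.98 / 0.58 s):  {0.98 / 0.58:.2f}x")          # ~1.7x, as quoted
```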

Configuration Summary and Results

Hardware Configuration:

1 x SPARC M6-32 server with
32 x 3.6 GHz SPARC M6 processors (total of 32 processors / 384 cores / 3072 threads)
16 TB memory
6 x Sun Server X3-2L each with
2 x Intel Xeon E5-2609 2.4 GHz Processors
16 GB Memory
4 x Flash Accelerator F40
12 x 3 TB SAS disks
2 x Sun Server X3-2L each with
2 x Intel Xeon E5-2609 2.4 GHz Processors
16 GB Memory
1 x 8-Port 6Gbps SAS-2 RAID PCI Express HBA
12 x 3 TB SAS disks

Software Configuration:

Oracle Solaris 11
SAP Enhancement Package 5 for SAP ERP 6.0
Oracle Database 11g Release 2

Certified Results (published by SAP)

Number of SAP SD benchmark users: 140,000
Average dialog response time: 0.58 seconds
Throughput:
  Fully processed order line items per hour: 15,878,670
  Dialog steps per hour: 47,636,000
  SAPS: 793,930
Average database request time (dialog/update): 0.020 sec / 0.041 sec
SAP Certification: 2014008
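The three certified throughput figures are internally consistent with SAP's definition of the SAPS unit (100 SAPS = 2,000 fully processed order line items per hour = 6,000 dialog steps per hour); a quick check:

```python
# Sketch: relating the certified figures via SAP's SAPS definition
# (100 SAPS = 2,000 order line items/hour = 6,000 dialog steps/hour).
saps = 793930
order_line_items_per_hour = saps * 20   # 15,878,600 vs published 15,878,670
dialog_steps_per_hour = saps * 60       # 47,635,800 vs published 47,636,000
print(order_line_items_per_hour, dialog_steps_per_hour)
```

The tiny differences are rounding in the published SAPS figure.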

Benchmark Description

The SAP Standard Application SD (Sales and Distribution) Benchmark is an ERP business test that is indicative of full business workloads of complete order processing and invoice processing, and demonstrates the ability to run both the application and database software on a single system. The SAP Standard Application SD Benchmark represents the critical tasks performed in real-world ERP business environments.

SAP is one of the premier world-wide ERP application providers, and maintains a suite of benchmark tests to demonstrate the performance of competitive systems on the various SAP products.

See Also

Disclosure Statement

Two-tier SAP Sales and Distribution (SD) standard application benchmarks, SAP Enhancement Package 5 for SAP ERP 6.0 as of 3/26/14:

SPARC M6-32 (32 processors, 384 cores, 3072 threads) 140,000 SAP SD users, 32 x 3.6 GHz SPARC M6, 16 TB memory, Oracle Database 11g, Oracle Solaris 11, Cert# 2014008. Fujitsu SPARC M10-4S (40 processors, 640 cores, 1280 threads) 153,000 SAP SD users, 40 x 3.0 GHz SPARC65 X, 10 TB memory, Oracle Database 11g, Oracle Solaris 11, Cert# 2013014. IBM Power 780 (12 processors, 96 cores, 384 threads) 57,024 SAP SD users, 12 x 3.72 GHz IBM POWER7+, 1536 GB memory, DB210, AIX7.1, Cert#2012033. Fujitsu PRIMEQUEST 2800E (8 processors, 120 cores, 240 threads) 47,500 SAP SD users, 8 x 2.8 GHz Intel Xeon Processor E7-8890 v2, 1024 GB memory, SQL Server 2012, Windows Server 2012 Standard Edition, Cert# 2014003. IBM Power 760 (8 processors, 48 cores, 192 threads) 25,488 SAP SD users, 8 x 3.41 GHz IBM POWER7+, 1024 GB memory, DB2 10, AIX 7.1, Cert#2013004.

Two-tier SAP Sales and Distribution (SD) standard application benchmarks, SAP Enhancement Package 4 for SAP ERP 6.0 as of 3/26/14:

IBM Power 795 (32 processors, 256 cores, 1024 threads) 126,063 SAP SD users, 32 x 4 GHz IBM POWER7, 4 TB memory, DB2 9.7, AIX7.1, Cert#2010046.

SAP, R/3, reg TM of SAP AG in Germany and other countries. More info www.sap.com/benchmark

Wednesday Mar 05, 2014

SPARC T5-2 Delivers World Record 2-Socket SPECvirt_sc2010 Benchmark

Oracle's SPARC T5-2 server delivered a world record two-chip SPECvirt_sc2010 result of 4270 @ 264 VMs, establishing performance superiority in virtualized environments of the SPARC T5 processors with Oracle Solaris 11, which includes as standard virtualization products Oracle VM for SPARC and Oracle Solaris Zones.

  • The SPARC T5-2 server has 2.3x better performance than an HP BL620c G7 blade server (with two Westmere EX processors) which used VMware ESX 4.1 U1 virtualization software (best SPECvirt_sc2010 result on two-chip servers using VMware software).

  • The SPARC T5-2 server has 1.6x better performance than an IBM Flex System x240 server (with two Sandy Bridge processors) which used Kernel-based Virtual Machines (KVM).

  • This is the first SPECvirt_sc2010 result using Oracle production level software: Oracle Solaris 11.1, Oracle WebLogic Server 10.3.6, Oracle Database 11g Enterprise Edition, Oracle iPlanet Web Server 7 and Oracle Java Development Kit 7 (JDK). The only exception was the Dovecot mail server.

Performance Landscape

Complete benchmark results are at the SPEC website, SPECvirt_sc2010 Results. The following table highlights the leading two-chip results for the benchmark, bigger is better.

SPECvirt_sc2010
Leading Two-Chip Results
System   Processor   Result @ VMs   Virtualization Software
SPARC T5-2   2 x SPARC T5, 3.6 GHz   4270 @ 264   Oracle VM Server for SPARC 3.0, Oracle Solaris Zones
IBM Flex System x240   2 x Intel E5-2690, 2.9 GHz   2741 @ 168   Red Hat Enterprise Linux 6.4 KVM
HP ProLiant BL620c G7   2 x Intel E7-2870, 2.4 GHz   1878 @ 120   VMware ESX 4.1 U1
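The two comparison bullets above follow from the scores in the table:

```python
# Sketch: deriving the comparison ratios from the published SPECvirt_sc2010
# scores (bigger is better); values are from the table above.
t5_2 = 4270     # SPARC T5-2
x240 = 2741     # IBM Flex System x240 (KVM)
bl620c = 1878   # HP ProLiant BL620c G7 (VMware ESX 4.1 U1)
print(f"vs HP/VMware: {t5_2 / bl620c:.2f}x")  # ~2.27, quoted as 2.3x
print(f"vs IBM/KVM:   {t5_2 / x240:.2f}x")    # ~1.56, quoted as 1.6x
```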

Configuration Summary

System Under Test Highlights:

1 x SPARC T5-2 server, with
2 x 3.6 GHz SPARC T5 processors
1 TB memory
Oracle Solaris 11.1
Oracle VM Server for SPARC 3.0
Oracle iPlanet Web Server 7.0.15
Oracle PHP 5.3.14
Dovecot 2.1.17
Oracle WebLogic Server 11g (10.3.6)
Oracle Database 11g (11.2.0.3)
Java HotSpot(TM) 64-Bit Server VM on Solaris, version 1.7.0_51

Benchmark Description

The SPECvirt_sc2010 benchmark is SPEC's first benchmark addressing performance of virtualized systems. It measures the end-to-end performance of all system components that make up a virtualized environment.

The benchmark utilizes several previous SPEC benchmarks representing tasks commonly performed in virtualized environments. The workloads included are derived from SPECweb2005, SPECjAppServer2004 and SPECmail2008. The benchmark is scaled by running additional sets of virtual machines until overall throughput reaches a peak, and it includes quality-of-service criteria that must be met for a successful run.

Key Points and Best Practices

  • The SPARC T5-2 server, running Oracle Solaris 11.1, utilizes the embedded virtualization products Oracle VM Server for SPARC and Oracle Solaris Zones, which provide a low-overhead, flexible, scalable and manageable virtualization environment.

  • In order to provide a high level of data integrity and availability, all the benchmark data sets are stored on mirrored (RAID1) storage.

See Also

Disclosure Statement

SPEC and the benchmark name SPECvirt_sc are registered trademarks of the Standard Performance Evaluation Corporation. Results from www.spec.org as of 3/5/2014. SPARC T5-2, SPECvirt_sc2010 4270 @ 264 VMs; IBM Flex System x240, SPECvirt_sc2010 2741 @ 168 VMs; HP Proliant BL620c G7, SPECvirt_sc2010 1878 @ 120 VMs.

Tuesday Feb 18, 2014

SPARC T5-2 Produces SPECjbb2013-MultiJVM World Record for 2-Chip Systems

From www.spec.org

Defects Identified in SPECjbb®2013

December 9, 2014 - SPEC has identified a defect in its SPECjbb®2013 benchmark suite. SPEC has suspended sales of the benchmark software and is no longer accepting new submissions of SPECjbb®2013 results for publication on SPEC's website. Current SPECjbb®2013 licensees will receive a free copy of the new version of the benchmark when it becomes available.

SPEC is advising SPECjbb®2013 licensees and users of the SPECjbb®2013 metrics that the recently discovered defect impacts the comparability of results. This defect can significantly impact the amount of work done during the measurement period, resulting in an inflated SPECjbb®2013 metric. SPEC recommends that users not utilize these results for system comparisons without a full understanding of the impact of these defects on each benchmark result.

Additional information is available here.

The SPECjbb2013 benchmark shows modern Java application performance. Oracle's SPARC T5-2 set a two-chip world record, which is 1.8x faster than the best two-chip x86-based server. Using Oracle Solaris and Oracle Java, Oracle delivered this two-chip world record result on the MultiJVM SPECjbb2013 metric.

  • The SPARC T5-2 server achieved 114,492 SPECjbb2013-MultiJVM max-jOPS and 43,963 SPECjbb2013-MultiJVM critical-jOPS on the SPECjbb2013 benchmark. This result is a two-chip world record.

  • The SPARC T5-2 server running SPECjbb2013 is 1.8x faster than the Cisco UCS C240 M3 server (2.7 GHz Intel Xeon E5-2697 v2) based on both the SPECjbb2013-MultiJVM max-jOPS and SPECjbb2013-MultiJVM critical-jOPS metrics.

  • The SPARC T5-2 server running SPECjbb2013 is 2x faster than the HP ProLiant ML350p Gen8 server (2.7 GHz Intel Xeon E5-2697 v2) based on SPECjbb2013-MultiJVM max-jOPS and 1.3x faster based on SPECjbb2013-MultiJVM critical-jOPS.

  • The new Oracle results were obtained using Oracle Solaris 11 along with Oracle Java SE 8 on the SPARC T5-2 server.

  • The SPARC T5-2 server running SPECjbb2013 on a per chip basis is 1.3x faster than the NEC Express5800/A040b server (2.8 GHz Intel Xeon E7-4890 v2) based on both the SPECjbb2013-MultiJVM max-jOPS and SPECjbb2013-MultiJVM critical-jOPS metrics.

  • There are no IBM POWER7 or POWER7+ based server results on the SPECjbb2013 benchmark. IBM has published results for POWER7+ based servers on SPECjbb2005, which was retired by SPEC in 2013.

Performance Landscape

Results of SPECjbb2013 from www.spec.org as of March 6, 2014. These are the leading 2-chip SPECjbb2013 MultiJVM results.

SPECjbb2013 - 2-Chip MultiJVM Results
System   Processor   max-jOPS   critical-jOPS   JDK
SPARC T5-2 2xSPARC T5, 3.6 GHz 114,492 43,963 Oracle Java SE 8
Cisco UCS C240 M3 2xIntel E5-2697 v2, 2.7 GHz 63,079 23,797 Oracle Java SE 7u45
HP ProLiant ML350p Gen8 2xIntel E5-2697 v2, 2.7 GHz 62,393 24,310 Oracle Java SE 7u45
IBM System x3650 M4 BD 2xIntel E5-2695 v2, 2.4 GHz 59,124 22,275 IBM SDK V7 SR6 (*)
HP ProLiant ML350p Gen8 2xIntel E5-2697 v2, 2.7 GHz 57,594 32,103 Oracle Java SE 7u40
HP ProLiant BL460c Gen8 2xIntel E5-2697 v2, 2.7 GHz 56,367 30,078 Oracle Java SE 7u40
Sun Server X4-2, DDR3-1600 2xIntel E5-2697 v2, 2.7 GHz 52,664 20,553 Oracle Java SE 7u40
HP ProLiant DL360e Gen8 2xIntel E5-2470 v2, 2.4 GHz 48,772 17,915 Oracle Java SE 7u40

* IBM SDK V7 SR6 – IBM SDK, Java Technology Edition, Version 7, Service Refresh 6

The following table compares the SPARC T5 processor to the Intel E7 v2 processor.

SPECjbb2013 - Results Using JDK 8, Per-Chip Comparison
System   max-jOPS   critical-jOPS   max-jOPS/Chip   critical-jOPS/Chip   JDK
SPARC T5-2 (2 x SPARC T5, 3.6 GHz)   114,492   43,963   57,246   21,981   Oracle Java SE 8
NEC Express5800/A040b (4 x Intel E7-4890 v2, 2.8 GHz)   177,753   65,529   44,438   16,382   Oracle Java SE 8

SPARC per-chip advantage: 1.29x (max-jOPS), 1.34x (critical-jOPS)
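The per-chip figures in the table are simply the MultiJVM scores divided by the chip count; a sketch of the normalization:

```python
# Sketch: per-chip normalization of SPECjbb2013-MultiJVM scores, comparing
# the 2-chip SPARC T5-2 against the 4-chip NEC Express5800/A040b.
t5_max, t5_crit, t5_chips = 114492, 43963, 2
nec_max, nec_crit, nec_chips = 177753, 65529, 4

max_advantage = (t5_max / t5_chips) / (nec_max / nec_chips)
crit_advantage = (t5_crit / t5_chips) / (nec_crit / nec_chips)
print(f"max-jOPS/chip advantage:      {max_advantage:.2f}x")   # ~1.29x
print(f"critical-jOPS/chip advantage: {crit_advantage:.2f}x")  # ~1.34x
```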

Configuration Summary

System Under Test:

SPARC T5-2 server
2 x SPARC T5, 3.60 GHz
512 GB memory (32 x 16 GB dimms)
Oracle Solaris 11.1
Oracle Java SE 8

Benchmark Description

The SPECjbb2013 benchmark has been developed from the ground up to measure performance based on the latest Java application features. It is relevant to all audiences who are interested in Java server performance, including JVM vendors, hardware developers, Java application developers, researchers and members of the academic community.

From SPEC's press release, "SPECjbb2013 replaces SPECjbb2005. The new benchmark has been developed from the ground up to measure performance based on the latest Java application features. It is expected to be used widely by all those interested in Java server performance, including JVM vendors, hardware developers, Java application developers, researchers and members of the academic community."

SPECjbb2013 features include:

  • A usage model based on a world-wide supermarket company with an IT infrastructure that handles a mix of point-of-sale requests, online purchases and data-mining operations.
  • Both a pure throughput metric and a metric that measures critical throughput under service-level agreements (SLAs) specifying response times ranging from 10ms to 500ms.
  • Support for multiple run configurations, enabling users to analyze and overcome bottlenecks at multiple layers of the system stack, including hardware, OS, JVM and application layers.
  • Exercising new Java 7 features and other important performance elements, including the latest data formats (XML), communication using compression, and messaging with security.
  • Support for virtualization and cloud environments.

See Also

Disclosure Statement

SPEC and the benchmark name SPECjbb are registered trademarks of Standard Performance Evaluation Corporation (SPEC). Results as of 3/6/2014, see http://www.spec.org for more information.  SPARC T5-2 114,492 SPECjbb2013-MultiJVM max-jOPS, 43,963 SPECjbb2013-MultiJVM critical-jOPS; NEC Express5800/A040b 177,753 SPECjbb2013-MultiJVM max-jOPS, 65,529 SPECjbb2013-MultiJVM critical-jOPS; Cisco UCS c240 M3 63,079 SPECjbb2013-MultiJVM max-jOPS, 23,797 SPECjbb2013-MultiJVM critical-jOPS; HP ProLiant ML350p Gen8 62,393 SPECjbb2013-MultiJVM max-jOPS, 24,310 SPECjbb2013-MultiJVM critical-jOPS; IBM System X3650 M4 BD 59,124 SPECjbb2013-MultiJVM max-jOPS, 22,275 SPECjbb2013-MultiJVM critical-jOPS; HP ProLiant ML350p Gen8 57,594 SPECjbb2013-MultiJVM max-jOPS, 32,103 SPECjbb2013-MultiJVM critical-jOPS; HP ProLiant BL460c Gen8 56,367 SPECjbb2013-MultiJVM max-jOPS, 30,078 SPECjbb2013-MultiJVM critical-jOPS; Sun Server X4-2 52,664 SPECjbb2013-MultiJVM max-jOPS, 20,553 SPECjbb2013-MultiJVM critical-jOPS; HP ProLiant DL360e Gen8 48,772 SPECjbb2013-MultiJVM max-jOPS, 17,915 SPECjbb2013-MultiJVM critical-jOPS.

Friday Feb 14, 2014

SPARC M6-32 Delivers Oracle E-Business and PeopleSoft World Record Benchmarks, Linear Data Warehouse Scaling in a Virtualized Configuration

This result demonstrates how the combination of Oracle virtualization technologies for SPARC and Oracle's SPARC M6-32 server allow the deployment and concurrent high performance execution of multiple Oracle applications and databases sized for the Enterprise.

  • In an 8-chip Dynamic Domain (also known as a PDom), the SPARC M6-32 server set an Oracle E-Business Suite 12.1.3 X-Large world record with 14,660 online users running five simultaneous E-Business modules.

  • In a second 8-chip Dynamic Domain, the SPARC M6-32 server set a PeopleSoft HCM 9.1 HR Self-Service world record, supporting 35,000 online users while simultaneously running a batch workload in 29.17 minutes, against a database of 600,480 employees. Two additional tests were run separately: one supporting 40,000 online users only, and a batch-only workload that completed in 18.27 minutes.

  • In a third Dynamic Domain with 16-chips on the SPARC M6-32 server, a data warehouse test was run that showed near-linear scaling.

  • On the SPARC M6-32 server, several critical application instances were virtualized: an Oracle E-Business application and database, a PeopleSoft application and database, and a Decision Support database instance using Oracle Database 12c.

  • In this Enterprise Virtualization benchmark a SPARC M6-32 server utilized all levels of Oracle virtualization features available for SPARC servers. The 32-chip SPARC M6 based server was divided into three separate Dynamic Domains (also known as PDoms), available only on the SPARC Enterprise M-Series systems, which are completely electrically isolated and independent hardware partitions. Each PDom was subsequently split into multiple hypervisor-based Oracle VM for SPARC partitions (also known as LDoms), each one running its own Oracle Solaris kernel and managing its own CPUs and I/O resources. The hardware resources allocated to each Oracle VM for SPARC partition were then organized into various Oracle Solaris Zones, to further refine application tier isolation and resource management. The three PDoms were dedicated to the enterprise applications as follows:

    • Oracle E-Business PDom: Oracle E-Business 12.1.3 Suite World Record Extra-Large benchmark, exercising five Online Modules: Customer Service, Human Resources Self Service, iProcurement, Order Management and Financial, with 14,660 users and an average user response time under 2 seconds.

    • PeopleSoft PDom: PeopleSoft Human Capital Management (HCM) 9.1 FP2 World Record Benchmark, using PeopleTools 8.52 and Oracle Database 11g Release 2, with 35,000 users, at an average user Search Time of 1.46 seconds and Save Time of 0.93 seconds. An online run with 40,000 users had an average user Search Time of 2.17 seconds and Save Time of 1.39 seconds, and a Payroll batch run completed in 29.17 minutes elapsed time for more than 500,000 employees.

    • Decision Support PDom: An Oracle Database 12c instance executing a Decision Support workload on about 30 billion rows of data and achieving linear scalability, i.e. on the 16 chips comprising the PDom, the workload ran 16x faster than on a single chip. Specifically, the 16-chip PDom processed about 320M rows/sec whereas a single chip could process about 20M rows/sec.

  • The SPARC M6-32 server is ideally suited for large-memory utilization. In this virtualized environment, three critical applications made use of 16 TB of physical memory. Each of the Oracle VM Server for SPARC environments utilized from 4 to 8 TB of memory, more than the limits of other virtualization solutions.

  • SPARC M6-32 Server Virtualization Layout Highlights

    • The Oracle E-Business application instances were run in a dedicated Dynamic Domain consisting of 8 SPARC M6 processors and 4 TB of memory. The PDom was split into four symmetric Oracle VM Server for SPARC (LDoms) environments of 2 chips and 1 TB of memory each, two dedicated to the Application Server tier and the other two to the Database Server tier. Each Logical Domain was subsequently divided into two Oracle Solaris Zones, for a total of eight, one for each E-Business Application server and one for each Oracle Database 11g instance.

    • The PeopleSoft application was run in a dedicated Dynamic Domain (PDom) consisting of 8 SPARC M6 processors and 4 TB of memory. The PDom was split into two Oracle VM Server for SPARC (LDoms) environments: one of 6 chips and 3 TB of memory, reserved for the Web and Application Server tiers, and a second of 2 chips and 1 TB of memory, reserved for the Database tier. Two PeopleSoft Application Servers, a Web Server instance, and a single Oracle Database 11g instance were each executed in their respective and exclusive Oracle Solaris Zone.

    • The Oracle Database 12c Decision Support workload was run in a Dynamic Domain consisting of 16 SPARC M6 processors and 8 TB of memory.

  • All the Oracle Applications and Database instances were running concurrently at a high level of performance in a virtualized environment. Running three Enterprise level application environments on a single SPARC M6-32 server offers centralized administration, simplified physical layout, and high availability and security features (as each PDom and LDom runs its own Oracle Solaris operating system copy physically and logically isolated from the other environments), enabling the coexistence of multiple versions of Oracle Solaris and application software on a single physical server.

  • Dynamic Domains and Oracle VM Server for SPARC guests were configured with independent direct I/O domains, allowing for fast and isolated I/O paths, providing secure and high performance I/O access.

Performance Landscape

Oracle E-Business Test using Oracle Database 11g
SPARC M6-32 PDom, 8 SPARC M6 Processors, 4 TB Memory

Total Online Users   Weighted Average           90th Percentile
                     Response Time (sec)        Response Time (sec)
14,660               0.81                       0.88

Multiple Online Modules X-Large Configuration (HR Self-Service, Order Management, iProcurement, Customer Service, Financial)

PeopleSoft HR Self-Service Online Plus Payroll Batch using Oracle Database 11g
SPARC M6-32 PDom, 8 SPARC M6 Processors, 4 TB Memory

Test                                 Online Users   Average User      Transactions   Payroll Batch
                                                    Search / Save     per Second     Elapsed (min)
                                                    Time (sec)
Online plus Payroll Batch            35,000         1.46 / 0.93       116            29.17
HR Self-Service Only / Batch Only*   40,000         2.17 / 1.39       132            18.27

* Two separate runs: an online-only test (40,000 users) and a batch-only test (18.27 min).

Oracle Database 12c Decision Support Query Test
SPARC M6-32 PDom, 16 SPARC M6 Processors, 8 TB Memory

Parallelism    Rows Processing Rate   Scaling Normalized
(Chips Used)   (rows/sec)             to 1 Chip
16             319,981,734            15.9
8              162,545,303            8.1
4              80,943,271             4.0
2              40,458,329             2.0
1              20,086,829             1.0
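The "Scaling Normalized to 1 Chip" column is simply each measured rate divided by the single-chip rate. A quick arithmetic check, using only the figures from the table above:

```python
# Rows-processing rates (rows/sec) from the Decision Support scaling test,
# keyed by the number of SPARC M6 chips used.
rates = {
    16: 319_981_734,
    8: 162_545_303,
    4: 80_943_271,
    2: 40_458_329,
    1: 20_086_829,
}

# Normalize each rate to the 1-chip rate (matches the table's scaling column).
scaling = {chips: round(rate / rates[1], 1) for chips, rate in rates.items()}
print(scaling)  # {16: 15.9, 8: 8.1, 4: 4.0, 2: 2.0, 1: 1.0}
```

The 16-chip ratio works out to 15.9x, which is the near-linear scaling the text describes.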

Configuration Summary

System Under Test:

SPARC M6-32 server with
32 x SPARC M6 processors (3.6 GHz)
16 TB memory

Storage Configuration:

6 x Sun Storage 2540-M2 each with
8 x Expansion Trays (each tray equipped with 12 x 300 GB SAS drives)
7 x Sun Server X3-2L each with
2 x Intel Xeon E5-2609 2.4 GHz Processors
16 GB Memory
4 x Sun Flash Accelerator F40 PCIe 400 GB cards
Oracle Solaris 11.1 (COMSTAR)
1 x Sun Server X3-2L with
2 x Intel Xeon E5-2609 2.4 GHz Processors
16 GB Memory
12 x 3 TB SAS disks
Oracle Solaris 11.1 (COMSTAR)

Software Configuration:

Oracle Solaris 11.1 (11.1.10.5.0), Oracle E-Business
Oracle Solaris 11.1 (11.1.10.5.0), PeopleSoft
Oracle Solaris 11.1 (11.1.9.5.0), Decision Support
Oracle Database 11g Release 2, Oracle E-Business and PeopleSoft
Oracle Database 12c Release 1, Decision Support
Oracle E-Business Suite 12.1.3
PeopleSoft Human Capital Management 9.1 FP2
PeopleSoft PeopleTools 8.52.03
Oracle Java SE 6u32
Oracle Tuxedo, Version 10.3.0.0, 64-bit, Patch Level 043
Oracle WebLogic Server 11g (10.3.4)

Oracle Dynamic Domains (PDoms) resources:


                        Oracle E-Business      PeopleSoft               Oracle DSS
Processors              8                      8                        16
Memory                  4 TB                   4 TB                     8 TB
Oracle Solaris          11.1 (11.1.10.5.0)     11.1 (11.1.10.5.0)       11.1 (11.1.9.5.0)
Oracle Database         11g                    11g                      12c
Oracle VM for SPARC /
Oracle Solaris Zones    4 LDom / 8 Zones       2 LDom / 4 Zones         None
Storage                 7 x Sun Server X3-2L   1 x Sun Server X3-2L     4 x Sun Storage
                                               (12 x 3 TB SAS),         2540-M2/2501 pairs
                                               2 x Sun Storage
                                               2540-M2/2501 pairs

Benchmark Description

This benchmark consists of three different applications running concurrently. It shows that large, enterprise workloads can be run on a single system without performance impact between application environments.

The three workloads are:

  • Oracle E-Business Suite Online

    • This test simulates thousands of online users executing transactions typical of an internal Enterprise Resource Planning (ERP) system, including 5 application modules: Customer Service, Human Resources Self Service, iProcurement, Order Management and Financial.

    • Each database tier uses a database instance about 600 GB in size, supporting thousands of application users and accessing hundreds of objects (tables, indexes, SQL stored procedures, etc.).

    • The application tier includes multiple web and application server instances, specifically Apache Web Server, Oracle Application Server 10g and Oracle Java SE 6u32.

  • PeopleSoft Human Capital Management

    • This test simulates thousands of online employees, managers and Human Resource administrators executing transactions typical of a Human Resources Self Service application for the Enterprise. Typical transactions are: viewing paychecks, promoting and hiring employees, updating employee profiles, etc.

    • The database tier uses a database instance of about 500 GB in size, containing information for 500,480 employees.

    • The application tier for this test includes web and application server instances, specifically Oracle WebLogic Server 11g, PeopleSoft Human Capital Management 9.1 and Oracle Java SE 6u32.

  • Decision Support Workload using the Oracle Database.

    • The query processes 30 billion rows stored in the Oracle Database, making heavy use of Oracle parallel query processing features. It performs multiple aggregations and summaries by reading and processing all the rows of the database.

Key Points and Best Practices

Oracle E-Business Environment

The Oracle E-Business Suite setup consisted of 4 Oracle E-Business environments running 5 online Oracle E-Business modules simultaneously.

The Oracle E-Business environments were deployed on 4 Oracle VM for SPARC logical domains (LDoms): 2 for the Application tier and 2 for the Database tier. Each LDom included 2 SPARC M6 processor chips. Each Application LDom was further split into 2 Oracle Solaris Zones, each containing one Oracle E-Business Application instance. Similarly, on the Database tier, each LDom was further divided into 2 Oracle Solaris Zones, each containing an Oracle Database instance. Applications on the same LDom shared a 10 GbE network link to connect to the Database tier LDom, and each Application Zone was connected to its own dedicated Database Zone. Communication between the two Zones was implemented via the Oracle Solaris 11 virtual network, which provides high performance, low latency transfers at memory speed using large frames (9,000-byte vs. typical 1,500-byte frames).

The Oracle E-Business setup made use of the Oracle Database Shared Server feature in order to limit memory utilization, as well as the number of database server processes. The Oracle Database configuration and optimization was substantially out-of-the-box, except for properly sizing the Oracle Database memory areas (System Global Area and Program Global Area).

In the Oracle E-Business Application LDom handling the Customer Service and HR Self Service modules, 28 Forms servers and 8 OC4J application servers were hosted in each of the two separate Oracle Solaris Zones, for a total of 56 Forms servers and 16 application servers.

All the Oracle Database server processes and the listener processes were executed in the Oracle Solaris FX scheduler class.

PeopleSoft Environment

The PeopleSoft Application Oracle VM for SPARC had one Oracle Solaris Zone of 12 cores containing the web tier and two Oracle Solaris Zones of 57 cores total containing the Application tier. The Database tier was contained in an Oracle VM for SPARC consisting of one Oracle Solaris Zone of 24 cores. One core, in the Application Oracle VM, was dedicated to network and disk interrupt handling.

All database data files, recovery files and Oracle Clusterware files for the PeopleSoft test were created with the Oracle Automatic Storage Management (Oracle ASM) volume manager, for the ease of management provided by the Oracle ASM integrated storage management solution.

In the application tier, 5 PeopleSoft domains with 350 application servers (70 per domain) were hosted in each of the two separate Oracle Solaris Zones, for a total of 10 domains with 700 application server processes.

All PeopleSoft Application processes and Web Server JVM instances were executed in the Oracle Solaris FX scheduler class.

Oracle Decision Support Environment

The decision support workload showed how the combination of a large memory (8 TB) and a large number of processors (16 chips comprising 1536 virtual CPUs), together with the Oracle parallel query facility, can linearly increase the performance of certain decision support queries as the number of CPUs increases.

The large memory was used to cache the entire 30 billion row Oracle table in memory. There are a number of ways to accomplish this. The method deployed in this test was to allocate sufficient memory for Oracle's "keep cache" and direct the table to the "keep cache."

To demonstrate scalability, it was necessary to ensure that the number of Oracle parallel servers was always equal to the number of available virtual CPUs. This was accomplished by the combination of providing a degree of parallelism hint to the query and setting both parallel_max_servers and parallel_min_servers to the number of virtual CPUs.

The number of virtual CPUs for each stage of the scalability test was adjusted using the psradm command available in Oracle Solaris.
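For illustration only (this generator is not part of the original test setup), the psradm arguments for each stage can be computed programmatically. The sketch assumes virtual CPU ids are contiguous and that each SPARC M6 chip presents 96 of them, so the 16-chip PDom exposes ids 0 through 1535:

```python
# Hypothetical helper: build the psradm command that takes offline every
# virtual CPU beyond the first `chips` chips' worth, leaving `chips` chips
# online for the next stage of the scalability test.
def psradm_offline_cmd(chips, vcpus_per_chip=96, total_chips=16):
    first_off = chips * vcpus_per_chip        # first vCPU id to take offline
    total = total_chips * vcpus_per_chip      # vCPU ids run 0..total-1
    if first_off >= total:
        return None                           # nothing to offline
    ids = " ".join(str(i) for i in range(first_off, total))
    return f"psradm -f {ids}"

# Tiny worked example: 2 "chips" of 2 vCPUs each, keep 1 chip online.
print(psradm_offline_cmd(1, vcpus_per_chip=2, total_chips=2))  # psradm -f 2 3
```

A matching `psradm -n` invocation with the same id list would bring the processors back online between stages.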

See Also

Disclosure Statement

Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. PeopleSoft results as of 02/14/2014. Other results as of 09/22/2013.

Oracle E-Business Suite R12 extra-large multiple-online module benchmark, SPARC M6-32, SPARC M6, 3.6 GHz, 8 chips, 96 cores, 768 threads, 4 TB memory, 14,660 online users, average response time 0.81 sec, 90th percentile response time 0.88 sec, Oracle Solaris 11.1, Oracle Solaris Zones, Oracle VM for SPARC, Oracle E-Business Suite 12.1.3, Oracle Database 11g Release 2, Results as of 9/22/2013.

Thursday Jan 23, 2014

SPARC T5-2 Delivers World Record 2-Socket Application Server for SPECjEnterprise2010 Benchmark

Oracle's SPARC T5-2 servers have set the world record for the SPECjEnterprise2010 benchmark using two-socket application servers with a result of 17,033.54 SPECjEnterprise2010 EjOPS. The result used two SPARC T5-2 servers, one server for the application tier and the other server for the database tier.

  • The SPARC T5-2 server delivered 29% more performance compared to the 2-socket IBM PowerLinux server result of 13,161.07 SPECjEnterprise2010 EjOPS.

  • The two SPARC T5-2 servers have 1.2x better price performance than the two IBM PowerLinux 7R2 POWER7+ processor-based servers (based on hardware plus software configuration costs for both tiers). The price performance of the SPARC T5-2 server is $35.99 compared to the IBM PowerLinux 7R2 at $44.75.

  • The SPARC T5-2 server demonstrated 1.5x more performance compared to Oracle's x86-based 2-socket Sun Server X4-2 system (Ivy Bridge) result of 11,259.88 SPECjEnterprise2010 EjOPS. Oracle holds the top x86 2-socket application server SPECjEnterprise2010 result.

  • This SPARC T5-2 server result represents the best performance per socket for a single system in the application tier of 8,516.77 SPECjEnterprise2010 EjOPS per socket.

  • The application server used Oracle Fusion Middleware components including the Oracle WebLogic 12.1 application server and Java HotSpot(TM) 64-Bit Server VM on Solaris, version 1.7.0_45. The database server was configured with Oracle Database 12c Release 1.

  • This result demonstrated less than 1 second average response times for all SPECjEnterprise2010 transactions and represents Java EE 5.0 transactions generated by 139,000 users.

Performance Landscape

Select 2-socket single application server results. Complete benchmark results are at the SPEC website, SPECjEnterprise2010 Results.

SPECjEnterprise2010 Performance Chart
(as of 1/22/2014)

Submitter   EjOPS*      Java EE Server                       DB Server
Oracle      17,033.54   1 x SPARC T5-2                       1 x SPARC T5-2
                        2 x 3.6 GHz SPARC T5                 2 x 3.6 GHz SPARC T5
                        Oracle WebLogic 12c (12.1.2)         Oracle Database 12c (12.1.0.1)
IBM         13,161.07   1 x IBM PowerLinux 7R2               1 x IBM PowerLinux 7R2
                        2 x 4.2 GHz POWER7+                  2 x 4.2 GHz POWER7+
                        WebSphere Application Server V8.5    IBM DB2 10.1 FP2
Oracle      11,259.88   1 x Sun Server X4-2                  1 x Sun Server X4-2L
                        2 x 2.7 GHz Intel Xeon E5-2697 v2    2 x 2.7 GHz Intel Xeon E5-2697 v2
                        Oracle WebLogic 12c (12.1.2)         Oracle Database 12c (12.1.0.1)

* SPECjEnterprise2010 EjOPS (bigger is better)

Configuration Summary

Application Server:

1 x SPARC T5-2 server, with
2 x 3.6 GHz SPARC T5 processors
512 GB memory
2 x 10 GbE dual-port NIC
Oracle Solaris 11.1 (11.1.13.6.0)
Oracle WebLogic Server 12c (12.1.2)
Java HotSpot(TM) 64-Bit Server VM on Solaris, version 1.7.0_45

Database Server:

1 x SPARC T5-2 server, with
2 x 3.6 GHz SPARC T5 processors
512 GB memory
1 x 10 GbE dual-port NIC
2 x 8 Gb FC HBA
Oracle Solaris 11.1 (11.1.13.6.0)
Oracle Database 12c (12.1.0.1)

Storage Servers:

2 x Sun Server X4-2L (24-Drive), with
2 x 2.6 GHz Intel Xeon
64 GB memory
1 x 8 Gb FC HBA
4 x Sun Flash Accelerator F80 PCI-E Cards
Oracle Solaris 11.1

Benchmark Description

SPECjEnterprise2010 is the third generation of the SPEC organization's J2EE end-to-end industry standard benchmark application. The SPECjEnterprise2010 benchmark was re-designed and developed to cover the Java EE 5 specification's significantly expanded and simplified programming model, highlighting the major features used by developers in the industry today. This provides a real world workload driving the Application Server's implementation of the Java EE specification to its maximum potential and allowing maximum stressing of the underlying hardware and software systems:

  • The web zone, servlets, and web services
  • The EJB zone
  • JPA 1.0 Persistence Model
  • JMS and Message Driven Beans
  • Transaction management
  • Database connectivity
Moreover, SPECjEnterprise2010 also heavily exercises all parts of the underlying infrastructure that make up the application environment, including hardware, JVM software, database software, JDBC drivers, and the system network.

The primary metric of the SPECjEnterprise2010 benchmark is jEnterprise Operations Per Second (SPECjEnterprise2010 EjOPS). It is calculated by adding the metrics of the Dealership Management Application in the Dealer Domain and the Manufacturing Application in the Manufacturing Domain. There is no price/performance metric in this benchmark.

Key Points and Best Practices

  • Two Oracle WebLogic server instances on the SPARC T5-2 server were hosted in 2 separate Oracle Solaris Zones.
  • The Oracle WebLogic application servers were executed in the FX scheduling class to improve performance by reducing the frequency of context switches.
  • The Oracle log writer process was run in the RT scheduling class.

See Also

Disclosure Statement

SPEC and the benchmark name SPECjEnterprise are registered trademarks of the Standard Performance Evaluation Corporation. Results from www.spec.org as of 1/22/2014. SPARC T5-2, 17,033.54 SPECjEnterprise2010 EjOPS; IBM PowerLinux 7R2, 13,161.07 SPECjEnterprise2010 EjOPS; Sun Server X4-2, 11,259.88 SPECjEnterprise2010 EjOPS.

The SPARC T5-2 configuration cost is the total application and database server hardware plus software. List price is $613,052 from http://www.oracle.com as of 1/22/2014. The IBM PowerLinux 7R2 configuration total hardware plus software list price is $588,970 based on public pricing from http://www.ibm.com as of 1/22/2014. Pricing does not include database storage hardware for IBM or Oracle.
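The quoted price/performance numbers follow directly from these list prices and the EjOPS results. A quick check of the arithmetic, using only the figures from the result and disclosure above:

```python
# Two-tier configuration list prices (USD) and SPECjEnterprise2010 EjOPS.
sparc_ejops, sparc_price = 17_033.54, 613_052   # SPARC T5-2 pair
ibm_ejops, ibm_price = 13_161.07, 588_970       # IBM PowerLinux 7R2 pair

# Price/performance in $/EjOPS (smaller is better).
sparc_pp = sparc_price / sparc_ejops
ibm_pp = ibm_price / ibm_ejops
print(round(sparc_pp, 2), round(ibm_pp, 2))     # 35.99 44.75

# Best per-socket performance claim: 2-socket application server tier.
print(round(sparc_ejops / 2, 2))                # 8516.77
```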

Monday Nov 25, 2013

World Record Single System TPC-H @10000GB Benchmark on SPARC T5-4

Oracle's SPARC T5-4 server delivered world record single server performance of 377,594 QphH@10000GB with price/performance of $4.65/QphH@10000GB USD on the TPC-H @10000GB benchmark. This result shows that the 4-chip SPARC T5-4 server is significantly faster than the 8-chip server results from HP (Intel x86 based).

  • The SPARC T5-4 server with four SPARC T5 processors is 2.4 times faster than the HP ProLiant DL980 G7 server with eight x86 processors.

  • The SPARC T5-4 server delivered 4.8 times better performance per chip and 3.0 times better performance per core than the HP ProLiant DL980 G7 server.

  • The SPARC T5-4 server has 28% better price/performance than the HP ProLiant DL980 G7 server (for the price/QphH metric).

  • The SPARC T5-4 server with 2 TB memory is 2.4 times faster than the HP ProLiant DL980 G7 server with 4 TB memory (for the composite metric).

  • The SPARC T5-4 server took 9 hours, 37 minutes, 54 seconds for data loading while the HP ProLiant DL980 G7 server took 8.3 times longer.

  • The SPARC T5-4 server accomplished the refresh function in around a minute; the HP ProLiant DL980 G7 server took up to 7.1 times longer to do the same function.

This result demonstrates a complete data warehouse solution that shows the performance both of individual and concurrent query processing streams, faster loading, and refresh of the data during business operations. The SPARC T5-4 server delivers superior performance and cost efficiency when compared to the HP result.
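The comparison ratios quoted above can be reproduced from the two composite results (377,594.3 vs. 158,108.3 QphH) and the respective chip and core counts (SPARC T5-4: 4 chips, 64 cores; HP ProLiant DL980 G7: 8 chips, 80 cores):

```python
# Composite metrics (QphH@10000GB) for the two results.
t5, hp = 377_594.3, 158_108.3

print(round(t5 / hp, 1))                # 2.4  -> "2.4 times faster" overall
print(round((t5 / 4) / (hp / 8), 1))    # 4.8  -> per-chip advantage
print(round((t5 / 64) / (hp / 80), 1))  # 3.0  -> per-core advantage
```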

Performance Landscape

The table lists the leading TPC-H @10000GB results for non-clustered systems.

TPC-H @10000GB, Non-Clustered Systems

System                 Processor            P/C/T – Memory       Composite   $/perf     Power       Throughput   Database    Available
                                                                 (QphH)      ($/QphH)   (QppH)      (QthH)
SPARC T5-4             3.6 GHz SPARC T5     4/64/512 – 2048 GB   377,594.3   $4.65      342,714.1   416,024.4    Oracle      11/25/13
                                                                                                                 11g R2
HP ProLiant DL980 G7   2.4 GHz Intel Xeon   8/80/160 – 4096 GB   158,108.3   $6.49      185,473.6   134,780.5    SQL Server  04/15/13
                       E7-4870                                                                                   2012

P/C/T = Processors, Cores, Threads
QphH = the Composite Metric (bigger is better)
$/QphH = the Price/Performance metric in USD (smaller is better)
QppH = the Power Numerical Quantity (bigger is better)
QthH = the Throughput Numerical Quantity (bigger is better)

The following table lists data load times and average refresh function times.

TPC-H @10000GB, Non-Clustered Systems
Database Load & Database Refresh

System                 Processor                    Data Loading   T5      RF1     T5      RF2     T5
                                                    (h:m:s)        Advan   (sec)   Advan   (sec)   Advan
SPARC T5-4             3.6 GHz SPARC T5             09:37:54       8.3x    58.8    7.1x    62.1    6.4x
HP ProLiant DL980 G7   2.4 GHz Intel Xeon E7-4870   79:28:23       1.0x    416.4   1.0x    394.9   1.0x

Data Loading = database load time
RF1 = throughput average first refresh transaction
RF2 = throughput average second refresh transaction
T5 Advan = the ratio of time to the SPARC T5-4 server time
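The "T5 Advan" ratios are each HP time divided by the corresponding SPARC T5-4 time, with the load times first converted from h:m:s to seconds:

```python
# Convert an h:m:s load time to seconds.
def hms_to_sec(h, m, s):
    return h * 3600 + m * 60 + s

t5_load = hms_to_sec(9, 37, 54)    # SPARC T5-4: 34,674 s
hp_load = hms_to_sec(79, 28, 23)   # HP DL980 G7: 286,103 s

print(round(hp_load / t5_load, 1))  # 8.3  (data load advantage)
print(round(416.4 / 58.8, 1))       # 7.1  (RF1 advantage)
print(round(394.9 / 62.1, 1))       # 6.4  (RF2 advantage)
```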

Complete benchmark results found at the TPC benchmark website http://www.tpc.org.

Configuration Summary and Results

Server Under Test:

SPARC T5-4 server
4 x SPARC T5 processors (3.6 GHz total of 64 cores, 512 threads)
2 TB memory
2 x internal SAS (2 x 300 GB) disk drives
12 x 16 Gb FC HBA

External Storage:

24 x Sun Server X4-2L servers configured as COMSTAR nodes, each with
2 x 2.5 GHz Intel Xeon E5-2609 v2 processors
4 x Sun Flash Accelerator F80 PCIe Cards, 800 GB each
6 x 4 TB 7.2K RPM 3.5" SAS disks
1 x 8 Gb dual port HBA

2 x 48 port Brocade 6510 Fibre Channel Switches

Software Configuration:

Oracle Solaris 11.1
Oracle Database 11g Release 2 Enterprise Edition

Audited Results:

Database Size: 10000 GB (Scale Factor 10000)
TPC-H Composite: 377,594.3 QphH@10000GB
Price/performance: $4.65/QphH@10000GB USD
Available: 11/25/2013
Total 3 year Cost: $1,755,709 USD
TPC-H Power: 342,714.1
TPC-H Throughput: 416,024.4
Database Load Time: 9:37:54

Benchmark Description

The TPC-H benchmark is a performance benchmark established by the Transaction Processing Performance Council (TPC) to demonstrate Data Warehousing/Decision Support Systems (DSS). TPC-H measurements are produced for customers to evaluate the performance of various DSS systems. These queries and updates are executed against a standard database under controlled conditions. Performance projections and comparisons between different TPC-H database sizes (100GB, 300GB, 1000GB, 3000GB, 10000GB, 30000GB and 100000GB) are not allowed by the TPC.

TPC-H is a data warehousing-oriented, non-industry-specific benchmark that consists of a large number of complex queries typical of decision support applications. It also includes some insert and delete activity that is intended to simulate loading and purging data from a warehouse. TPC-H measures the combined performance of a particular database manager on a specific computer system.

The main performance metric reported by TPC-H is called the TPC-H Composite Query-per-Hour Performance Metric (QphH@SF, where SF is the number of GB of raw data, referred to as the scale factor). QphH@SF is intended to summarize the ability of the system to process queries in both single and multiple user modes. The benchmark requires reporting of price/performance, which is the ratio of the total HW/SW cost plus 3 years maintenance to the QphH. A secondary metric is the storage efficiency, which is the ratio of total configured disk space in GB to the scale factor.
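Per the TPC-H specification, the composite metric is the geometric mean of the Power and Throughput metrics: QphH@SF = sqrt(QppH@SF × QthH@SF). Checking this against the SPARC T5-4 figures reported above:

```python
import math

# Power and Throughput metrics from the SPARC T5-4 result.
qpph = 342_714.1   # TPC-H Power
qthh = 416_024.4   # TPC-H Throughput

# The Composite Query-per-Hour metric is their geometric mean.
qphh = math.sqrt(qpph * qthh)
print(round(qphh, 1))  # 377594.3
```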

Key Points and Best Practices

  • COMSTAR (Common Multiprotocol SCSI Target) is the software framework that enables an Oracle Solaris host to serve as a SCSI Target platform. COMSTAR uses a modular approach to break the huge task of handling all the different pieces in a SCSI target subsystem into independent functional modules which are glued together by the SCSI Target Mode Framework (STMF). The modules implementing functionality at the SCSI level (disk, tape, medium changer, etc.) are not required to know about the underlying transport, and the modules implementing the transport protocol (FC, iSCSI, etc.) are not aware of the SCSI-level functionality of the packets they are transporting. The framework hides the details of allocation, providing execution context and cleanup of SCSI commands and associated resources, and simplifies the task of writing the SCSI or transport modules.

  • The SPARC T5-4 server achieved a peak IO rate of 37 GB/sec from the Oracle database configured with this storage.

  • Twelve COMSTAR nodes were mirrored to another twelve COMSTAR nodes on which all of the Oracle database files were placed. IO performance was high and balanced across all the nodes.

  • Oracle Solaris 11.1 required very little system tuning.

  • Some vendors try to make the point that storage ratios are of customer concern. However, storage ratio size has more to do with disk layout and the increasing capacities of disks – so this is not an important metric when comparing systems.

  • The SPARC T5-4 server and Oracle Solaris efficiently managed the system load of nearly two thousand Oracle Database parallel processes.

See Also

Disclosure Statement

TPC Benchmark, TPC-H, QphH, QthH, QppH are trademarks of the Transaction Processing Performance Council (TPC). Results as of 11/25/13, prices are in USD. SPARC T5-4 www.tpc.org/3293; HP ProLiant DL980 G7 www.tpc.org/3285.

Thursday Sep 26, 2013

SPARC M6-32 Delivers Oracle E-Business and PeopleSoft World Record Benchmarks, Linear Data Warehouse Scaling in a Virtualized Configuration

This result has been superseded.  Please see the latest result.

This result demonstrates how the combination of Oracle virtualization technologies for SPARC and Oracle's SPARC M6-32 server allows the deployment and concurrent high performance execution of multiple Oracle applications and databases sized for the Enterprise.

  • In an 8-chip Dynamic Domain (also known as PDom), the SPARC M6-32 server set a world record on the Oracle E-Business 12.1.3 X-Large benchmark with 14,660 online users running five simultaneous E-Business modules.

  • In a second 8-chip Dynamic Domain, the SPARC M6-32 server set a world record on the PeopleSoft HCM 9.1 HR Self-Service online benchmark, supporting 34,000 users while simultaneously completing a batch workload in 29.7 minutes, against a database of 600,480 employees. In a separate test, a batch-only workload was run in 21.2 minutes.

  • In a third Dynamic Domain with 16-chips on the SPARC M6-32 server, a data warehouse test was run that showed near-linear scaling.

  • On the SPARC M6-32 server, several critical application instances were virtualized: an Oracle E-Business application and database, a PeopleSoft application and database, and a Decision Support database instance using Oracle Database 12c.

  • In this Enterprise Virtualization benchmark a SPARC M6-32 server utilized all levels of Oracle virtualization features available for SPARC servers. The 32-chip SPARC M6 based server was divided into three separate Dynamic Domains (also known as PDoms), available only on the SPARC Enterprise M-Series systems, which are completely electrically isolated and independent hardware partitions. Each PDom was subsequently split into multiple hypervisor-based Oracle VM for SPARC partitions (also known as LDoms), each one running its own Oracle Solaris kernel and managing its own CPUs and I/O resources. The hardware resources allocated to each Oracle VM for SPARC partition were then organized into various Oracle Solaris Zones, to further refine application tier isolation and resource management. The three PDoms were dedicated to the enterprise applications as follows:

    • Oracle E-Business PDom: Oracle E-Business 12.1.3 Suite World Record Extra-Large benchmark, exercising five Online Modules: Customer Service, Human Resources Self Service, iProcurement, Order Management and Financial, with 14,660 users and an average user response time under 2 seconds.

    • PeopleSoft PDom: PeopleSoft Human Capital Management (HCM) 9.1 FP2 World Record Benchmark, using PeopleTools 8.52 and an Oracle Database 11g Release 2, with 34,000 users, at an average user Search Time of 1.11 seconds and Save Time of 0.77 seconds, and a Payroll batch run completed in 29.7 minutes elapsed time for more than 500,000 employees.

    • Decision Support PDom: An Oracle Database 12c instance executing a Decision Support workload on about 30 billion rows of data and achieving linear scalability, i.e. on the 16 chips comprising the PDom, the workload ran 16x faster than on a single chip. Specifically, the 16-chip PDom processed about 320M rows/sec whereas a single chip could process about 20M rows/sec.

  • The SPARC M6-32 server is ideally suited for large-memory utilization. In this virtualized environment, three critical applications made use of 16 TB of physical memory. Each of the Oracle VM Server for SPARC environments utilized from 4 to 8 TB of memory, more than the limits of other virtualization solutions.

  • SPARC M6-32 Server Virtualization Layout Highlights

    • The Oracle E-Business application instances were run in a dedicated Dynamic Domain consisting of 8 SPARC M6 processors and 4 TB of memory. The PDom was split into four symmetric Oracle VM Server for SPARC (LDoms) environments of 2 chips and 1 TB of memory each, two dedicated to the Application Server tier and the other two to the Database Server tier. Each Logical Domain was subsequently divided into two Oracle Solaris Zones, for a total of eight, one for each E-Business Application server and one for each Oracle Database 11g instance.

    • The PeopleSoft application was run in a dedicated Dynamic Domain (PDom) consisting of 8 SPARC M6 processors and 4 TB of memory. The PDom was split into two Oracle VM Server for SPARC (LDoms) environments: one of 6 chips and 3 TB of memory, reserved for the Web and Application Server tiers, and a second of 2 chips and 1 TB of memory, reserved for the Database tier. Two PeopleSoft Application Servers, a Web Server instance, and a single Oracle Database 11g instance were each executed in their respective and exclusive Oracle Solaris Zone.

    • The Oracle Database 12c Decision Support workload was run in a Dynamic Domain consisting of 16 SPARC M6 processors and 8 TB of memory.

  • All the Oracle Applications and Database instances were running concurrently at a high level of performance in a virtualized environment. Running three Enterprise level application environments on a single SPARC M6-32 server offers centralized administration, simplified physical layout, and high availability and security features (as each PDom and LDom runs its own Oracle Solaris operating system copy physically and logically isolated from the other environments), enabling the coexistence of multiple versions of Oracle Solaris and application software on a single physical server.

  • Dynamic Domains and Oracle VM Server for SPARC guests were configured with independent direct I/O domains, allowing for fast and isolated I/O paths, providing secure and high performance I/O access.

Performance Landscape

Oracle E-Business Test using Oracle Database 11g
SPARC M6-32 PDom, 8 SPARC M6 Processors, 4 TB Memory
Total Online Users | Weighted Average Response Time (sec) | 90th Percentile Response Time (sec)
14,660 | 0.81 | 0.88
Multiple Online Modules X-Large Configuration (HR Self-Service, Order Management, iProcurement, Customer Service, Financial)

PeopleSoft HR Self-Service Online Plus Payroll Batch using Oracle Database 11g
SPARC M6-32 PDom, 8 SPARC M6 Processors, 4 TB Memory
Online Users | Average User Search / Save Time (sec) | Transactions per Second | Payroll Batch Elapsed (min)
34,000 | 1.11 / 0.77 | 113 | 29.7

PeopleSoft Payroll Batch Only
Elapsed (min): 21.17

Oracle Database 12c Decision Support Query Test
SPARC M6-32 PDom, 16 SPARC M6 Processors, 8 TB Memory
Parallelism (Chips Used) | Rows Processing Rate (rows/sec) | Scaling Normalized to 1 Chip
16 | 319,981,734 | 15.9
8 | 162,545,303 | 8.1
4 | 80,943,271 | 4.0
2 | 40,458,329 | 2.0
1 | 20,086,829 | 1.0
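The scaling column above can be recomputed directly from the raw processing rates; a quick sketch (values copied from the table):

```python
# Rows-processing rates from the decision-support scaling test,
# keyed by the number of SPARC M6 chips used (from the table above).
rates = {16: 319_981_734, 8: 162_545_303, 4: 80_943_271, 2: 40_458_329, 1: 20_086_829}

base = rates[1]  # single-chip baseline
scaling = {chips: round(rate / base, 1) for chips, rate in rates.items()}
print(scaling)  # {16: 15.9, 8: 8.1, 4: 4.0, 2: 2.0, 1: 1.0} -- near-linear
```

Each doubling of chips roughly doubles the processing rate, which is the near-linear scaling claim made later in the Key Points section.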

Configuration Summary

System Under Test:

SPARC M6-32 server with
32 x SPARC M6 processors (3.6 GHz)
16 TB memory

Storage Configuration:

6 x Sun Storage 2540-M2 each with
8 x Expansion Trays (each tray equipped with 12 x 300 GB SAS drives)
7 x Sun Server X3-2L each with
2 x Intel Xeon E5-2609 2.4 GHz Processors
16 GB Memory
4 x Sun Flash Accelerator F40 PCIe 400 GB cards
Oracle Solaris 11.1 (COMSTAR)
1 x Sun Server X3-2L with
2 x Intel Xeon E5-2609 2.4 GHz Processors
16 GB Memory
12 x 3 TB SAS disks
Oracle Solaris 11.1 (COMSTAR)

Software Configuration:

Oracle Solaris 11.1 (11.1.10.5.0), Oracle E-Business
Oracle Solaris 11.1 (11.1.10.5.0), PeopleSoft
Oracle Solaris 11.1 (11.1.9.5.0), Decision Support
Oracle Database 11g Release 2, Oracle E-Business and PeopleSoft
Oracle Database 12c Release 1, Decision Support
Oracle E-Business Suite 12.1.3
PeopleSoft Human Capital Management 9.1 FP2
PeopleSoft PeopleTools 8.52.03
Oracle Java SE 6u32
Oracle Tuxedo, Version 10.3.0.0, 64-bit, Patch Level 043
Oracle WebLogic Server 11g (10.3.4)

Oracle Dynamic Domains (PDoms) resources:


Resource | Oracle E-Business | PeopleSoft | Oracle DSS
Processors | 8 | 8 | 16
Memory | 4 TB | 4 TB | 8 TB
Oracle Solaris | 11.1 (11.1.10.5.0) | 11.1 (11.1.10.5.0) | 11.1 (11.1.9.5.0)
Oracle Database | 11g | 11g | 12c
Oracle VM for SPARC / Oracle Solaris Zones | 4 LDoms / 8 Zones | 2 LDoms / 4 Zones | None
Storage | 7 x Sun Server X3-2L | 1 x Sun Server X3-2L (12 x 3 TB SAS), 2 x Sun Storage 2540-M2/2501 pairs | 4 x Sun Storage 2540-M2/2501 pairs

Benchmark Description

This benchmark consists of three different applications running concurrently. It shows that large enterprise workloads can be run on a single system without performance impact between application environments.

The three workloads are:

  • Oracle E-Business Suite Online

    • This test simulates thousands of online users executing transactions typical of an internal Enterprise Resource Planning deployment, spanning 5 application modules: Customer Service, Human Resources Self Service, Procurement, Order Management and Financial.

    • Each database tier uses a database instance of about 600 GB in size, supporting thousands of application users accessing hundreds of objects (tables, indexes, SQL stored procedures, etc.).

    • The application tier includes multiple web and application server instances, specifically Apache Web Server, Oracle Application Server 10g and Oracle Java SE 6u32.

  • PeopleSoft Human Capital Management

    • This test simulates thousands of online employees, managers and Human Resource administrators executing transactions typical of a Human Resources Self Service application for the Enterprise. Typical transactions are: viewing paychecks, promoting and hiring employees, updating employee profiles, etc.

    • The database tier uses a database instance of about 500 GB in size, containing information for 500,480 employees.

    • The application tier for this test includes web and application server instances, specifically Oracle WebLogic Server 11g, PeopleSoft Human Capital Management 9.1 and Oracle Java SE 6u32.

  • Decision Support Workload using the Oracle Database.

    • The query processes 30 billion rows stored in the Oracle Database, making heavy use of Oracle parallel query processing features. It performs multiple aggregations and summaries by reading and processing all the rows of the database.

Key Points and Best Practices

Oracle E-Business Environment

The Oracle E-Business Suite setup consisted of 4 Oracle E-Business environments running 5 online Oracle E-Business modules simultaneously. The environments were deployed on 4 Oracle VM Server for SPARC logical domains (LDoms), 2 for the Application tier and 2 for the Database tier. Each LDom included 2 SPARC M6 processor chips. Each Application LDom was further split into 2 Oracle Solaris Zones, each containing one Oracle E-Business Application instance. Similarly, on the Database tier, each LDom was further divided into 2 Oracle Solaris Zones, each containing an Oracle Database instance. Applications on the same LDom shared a 10 GbE network link to connect to the Database tier LDom, and each Application Zone was connected to its own dedicated Database Zone. The communication between the two Zones was implemented via Oracle Solaris 11 virtual networking, which provides high-performance, low-latency transfers at memory speed using jumbo frames (9000 bytes vs. the typical 1500-byte frames).

The Oracle E-Business setup made use of the Oracle Database Shared Server feature in order to limit memory utilization as well as the number of database server processes. The Oracle Database configuration and optimization was substantially out-of-the-box, except for properly sizing the Oracle Database memory areas (System Global Area and Program Global Area).

In the Oracle E-Business Application LDom handling the Customer Service and HR Self-Service modules, 28 Forms servers and 8 OC4J application servers were hosted in each of the two separate Oracle Solaris Zones, for a total of 56 Forms servers and 16 OC4J application servers.

All the Oracle Database server processes and the listener processes were executed in the Oracle Solaris FX scheduler class.

PeopleSoft Environment

The PeopleSoft Application Oracle VM for SPARC had one Oracle Solaris Zone of 12 cores containing the Web tier and two Oracle Solaris Zones of 28 cores each containing the Application tier. The Database tier was contained in an Oracle VM for SPARC consisting of one Oracle Solaris Zone of 24 cores. One and a half cores in the Application Oracle VM were dedicated to network and disk interrupt handling.

All database data files, recovery files and Oracle Clusterware files for the PeopleSoft test were created with the Oracle Automatic Storage Management (Oracle ASM) volume manager for the ease of management provided by the Oracle ASM integrated storage management solution.

In the application tier, 5 PeopleSoft domains with 350 application servers (70 per domain) were hosted in each of the two separate Oracle Solaris Zones, for a total of 10 domains with 700 application server processes.

All PeopleSoft Application processes and Web Server JVM instances were executed in the Oracle Solaris FX scheduler class.

Oracle Decision Support Environment

The decision support workload showed how the combination of a large memory (8 TB) and a large number of processors (16 chips comprising 1536 virtual CPUs), together with the Oracle parallel query facility, can linearly increase the performance of certain decision support queries as the number of CPUs increases.
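The 1536-virtual-CPU figure follows from the SPARC M6 processor topology; a quick sketch (the per-chip core and thread counts are taken from the processor specification, not from this document):

```python
# Virtual-CPU arithmetic for the 16-chip decision-support PDom.
# SPARC M6 topology assumption: 12 cores per chip, 8 hardware threads per core.
chips = 16
cores_per_chip = 12
threads_per_core = 8

vcpus = chips * cores_per_chip * threads_per_core
print(vcpus)  # 1536, matching the virtual-CPU count quoted above
```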

The large memory was used to cache the entire 30 billion row Oracle table in memory. There are a number of ways to accomplish this. The method deployed in this test was to allocate sufficient memory for Oracle's "keep cache" and direct the table to the "keep cache."

To demonstrate scalability, it was necessary to ensure that the number of Oracle parallel servers was always equal to the number of available virtual CPUs. This was accomplished by the combination of providing a degree of parallelism hint to the query and setting both parallel_max_servers and parallel_min_servers to the number of virtual CPUs.

The number of virtual CPUs for each stage of the scalability test was adjusted using the psradm command available in Oracle Solaris.

See Also

Disclosure Statement

Copyright 2013, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 09/22/2013.

Oracle E-Business Suite R12 extra-large multiple-online module benchmark, SPARC M6-32, SPARC M6, 3.6 GHz, 8 chips, 96 cores, 768 threads, 4 TB memory, 14,660 online users, average response time 0.81 sec, 90th percentile response time 0.88 sec, Oracle Solaris 11.1, Oracle Solaris Zones, Oracle VM for SPARC, Oracle E-Business Suite 12.1.3, Oracle Database 11g Release 2, Results as of 9/20/2013.

SPARC T5-8 Delivers World Record Single Server SPECjEnterprise2010 Benchmark, Utilizes Virtualized Environment

Oracle produced a world record single-server SPECjEnterprise2010 benchmark result of 36,571.36 SPECjEnterprise2010 EjOPS using one of Oracle's SPARC T5-8 servers for both the application and the database tier. Oracle VM Server for SPARC was used to virtualize the system to achieve this result.

  • The 8-chip SPARC T5 processor based server is 3.3x faster than the 8-chip IBM Power 780 server (POWER7+ processor based).

  • The SPARC T5-8 has 4.4x better price performance than the IBM Power 780, a POWER7+ processor based server (based on hardware plus software configuration costs). The price performance of the SPARC T5-8 server is $40.68 compared to the IBM Power 780 at $177.41. The IBM Power 780, POWER7+ based system has 1.2x better performance per core, but this did not reduce the total software and hardware cost to the customer. As shown by this comparison, performance-per-core is a poor predictor of characteristics relevant to customers. The virtualized SPARC T5-8 price performance is also better than that of the low-end IBM PowerLinux 7R2 at $62.26.

  • The SPARC T5-8 server ran the Oracle Solaris 11.1 operating system and used Oracle VM Server for SPARC to consolidate ten Oracle WebLogic application server instances and one database server instance to achieve this result.

  • This result demonstrated sub-second average response times for all SPECjEnterprise2010 transactions and represents JEE 5.0 transactions generated by 299,000 users.

  • The SPARC T5-8 server requires only 8 rack units, the same space as the IBM Power 780. In this configuration, the IBM server has a hardware core density of 4 cores per rack unit, which contrasts with 16 cores per rack unit for the SPARC T5-8 server. This again demonstrates why performance-per-core is a poor predictor of characteristics relevant to customers.

  • The application server used Oracle Fusion Middleware components including the Oracle WebLogic 12.1 application server and Java HotSpot(TM) 64-Bit Server VM on Solaris, version 1.7.0_25. The database server was configured with Oracle Database 12c Release 1.

  • The SPARC T5-8 server is 2.8x faster than a non-virtualized IBM POWER7+ based result (one server for the application tier and one for the database); the IBM PowerLinux 7R2 achieved 13,161.07 SPECjEnterprise2010 EjOPS.
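The density and speedup arithmetic in the bullets above reduces to a few ratios (all figures taken from the bullets):

```python
# Core density: both systems occupy 8 rack units.
sparc_t5_8_density = 128 / 8    # 128 cores in 8U -> 16 cores per rack unit
ibm_power_780_density = 32 / 8  # 32 cores in 8U  -> 4 cores per rack unit

# Speedup over the non-virtualized IBM PowerLinux 7R2 result.
speedup = 36_571.36 / 13_161.07
print(sparc_t5_8_density, ibm_power_780_density, round(speedup, 1))  # 16.0 4.0 2.8
```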

Performance Landscape

SPECjEnterprise2010 Performance Chart
Only Three Virtualized Results (App+DB on 1 Server) as of 9/23/2013
Submitter | EjOPS* | Chips per Server (App / DB) | Java EE Server & DB Server
Oracle | 36,571.36 | 5 / 3 | 1 x SPARC T5-8 (8 chips, 128 cores, 3.6 GHz SPARC T5); Oracle WebLogic 12c (12.1.2); Oracle Database 12c (12.1.0.1)
Oracle | 27,843.57 | 4 / 4 | 1 x SPARC T5-8 (8 chips, 128 cores, 3.6 GHz SPARC T5); Oracle WebLogic 12c (12.1.1); Oracle Database 11g (11.2.0.3)
IBM | 10,902.30 | 4 / 4 | 1 x IBM Power 780 (8 chips, 32 cores, 4.42 GHz POWER7+); WebSphere Application Server V8.5; IBM DB2 Universal Database 10.1

* SPECjEnterprise2010 EjOPS (bigger is better)

Complete benchmark results are at the SPEC website, SPECjEnterprise2010 Results.

Configuration Summary

Oracle Summary

Application and Database Server:

1 x SPARC T5-8 server, with
8 x 3.6 GHz SPARC T5 processors
2 TB memory
9 x 10 GbE dual-port NIC
6 x 8 Gb dual-port HBA
Oracle Solaris 11.1 SRU 10.5
Oracle VM Server for SPARC
Oracle WebLogic Server 12c (12.1.2)
Java HotSpot(TM) 64-Bit Server VM on Solaris, version 1.7.0_25
Oracle Database 12c (12.1.0.1)

Storage Servers:

6 x Sun Server X3-2L (12-Drive), with
2 x 2.4 GHz Intel Xeon
16 GB memory
1 x 8 Gb FC HBA
4 x Sun Flash Accelerator F40 PCI-E Card
Oracle Solaris 11.1

2 x Sun Storage 2540-M2 Array
12 x 600 GB 15K RPM SAS HDD

Switch Hardware:

1 x Sun Network 10 GbE 72-port Top of Rack (ToR) Switch

IBM Summary

Application and Database Server:

1 x IBM Power 780 server, with
8 x 4.42 GHz POWER7+ processors
786 GB memory
6 x 10 GbE dual-port NIC
3 x 8 Gb four-port HBA
IBM AIX V7.1 TL2
IBM WebSphere Application Server V8.5
IBM J9 VM (build 2.6, JRE 1.7.0 IBM J9 AIX ppc-32)
IBM DB2 10.1
IBM InfoSphere Optim pureQuery Runtime v3.1.1

Storage:

2 x DS5324 Disk System with
48 x 146 GB 15K E-DDM Disks

1 x v7000 Disk Controller with
16 x 400 GB SSD Disks

Benchmark Description

SPECjEnterprise2010 is the third generation of the SPEC organization's J2EE end-to-end industry standard benchmark application. The SPECjEnterprise2010 benchmark has been re-designed and developed to cover the Java EE 5 specification's significantly expanded and simplified programming model, highlighting the major features used by developers in the industry today. This provides a real-world workload driving the Application Server's implementation of the Java EE specification to its maximum potential and allowing maximum stressing of the underlying hardware and software systems:
  • The web zone, servlets, and web services
  • The EJB zone
  • JPA 1.0 Persistence Model
  • JMS and Message Driven Beans
  • Transaction management
  • Database connectivity
Moreover, SPECjEnterprise2010 also heavily exercises all parts of the underlying infrastructure that make up the application environment, including hardware, JVM software, database software, JDBC drivers, and the system network.

The primary metric of the SPECjEnterprise2010 benchmark is jEnterprise Operations Per Second (SPECjEnterprise2010 EjOPS), calculated by adding the metrics of the Dealership Management Application in the Dealer Domain and the Manufacturing Application in the Manufacturing Domain. There is NO price/performance metric in this benchmark.

Key Points and Best Practices

  • Ten Oracle WebLogic server instances on the SPARC T5-8 server were hosted in 10 separate Oracle Solaris Zones within a separate guest domain on 80 cores (5 CPU chips).
  • The database ran in a separate guest domain consisting of 47 cores (3 CPU chips). One core was reserved for the primary domain.
  • The Oracle WebLogic application servers were executed in the FX scheduling class to improve performance by reducing the frequency of context switches.
  • The Oracle log writer process was run in the FX scheduling class at processor priority 60 to use the Critical Thread feature.

See Also

Disclosure Statement

SPEC and the benchmark name SPECjEnterprise are registered trademarks of the Standard Performance Evaluation Corporation. Results from www.spec.org as of 9/23/2013. SPARC T5-8, 36,571.36 SPECjEnterprise2010 EjOPS (using Oracle VM for SPARC and 5+3 split); SPARC T5-8, 27,843.57 SPECjEnterprise2010 EjOPS (using Oracle Zones and 4+4 split); IBM Power 780, 10,902.30 SPECjEnterprise2010 EjOPS; IBM PowerLinux 7R2, 13,161.07 SPECjEnterprise2010 EjOPS. SPARC T5-8 server total hardware plus software list price is $1,487,792 from http://www.oracle.com as of 9/20/2013. IBM Power 780 server total hardware plus software cost of $1,934,162 based on public pricing from http://www.ibm.com as of 5/22/2013. IBM PowerLinux 7R2 server total hardware plus software cost of $819,451 based on whywebsphere.com/2013/04/29/weblogic-12c-on-oracle-sparc-t5-8-delivers-half-the-transactions-per-core-at-double-the-cost-of-the-websphere-on-ibm-power7/ retrieved 9/20/2013.

SPARC T5-2 Server Beats x86 Server on Oracle Database Transparent Data Encryption

Database security is becoming increasingly important. Oracle Database Advanced Security Transparent Data Encryption (TDE) stops would-be attackers from bypassing the database and reading sensitive information from storage by enforcing data-at-rest encryption in the database layer. Oracle's SPARC T5-2 server outperformed x86 systems when running Oracle Database 12c with Transparent Data Encryption.

  • The SPARC T5-2 server sustained more than 8.0 GB/sec of read bandwidth while decrypting using Transparent Data Encryption (TDE) in Oracle Database 12c. This was the bandwidth available on the system and matched the rate for querying the non-encrypted data.

  • The SPARC T5-2 server achieves about 1.5x higher decryption rate per socket using Oracle Database 12c with TDE than a Sun Server X4-2 system.

  • The SPARC T5-2 server achieves more than double the decryption rate per socket using Oracle Database 12c with TDE than a Sun Server X3-2 system.

Performance Landscape

Table of Size 250 GB Encrypted with AES-128-CFB
Full Table Scan with Degree of Parallelism 128
System Chips Table Data Format SPARC T5-2 Advantage
Clear Encrypted
SPARC T5-2 2 8.4 GB/sec 8.3 GB/sec 1.0
Sun Server X4-2L 2 8.2 GB/sec 5.6 GB/sec 1.5

SPARC T5-2 1 8.4 GB/sec 4.2 GB/sec 1.0
Sun Server X4-2L 1 8.2 GB/sec 2.8 GB/sec 1.5
Sun Server X3-2L 1 8.2 GB/sec 2.0 GB/sec 2.1
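The per-socket claims in the summary bullets can be checked against the single-chip rows of the table above:

```python
# Single-chip encrypted full-table-scan rates (GB/sec) from the table above.
t5_2 = 4.2    # SPARC T5-2
x4_2l = 2.8   # Sun Server X4-2L
x3_2l = 2.0   # Sun Server X3-2L

print(round(t5_2 / x4_2l, 1))  # 1.5 -- "about 1.5x higher decryption rate per socket"
print(round(t5_2 / x3_2l, 1))  # 2.1 -- "more than double the decryption rate per socket"
```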

Configuration Summary

Systems Under Test:

SPARC T5-2
2 x SPARC T5 processors, 3.6 GHz
256 GB memory
Oracle Solaris 11.1
Oracle Database 12c

Sun Server X3-2L
2 x Intel Xeon E5-2690 processor, 2.90 GHz
64 GB memory
Oracle Solaris 11.1
Oracle Database 12c

Sun Server X4-2L
2 x Intel Xeon E5-2697 v2 processor, 2.70 GHz
256 GB memory
Oracle Solaris 11.1
Oracle Database 12c

Storage:

Flash Storage

Benchmark Description

The purpose of the benchmark is to show the query performance of a database using data encryption to keep the data secure. The benchmark creates a 250 GB table. It is loaded both into a clear text (no encryption) tablespace and an AES-128 encrypted tablespace. Full table scans of the tables were timed.

Key Points and Best Practices

The Oracle Database feature, Transparent Data Encryption (TDE), simplifies the encryption of data within datafiles, preventing unauthorized access to it from the operating system. Transparent Data Encryption allows encryption of the entire contents of a tablespace.

With hardware acceleration of the encryption routines, the SPARC T5-2 server can achieve nearly the same query rate whether the table is encrypted or not up to a limit of about 4 GB/sec per chip.

See Also

Disclosure Statement

Copyright 2013, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 23 September 2013.

Wednesday Sep 25, 2013

SPARC T5-8 Delivers World Record Oracle OLAP Perf Version 3 Benchmark Result on Oracle Database 12c

Oracle's SPARC T5-8 server delivered world record query performance for systems running Oracle Database 12c for the Oracle OLAP Perf Version 3 benchmark.

  • The query throughput on the SPARC T5-8 server is 1.7x higher than that of an 8-chip Intel Xeon E7-8870 server. Both systems had sub-second average response times.

  • The SPARC T5-8 server with the Oracle Database demonstrated the ability to support at least 700 concurrent users querying OLAP cubes (with no think time), processing 2.33 million analytic queries per hour with an average response time of less than 1 second per query. This performance was enabled by keeping the entire cube in-memory utilizing the 4 TB of memory on the SPARC T5-8 server.

  • Assuming a 60 second think time between query requests, the SPARC T5-8 server can support approximately 39,450 concurrent users with the same sub-second response time.

  • The workload uses a set of realistic Business Intelligence (BI) queries that run against an OLAP cube based on a 4 billion row fact table of sales data. The 4 billion rows are partitioned by month spanning 10 years.

  • The combination of Oracle Database 12c with the Oracle OLAP option running on a SPARC T5-8 server supports live data updates occurring concurrently with user query executions, with minimal impact on query performance.

Performance Landscape

Oracle OLAP Perf Version 3 Benchmark
Oracle cube based on 4 billion fact table rows
10 years of data partitioned by month
System | Queries/hour | Users (0 sec think time) | Users (60 sec think time) | Average Response Time (sec)
SPARC T5-8 | 2,329,000 | 700 | 39,450 | <1
8-chip Intel Xeon E7-8870 | 1,354,000 | 120 | 22,675 | <1
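The 60-second think-time user count is consistent with applying Little's law to the measured throughput; a sketch (the ~1 s per-query residence time is an assumption derived from the sub-second response-time claim, not a figure stated in the text):

```python
# Little's law: concurrent users ~= throughput * (think time + response time).
queries_per_hour = 2_329_000          # SPARC T5-8 throughput from the table above
throughput = queries_per_hour / 3600  # ~647 queries/sec
users = throughput * (60 + 1)         # 60 s think time, ~1 s response (assumed)
print(round(users))  # ~39,464 -- within 0.1% of the 39,450 users quoted
```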

Configuration Summary

SPARC T5-8:

1 x SPARC T5-8 server with
8 x SPARC T5 processors, 3.6 GHz
4 TB memory
Data Storage and Redo Storage
Flash Storage
Oracle Solaris 11.1 (11.1.8.2.0)
Oracle Database 12c Release 1 (12.1.0.1) with Oracle OLAP option

Sun Server X2-8:

1 x Sun Server X2-8 with
8 x Intel Xeon E7-8870 processors, 2.4 GHz
1 TB memory
Data Storage and Redo Storage
Flash Storage
Oracle Solaris 10 10/12
Oracle Database 12c Release 1 (12.1.0.1) with Oracle OLAP option

Benchmark Description

The Oracle OLAP Perf Version 3 benchmark is a workload designed to demonstrate and stress the ability of the OLAP Option to deliver fast queries, near real-time updates and rich calculations using a multi-dimensional model in the context of Oracle data warehousing.

The bulk of the benchmark entails running a number of concurrent users, each issuing typical multidimensional queries against an Oracle cube. The cube has four dimensions: time, product, customer, and channel. Each query user issues approximately 150 different queries. One query chain may ask for total sales in a particular region (e.g., South America) for a particular time period (e.g., Q4 of 2010), followed by additional queries which drill down into sales for individual countries (e.g., Chile, Peru, etc.), with further queries drilling down into individual stores, etc. Another query chain may ask for yearly comparisons of total sales for some product category (e.g., major household appliances) and then issue further queries drilling down into particular products (e.g., refrigerators, stoves, etc.), particular regions, particular customers, etc.

While the core of every OLAP Perf benchmark is real-world query performance, the benchmark itself offers numerous execution options, such as varying data set sizes, number of users, number of queries for any given user, and cube update frequency. Version 3 of the benchmark was executed with a much larger number of query streams than previous versions and used a cube designed for near real-time updates. The results produced by version 3 of the benchmark are not directly comparable to results produced by previous versions of the benchmark.

The near real-time update capability is implemented along the following lines. A large Oracle cube, H, is built from a 4 billion row star schema, containing data up until the end of last business day. A second small cube, D, is then created which will contain all of today's new data coming in from outside the world. It will be updated every L minutes with the data coming in within the last L minutes. A third cube, R, joins cubes H and D for reporting purposes much like a view might join data from two tables. Calculations are installed into cube R. The use of a reporting cube which draws data from different storage cubes is a common practice.

Query users are never locked out of query operations while new data is added to the update cube. The point of the demonstration is to show that an Oracle OLAP system can be designed which results in data being no more than L minutes out of date, where L may be as low as just a few minutes. This is what is meant by near real-time analytics.

Key Points and Best Practices

  • Building and querying cubes with the Oracle OLAP option requires a large temporary tablespace. Normally temporary tablespaces would reside on disk storage. However, because the SPARC T5-8 server used in this benchmark had 4 TB of main memory, it was possible to use main memory for the OLAP temporary tablespace. This was accomplished by using a temporary, memory-based file system (TMPFS) for the temporary tablespace datafiles.

  • Since typical business intelligence users are often likely to issue similar queries, either with the same or different constants in the where clauses, setting the init.ora parameter "cursor_sharing" to "force" provides for additional query throughput and a larger number of potential users.

  • Assuming the normal Oracle Database initialization parameters (e.g. SGA, PGA, processes etc.) are appropriately set, out-of-the-box performance for the Oracle OLAP workload should be close to what is reported here. Additional performance resulted from using memory for the OLAP temporary tablespace and setting "cursor_sharing" to force.

  • Oracle OLAP Cube update performance was optimized by running update processes in the FX class with a priority greater than 0.

  • The maximum lag time between updates to the source fact table and data availability to query users (what was referred to as L in the benchmark description) was less than 3 minutes for the benchmark environment on the SPARC T5-8 server.

See Also

Disclosure Statement

Copyright 2013, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 09/22/2013.

SPARC T5 Encryption Performance Tops Intel E5-2600 v2 Processor

The cryptography benchmark suite was developed by Oracle to measure security performance on important AES security modes. Oracle's SPARC T5 processor, with its security software in silicon, is faster than x86 servers that have the AES-NI instructions. In this test, the performance of on-processor encryption operations is measured (32 KB encryptions). Multiple threads are used to measure each processor's maximum throughput. The SPARC T5-8 server shows dramatically faster encryption.

  • A SPARC T5 processor running Oracle Solaris 11.1 is 2.7 times faster executing AES-CFB 256-bit key encryption (in cache) than the Intel E5-2697 v2 processor (with AES-NI) running Oracle Linux 6.3. AES-CFB encryption is used by Oracle Database for Transparent Data Encryption (TDE) which provides security for database storage.

  • On the AES-CFB 128-bit key encryption, the SPARC T5 processor is 2.5 times faster than the Intel E5-2697 v2 processor (with AES-NI) running Oracle Linux 6.3 for in-cache encryption. AES-CFB mode is used by Oracle Database for Transparent Data Encryption (TDE) which provides security for database storage.

  • The IBM POWER7+ has three hardware security units for 8-core processors, but IBM has not publicly shown any measured performance results on AES-CFB or other encryption modes.
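The headline AES-CFB ratios in the bullets above follow directly from the measured throughputs (values taken from the AES-CFB table in the Performance Landscape below):

```python
# In-cache AES-CFB encryption throughput (MB/sec) from the tables below.
t5_256, xeon_256 = 54_396, 19_960  # AES-256-CFB: SPARC T5 vs Intel E5-2697 v2
t5_128, xeon_128 = 68_695, 27_740  # AES-128-CFB: SPARC T5 vs Intel E5-2697 v2

print(round(t5_256 / xeon_256, 1))  # 2.7 -- the 256-bit key claim
print(round(t5_128 / xeon_128, 1))  # 2.5 -- the 128-bit key claim
```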

Performance Landscape

Presented below are results for running encryption using the AES cipher with the CFB, CBC, CCM and GCM modes for key sizes of 128, 192 and 256 bits. Decryption performance was similar and is not presented. Results are presented in MB/sec (10^6 bytes per second).

Encryption Performance – AES-CFB

Performance is presented for in-cache AES-CFB128 mode encryption. Multiple key sizes of 256-bit, 192-bit and 128-bit are presented. The encryption was performed on 32 KB of pseudo-random data (the same data for each run).

AES-CFB
Microbenchmark Performance (MB/sec)
Processor GHz Chips Performance Software Environment
AES-256-CFB
SPARC T5 3.60 2 54,396 Oracle Solaris 11.1, libsoftcrypto + libumem
Intel E5-2697 v2 2.70 2 19,960 Oracle Linux 6.3, IPP/AES-NI
Intel E5-2690 2.90 2 12,823 Oracle Linux 6.3, IPP/AES-NI
AES-192-CFB
SPARC T5 3.60 2 61,000 Oracle Solaris 11.1, libsoftcrypto + libumem
Intel E5-2697 v2 2.70 2 23,217 Oracle Linux 6.3, IPP/AES-NI
Intel E5-2690 2.90 2 14,928 Oracle Linux 6.3, IPP/AES-NI
AES-128-CFB
SPARC T5 3.60 2 68,695 Oracle Solaris 11.1, libsoftcrypto + libumem
Intel E5-2697 v2 2.70 2 27,740 Oracle Linux 6.3, IPP/AES-NI
Intel E5-2690 2.90 2 17,824 Oracle Linux 6.3, IPP/AES-NI

Encryption Performance – AES-GCM

Performance is presented for in-cache AES-GCM mode encryption with authentication. Multiple key sizes of 256-bit, 192-bit and 128-bit are presented. The encryption/authentication was performed on 32 KB of pseudo-random data (the same data for each run).

AES-GCM
Microbenchmark Performance (MB/sec)
Processor GHz Chips Performance Software Environment
AES-256-GCM
SPARC T5 3.60 2 34,101 Oracle Solaris 11.1, libsoftcrypto + libumem
Intel E5-2697 v2 2.70 2 15,338 Oracle Solaris 11.1, libsoftcrypto + libumem
Intel E5-2690 2.90 2 13,520 Oracle Linux 6.3, IPP/AES-NI
AES-192-GCM
SPARC T5 3.60 2 36,852 Oracle Solaris 11.1, libsoftcrypto + libumem
Intel E5-2697 v2 2.70 2 15,768 Oracle Solaris 11.1, libsoftcrypto + libumem
Intel E5-2690 2.90 2 14,159 Oracle Linux 6.3, IPP/AES-NI
AES-128-GCM
SPARC T5 3.60 2 39,003 Oracle Solaris 11.1, libsoftcrypto + libumem
Intel E5-2697 v2 2.70 2 16,405 Oracle Solaris 11.1, libsoftcrypto + libumem
Intel E5-2690 2.90 2 14,877 Oracle Linux 6.3, IPP/AES-NI

Encryption Performance – AES-CCM

Performance is presented for in-cache AES-CCM mode encryption with authentication. Multiple key sizes of 256-bit, 192-bit and 128-bit are presented. The encryption/authentication was performed on 32 KB of pseudo-random data (the same data for each run).

AES-CCM
Microbenchmark Performance (MB/sec)
Processor GHz Chips Performance Software Environment
AES-256-CCM
SPARC T5 3.60 2 29,431 Oracle Solaris 11.1, libsoftcrypto + libumem
Intel E5-2697 v2 2.70 2 19,447 Oracle Linux 6.3, IPP/AES-NI
Intel E5-2690 2.90 2 12,493 Oracle Linux 6.3, IPP/AES-NI
AES-192-CCM
SPARC T5 3.60 2 33,715 Oracle Solaris 11.1, libsoftcrypto + libumem
Intel E5-2697 v2 2.70 2 22,634 Oracle Linux 6.3, IPP/AES-NI
Intel E5-2690 2.90 2 14,507 Oracle Linux 6.3, IPP/AES-NI
AES-128-CCM
SPARC T5 3.60 2 39,188 Oracle Solaris 11.1, libsoftcrypto + libumem
Intel E5-2697 v2 2.70 2 26,951 Oracle Linux 6.3, IPP/AES-NI
Intel E5-2690 2.90 2 17,256 Oracle Linux 6.3, IPP/AES-NI

Encryption Performance – AES-CBC

Performance is presented for in-cache AES-CBC mode encryption. Multiple key sizes of 256-bit, 192-bit and 128-bit are presented. The encryption was performed on 32 KB of pseudo-random data (the same data for each run).

AES-CBC
Microbenchmark Performance (MB/sec)
Processor GHz Chips Performance Software Environment
AES-256-CBC
SPARC T5 3.60 2 56,933 Oracle Solaris 11.1, libsoftcrypto + libumem
Intel E5-2697 v2 2.70 2 19,962 Oracle Linux 6.3, IPP/AES-NI
Intel E5-2690 2.90 2 12,822 Oracle Linux 6.3, IPP/AES-NI
AES-192-CBC
SPARC T5 3.60 2 63,767 Oracle Solaris 11.1, libsoftcrypto + libumem
Intel E5-2697 v2 2.70 2 23,224 Oracle Linux 6.3, IPP/AES-NI
Intel E5-2690 2.90 2 14,915 Oracle Linux 6.3, IPP/AES-NI
AES-128-CBC
SPARC T5 3.60 2 72,508 Oracle Solaris 11.1, libsoftcrypto + libumem
Intel E5-2697 v2 2.70 2 27,733 Oracle Linux 6.3, IPP/AES-NI
Intel E5-2690 2.90 2 17,823 Oracle Linux 6.3, IPP/AES-NI

Configuration Summary

SPARC T5-2 server
2 x SPARC T5 processor, 3.6 GHz
512 GB memory
Oracle Solaris 11.1 SRU 4.2

Sun Server X4-2L server
2 x E5-2697 v2 processors, 2.70 GHz
256 GB memory
Oracle Linux 6.3

Sun Server X3-2 server
2 x E5-2690 processors, 2.90 GHz
128 GB memory
Oracle Linux 6.3

Benchmark Description

The benchmark measures cryptographic capabilities in terms of general low-level encryption, in-cache (32 KB encryptions) and on-chip using various ciphers, including AES-128-CFB, AES-192-CFB, AES-256-CFB, AES-128-CBC, AES-192-CBC, AES-256-CBC, AES-128-CCM, AES-192-CCM, AES-256-CCM, AES-128-GCM, AES-192-GCM and AES-256-GCM.

The benchmark results were obtained using tests created by Oracle which use various application interfaces to perform the various ciphers. They were run using optimized libraries for each platform to obtain the best possible performance.

See Also

Disclosure Statement

Copyright 2013, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 9/23/2013.

Sun Server X4-2 Delivers Single App Server, 2-Chip x86 World Record SPECjEnterprise2010

Oracle's Sun Server X4-2 and Sun Server X4-2L servers, using the Intel Xeon E5-2697 v2 processor, produced a world record x86 two-chip single application server SPECjEnterprise2010 benchmark result of 11,259.88 SPECjEnterprise2010 EjOPS. The Sun Server X4-2 ran the application tier and the Sun Server X4-2L was used for the database tier.

  • The 2-socket Sun Server X4-2 demonstrated 16% better performance when compared to the 2-socket IBM X3650 M4 server result of 9,696.43 SPECjEnterprise2010 EjOPS.

  • This result used Oracle WebLogic Server 12c, Java HotSpot(TM) 64-Bit Server 1.7.0_40, Oracle Database 12c, and Oracle Linux.

Performance Landscape

Complete benchmark results are at the SPEC website, SPECjEnterprise2010 Results. The table below shows the top single application server, two-chip x86 results.

SPECjEnterprise2010 Performance Chart
as of 9/22/2013
Submitter  EjOPS*     Application Server                                                                   Database Server
Oracle     11,259.88  1x Sun Server X4-2, 2x 2.7 GHz Intel Xeon E5-2697 v2, Oracle WebLogic 12c (12.1.2)   1x Sun Server X4-2L, 2x 2.7 GHz Intel Xeon E5-2697 v2, Oracle Database 12c (12.1.0.1)
IBM        9,696.43   1x IBM X3650 M4, 2x 2.9 GHz Intel Xeon E5-2690, WebSphere Application Server V8.5    1x IBM X3650 M4, 2x 2.9 GHz Intel Xeon E5-2690, IBM DB2 10.1
Oracle     8,310.19   1x Sun Server X3-2, 2x 2.9 GHz Intel Xeon E5-2690, Oracle WebLogic 11g (10.3.6)      1x Sun Server X3-2L, 2x 2.9 GHz Intel Xeon E5-2690, Oracle Database 11g (11.2.0.3)

* SPECjEnterprise2010 EjOPS, bigger is better.

Configuration Summary

Application Server:

1 x Sun Server X4-2
2 x 2.7 GHz Intel Xeon processor E5-2697 v2
256 GB memory
4 x 10 GbE NIC
Oracle Linux 5 Update 9 (kernel-2.6.39-400.124.1.el5uek)
Oracle WebLogic Server 12c (12.1.2)
Java HotSpot(TM) 64-Bit Server VM on Linux, version 1.7.0_40 (Java SE 7 Update 40)

Database Server:

1 x Sun Server X4-2L
2 x 2.7 GHz Intel Xeon E5-2697 v2
256 GB memory
1 x 10 GbE NIC
2 x FC HBA
3 x Sun StorageTek 2540 M2
Oracle Linux 5 Update 9 (kernel-2.6.39-400.124.1.el5uek)
Oracle Database 12c Enterprise Edition Release 12.1.0.1

Benchmark Description

SPECjEnterprise2010 is the third generation of the SPEC organization's J2EE end-to-end industry standard benchmark application. The SPECjEnterprise2010 benchmark has been designed and developed to cover the Java EE 5 specification's significantly expanded and simplified programming model, highlighting the major features used by developers in the industry today. This provides a real world workload driving the Application Server's implementation of the Java EE specification to its maximum potential and allowing maximum stressing of the underlying hardware and software systems.

The workload consists of an end-to-end web-based order processing domain, an RMI- and Web Services-driven manufacturing domain, and a supply chain model utilizing document-based Web Services. The application is a collection of Java classes, Java Servlets, JavaServer Pages, Enterprise JavaBeans, Java Persistence entities (POJOs) and Message-Driven Beans.

The SPECjEnterprise2010 benchmark heavily exercises all parts of the underlying infrastructure that make up the application environment, including hardware, JVM software, database software, JDBC drivers, and the system network.

The primary metric of the SPECjEnterprise2010 benchmark is jEnterprise Operations Per Second ("SPECjEnterprise2010 EjOPS"). This metric is calculated by adding the metrics of the Dealership Management Application in the Dealer Domain and the Manufacturing Application in the Manufacturing Domain. There is no price/performance metric in this benchmark.
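As a hypothetical illustration of the metric arithmetic described above, the overall EjOPS is simply the sum of the two domain metrics. The even 50/50 split below is invented for illustration; the actual per-domain values are in the published report.

```python
# SPECjEnterprise2010 EjOPS = Dealer domain metric + Manufacturing domain metric.
# The even split used here is a hypothetical example, not the published breakdown.
dealer_domain_ops = 5_629.94         # Dealership Management Application (assumed)
manufacturing_domain_ops = 5_629.94  # Manufacturing Application (assumed)
ejops = dealer_domain_ops + manufacturing_domain_ops
print(round(ejops, 2))  # 11259.88, the Sun Server X4-2 result
```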

Key Points and Best Practices

  • Four Oracle WebLogic Server instances were started using numactl, binding two instances per chip.
  • Two Oracle Database listener processes were started, each bound to a separate chip.
  • Additional tuning information is in the report at http://spec.org.
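The binding described in the bullets above can be sketched as shell commands. `numactl` is a standard Linux utility and `lsnrctl` is Oracle's listener control tool, but the script names, instance names, and listener names here are illustrative assumptions, not the actual benchmark run scripts.

```shell
# Hypothetical sketch: two WebLogic instances pinned to each chip (NUMA node),
# plus one database listener per chip. Names and paths are illustrative.
numactl --cpunodebind=0 --membind=0 ./startManagedWebLogic.sh wls1 &
numactl --cpunodebind=0 --membind=0 ./startManagedWebLogic.sh wls2 &
numactl --cpunodebind=1 --membind=1 ./startManagedWebLogic.sh wls3 &
numactl --cpunodebind=1 --membind=1 ./startManagedWebLogic.sh wls4 &

numactl --cpunodebind=0 lsnrctl start LISTENER0   # listener bound to chip 0
numactl --cpunodebind=1 lsnrctl start LISTENER1   # listener bound to chip 1
```

Pinning each instance's CPU and memory to one NUMA node keeps its heap local to the chip running it, avoiding cross-socket memory traffic.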

See Also

Disclosure Statement

SPEC and the benchmark name SPECjEnterprise are registered trademarks of the Standard Performance Evaluation Corporation. Sun Server X4-2, 11,259.88 SPECjEnterprise2010 EjOPS; Sun Server X3-2, 8,310.19 SPECjEnterprise2010 EjOPS; IBM System X3650 M4, 9,696.43 SPECjEnterprise2010 EjOPS. Results from www.spec.org as of 9/22/2013.

Monday Sep 23, 2013

SPARC T5-2 Delivers Best 2-Chip MultiJVM SPECjbb2013 Result

From www.spec.org

Defects Identified in SPECjbb®2013

December 9, 2014 - SPEC has identified a defect in its SPECjbb®2013 benchmark suite. SPEC has suspended sales of the benchmark software and is no longer accepting new submissions of SPECjbb®2013 results for publication on SPEC's website. Current SPECjbb®2013 licensees will receive a free copy of the new version of the benchmark when it becomes available.

SPEC is advising SPECjbb®2013 licensees and users of the SPECjbb®2013 metrics that the recently discovered defect impacts the comparability of results. This defect can significantly impact the amount of work done during the measurement period, resulting in an inflated SPECjbb®2013 metric. SPEC recommends that users not utilize these results for system comparisons without a full understanding of the impact of these defects on each benchmark result.

Additional information is available here.

SPECjbb2013 is a benchmark designed to show modern Java server performance. Oracle's SPARC T5-2 set a world record as the fastest two-chip system, beating just-introduced two-chip x86-based servers. Oracle, using Oracle Solaris and Oracle JDK, delivered this two-chip world record result on the MultiJVM SPECjbb2013 metric. SPECjbb2013 is the replacement for SPECjbb2005 (SPECjbb2005 will soon be retired by SPEC).

  • Oracle's SPARC T5-2 server achieved 81,084 SPECjbb2013-MultiJVM max-jOPS and 39,129 SPECjbb2013-MultiJVM critical-jOPS on the SPECjbb2013 benchmark. This result is a two chip world record.

  • There are no IBM POWER7 or POWER7+ based server results on the SPECjbb2013 benchmark. IBM has published POWER7+ based server results on SPECjbb2005, which will soon be retired by SPEC.

  • The 2-chip SPARC T5-2 server running SPECjbb2013 is 30% faster than the 2-chip Cisco UCS B200 M3 server (2.7 GHz E5-2697 v2 Ivy Bridge-based) based on SPECjbb2013-MultiJVM max-jOPS.

  • The 2-chip SPARC T5-2 server running SPECjbb2013 is 66% faster than the 2-chip Cisco UCS B200 M3 server (2.7 GHz E5-2697 v2 Ivy Bridge-based) based on SPECjbb2013-MultiJVM critical-jOPS.

  • These results were obtained using Oracle Solaris 11 along with Java Platform, Standard Edition, JDK 7 Update 40 on the SPARC T5-2 server.

From SPEC's press release, "SPECjbb2013 replaces SPECjbb2005. The new benchmark has been developed from the ground up to measure performance based on the latest Java application features. It is expected to be used widely by all those interested in Java server performance, including JVM vendors, hardware developers, Java application developers, researchers and members of the academic community."

Performance Landscape

Results of SPECjbb2013 from www.spec.org as of September 22, 2013 and this report.

SPECjbb2013
System  Processor (type, #)  max-jOPS  critical-jOPS  JDK
SPARC T5-2 SPARC T5, 3.6 GHz 2 81,084 39,129 Oracle JDK 7u40
Cisco UCS B200 M3, DDR3-1866 Intel E5-2697 v2, 2.7 GHz 2 62,393 23,505 Oracle JDK 7u40
Sun Server X4-2, DDR3-1600 Intel E5-2697 v2, 2.7 GHz 2 52,664 20,553 Oracle JDK 7u40
Cisco UCS C220 M3 Intel E5-2690, 2.9 GHz 2 41,954 16,545 Oracle JDK 7u11

The above table represents all of the published results on www.spec.org. SPEC allows for self publication of SPECjbb2013 results. See below for locations where full reports were made available.

Configuration Summary

System Under Test:

SPARC T5-2 server
2 x SPARC T5, 3.60 GHz
512 GB memory (32 x 16 GB dimms)
Oracle Solaris 11.1
Oracle JDK 7 Update 40

Benchmark Description

The SPECjbb2013 benchmark has been developed from the ground up to measure performance based on the latest Java application features. It is relevant to all audiences who are interested in Java server performance, including JVM vendors, hardware developers, Java application developers, researchers and members of the academic community.

SPECjbb2013 replaces SPECjbb2005. New features include:

  • A usage model based on a world-wide supermarket company with an IT infrastructure that handles a mix of point-of-sale requests, online purchases and data-mining operations.
  • Both a pure throughput metric and a metric that measures critical throughput under service-level agreements (SLAs) specifying response times ranging from 10ms to 500ms.
  • Support for multiple run configurations, enabling users to analyze and overcome bottlenecks at multiple layers of the system stack, including hardware, OS, JVM and application layers.
  • Exercising new Java 7 features and other important performance elements, including the latest data formats (XML), communication using compression, and messaging with security.
  • Support for virtualization and cloud environments.

See Also

Disclosure Statement

SPEC and the benchmark name SPECjbb are registered trademarks of Standard Performance Evaluation Corporation (SPEC). Results as of 9/23/2013, see http://www.spec.org for more information. SPARC T5-2 81,084 SPECjbb2013-MultiJVM max-jOPS, 39,129 SPECjbb2013-MultiJVM critical-jOPS, result from https://blogs.oracle.com/BestPerf/resource/jbb2013/sparct5-922.pdf Cisco UCS B200 M3 62,393 SPECjbb2013-MultiJVM max-jOPS, 23,505 SPECjbb2013-MultiJVM critical-jOPS, result from http://www.cisco.com/en/US/prod/collateral/ps10265/le_41704_pb_specjbb2013b200.pdf; Sun Server X4-2 52,664 SPECjbb2013-MultiJVM max-jOPS, 20,553 SPECjbb2013-MultiJVM critical-jOPS, result from https://blogs.oracle.com/BestPerf/entry/20130918_x4_2_specjbb2013; Cisco UCS C220 M3 41,954 SPECjbb2013-MultiJVM max-jOPS, 16,545 SPECjbb2013-MultiJVM critical-jOPS result from www.spec.org.

Wednesday Sep 18, 2013

Sun Server X4-2 Performance Running SPECjbb2013 MultiJVM Benchmark


Oracle's Sun Server X4-2 system, using Oracle Solaris and Oracle JDK, produced a SPECjbb2013 benchmark (MultiJVM metric) result. This benchmark was designed by the industry to showcase Java server performance.

  • The Sun Server X4-2 is 24% faster on SPECjbb2013-MultiJVM max-jOPS than the Dell PowerEdge R720, the fastest published Intel Xeon E5-2600 (Sandy Bridge) based two-socket system.

  • The Sun Server X4-2 is 22% faster on SPECjbb2013-MultiJVM critical-jOPS than the Dell PowerEdge R720, the fastest published Intel Xeon E5-2600 (Sandy Bridge) based two-socket system.

  • The Sun Server X4-2 achieved 70% of the SPARC T5-2's published SPECjbb2013-MultiJVM max-jOPS.

  • The Sun Server X4-2 achieved 88% of the SPARC T5-2's published SPECjbb2013-MultiJVM critical-jOPS.

  • The combination of Oracle Solaris 11.1 and Oracle JDK 7 update 40 delivered a result of 52,664 SPECjbb2013-MultiJVM max-jOPS and 20,553 SPECjbb2013-MultiJVM critical-jOPS on the SPECjbb2013 benchmark.

From SPEC's press release, "SPECjbb2013 replaces SPECjbb2005. The new benchmark has been developed from the ground up to measure performance based on the latest Java application features. It is expected to be used widely by all those interested in Java server performance, including JVM vendors, hardware developers, Java application developers, researchers and members of the academic community."

Performance Landscape

Top two-socket results of SPECjbb2013 MultiJVM as of October 8, 2013.

SPECjbb2013
System  Processor  DDR3  max-jOPS  critical-jOPS  OS  JDK
SPARC T5-2 2 x 3.6 GHz SPARC T5 1600 75,658 23,334 Solaris 11.1 7u17
Cisco UCS B200 M3 2 x 2.7 GHz Intel E5-2697 v2 1866 62,393 23,505 RHEL 6.4 7u40
Sun Server X4-2 2 x 2.7 GHz Intel E5-2697 v2 1600 52,664 20,553 Solaris 11.1 7u40
Dell PowerEdge R720 2 x 2.9 GHz Intel Xeon E5-2690 1600 42,431 16,779 RHEL 6.4 7u21

The above table includes published results from www.spec.org.

Configuration Summary

System Under Test:

Sun Server X4-2
2 x Intel E5-2697 v2, 2.7 GHz
Hyper-Threading enabled
Turbo Boost enabled
128 GB memory (16 x 8 GB dimms)
Oracle Solaris 11.1 (11.1.4.2.0)
Oracle JDK 7u40

Benchmark Description

The SPECjbb2013 benchmark has been developed from the ground up to measure performance based on the latest Java application features. It is relevant to all audiences who are interested in Java server performance, including JVM vendors, hardware developers, Java application developers, researchers and members of the academic community.

SPECjbb2013 replaces SPECjbb2005. New features include:

  • A usage model based on a world-wide supermarket company with an IT infrastructure that handles a mix of point-of-sale requests, online purchases and data-mining operations.
  • Both a pure throughput metric and a metric that measures critical throughput under service-level agreements (SLAs) specifying response times ranging from 10ms to 500ms.
  • Support for multiple run configurations, enabling users to analyze and overcome bottlenecks at multiple layers of the system stack, including hardware, OS, JVM and application layers.
  • Exercising new Java 7 features and other important performance elements, including the latest data formats (XML), communication using compression, and messaging with security.
  • Support for virtualization and cloud environments.

See Also

Disclosure Statement

SPEC and the benchmark name SPECjbb are registered trademarks of Standard Performance Evaluation Corporation (SPEC). Results from http://www.spec.org as of 10/8/2013. SPARC T5-2, 75,658 SPECjbb2013-MultiJVM max-jOPS, 23,334 SPECjbb2013-MultiJVM critical-jOPS; Cisco UCS B200 M3 62,393 SPECjbb2013-MultiJVM max-jOPS, 23,505 SPECjbb2013-MultiJVM critical-jOPS; Dell PowerEdge R720 42,431 SPECjbb2013-MultiJVM max-jOPS, 16,779 SPECjbb2013-MultiJVM critical-jOPS; Sun Server X4-2 52,664 SPECjbb2013-MultiJVM max-jOPS, 20,553 SPECjbb2013-MultiJVM critical-jOPS.

Tuesday Sep 10, 2013

Oracle ZFS Storage ZS3-4 Delivers World Record SPC-2 Performance

The Oracle Storage ZS3-4 storage system delivered a world record performance result for the SPC-2 benchmark along with excellent price-performance.

  • The Oracle Storage ZS3-4 storage system delivered an overall score of 17,244.22 SPC-2 MBPS™ and a SPC-2 price-performance of $22.53 on the SPC-2 benchmark.

  • This is over a 1.6X generational improvement in performance and over a 1.5X generational improvement in price-performance compared to Oracle's previous Sun ZFS Storage 7420 SPC-2 result.

  • The Oracle ZFS Storage ZS3-4 storage system has 6.8X better overall throughput and nearly 1.2X better price-performance than the IBM DS3524 Express Turbo, IBM's best price-performance result on the SPC-2 benchmark.

  • The Oracle ZFS Storage ZS3-4 storage system has over 1.1X higher overall throughput and 5.8X better price-performance than the IBM DS8870, IBM's best overall performance result on the SPC-2 benchmark.

  • The Oracle ZFS Storage ZS3-4 storage system has over 1.3X higher overall throughput and 3.9X better price-performance than the HP StorageWorks P9500 XP Disk Array on the SPC-2 benchmark.

Performance Landscape

SPC-2 Performance Chart (in decreasing performance order)

System  SPC-2 MB/s  $/SPC-2 MB/s  ASU Capacity (GB)  TSC Price  Data Protection Level  Date  Results Identifier
Oracle ZFS Storage ZS3-4 17,244.22 $22.53 31,611 $388,472 Mirroring 09/10/13 B00067
Fujitsu DX8700 S2 16,039 $79.51 71,404 $1,275,163 Mirroring 12/03/12 B00063
IBM DS8870 15,424 $131.21 30,924 $2,023,742 RAID-5 10/03/12 B00062
IBM SAN VC v6.4 14,581 $129.14 74,492 $1,883,037 RAID-5 08/01/12 B00061
NEC Storage M700 14,409 $25.13 53,550 $361,613 Mirroring 08/19/12 B00066
Hitachi VSP 13,148 $95.38 129,112 $1,254,093 RAID-5 07/27/12 B00060
HP StorageWorks P9500 13,148 $88.34 129,112 $1,161,504 RAID-5 03/07/12 B00056
Sun ZFS Storage 7420 10,704 $35.24 31,884 $377,225 Mirroring 04/12/12 B00058
IBM DS8800 9,706 $270.38 71,537 $2,624,257 RAID-5 12/01/10 B00051
HP XP24000 8,725 $187.45 18,401 $1,635,434 Mirroring 09/08/08 B00035

SPC-2 MB/s = the Performance Metric
$/SPC-2 MB/s = the Price-Performance Metric
ASU Capacity = the Capacity Metric
Data Protection Level = the Data Protection Metric
TSC Price = the total price of the Tested Storage Configuration (the Price Metric)
Results Identifier = a unique identifier for the result
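The price-performance column follows directly from the other two metrics: it is the TSC price divided by the SPC-2 MB/s throughput. A quick check against the ZS3-4 row above:

```python
# $/SPC-2 MB/s = TSC Price / SPC-2 MB/s (smaller is better)
tsc_price = 388_472        # dollars, from the ZS3-4 row above
spc2_mbps = 17_244.22      # SPC-2 MB/s, from the same row
price_performance = tsc_price / spc2_mbps
print(round(price_performance, 2))  # 22.53, matching the table
```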

SPC-2 Price-Performance Chart (in increasing price-performance order)

System  SPC-2 MB/s  $/SPC-2 MB/s  ASU Capacity (GB)  TSC Price  Data Protection Level  Date  Results Identifier
SGI InfiniteStorage 5600 8,855.70 $15.97 28,748 $141,393 RAID6 03/06/13 B00065
Oracle ZFS Storage ZS3-4 17,244.22 $22.53 31,611 $388,472 Mirroring 09/10/13 B00067
Sun Storage J4200 548.80 $22.92 11,995 $12,580 Unprotected 07/10/08 B00033
NEC Storage M700 14,409 $25.13 53,550 $361,613 Mirroring 08/19/12 B00066
Sun Storage J4400 887.44 $25.63 23,965 $22,742 Unprotected 08/15/08 B00034
Sun StorageTek 2530 672.05 $26.15 1,451 $17,572 RAID5 08/16/07 B00026
Sun StorageTek 2530 663.51 $26.48 854 $17,572 Mirroring 08/16/07 B00025
Fujitsu ETERNUS DX80 1,357.55 $26.70 4,681 $36,247 Mirroring 03/15/10 B00050
IBM DS3524 Express Turbo 2,510 $26.76 14,374 $67,185 RAID-5 12/31/10 B00053
Fujitsu ETERNUS DX80 S2 2,685.50 $28.48 17,231 $76,475 Mirroring 08/19/11 B00055

SPC-2 MB/s = the Performance Metric
$/SPC-2 MB/s = the Price-Performance Metric
ASU Capacity = the Capacity Metric
Data Protection Level = the Data Protection Metric
TSC Price = the total price of the Tested Storage Configuration (the Price Metric)
Results Identifier = a unique identifier for the result

Complete SPC-2 benchmark results may be found at http://www.storageperformance.org/results/benchmark_results_spc2.

Configuration Summary

Storage Configuration:

Oracle ZFS Storage ZS3-4 storage system in clustered configuration
2 x Oracle ZFS Storage ZS3-4 controllers, each with
4 x 2.4 GHz 10-core Intel Xeon processors
1024 GB memory
16 x Sun Disk shelves, each with
24 x 300 GB 15K RPM SAS-2 drives

Benchmark Description

SPC Benchmark-2 (SPC-2) consists of three distinct workloads designed to demonstrate the performance of a storage subsystem during the execution of business-critical applications that require the large-scale, sequential movement of data. Those applications are characterized predominantly by large I/Os organized into one or more concurrent sequential patterns. A description of each of the three SPC-2 workloads is listed below, along with examples of applications characterized by each workload.

  • Large File Processing: Applications in a wide range of fields that require simple sequential processing of one or more large files, such as scientific computing and large-scale financial processing.
  • Large Database Queries: Applications that involve scans or joins of large relational tables, such as those performed for data mining or business intelligence.
  • Video on Demand: Applications that provide individualized video entertainment to a community of subscribers by drawing from a digital film library.

SPC-2 is built to:

  • Provide a level playing field for test sponsors.
  • Produce results that are powerful and yet simple to use.
  • Provide value for engineers as well as IT consumers and solution integrators.
  • Be easy to run, easy to audit/verify, and easy to use for reporting official results.

See Also

Disclosure Statement

SPC-2 and SPC-2 MBPS are registered trademarks of Storage Performance Council (SPC). Results as of September 10, 2013, for more information see www.storageperformance.org. Oracle ZFS Storage ZS3-4 B00067, Fujitsu ET 8700 S2 B00063, IBM DS8870 B00062, IBM S.V.C 6.4 B00061, NEC Storage M700 B00066, Hitachi VSP B00060, HP P9500 XP Disk Array B00056, IBM DS8800 B00051.

Oracle ZFS Storage ZS3-4 Produces Best 2-Node Performance on SPECsfs2008 NFSv3

The Oracle ZFS Storage ZS3-4 storage system delivered world record two-node performance on the SPECsfs2008 NFSv3 benchmark, beating results published on NetApp's dual-controller and four-node high-end FAS6240 storage systems.

  • The Oracle ZFS Storage ZS3-4 storage system delivered a world record two-node result of 450,702 SPECsfs2008_nfs.v3 Ops/sec with an Overall Response Time (ORT) of 0.70 msec on the SPECsfs2008 NFSv3 benchmark.

  • The Oracle ZFS Storage ZS3-4 storage system delivered 2.4x higher throughput than the dual-controller NetApp FAS6240 and 4.5x higher throughput than the dual-controller NetApp FAS3270 on the SPECsfs2008_nfs.v3 benchmark at less than half the list price of either result.

  • The four-node NetApp FAS6240 delivered 42 percent lower throughput than the Oracle ZFS Storage ZS3-4 on the SPECsfs2008 NFSv3 benchmark.

  • The Oracle ZFS Storage ZS3-4 storage system has a 54 percent better Overall Response Time than the 4-node NetApp FAS6240 on the SPECsfs2008 NFSv3 benchmark.

Performance Landscape

Two-node results for SPECsfs2008_nfs.v3 are presented below (in decreasing SPECsfs2008_nfs.v3 Ops/sec order), along with other selected results.

Sponsor  System  Nodes  Disks  Throughput (Ops/sec)  Overall Response Time (msec)
Oracle ZS3-4 2 464 450,702 0.70
IBM SONAS 1.2 2 1975 403,326 3.23
NetApp FAS6240 4 288 260,388 1.53
NetApp FAS6240 2 288 190,675 1.17
EMC VG8 312 135,521 1.92
Oracle 7320 2 136 134,140 1.51
EMC NS-G8 100 110,621 2.32
NetApp FAS3270 2 360 101,183 1.66

Throughput (SPECsfs2008_nfs.v3 Ops/sec) = the Performance Metric
Overall Response Time = the corresponding Response Time Metric
Nodes = nodes and controllers (the terms are used interchangeably here)

Complete SPECsfs2008 benchmark results may be found at http://www.spec.org/sfs2008/results/sfs2008.html.

Configuration Summary

Storage Configuration:

Oracle ZFS Storage ZS3-4 storage system in clustered configuration
2 x Oracle ZFS Storage ZS3-4 controllers, each with
8 x 2.4 GHz Intel Xeon E7-4870 processors
2 TB memory
2 x 10GbE NICs
20 x Sun Disk shelves
18 x shelves with 24 x 300 GB 15K RPM SAS-2 drives
2 x shelves with 20 x 300 GB 15K RPM SAS-2 drives and 8 x 73 GB SAS-2 flash-enabled write-cache

Benchmark Description

SPECsfs2008 is the latest version of the Standard Performance Evaluation Corporation (SPEC) benchmark suite measuring file server throughput and response time, providing a standardized method for comparing performance across different vendor platforms. SPECsfs2008 results summarize the server's capabilities with respect to the number of operations that can be handled per second, as well as the overall latency of the operations. The suite is a follow-on to the SFS97_R1 benchmark, adding a CIFS workload, an updated NFSv3 workload, support for additional client platforms, and a new test harness and reporting/submission framework.

See Also

Disclosure Statement

SPEC and SPECsfs are registered trademarks of Standard Performance Evaluation Corporation (SPEC). Results as of September 10, 2013, for more information see www.spec.org. Oracle ZFS Storage ZS3-4 Appliance 450,702 SPECsfs2008_nfs.v3 Ops/sec, 0.70 msec ORT, NetApp Data ONTAP 8.1 Cluster-Mode (4-node FAS6240) 260,388 SPECsfs2008_nfs.v3 Ops/Sec, 1.53 msec ORT, NetApp FAS6240 190,675 SPECsfs2008_nfs.v3 Ops/Sec, 1.17 msec ORT. NetApp FAS3270 101,183 SPECsfs2008_nfs.v3 Ops/Sec, 1.66 msec ORT.

Nodes refer to the item in the SPECsfs2008 disclosed Configuration Bill of Materials that have the Processing Elements that perform the NFS Processing Function. These are the first item listed in each of disclosed Configuration Bill of Materials except for EMC where it is both the first and third items listed, and HP, where it is the second item listed as Blade Servers. The number of nodes is from the QTY disclosed in the Configuration Bill of Materials as described above. Configuration Bill of Materials list price for Oracle result of US$ 423,644. Configuration Bill of Materials list price for NetApp FAS3270 result of US$ 1,215,290. Configuration Bill of Materials list price for NetApp FAS6240 result of US$ 1,028,118. Oracle pricing from https://shop.oracle.com/pls/ostore/f?p=dstore:home:0, traverse to "Storage and Tape" and then to "NAS Storage". NetApp's pricing from http://www.netapp.com/us/media/na-list-usd-netapp-custom-state-new-discounts.html.

Oracle ZFS Storage ZS3-2 Beats Comparable NetApp on SPECsfs2008 NFSv3

Oracle ZFS Storage ZS3-2 storage system delivered outstanding performance on the SPECsfs2008 NFSv3 benchmark, beating results published on NetApp's fastest midrange platform, the NetApp FAS3270, the NetApp FAS6240 and the EMC Gateway NS-G8 Server Failover Cluster.

  • The Oracle ZFS Storage ZS3-2 storage system delivered 210,535 SPECsfs2008_nfs.v3 Ops/sec with an Overall Response Time (ORT) of 1.12 msec on the SPECsfs2008 NFSv3 benchmark.

  • The Oracle ZFS Storage ZS3-2 storage system delivered 10% higher throughput than the NetApp FAS6240 on the SPECsfs2008 NFSv3 benchmark.

  • The NetApp FAS3270 delivered 52% lower throughput than the Oracle ZFS Storage ZS3-2 on the SPECsfs2008 NFSv3 benchmark.

  • The Oracle ZFS Storage ZS3-2 storage system has 5% better Overall Response Time than the NetApp FAS6240 on the SPECsfs2008 NFSv3 benchmark.

  • The Oracle ZFS Storage ZS3-2 storage system has 33% better Overall Response Time than the NetApp FAS3270 on the SPECsfs2008 NFSv3 benchmark.

Performance Landscape

Results for SPECsfs2008 NFSv3 (in decreasing SPECsfs2008_nfs.v3 Ops/sec order) for competitive systems.

Sponsor  System  Throughput (Ops/sec)  Overall Response Time (msec)
Oracle ZS3-2 210,535 1.12
NetApp FAS6240 190,675 1.17
EMC VG8 135,521 1.92
EMC NS-G8 110,621 2.32
NetApp FAS3270 101,183 1.66
NetApp FAS3250 100,922 1.76

Throughput SPECsfs2008_nfs.v3 Ops/sec = the Performance Metric
Overall Response Time = the corresponding Response Time Metric

Complete SPECsfs2008 benchmark results may be found at http://www.spec.org/sfs2008/results/sfs2008.html.

Configuration Summary

Storage Configuration:

Oracle ZFS Storage ZS3-2 storage system in clustered configuration
2 x Oracle ZFS Storage ZS3-2 controllers, each with
4 x 2.1 GHz Intel Xeon E5-2658 processors
512 GB memory
8 x Sun Disk shelves
3 x shelves with 24 x 900 GB 10K RPM SAS-2 drives
3 x shelves with 20 x 900 GB 10K RPM SAS-2 drives
2 x shelves with 20 x 900 GB 10K RPM SAS-2 drives and 4 x 73 GB SAS-2 flash-enabled write-cache

Benchmark Description

SPECsfs2008 is the latest version of the Standard Performance Evaluation Corporation (SPEC) benchmark suite measuring file server throughput and response time, providing a standardized method for comparing performance across different vendor platforms. SPECsfs2008 results summarize the server's capabilities with respect to the number of operations that can be handled per second, as well as the overall latency of the operations. The suite is a follow-on to the SFS97_R1 benchmark, adding a CIFS workload, an updated NFSv3 workload, support for additional client platforms, and a new test harness and reporting/submission framework.


See Also

Disclosure Statement

SPEC and SPECsfs are registered trademarks of Standard Performance Evaluation Corporation (SPEC). Results as of September 10, 2013, for more information see www.spec.org. Oracle ZFS Storage ZS3-2 Appliance 210,535 SPECsfs2008_nfs.v3 Ops/sec, 1.12 msec ORT, NetApp FAS6240 190,675 SPECsfs2008_nfs.v3 Ops/Sec, 1.17 msec ORT, EMC Celerra VG8 Server Failover Cluster, 2 Data Movers (1 stdby) / Symmetrix VMAX 135,521 SPECsfs2008_nfs.v3 Ops/Sec, 1.92 msec ORT, EMC Celerra Gateway NS-G8 Server Failover Cluster, 3 Datamovers (1 stdby) / Symmetrix V-Max 110,621 SPECsfs2008_nfs.v3 Ops/Sec, 2.32 msec ORT. NetApp FAS3270 101,183 SPECsfs2008_nfs.v3 Ops/Sec, 1.66 msec ORT. NetApp FAS3250 100,922 SPECsfs2008_nfs.v3 Ops/Sec, 1.76 msec ORT.

About

BestPerf is the source of Oracle performance expertise. In this blog, Oracle's Strategic Applications Engineering group explores Oracle's performance results and shares best practices learned from working on Enterprise-wide Applications.
