Tuesday Mar 22, 2016

Siebel PSPP: SPARC T7-2 World Record Result, Beats IBM

Oracle set a new world record for the Siebel Platform Sizing and Performance Program (PSPP) benchmark using Oracle's SPARC T7-2 server for the application server with Oracle's Siebel CRM 8.1.1.4 Industry Applications and Oracle Database 12c running on Oracle Solaris.

  • The SPARC T7-2 server running the application tier achieved 55,000 users with sub-second response time and with throughput of 457,909 business transactions per hour on the Siebel PSPP benchmark.

  • The SPARC T7-2 server in the application tier delivered 3.3 times more users on a per chip basis compared to published IBM POWER8 based server results.

  • For the new Oracle results, eight cores of a SPARC T7-2 server were used for the database tier running at 32% utilization (as measured by mpstat). The IBM result used 6 cores at about 75% utilization for the database/gateway tier.

  • The SPARC T7-2 server in the application tier delivered nearly the same number of users on a per core basis compared to published IBM POWER8 based server results.

  • The SPARC T7-2 server in the application tier delivered nearly 2.8 times more users on a per chip basis compared to earlier published SPARC T5-2 server results.

  • The SPARC T7-2 server in the application tier delivered nearly 1.4 times more users on a per core basis compared to earlier published SPARC T5-2 server results. (The per-chip and per-core arithmetic is sketched after the Performance Landscape table below.)

  • The SPARC T7-2 server used Oracle Solaris Zones, which provide flexible, scalable, and manageable virtualization to scale the application within and across multiple nodes.

The Siebel 8.1.1.4 PSPP workload includes Siebel Call Center and Order Management System.

Performance Landscape

Application Server | TPH | Users | Users/Chip | Users/Core | Call Center Resp. Time | Order Mgmt Resp. Time
1 x SPARC T7-2 (2 x SPARC M7 @ 4.13 GHz) | 457,909 | 55,000 | 27,500 | 859 | 0.045 sec | 0.257 sec
3 x IBM S824 (each 2 x 8-active-core LPARs, POWER8 @ 4.1 GHz) | 418,976 | 50,000 | 8,333 | 1,041 | 0.031 sec | 0.175 sec
2 x SPARC T5-2 (each with 2 x SPARC T5 @ 3.6 GHz) | 333,339 | 40,000 | 10,000 | 625 | 0.110 sec | 0.608 sec

TPH – Business transactions throughput per hour
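
As a rough check on the per-chip and per-core claims above, here is a minimal Python sketch. The user counts come from the table; the per-chip core counts are assumptions inferred from the configuration descriptions (SPARC M7: 32 cores per chip, POWER8 with 8 active cores per chip, SPARC T5: 16 cores per chip), not separately published figures.

```python
# Users/chip and users/core derived from the table above. Core counts per
# chip are assumptions inferred from the configuration descriptions:
# SPARC M7 = 32 cores, POWER8 (active) = 8 cores, SPARC T5 = 16 cores.
results = {
    "1 x SPARC T7-2": {"users": 55000, "chips": 2, "cores": 2 * 32},
    "3 x IBM S824":   {"users": 50000, "chips": 6, "cores": 6 * 8},
    "2 x SPARC T5-2": {"users": 40000, "chips": 4, "cores": 4 * 16},
}
for name, r in results.items():
    print(f"{name}: {r['users'] / r['chips']:,.0f} users/chip, "
          f"{r['users'] / r['cores']:,.0f} users/core")

# Headline ratios: ~3.3x per chip vs. POWER8, ~2.8x per chip and ~1.4x
# per core vs. the SPARC T5-2 result.
print(round((55000 / 2) / (50000 / 6), 1))    # 3.3
print(round((55000 / 2) / (40000 / 4), 2))    # 2.75
print(round((55000 / 64) / (40000 / 64), 2))  # 1.38
```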

Configuration Summary

Application Server:

1 x SPARC T7-2 server with
2 x SPARC M7 processors, 4.13 GHz
1 TB memory
6 x 300 GB SAS internal disks
Oracle Solaris 11.3
Siebel CRM 8.1.1.4 SIA

Web/Database/Gateway Server:

1 x SPARC T7-2 server with
2 x SPARC M7 processors, 4.13 GHz (20 active cores: 8 cores for DB, 12 for Web/Gateway)
512 GB memory
6 x 300 GB SAS internal disks
2 x 1.6 TB NVMe SSD
Oracle Solaris 11.3
Siebel CRM 8.1.1.4 SIA
iPlanet Web Server 7
Oracle Database 12c (12.1.0.2)

Benchmark Description

Siebel PSPP benchmark includes Call Center and Order Management:

  • Siebel Financial Services Call Center – Provides the most complete solution for sales and service, allowing customer service and telesales representatives to provide superior customer support, improve customer loyalty, and increase revenues through cross-selling and up-selling.

    High-level description of the use cases tested: Incoming Call Creates Opportunity, Quote and Order, and Incoming Call Creates Service Request. Three complex business transactions are executed simultaneously for a specific number of concurrent users. The three scenarios were run in a 30%, 40%, and 30% ratio, respectively, together accounting for 70% of all transactions simulated in this benchmark. The think time between successive user operations averaged approximately 10, 13, and 35 seconds, respectively.

  • Siebel Order Management – Oracle's Siebel Order Management allows employees such as salespeople and call center agents to create and manage quotes and orders through their entire life cycle. Siebel Order Management can be tightly integrated with back-office applications allowing users to perform tasks such as checking credit, confirming availability, and monitoring the fulfillment process.

    High-level description of the use cases tested: Order & Order Items Creation and Order Updates. Two complex Order Management transactions were executed for a specific number of concurrent users, concurrently with the three Call Center scenarios above. The two scenarios were run in a 50%/50% ratio, together accounting for 30% of all transactions simulated in this benchmark. The think time between successive user operations averaged approximately 20 and 67 seconds, respectively. (The resulting overall transaction mix is sketched below.)
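
Since the mix ratios are spread across the two workload descriptions, a small Python restatement of the stated percentages makes the overall transaction mix explicit (the scenario labels here are shorthand, not official benchmark names):

```python
# Overall transaction mix implied by the stated ratios: Call Center
# scenarios are 70% of all transactions (split 30/40/30); Order Management
# is the remaining 30% (split 50/50).
mix = {
    "Call Center scenario 1 (30% of 70%)": 0.70 * 0.30,
    "Call Center scenario 2 (40% of 70%)": 0.70 * 0.40,
    "Call Center scenario 3 (30% of 70%)": 0.70 * 0.30,
    "Order & Order Items Creation (50% of 30%)": 0.30 * 0.50,
    "Order Updates (50% of 30%)": 0.30 * 0.50,
}
for scenario, share in mix.items():
    print(f"{scenario}: {share:.0%} of all transactions")
assert abs(sum(mix.values()) - 1.0) < 1e-9  # the shares cover the workload
```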

See Also

Disclosure Statement

Copyright 2016, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of March 22, 2016.

Monday Oct 26, 2015

SPECjEnterprise2010: SPARC T7-1 World Record with Single Application Server Using 1 to 4 Chips

Oracle's SPARC T7-1 servers have set a world record for the SPECjEnterprise2010 benchmark for solutions using a single application server with one to four chips. The result of 25,818.85 SPECjEnterprise2010 EjOPS used two SPARC T7-1 servers, one server for the application tier and the other server for the database tier.

  • The SPARC T7-1 servers obtained a result of 25,093.06 SPECjEnterprise2010 EjOPS using encrypted data. This secured result used Oracle Advanced Security Transparent Data Encryption (TDE) for the application database tablespaces with the AES-256-CFB cipher. The network connection between the application server and the database server was also encrypted using the secure JDBC.

  • The SPARC T7-1 server solution delivered 34% more performance compared to the two-chip IBM x3650 M5 server result of 19,282.14 SPECjEnterprise2010 EjOPS.

  • The SPARC T7-1 server solution delivered 14% more performance compared to the four-chip IBM Power System S824 server result of 22,543.34 SPECjEnterprise2010 EjOPS.

  • The SPARC T7-1 server based results demonstrated 20% more performance compared to the Oracle Server X5-2 system result of 21,504.30 SPECjEnterprise2010 EjOPS. Oracle holds the top x86 two-chip application server SPECjEnterprise2010 result.

  • The application server used Oracle Fusion Middleware components including the Oracle WebLogic 12.1 application server and Java HotSpot(TM) 64-Bit Server VM on Solaris, version 1.8.0_60. The database server was configured with Oracle Database 12c Release 1.

  • For the secure result, the application data was encrypted in the Oracle database using the Oracle Advanced Security Transparent Data Encryption (TDE) feature. Hardware accelerated cryptography support in the SPARC M7 processor for the AES-256-CFB cipher was used to provide data security.

  • The benchmark performance of the secure SPARC T7-1 server configuration with encryption was less than 3% lower than the peak result (see the sketch after this list).

  • This result demonstrated less than 1 second average response times for all SPECjEnterprise2010 transactions and represents Java EE 5.0 transactions generated by over 210,000 users.
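
The "less than 3%" claim follows directly from the two published scores; a one-line check in Python:

```python
# Secure vs. peak overhead from the two published EjOPS scores.
peak, secure = 25818.85, 25093.06
print(f"encryption overhead: {(peak - secure) / peak:.1%}")  # 2.8%
```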

Performance Landscape

Select single application server results. Complete benchmark results are at the SPEC website, SPECjEnterprise2010 Results.

SPECjEnterprise2010 Performance Chart (as of 10/25/2015)
Submitter | EjOPS* | Java EE Server | DB Server | Notes
Oracle | 25,818.85 | 1 x SPARC T7-1 (1 x 4.13 GHz SPARC M7), Oracle WebLogic 12c (12.1.3) | 1 x SPARC T7-1 (1 x 4.13 GHz SPARC M7), Oracle Database 12c (12.1.0.2) | -
Oracle | 25,093.06 | 1 x SPARC T7-1 (1 x 4.13 GHz SPARC M7), Oracle WebLogic 12c (12.1.3), Network Data Encryption for JDBC | 1 x SPARC T7-1 (1 x 4.13 GHz SPARC M7), Oracle Database 12c (12.1.0.2), Transparent Data Encryption | Secure
IBM | 22,543.34 | 1 x IBM Power S824 (4 x 3.5 GHz POWER8), WebSphere Application Server V8.5 | 1 x IBM Power S824 (4 x 3.5 GHz POWER8), IBM DB2 10.5 FP3 | -
Oracle | 21,504.30 | 1 x Oracle Server X5-2 (2 x 2.3 GHz Intel Xeon E5-2699 v3), Oracle WebLogic 12c (12.1.3) | 1 x Oracle Server X5-2 (2 x 2.3 GHz Intel Xeon E5-2699 v3), Oracle Database 12c (12.1.0.2) | COD
IBM | 19,282.14 | 1 x System x3650 M5 (2 x 2.6 GHz Intel Xeon E5-2697 v3), WebSphere Application Server V8.5 | 1 x System x3850 X6 (4 x 2.8 GHz Intel Xeon E7-4890 v2), IBM DB2 10.5 FP5 | -

* SPECjEnterprise2010 EjOPS (bigger is better)

The Cluster on Die (COD) mode is a BIOS setting that effectively splits the chip in half, making the operating system think it has twice as many chips as it does (in this case, four 9-core chips). Intel has stated that COD is appropriate only for highly NUMA-optimized workloads. Dell has shown that bandwidth to the other half of a chip split by COD is 3.7x slower.

Configuration Summary

Application Server:

1 x SPARC T7-1 server, with
1 x SPARC M7 processor (4.13 GHz)
256 GB memory (16 x 16 GB)
2 x 600 GB SAS HDD
2 x 400 GB SAS SSD
3 x Sun Dual Port 10 GbE PCIe 2.0 Networking card with Intel 82599 10 GbE Controller
Oracle Solaris 11.3 (11.3.0.0.30)
Oracle WebLogic Server 12c (12.1.3)
Java HotSpot(TM) 64-Bit Server VM on Solaris, version 1.8.0_60

Database Server:

1 x SPARC T7-1 server, with
1 x SPARC M7 processor (4.13 GHz)
512 GB memory (16 x 32 GB)
2 x 600 GB SAS HDD
1 x Sun Dual Port 10 GbE PCIe 2.0 Networking card with Intel 82599 10 GbE Controller
1 x Sun Storage 16 Gb Fibre Channel Universal HBA
Oracle Solaris 11.3 (11.3.0.0.30)
Oracle Database 12c (12.1.0.2)

Storage Servers:

1 x Oracle Server X5-2L (8-Drive), with
2 x Intel Xeon Processor E5-2699 v3 (2.3 GHz)
32 GB memory
1 x Sun Storage 16 Gb Fibre Channel Universal HBA
4 x 1.6 TB NVMe SSD
2 x 600 GB SAS HDD
Oracle Solaris 11.3 (11.3.0.0.30)
1 x Oracle Server X5-2L (24-Drive), with
2 x Intel Xeon Processor E5-2699 v3 (2.3 GHz)
32 GB memory
1 x Sun Storage 16 Gb Fibre Channel Universal HBA
14 x 600 GB SAS HDD
Oracle Solaris 11.3 (11.3.0.0.30)

1 x Brocade 6510 16 Gb FC switch

Benchmark Description

SPECjEnterprise2010 is the third generation of the SPEC organization's J2EE end-to-end industry standard benchmark application. The SPECjEnterprise2010 benchmark has been designed and developed to cover the Java EE 5 specification's significantly expanded and simplified programming model, highlighting the major features used by developers in the industry today. This provides a real world workload driving the Application Server's implementation of the Java EE specification to its maximum potential and allowing maximum stressing of the underlying hardware and software systems:

  • The web zone, servlets, and web services
  • The EJB zone
  • JPA 1.0 Persistence Model
  • JMS and Message Driven Beans
  • Transaction management
  • Database connectivity
Moreover, SPECjEnterprise2010 also heavily exercises all parts of the underlying infrastructure that make up the application environment, including hardware, JVM software, database software, JDBC drivers, and the system network.

The primary metric of the SPECjEnterprise2010 benchmark is jEnterprise Operations Per Second (SPECjEnterprise2010 EjOPS). The primary metric for the SPECjEnterprise2010 benchmark is calculated by adding the metrics of the Dealership Management Application in the Dealer Domain and the Manufacturing Application in the Manufacturing Domain. There is NO price/performance metric in this benchmark.

Key Points and Best Practices

  • Four Oracle WebLogic server instances on the SPARC T7-1 server were hosted in 4 separate Oracle Solaris Zones.
  • The Oracle WebLogic application servers were executed in the FX scheduling class to improve performance by reducing the frequency of context switches.
  • The Oracle log writer process was run in the RT scheduling class.

See Also

Disclosure Statement

SPEC and the benchmark name SPECjEnterprise are registered trademarks of the Standard Performance Evaluation Corporation. Results from www.spec.org as of 10/25/2015. SPARC T7-1, 25,818.85 SPECjEnterprise2010 EjOPS (unsecure); SPARC T7-1, 25,093.06 SPECjEnterprise2010 EjOPS (secure); Oracle Server X5-2, 21,504.30 SPECjEnterprise2010 EjOPS (unsecure); IBM Power S824, 22,543.34 SPECjEnterprise2010 EjOPS (unsecure); IBM x3650 M5, 19,282.14 SPECjEnterprise2010 EjOPS (unsecure);

Oracle Internet Directory: SPARC T7-2 World Record

Oracle's SPARC T7-2 server running Oracle Internet Directory (OID, Oracle's LDAP Directory Server) on Oracle Solaris 11 on a virtualized processor configuration achieved a record result on the Oracle Internet Directory benchmark.

  • The SPARC T7-2 server, virtualized to use a single processor, achieved world record performance running the Oracle Internet Directory benchmark with 50 million users.

  • The SPARC T7-2 server and Oracle Internet Directory using Oracle Database 12c running on Oracle Solaris 11 achieved a record result of 1.18M LDAP searches/sec with an average latency of 0.85 msec with 1000 clients.

  • The SPARC T7-2 server demonstrated 25% better throughput and 23% better latency for LDAP searches/sec than a similarly configured SPARC T5 server benchmark environment.

  • Oracle Internet Directory achieved near-linear scalability on the virtualized single-processor domain on the SPARC T7-2 server, from 79K LDAP searches/sec with 2 cores to 1.18M LDAP searches/sec with 32 cores (quantified in the sketch after the scaling table below).

  • Oracle Internet Directory and the virtualized single processor domain on the SPARC T7-2 server achieved up to 22,408 LDAP modify/sec with an average latency of 2.23 msec for 50 clients.

Performance Landscape

A virtualized single SPARC M7 processor in a SPARC T7-2 server was used for the test results presented below. The SPARC T7-2 server and SPARC T5-2 server results were run as part of this benchmark effort. The remaining results were part of a previous benchmark effort.

Oracle Internet Directory Tests
System | Chips/Cores | Search ops/sec | Search lat (msec) | Modify ops/sec | Modify lat (msec) | Add ops/sec | Add lat (msec)
SPARC T7-2 | 1/32 | 1,177,947 | 0.85 | 22,400 | 2.2 | 1,436 | 11.1
SPARC T5-2 | 2/32 | 944,624 | 1.05 | 16,700 | 2.9 | 1,000 | 15.95
SPARC T4-4 | 4/32 | 682,000 | 1.46 | 12,000 | 4.0 | 835 | 19.0

Scaling runs were also made on the virtualized single processor domain on the SPARC T7-2 server.

Scaling of Search Tests – SPARC T7-2, One Processor
Cores | Clients | ops/sec | Latency (msec)
32 | 1000 | 1,177,947 | 0.85
24 | 1000 | 863,343 | 1.15
16 | 500 | 615,563 | 0.81
8 | 500 | 280,029 | 1.78
4 | 100 | 156,114 | 0.64
2 | 100 | 79,300 | 1.26
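
To quantify "near linear", the sketch below compares each scaling run against perfect linear scaling from the 2-core result, using only the numbers in the table above:

```python
# Scaling efficiency of the LDAP search test relative to perfect linear
# scaling from the 2-core result (79,300 ops/sec).
runs = [(2, 79300), (4, 156114), (8, 280029),
        (16, 615563), (24, 863343), (32, 1177947)]
base_cores, base_ops = runs[0]
for cores, ops in runs:
    linear = base_ops * cores / base_cores
    print(f"{cores:2d} cores: {ops:>9,} ops/sec ({ops / linear:.0%} of linear)")
# 32 cores comes in at ~93% of perfect linear scaling from 2 cores.
```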

Configuration Summary

System Under Test:

SPARC T7-2
2 x SPARC M7 processors, 4.13 GHz
512 GB memory
6 x 600 GB internal disks
1 x Sun Storage ZS3-2 (used for database and log files)
Flash storage (used for redo logs)
Oracle Solaris 11.3
Oracle Internet Directory 11g Release 1 PS7 (11.1.1.7.0)
Oracle Database 12c Enterprise Edition 12.1.0.2 (64-bit)

Benchmark Description

Oracle Internet Directory (OID) is Oracle's LDAPv3 Directory Server. The throughput for five key operations is measured — Search, Compare, Modify, Mix and Add.

LDAP Search Operations Test

This test scenario involved concurrent clients binding once to OID and then performing repeated LDAP Search operations. The salient characteristics of this test scenario are as follows:

  • SLAMD SearchRate job was used.
  • BaseDN of the search is the root of the DIT, the scope is SUBTREE, the search filter is of the form UID=<value>, and DN and UID are the requested attributes.
  • Each LDAP search operation matches a single entry.
  • The total number of concurrent clients was 1000, distributed across two client nodes.
  • Each client binds to OID once and performs repeated LDAP Search operations, with each search looking up a unique entry such that no client looks up the same entry twice, no two clients look up the same entry, and all entries are searched in random order.
  • In one run of the test, random entries from the 50 million entries are looked up in as many LDAP Search operations.
  • Test job was run for 60 minutes.

LDAP Compare Operations Test

This test scenario involved concurrent clients binding once to OID and then performing repeated LDAP Compare operations on the userpassword attribute. The salient characteristics of this test scenario are as follows:

  • SLAMD CompareRate job was used.
  • Each LDAP compare operation matches the user password of a user.
  • The total number of concurrent clients was 1000, distributed across two client nodes.
  • Each client binds to OID once and performs repeated LDAP compare operations.
  • In one run of the test, random entries from the 50 million entries are compared in as many LDAP compare operations.
  • Test job was run for 60 minutes.

LDAP Modify Operations Test

This test scenario consisted of concurrent clients binding once to OID and then performing repeated LDAP Modify operations. The salient characteristics of this test scenario are as follows:

  • The SLAMD LDAP ModRate job was used.
  • A total of 50 concurrent LDAP clients were used.
  • Each client updates a unique entry each time, and a total of 50 million entries are updated.
  • Test job was run for 60 minutes.
  • Value length was set to 11.
  • The attribute being modified is not indexed.

LDAP Mixed Load Test

The test scenario involved both the LDAP search and LDAP modify clients enumerated above.

  • The mix comprised 60% LDAP search clients, 30% LDAP bind clients, and 10% LDAP modify clients.
  • A total of 1000 concurrent LDAP clients were used and were distributed on 2 client nodes.
  • Test job was run for 60 minutes.

LDAP Add Load Test

The test scenario involved concurrent clients adding new entries as follows.

  • The SLAMD standard AddRate job was used.
  • A total of 500,000 entries were added.
  • A total of 16 concurrent LDAP clients were used.
  • SLAMD adds inetOrgPerson objectclass entries with 21 attributes (including operational attributes).

See Also

Disclosure Statement

Copyright 2015, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 25 October 2015.

Oracle E-Business Payroll Batch Extra-Large: SPARC T7-1 World Record

Oracle's SPARC T7-1 server set a world record running the Oracle E-Business Suite 12.1.3 Standard Extra-Large (250,000 Employees) Payroll (Batch) workload.

  • The SPARC T7-1 server produced a world record result of 1,527,494 employee records processed per hour (9.82 min elapsed time) on the Oracle E-Business Suite R12 (12.1.3) Extra-Large Payroll (Batch) benchmark.

  • The SPARC T7-1 server, equipped with one 4.13 GHz SPARC M7 processor, demonstrated 36% better hourly employee throughput than a two-chip Cisco UCS B200 M4 (Intel Xeon E5-2697 v3).

  • The SPARC T7-1 server, equipped with one 4.13 GHz SPARC M7 processor, demonstrated 40% better hourly employee throughput than a two-chip IBM S824 (POWER8, using 12 cores total).

Performance Landscape

This is the world record result for the Payroll Extra-Large model using the Oracle E-Business 12.1.3 workload.

Batch Workload: Payroll Extra-Large Model
System | Processor | Employees/Hr | Elapsed Time
SPARC T7-1 | 1 x SPARC M7 (4.13 GHz) | 1,527,494 | 9.82 minutes
Cisco UCS B200 M4 | 2 x Intel Xeon Processor E5-2697 v3 | 1,125,281 | 13.33 minutes
IBM S824 | 2 x POWER8 (3.52 GHz) | 1,090,909 | 13.75 minutes
Cisco UCS B200 M3 | 2 x Intel Xeon Processor E5-2697 v2 | 1,017,639 | 14.74 minutes
Cisco UCS B200 M3 | 2 x Intel Xeon Processor E5-2690 | 839,865 | 17.86 minutes
Sun Server X3-2L | 2 x Intel Xeon Processor E5-2690 | 789,473 | 19.00 minutes
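
Hourly employee throughput in this benchmark is the fixed 250,000-employee batch normalized to an hour, so each row's throughput follows from its elapsed time; a minimal sketch using the top three rows:

```python
# Employees/hr = batch size * 60 / elapsed minutes.
EMPLOYEES = 250000  # X-Large payroll batch size
for system, minutes in [("SPARC T7-1", 9.82),
                        ("Cisco UCS B200 M4", 13.33),
                        ("IBM S824", 13.75)]:
    print(f"{system}: {EMPLOYEES * 60 / minutes:,.0f} employees/hr")
# SPARC T7-1 prints 1,527,495 -- matching the published 1,527,494 up to rounding.
```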

Configuration Summary

Hardware Configuration:

SPARC T7-1 server
1 x SPARC M7 processor (4.13 GHz)
256 GB memory (16 x 16 GB)
Oracle ZFS Storage ZS3-2 appliance (DB Data storage) with
40 x 900 GB 10K RPM SAS-2 HDD,
8 x Write Flash Accelerator SSD and
2 x Read Flash Accelerator SSD 1.6 TB SAS
Oracle Flash Accelerator F160 PCIe Card (1.6 TB NVMe for DB Log storage)

Software Configuration:

Oracle Solaris 11.3
Oracle E-Business Suite R12 (12.1.3)
Oracle Database 11g (11.2.0.3.0)

Benchmark Description

The Oracle E-Business Suite Standard R12 Benchmark combines online transaction execution by simulated users with concurrent batch processing to model a typical scenario for a global enterprise. This benchmark ran one Batch component, Payroll, in the Extra-Large size.

Results can be published in four sizes and use one or more online/batch modules:

  • X-large: Maximum online users running all business flows between 10,000 and 20,000; 750,000 order to cash lines per hour and 250,000 payroll checks per hour.
    • Order to Cash Online — 2400 users
      • The percentage across the 5 transactions in the Order Management module is:
        • Insert Manual Invoice — 16.66%
        • Insert Order — 32.33%
        • Order Pick Release — 16.66%
        • Ship Confirm — 16.66%
        • Order Summary Report — 16.66%
    • HR Self-Service — 4000 users
    • Customer Support Flow — 8000 users
    • Procure to Pay — 2000 users
  • Large: 10,000 online users; 100,000 order to cash lines per hour and 100,000 payroll checks per hour.
  • Medium: up to 3000 online users; 50,000 order to cash lines per hour and 10,000 payroll checks per hour.
  • Small: up to 1000 online users; 10,000 order to cash lines per hour and 5,000 payroll checks per hour.

Key Points and Best Practices

  • All system optimizations are in the published report which is referenced in the See Also section below.

See Also

Disclosure Statement

Oracle E-Business X-Large Payroll Batch workload, SPARC T7-1, 4.13 GHz, 1 chip, 32 cores, 256 threads, 256 GB memory, elapsed time 9.82 minutes, 1,527,494 hourly employee throughput, Oracle Solaris 11.3, Oracle E-Business Suite 12.1.3, Oracle Database 11g Release 2, Results as of 10/25/2015.

Oracle E-Business Suite Applications R12.1.3 (OLTP X-Large): SPARC M7-8 World Record

Oracle's SPARC M7-8 server, using a four-chip Oracle VM Server for SPARC (LDom) virtualized server, produced a world record 20,000 users running the Oracle E-Business OLTP X-Large benchmark. The benchmark runs five Oracle E-Business online workloads concurrently: Customer Service, iProcurement, Order Management, Human Resources Self-Service, and Financials.

  • The virtualized four-chip LDom on the SPARC M7-8 handled more users than the previous best result, which used eight processors of Oracle's SPARC M6-32 server.

  • The SPARC M7-8 server using Oracle VM Server for SPARC provides enterprise applications with high availability, where each application runs in its own environment, insulated from and independent of the others.

Performance Landscape

Oracle E-Business (3-tier) OLTP X-Large Benchmark
System | Chips | Total Online Users | Weighted Average Response Time (sec) | 90th Percentile Response Time (sec)
SPARC M7-8 | 4 | 20,000 | 0.70 | 1.13
SPARC M6-32 | 8 | 18,500 | 0.61 | 1.16

Breakdown of the total number of users by component:

Users per Component
Component | SPARC M7-8 | SPARC M6-32
HR Self-Service | 5,000 users | 4,000 users
Order-to-Cash | 2,500 users | 2,300 users
iProcurement | 2,700 users | 2,400 users
Customer Service | 7,000 users | 7,000 users
Financial | 2,800 users | 2,800 users
Total Online Users | 20,000 users | 18,500 users
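
As a consistency check, the per-component user counts sum to the published totals (a trivial sketch, values taken from the table above):

```python
# Per-component users (SPARC M7-8, SPARC M6-32) from the table above.
components = {
    "HR Self-Service":  (5000, 4000),
    "Order-to-Cash":    (2500, 2300),
    "iProcurement":     (2700, 2400),
    "Customer Service": (7000, 7000),
    "Financial":        (2800, 2800),
}
print(sum(m7 for m7, _ in components.values()))  # 20000
print(sum(m6 for _, m6 in components.values()))  # 18500
```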

Configuration Summary

System Under Test:

SPARC M7-8 server
8 x SPARC M7 processors (4.13 GHz)
4 TB memory
2 x 600 GB SAS-2 HDD
using a Logical Domain with
4 x SPARC M7 processors (4.13 GHz)
2 TB memory
2 x Sun Storage Dual 16Gb Fibre Channel PCIe Universal HBA
2 x Sun Dual Port 10GBase-T Adapter
Oracle Solaris 11.3
Oracle E-Business Suite 12.1.3
Oracle Database 11g Release 2

Storage Configuration:

4 x Oracle ZFS Storage ZS3-2 appliances each with
2 x Read Flash Accelerator SSD
1 x Storage Drive Enclosure DE2-24P containing:
20 x 900 GB 10K RPM SAS-2 HDD
4 x Write Flash Accelerator SSD
1 x Sun Storage Dual 8Gb FC PCIe HBA
Used for Database files, Zones OS, EBS Mid-Tier Apps software stack
and db-tier Oracle Server
2 x Sun Server X4-2L server with
2 x Intel Xeon Processor E5-2650 v2
128 GB memory
1 x Sun Storage 6Gb SAS PCIe RAID HBA
4 x 400 GB SSD
14 x 600 GB HDD
Used for Redo log files, db backup storage.

Benchmark Description

The Oracle E-Business OLTP X-Large benchmark simulates thousands of online users executing transactions typical of an internal Enterprise Resource Planning deployment, simultaneously executing five application modules: Customer Service, Human Resources Self-Service, iProcurement, Order Management, and Financials.

Each database tier uses a database instance of about 600 GB in size, supporting thousands of application users, accessing hundreds of objects (tables, indexes, SQL stored procedures, etc.).

Key Points and Best Practices

This test demonstrates virtualization technologies concurrently running various Oracle multi-tier business-critical applications and databases on four SPARC M7 processors contained in a single SPARC M7-8 server, supporting thousands of users executing a high volume of complex transactions with a constrained (<1 sec) weighted average response time.

The Oracle E-Business LDom is further configured using Oracle Solaris Zones.

This result of 20,000 users was achieved by load balancing the Oracle E-Business Suite Applications 12.1.3 five online workloads across two Oracle Solaris processor sets and redirecting all network interrupts to a dedicated third processor set.

Each applications processor set (set-1 and set-2) concurrently ran two Oracle E-Business Suite application servers and two database server instances, each within its own Oracle Solaris Zone (4 x Zones per set).

Each application server network interface (to a client) was configured to map to the locality group associated with the CPUs processing the related workload, guaranteeing memory locality of network structures and application server hardware resources.

All external storage was connected with at least two paths to the host's multipath-capable Fibre Channel controller ports, and the Oracle Solaris I/O multipathing feature was enabled.

See Also

Disclosure Statement

Oracle E-Business Suite R12 extra-large multiple-online module benchmark, SPARC M7-8, SPARC M7, 4.13 GHz, 4 chips, 128 cores, 1024 threads, 2 TB memory, 20,000 online users, average response time 0.70 sec, 90th percentile response time 1.13 sec, Oracle Solaris 11.3, Oracle Solaris Zones, Oracle VM Server for SPARC, Oracle E-Business Suite 12.1.3, Oracle Database 11g Release 2, Results as of 10/25/2015.

Oracle E-Business Order-To-Cash Batch Large: SPARC T7-1 World Record

Oracle's SPARC T7-1 server set a world record running the Oracle E-Business Suite 12.1.3 Standard Large (100,000 Order/Inventory Lines) Order-To-Cash (Batch) workload.

  • The SPARC T7-1 server produced a world record order line throughput of 273,973 lines per hour (21.90 min elapsed time) on the Oracle E-Business Suite R12 (12.1.3) Large Order-To-Cash (Batch) benchmark, using a SPARC T7-1 server for the database and application tiers running Oracle Database 11g on Oracle Solaris 11.

  • The SPARC T7-1 server demonstrated 12% better hourly order line throughput compared to a two-chip Cisco UCS B200 M4 (Intel Xeon Processor E5-2697 v3).

Performance Landscape

Results for the Oracle E-Business 12.1.3 Order-To-Cash Batch Large model workload.

Batch Workload: Order-To-Cash Large Model
System | CPU | Order Lines/Hr | Elapsed Time (min)
SPARC T7-1 | 1 x SPARC M7 processor | 273,973 | 21.90
Cisco UCS B200 M4 | 2 x Intel Xeon Processor E5-2697 v3 | 243,803 | 24.61
Cisco UCS B200 M3 | 2 x Intel Xeon Processor E5-2690 | 232,739 | 25.78

Configuration Summary

Hardware Configuration:

SPARC T7-1 server with
1 x SPARC M7 processor (4.13 GHz)
256 GB memory (16 x 16 GB)
Oracle ZFS Storage ZS3-2 appliance (DB Data storage) with
40 x 900 GB 10K RPM SAS-2 HDD,
8 x Write Flash Accelerator SSD and
2 x Read Flash Accelerator SSD 1.6TB SAS
Oracle Flash Accelerator F160 PCIe Card (1.6 TB NVMe for DB Log storage)

Software Configuration:

Oracle Solaris 11.3
Oracle E-Business Suite R12 (12.1.3)
Oracle Database 11g (11.2.0.3.0)

Benchmark Description

The Oracle E-Business Suite Standard R12 Benchmark combines online transaction execution by simulated users with concurrent batch processing to model a typical scenario for a global enterprise. This benchmark ran one Batch component, Order-To-Cash, in the Large size.

Results can be published in four sizes and use one or more online/batch modules:

  • X-large: Maximum online users running all business flows between 10,000 and 20,000; 750,000 order to cash lines per hour and 250,000 payroll checks per hour.
    • Order to Cash Online — 2400 users
      • The percentage across the 5 transactions in the Order Management module is:
        • Insert Manual Invoice — 16.66%
        • Insert Order — 32.33%
        • Order Pick Release — 16.66%
        • Ship Confirm — 16.66%
        • Order Summary Report — 16.66%
    • HR Self-Service — 4000 users
    • Customer Support Flow — 8000 users
    • Procure to Pay — 2000 users
  • Large: 10,000 online users; 100,000 order to cash lines per hour and 100,000 payroll checks per hour.
  • Medium: up to 3000 online users; 50,000 order to cash lines per hour and 10,000 payroll checks per hour.
  • Small: up to 1000 online users; 10,000 order to cash lines per hour and 5,000 payroll checks per hour.

Key Points and Best Practices

  • All system optimizations are in the published report, which is linked in the See Also section below.

See Also

Disclosure Statement

Oracle E-Business Large Order-To-Cash Batch workload, SPARC T7-1, 4.13 GHz, 1 chip, 32 cores, 256 threads, 256 GB memory, elapsed time 21.90 minutes, 273,973 hourly order line throughput, Oracle Solaris 11.3, Oracle E-Business Suite 12.1.3, Oracle Database 11g Release 2, Results as of 10/25/2015.

SPARC T7-1 Delivers 1-Chip World Records for SPEC CPU2006 Rate Benchmarks

This page was updated on November 19, 2015. The SPARC T7-1 server results have been published at www.spec.org.

Oracle's SPARC T7-1 server delivered world record SPEC CPU2006 rate benchmark results for systems with one chip. This was accomplished with Oracle Solaris 11.3 and Oracle Solaris Studio 12.4 software.

  • The SPARC T7-1 server achieved world record scores of 1200 SPECint_rate2006, 1120 SPECint_rate_base2006, 832 SPECfp_rate2006, and 801 SPECfp_rate_base2006.

  • The SPARC T7-1 server beat the 1 chip Fujitsu CELSIUS C740 with an Intel Xeon Processor E5-2699 v3 by 1.7x on the SPECint_rate2006 benchmark. The SPARC T7-1 server beat the 1 chip NEC Express5800/R120f-1M with an Intel Xeon Processor E5-2699 v3 by 1.8x on the SPECfp_rate2006 benchmark.

  • The SPARC T7-1 server beat the 1 chip IBM Power S812LC server with a POWER8 processor by 1.9 times on the SPECint_rate2006 benchmark and by 1.8 times on the SPECfp_rate2006 benchmark.

  • The SPARC T7-1 server beat the 1 chip Fujitsu SPARC M10-4S with a SPARC64 X+ processor by 2.2x on the SPECint_rate2006 benchmark and by 1.6x on the SPECfp_rate2006 benchmark.

  • The SPARC T7-1 server improved upon the previous generation SPARC T5 processor based platform by 2.5x on the SPECint_rate2006 benchmark and by 2.3x on the SPECfp_rate2006 benchmark.
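
Most of the multiples quoted above reduce to simple quotients of the peak scores in the Performance Landscape tables below; a quick Python sketch:

```python
# Peak one-chip rate scores copied from the tables below; the SPARC T7-1
# advantage is the ratio of its peak score (1200 int, 832 fp) to each rival.
specint = {"Fujitsu CELSIUS C740": 715, "IBM Power S812LC": 642,
           "Fujitsu SPARC M10-4S": 546, "SPARC T5-1B": 489}
specfp = {"NEC Express5800/R120f-1M": 474, "IBM Power S812LC": 468,
          "SPARC T5-1B": 369}
for rival, score in specint.items():
    print(f"SPECint_rate2006 vs {rival}: {1200 / score:.1f}x")
for rival, score in specfp.items():
    print(f"SPECfp_rate2006 vs {rival}: {832 / score:.1f}x")
```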

The SPEC CPU2006 benchmarks are derived from the compute-intensive portions of real applications, stressing chip, memory hierarchy, and compilers. The benchmarks are not intended to stress other computer components such as networking, the operating system, or the I/O system. Note that there are many other SPEC benchmarks, including benchmarks that specifically focus on Java computing, enterprise computing, and network file systems.

Performance Landscape

Complete benchmark results are at the SPEC website. The tables below provide the new Oracle results, as well as select results from other vendors.

Presented are single chip SPEC CPU2006 rate results. Only the best results published at www.spec.org per chip type are presented (best Intel, IBM, Fujitsu, Oracle chips).

SPEC CPU2006 Rate Results – One Chip
System | Chip | Peak | Base

SPECint_rate2006
SPARC T7-1 | SPARC M7 (4.13 GHz, 32 cores) | 1200 | 1120
Fujitsu CELSIUS C740 | Intel E5-2699 v3 (2.3 GHz, 18 cores) | 715 | 693
IBM Power S812LC | POWER8 (2.92 GHz, 10 cores) | 642 | 482
Fujitsu SPARC M10-4S | SPARC64 X+ (3.7 GHz, 16 cores) | 546 | 479
SPARC T5-1B | SPARC T5 (3.6 GHz, 16 cores) | 489 | 441
IBM Power 710 Express | POWER7 (3.55 GHz, 8 cores) | 289 | 255

SPECfp_rate2006
SPARC T7-1 | SPARC M7 (4.13 GHz, 32 cores) | 832 | 801
NEC Express5800/R120f-1M | Intel E5-2699 v3 (2.3 GHz, 18 cores) | 474 | 460
IBM Power S812LC | POWER8 (2.92 GHz, 10 cores) | 468 | 394
Fujitsu SPARC M10-4S | SPARC64 X+ (3.7 GHz, 16 cores) | 462 | 418
SPARC T5-1B | SPARC T5 (3.6 GHz, 16 cores) | 369 | 350
IBM Power 710 Express | POWER7 (3.55 GHz, 8 cores) | 248 | 229

The following table compares the performance of the single-chip SPARC M7 processor based server with the best published two-chip POWER8 processor based server.

SPEC CPU2006 Rate Results – Comparing One SPARC M7 Chip to Two POWER8 Chips
System | Chip | Peak | Base

SPECint_rate2006
SPARC T7-1 | 1 x SPARC M7 (4.13 GHz, 32 cores) | 1200 | 1120
IBM Power S822LC | 2 x POWER8 (2.92 GHz, 2 x 10 cores) | 1100 | 853

SPECfp_rate2006
SPARC T7-1 | 1 x SPARC M7 (4.13 GHz, 32 cores) | 832 | 801
IBM Power S822LC | 2 x POWER8 (2.92 GHz, 2 x 10 cores) | 888 | 745

Configuration Summary

System Under Test:

SPARC T7-1
1 x SPARC M7 processor (4.13 GHz)
512 GB memory (16 x 32 GB DIMMs)
800 GB on 4 x 400 GB SAS SSD (mirrored)
Oracle Solaris 11.3
Oracle Solaris Studio 12.4 with 4/15 Patch Set

Benchmark Description

SPEC CPU2006 is SPEC's most popular benchmark. It measures:

  • Speed — single copy performance of chip, memory, compiler
  • Rate — multiple copy (throughput)

The benchmark is also divided into integer intensive applications and floating point intensive applications:

  • integer: 12 benchmarks derived from applications such as artificial intelligence (chess and Go playing), quantum computer simulation, Perl, GCC, XML processing, and pathfinding
  • floating point: 17 benchmarks derived from applications including chemistry, physics, genetics, and weather.

It is also divided depending upon the amount of optimization allowed:

  • base: optimization is consistent per compiled language; all benchmarks must be compiled with the same flags per language.
  • peak: specific compiler optimization is allowed per application.

The overall metrics for the benchmark which are commonly used are:

  • SPECint_rate2006, SPECint_rate_base2006: integer, rate
  • SPECfp_rate2006, SPECfp_rate_base2006: floating point, rate
  • SPECint2006, SPECint_base2006: integer, speed
  • SPECfp2006, SPECfp_base2006: floating point, speed

Key Points and Best Practices

  • Jobs were bound using pbind.

See Also

Disclosure Statement

SPEC and the benchmark names SPECfp and SPECint are registered trademarks of the Standard Performance Evaluation Corporation. Results as of November 19, 2015 from www.spec.org.
SPARC T7-1: 1200 SPECint_rate2006, 1120 SPECint_rate_base2006, 832 SPECfp_rate2006, 801 SPECfp_rate_base2006; SPARC T5-1B: 489 SPECint_rate2006, 440 SPECint_rate_base2006, 369 SPECfp_rate2006, 350 SPECfp_rate_base2006; Fujitsu SPARC M10-4S: 546 SPECint_rate2006, 479 SPECint_rate_base2006, 462 SPECfp_rate2006, 418 SPECfp_rate_base2006. IBM Power 710 Express: 289 SPECint_rate2006, 255 SPECint_rate_base2006, 248 SPECfp_rate2006, 229 SPECfp_rate_base2006; Fujitsu CELSIUS C740: 715 SPECint_rate2006, 693 SPECint_rate_base2006; NEC Express5800/R120f-1M: 474 SPECfp_rate2006, 460 SPECfp_rate_base2006; IBM Power S822LC: 1100 SPECint_rate2006, 853 SPECint_rate_base2006, 888 SPECfp_rate2006, 745 SPECfp_rate_base2006; IBM Power S812LC: 642 SPECint_rate2006, 482 SPECint_rate_base2006, 468 SPECfp_rate2006, 394 SPECfp_rate_base2006.

SPARC T7-4 Delivers 4-Chip World Record for SPEC OMP2012

Oracle's SPARC T7-4 server delivered world record performance on the SPEC OMP2012 benchmark for systems with four chips. This was accomplished with Oracle Solaris 11.3 and Oracle Solaris Studio 12.4 software.

  • The SPARC T7-4 server delivered a world record result for systems with four chips of 27.9 SPECompG_peak2012 and 26.4 SPECompG_base2012.

  • The SPARC T7-4 server beat the four chip HP ProLiant DL580 Gen9 with Intel Xeon Processor E7-8890 v3 by 29% on the SPECompG_peak2012 benchmark.

  • This SPEC OMP2012 benchmark result demonstrates that the SPARC M7 processor performs well on floating-point intensive technical computing and modeling workloads.

Performance Landscape

Complete benchmark results are at the SPEC website, SPEC OMP2012 Results. The table below provides the new Oracle result as well as the previous best four chip results.

SPEC OMP2012 Results – Four Chip Results
System | Processor | Peak | Base
SPARC T7-4 | SPARC M7, 4.13 GHz | 27.9 | 26.4
HP ProLiant DL580 Gen9 | Intel Xeon E7-8890 v3, 2.5 GHz | 21.5 | 20.4
Cisco UCS C460 M4 | Intel Xeon E7-8890 v3, 2.5 GHz | -- | 20.8

Configuration Summary

Systems Under Test:

SPARC T7-4
4 x 4.13 GHz SPARC M7 processors
2 TB memory (64 x 32 GB DIMMs)
4 x 600 GB SAS 10,000 RPM HDD (mirrored)
Oracle Solaris 11.3 (11.3.0.30.0)
Oracle Solaris Studio 12.4 with 4/15 Patch Set

Benchmark Description

The following was taken from the SPEC website.

SPEC OMP2012 focuses on compute intensive performance, which means these benchmarks emphasize the performance of:

  • the computer processor (CPU),
  • the memory architecture,
  • the parallel support libraries, and
  • the compilers.

It is important to remember the contribution of the latter three components. SPEC OMP performance intentionally depends on more than just the processor.

SPEC OMP2012 contains a suite that focuses on parallel computing performance using the OpenMP parallelism standard.

The suite can be used to measure along the following vector:

  • Compilation method: Consistent compiler options across all programs of a given language (the base metrics) and, optionally, compiler options tuned to each program (the peak metrics).

SPEC OMP2012 is not intended to stress other computer components such as networking, the operating system, graphics, or the I/O system. Note that there are many other SPEC benchmarks, including benchmarks that specifically focus on graphics, distributed Java computing, webservers, and network file systems.

Key Points and Best Practices

  • Jobs were bound using OMP_PLACES.

See Also

Disclosure Statement

SPEC and the benchmark name SPEC OMP are registered trademarks of the Standard Performance Evaluation Corporation. Results as of November 11, 2015 from www.spec.org. SPARC T7-4 (4 chips, 128 cores, 1024 threads): 27.9 SPECompG_peak2012, 26.4 SPECompG_base2012; HP ProLiant DL580 Gen9 (4 chips, 72 cores, 144 threads): 21.5 SPECompG_peak2012, 20.4 SPECompG_base2012; Cisco UCS C460 M4 (4 chips, 72 cores, 144 threads): 20.8 SPECompG_base2012.

Friday Apr 03, 2015

Oracle Server X5-2 Produces World Record 2-Chip Single Application Server SPECjEnterprise2010 Result

Two Oracle Server X5-2 systems, using the Intel Xeon E5-2699 v3 processor, produced a World Record x86 two-chip single application server SPECjEnterprise2010 benchmark result of 21,504.30 SPECjEnterprise2010 EjOPS. One Oracle Server X5-2 ran the application tier and the second Oracle Server X5-2 was used for the database tier.

  • The Oracle Server X5-2 system demonstrated 11% better performance when compared to the IBM X3650 M5 server result of 19,282.14 SPECjEnterprise2010 EjOPS.

  • The Oracle Server X5-2 system demonstrated 1.9x better performance when compared to the previous generation Sun Server X4-2 server result of 11,259.88 SPECjEnterprise2010 EjOPS.

  • This result used Oracle WebLogic Server 12c, Java HotSpot(TM) 64-Bit Server VM 1.8.0_40, Oracle Database 12c, and Oracle Linux.

Performance Landscape

Complete benchmark results are at the SPEC website, SPECjEnterprise2010 Results. The table below shows the top single application server, two-chip x86 results.

SPECjEnterprise2010 Performance Chart (as of 4/1/2015)
Submitter | EjOPS* | Application Server | Database Server
Oracle | 21,504.30 | 1 x Oracle Server X5-2 (2 x 2.3 GHz Intel Xeon E5-2699 v3), Oracle WebLogic 12c (12.1.3) | 1 x Oracle Server X5-2 (2 x 2.3 GHz Intel Xeon E5-2699 v3), Oracle Database 12c (12.1.0.2)
IBM | 19,282.14 | 1 x IBM X3650 M5 (2 x 2.6 GHz Intel Xeon E5-2697 v3), WebSphere Application Server V8.5 | 1 x IBM X3850 X6 (4 x 2.8 GHz Intel Xeon E7-4890 v2), IBM DB2 10.5
Oracle | 11,259.88 | 1 x Sun Server X4-2 (2 x 2.7 GHz Intel Xeon E5-2697 v2), Oracle WebLogic 12c (12.1.2) | 1 x Sun Server X4-2L (2 x 2.7 GHz Intel Xeon E5-2697 v2), Oracle Database 12c (12.1.0.1)

* SPECjEnterprise2010 EjOPS, bigger is better.

Configuration Summary

Application Server:

1 x Oracle Server X5-2
2 x 2.3 GHz Intel Xeon E5-2699 v3 processors
256 GB memory
3 x 10 GbE NIC
Oracle Linux 6 Update 5 (kernel-2.6.39-400.243.1.el6uek.x86_64)
Oracle WebLogic Server 12c (12.1.3)
Java HotSpot(TM) 64-Bit Server VM on Linux, version 1.8.0_40 (Java SE 8 Update 40)
BIOS SW 1.2

Database Server:

1 x Oracle Server X5-2
2 x 2.3 GHz Intel Xeon E5-2699 v3 processors
512 GB memory
2 x 10 GbE NIC
1 x 16 Gb FC HBA
2 x Oracle Server X5-2L Storage
Oracle Linux 6 Update 5 (kernel-3.8.13-16.2.1.el6uek.x86_64)
Oracle Database 12c Enterprise Edition Release 12.1.0.2

Benchmark Description

SPECjEnterprise2010 is the third generation of the SPEC organization's J2EE end-to-end industry standard benchmark application. The SPECjEnterprise2010 benchmark has been designed and developed to cover the Java EE 5 specification's significantly expanded and simplified programming model, highlighting the major features used by developers in the industry today. This provides a real world workload driving the Application Server's implementation of the Java EE specification to its maximum potential and allowing maximum stressing of the underlying hardware and software systems.

The workload consists of an end to end web based order processing domain, an RMI and Web Services driven manufacturing domain and a supply chain model utilizing document based Web Services. The application is a collection of Java classes, Java Servlets, Java Server Pages, Enterprise Java Beans, Java Persistence Entities (pojo's) and Message Driven Beans.

The SPECjEnterprise2010 benchmark heavily exercises all parts of the underlying infrastructure that make up the application environment, including hardware, JVM software, database software, JDBC drivers, and the system network.

The primary metric of the SPECjEnterprise2010 benchmark is jEnterprise Operations Per Second ("SPECjEnterprise2010 EjOPS"). This metric is calculated by adding the metrics of the Dealership Management Application in the Dealer Domain and the Manufacturing Application in the Manufacturing Domain. There is no price/performance metric in this benchmark.

Key Points and Best Practices

  • Four Oracle WebLogic server instances were started using numactl, binding 2 instances per chip.
  • Four Oracle database listener processes were started, with 2 processes bound per processor.
  • Additional tuning information is in the report at http://spec.org.
  • COD (Cluster on Die) was enabled in the BIOS on the application server.

See Also

Disclosure Statement

SPEC and the benchmark name SPECjEnterprise are registered trademarks of the Standard Performance Evaluation Corporation. Oracle Server X5-2, 21,504.30 SPECjEnterprise2010 EjOPS; IBM System X3650 M5, 19,282.14 SPECjEnterprise2010 EjOPS. Sun Server X4-2, 11,259.88 SPECjEnterprise2010 EjOPS; Results from www.spec.org as of 4/1/2015.

Wednesday Mar 05, 2014

SPARC T5-2 Delivers World Record 2-Socket SPECvirt_sc2010 Benchmark

Oracle's SPARC T5-2 server delivered a world record two-chip SPECvirt_sc2010 result of 4270 @ 264 VMs, establishing the performance superiority of SPARC T5 processors in virtualized environments with Oracle Solaris 11, which includes Oracle VM Server for SPARC and Oracle Solaris Zones as standard virtualization products.

  • The SPARC T5-2 server has 2.3x better performance than an HP BL620c G7 blade server (with two Westmere EX processors) which used VMware ESX 4.1 U1 virtualization software (best SPECvirt_sc2010 result on two-chip servers using VMware software).

  • The SPARC T5-2 server has 1.6x better performance than an IBM Flex System x240 server (with two Sandy Bridge processors) which used Kernel-based Virtual Machines (KVM).

  • This is the first SPECvirt_sc2010 result using Oracle production-level software: Oracle Solaris 11.1, Oracle WebLogic Server 10.3.6, Oracle Database 11g Enterprise Edition, Oracle iPlanet Web Server 7, and Oracle Java Development Kit 7 (JDK). The only exception was the Dovecot mail server.

Performance Landscape

Complete benchmark results are at the SPEC website, SPECvirt_sc2010 Results. The following table highlights the leading two-chip results for the benchmark, bigger is better.

SPECvirt_sc2010 – Leading Two-Chip Results
System | Processor | Result @ VMs | Virtualization Software
SPARC T5-2 | 2 x SPARC T5, 3.6 GHz | 4270 @ 264 | Oracle VM Server for SPARC 3.0, Oracle Solaris Zones
IBM Flex System x240 | 2 x Intel E5-2690, 2.9 GHz | 2741 @ 168 | Red Hat Enterprise Linux 6.4 KVM
HP ProLiant BL620c G7 | 2 x Intel E7-2870, 2.4 GHz | 1878 @ 120 | VMware ESX 4.1 U1

Configuration Summary

System Under Test Highlights:

1 x SPARC T5-2 server, with
2 x 3.6 GHz SPARC T5 processors
1 TB memory
Oracle Solaris 11.1
Oracle VM Server for SPARC 3.0
Oracle iPlanet Web Server 7.0.15
Oracle PHP 5.3.14
Dovecot 2.1.17
Oracle WebLogic Server 11g (10.3.6)
Oracle Database 11g (11.2.0.3)
Java HotSpot(TM) 64-Bit Server VM on Solaris, version 1.7.0_51

Benchmark Description

The SPECvirt_sc2010 benchmark is SPEC's first benchmark addressing performance of virtualized systems. It measures the end-to-end performance of all system components that make up a virtualized environment.

The benchmark utilizes several previous SPEC benchmarks that represent tasks common in virtualized environments. The workloads are derived from SPECweb2005, SPECjAppServer2004, and SPECmail2008. Scaling of the benchmark is achieved by running additional sets of virtual machines until overall throughput reaches a peak. The benchmark includes quality-of-service criteria that must be met for a successful run.

Key Points and Best Practices

  • The SPARC T5-2 server, running Oracle Solaris 11.1, utilizes the embedded virtualization products Oracle VM Server for SPARC and Oracle Solaris Zones, which provide a low-overhead, flexible, scalable, and manageable virtualization environment.

  • In order to provide a high level of data integrity and availability, all the benchmark data sets are stored on mirrored (RAID1) storage.

See Also

Disclosure Statement

SPEC and the benchmark name SPECvirt_sc are registered trademarks of the Standard Performance Evaluation Corporation. Results from www.spec.org as of 3/5/2014. SPARC T5-2, SPECvirt_sc2010 4270 @ 264 VMs; IBM Flex System x240, SPECvirt_sc2010 2741 @ 168 VMs; HP Proliant BL620c G7, SPECvirt_sc2010 1878 @ 120 VMs.

Thursday Jan 23, 2014

SPARC T5-2 Delivers World Record 2-Socket Application Server for SPECjEnterprise2010 Benchmark

Oracle's SPARC T5-2 servers have set the world record for the SPECjEnterprise2010 benchmark using two-socket application servers with a result of 17,033.54 SPECjEnterprise2010 EjOPS. The result used two SPARC T5-2 servers, one server for the application tier and the other server for the database tier.

  • The SPARC T5-2 server delivered 29% more performance compared to the 2-socket IBM PowerLinux server result of 13,161.07 SPECjEnterprise2010 EjOPS.

  • The two SPARC T5-2 servers have 1.2x better price/performance than the two IBM PowerLinux 7R2 POWER7+ processor-based servers (based on hardware plus software configuration costs for both tiers). The price/performance of the SPARC T5-2 server is $35.99 per EjOPS, compared to $44.75 for the IBM PowerLinux 7R2 (see the sketch after this list).

  • The SPARC T5-2 server demonstrated 1.5x more performance compared to Oracle's x86-based 2-socket Sun Server X4-2 system (Ivy Bridge) result of 11,259.88 SPECjEnterprise2010 EjOPS. Oracle holds the top x86 2-socket application server SPECjEnterprise2010 result.

  • This SPARC T5-2 server result represents the best performance per socket for a single system in the application tier of 8,516.77 SPECjEnterprise2010 EjOPS per socket.

  • The application server used Oracle Fusion Middleware components including the Oracle WebLogic 12.1 application server and Java HotSpot(TM) 64-Bit Server VM on Solaris, version 1.7.0_45. The database server was configured with Oracle Database 12c Release 1.

  • This result demonstrated less than 1 second average response times for all SPECjEnterprise2010 transactions and represents Java EE 5.0 transactions generated by 139,000 users.
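
The $/EjOPS figures follow from the configuration list prices given in the disclosure statement at the end of this post; a minimal sketch:

```python
# Price/performance = configuration list price / EjOPS, using the prices
# from the disclosure statement ($613,052 and $588,970).
configs = {"SPARC T5-2": (613052, 17033.54),
           "IBM PowerLinux 7R2": (588970, 13161.07)}
pp = {name: price / ejops for name, (price, ejops) in configs.items()}
for name, value in pp.items():
    print(f"{name}: ${value:.2f}/EjOPS")
print(f"T5-2 advantage: "
      f"{pp['IBM PowerLinux 7R2'] / pp['SPARC T5-2']:.2f}x")  # ~1.24x
```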

Performance Landscape

Select 2-socket single application server results. Complete benchmark results are at the SPEC website, SPECjEnterprise2010 Results.

SPECjEnterprise2010 Performance Chart (as of 1/22/2014)
Submitter | EjOPS* | Java EE Server | DB Server
Oracle | 17,033.54 | 1 x SPARC T5-2 (2 x 3.6 GHz SPARC T5), Oracle WebLogic 12c (12.1.2) | 1 x SPARC T5-2 (2 x 3.6 GHz SPARC T5), Oracle Database 12c (12.1.0.1)
IBM | 13,161.07 | 1 x IBM PowerLinux 7R2 (2 x 4.2 GHz POWER7+), WebSphere Application Server V8.5 | 1 x IBM PowerLinux 7R2 (2 x 4.2 GHz POWER7+), IBM DB2 10.1 FP2
Oracle | 11,259.88 | 1 x Sun Server X4-2 (2 x 2.7 GHz Intel Xeon E5-2697 v2), Oracle WebLogic 12c (12.1.2) | 1 x Sun Server X4-2L (2 x 2.7 GHz Intel Xeon E5-2697 v2), Oracle Database 12c (12.1.0.1)

* SPECjEnterprise2010 EjOPS (bigger is better)

Configuration Summary

Application Server:

1 x SPARC T5-2 server, with
2 x 3.6 GHz SPARC T5 processors
512 GB memory
2 x 10 GbE dual-port NIC
Oracle Solaris 11.1 (11.1.13.6.0)
Oracle WebLogic Server 12c (12.1.2)
Java HotSpot(TM) 64-Bit Server VM on Solaris, version 1.7.0_45

Database Server:

1 x SPARC T5-2 server, with
2 x 3.6 GHz SPARC T5 processors
512 GB memory
1 x 10 GbE dual-port NIC
2 x 8 Gb FC HBA
Oracle Solaris 11.1 (11.1.13.6.0)
Oracle Database 12c (12.1.0.1)

Storage Servers:

2 x Sun Server X4-2L (24-Drive), with
2 x 2.6 GHz Intel Xeon
64 GB memory
1 x 8 Gb FC HBA
4 x Sun Flash Accelerator F80 PCI-E Cards
Oracle Solaris 11.1

Benchmark Description

SPECjEnterprise2010 is the third generation of the SPEC organization's J2EE end-to-end industry standard benchmark application. The new SPECjEnterprise2010 benchmark has been re-designed and developed to cover the Java EE 5 specification's significantly expanded and simplified programming model, highlighting the major features used by developers in the industry today. This provides a real world workload driving the Application Server's implementation of the Java EE specification to its maximum potential and allowing maximum stressing of the underlying hardware and software systems:

  • The web zone, servlets, and web services
  • The EJB zone
  • JPA 1.0 Persistence Model
  • JMS and Message Driven Beans
  • Transaction management
  • Database connectivity
Moreover, SPECjEnterprise2010 also heavily exercises all parts of the underlying infrastructure that make up the application environment, including hardware, JVM software, database software, JDBC drivers, and the system network.

The primary metric of the SPECjEnterprise2010 benchmark is jEnterprise Operations Per Second (SPECjEnterprise2010 EjOPS). The primary metric for the SPECjEnterprise2010 benchmark is calculated by adding the metrics of the Dealership Management Application in the Dealer Domain and the Manufacturing Application in the Manufacturing Domain. There is NO price/performance metric in this benchmark.

Key Points and Best Practices

  • Two Oracle WebLogic server instances on the SPARC T5-2 server were hosted in 2 separate Oracle Solaris Zones.
  • The Oracle WebLogic application servers were executed in the FX scheduling class to improve performance by reducing the frequency of context switches.
  • The Oracle log writer process was run in the RT scheduling class.

See Also

Disclosure Statement

SPEC and the benchmark name SPECjEnterprise are registered trademarks of the Standard Performance Evaluation Corporation. Results from www.spec.org as of 1/22/2014. SPARC T5-2, 17,033.54 SPECjEnterprise2010 EjOPS; IBM PowerLinux 7R2, 13,161.07 SPECjEnterprise2010 EjOPS; Sun Server X4-2, 11,259.88 SPECjEnterprise2010 EjOPS.

The SPARC T5-2 configuration cost is the total application and database server hardware plus software. List price is $613,052 from http://www.oracle.com as of 1/22/2014. The IBM PowerLinux 7R2 configuration total hardware plus software list price is $588,970 based on public pricing from http://www.ibm.com as of 1/22/2014. Pricing does not include database storage hardware for IBM or Oracle.

Monday Nov 25, 2013

World Record Single System TPC-H @10000GB Benchmark on SPARC T5-4

Oracle's SPARC T5-4 server delivered world record single server performance of 377,594 QphH@10000GB with price/performance of $4.65/QphH@10000GB USD on the TPC-H @10000GB benchmark. This result shows that the 4-chip SPARC T5-4 server is significantly faster than the 8-chip server results from HP (Intel x86 based).

  • The SPARC T5-4 server with four SPARC T5 processors is 2.4 times faster than the HP ProLiant DL980 G7 server with eight x86 processors.

  • The SPARC T5-4 server delivered 4.8 times better performance per chip and 3.0 times better performance per core than the HP ProLiant DL980 G7 server.

  • The SPARC T5-4 server has 28% better price/performance than the HP ProLiant DL980 G7 server (for the price/QphH metric).

  • The SPARC T5-4 server with 2 TB memory is 2.4 times faster than the HP ProLiant DL980 G7 server with 4 TB memory (for the composite metric).

  • The SPARC T5-4 server took 9 hours, 37 minutes, 54 seconds for data loading while the HP ProLiant DL980 G7 server took 8.3 times longer.

  • The SPARC T5-4 server accomplished the refresh function in around a minute; the HP ProLiant DL980 G7 server took up to 7.1 times longer to do the same function.

This result demonstrates a complete data warehouse solution that shows the performance both of individual and concurrent query processing streams, faster loading, and refresh of the data during business operations. The SPARC T5-4 server delivers superior performance and cost efficiency when compared to the HP result.

Performance Landscape

The table lists the leading TPC-H @10000GB results for non-clustered systems.

TPC-H @10000GB, Non-Clustered Systems
System | Processor | P/C/T – Memory | Composite (QphH) | $/perf ($/QphH) | Power (QppH) | Throughput (QthH) | Database | Available
SPARC T5-4 | 3.6 GHz SPARC T5 | 4/64/512 – 2048 GB | 377,594.3 | $4.65 | 342,714.1 | 416,024.4 | Oracle 11g R2 | 11/25/13
HP ProLiant DL980 G7 | 2.4 GHz Intel Xeon E7-4870 | 8/80/160 – 4096 GB | 158,108.3 | $6.49 | 185,473.6 | 134,780.5 | SQL Server 2012 | 04/15/13

P/C/T = Processors, Cores, Threads
QphH = the Composite Metric (bigger is better)
$/QphH = the Price/Performance metric in USD (smaller is better)
QppH = the Power Numerical Quantity (bigger is better)
QthH = the Throughput Numerical Quantity (bigger is better)

The following table lists data load times and average refresh function times.

TPC-H @10000GB, Non-Clustered Systems – Database Load & Database Refresh
System | Processor | Data Loading (h:m:s) | T5 Advan | RF1 (sec) | T5 Advan | RF2 (sec) | T5 Advan
SPARC T5-4 | 3.6 GHz SPARC T5 | 09:37:54 | 8.3x | 58.8 | 7.1x | 62.1 | 6.4x
HP ProLiant DL980 G7 | 2.4 GHz Intel Xeon E7-4870 | 79:28:23 | 1.0x | 416.4 | 1.0x | 394.9 | 1.0x

Data Loading = database load time
RF1 = throughput average first refresh transaction
RF2 = throughput average second refresh transaction
T5 Advan = the ratio of time to the SPARC T5-4 server time

Complete benchmark results found at the TPC benchmark website http://www.tpc.org.

Configuration Summary and Results

Server Under Test:

SPARC T5-4 server
4 x SPARC T5 processors (3.6 GHz total of 64 cores, 512 threads)
2 TB memory
2 x internal SAS (2 x 300 GB) disk drives
12 x 16 Gb FC HBA

External Storage:

24 x Sun Server X4-2L servers configured as COMSTAR nodes, each with
2 x 2.5 GHz Intel Xeon E5-2609 v2 processors
4 x Sun Flash Accelerator F80 PCIe Cards, 800 GB each
6 x 4 TB 7.2K RPM 3.5" SAS disks
1 x 8 Gb dual port HBA

2 x 48 port Brocade 6510 Fibre Channel Switches

Software Configuration:

Oracle Solaris 11.1
Oracle Database 11g Release 2 Enterprise Edition

Audited Results:

Database Size: 10000 GB (Scale Factor 10000)
TPC-H Composite: 377,594.3 QphH@10000GB
Price/performance: $4.65/QphH@10000GB USD
Available: 11/25/2013
Total 3 year Cost: $1,755,709 USD
TPC-H Power: 342,714.1
TPC-H Throughput: 416,024.4
Database Load Time: 9:37:54

Benchmark Description

The TPC-H benchmark is a performance benchmark established by the Transaction Processing Performance Council (TPC) to demonstrate Data Warehousing/Decision Support Systems (DSS). TPC-H measurements are produced for customers to evaluate the performance of various DSS systems. These queries and updates are executed against a standard database under controlled conditions. Performance projections and comparisons between different TPC-H database sizes (100GB, 300GB, 1000GB, 3000GB, 10000GB, 30000GB and 100000GB) are not allowed by the TPC.

TPC-H is a data warehousing-oriented, non-industry-specific benchmark that consists of a large number of complex queries typical of decision support applications. It also includes some insert and delete activity that is intended to simulate loading and purging data from a warehouse. TPC-H measures the combined performance of a particular database manager on a specific computer system.

The main performance metric reported by TPC-H is called the TPC-H Composite Query-per-Hour Performance Metric (QphH@SF, where SF is the number of GB of raw data, referred to as the scale factor). QphH@SF is intended to summarize the ability of the system to process queries in both single and multiple user modes. The benchmark requires reporting of price/performance, which is the ratio of the total HW/SW cost plus 3 years maintenance to the QphH. A secondary metric is the storage efficiency, which is the ratio of total configured disk space in GB to the scale factor.
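
The composite metric is the geometric mean of the power and throughput metrics, and the price/performance metric divides the 3-year cost by the composite. A minimal Python check against this report's audited figures (the helper names are ours):

    # Recompute the TPC-H composite and price/performance from this report's
    # audited SPARC T5-4 @10000GB figures.
    import math

    def qphh(qpph: float, qthh: float) -> float:
        """Composite metric: geometric mean of power (QppH) and throughput (QthH)."""
        return math.sqrt(qpph * qthh)

    composite = qphh(342_714.1, 416_024.4)
    print(f"{composite:,.1f} QphH@10000GB")     # ~377,594.3
    print(f"${1_755_709 / composite:.2f}/QphH") # ~$4.65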

Key Points and Best Practices

  • COMSTAR (Common Multiprotocol SCSI Target) is the software framework that enables an Oracle Solaris host to serve as a SCSI target platform. COMSTAR uses a modular approach to break the large task of handling all the different pieces of a SCSI target subsystem into independent functional modules that are glued together by the SCSI Target Mode Framework (STMF). The modules implementing functionality at the SCSI level (disk, tape, medium changer, etc.) are not required to know about the underlying transport, and the modules implementing the transport protocol (FC, iSCSI, etc.) are not aware of the SCSI-level functionality of the packets they are transporting. The framework hides the details of allocation, execution context, and cleanup of SCSI commands and associated resources, which simplifies the task of writing SCSI or transport modules. (A minimal workflow sketch follows this list.)

  • The SPARC T5-4 server achieved a peak IO rate of 37 GB/sec from the Oracle database configured with this storage.

  • Twelve COMSTAR nodes were mirrored to another twelve COMSTAR nodes on which all of the Oracle database files were placed. IO performance was high and balanced across all the nodes.

  • Oracle Solaris 11.1 required very little system tuning.

  • Some vendors argue that storage ratios are a customer concern. However, the storage ratio has more to do with disk layout and the increasing capacity of disks, so it is not an important metric when comparing systems.

  • The SPARC T5-4 server and Oracle Solaris efficiently managed the system load of nearly two thousand Oracle Database parallel processes.
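
For illustration only, the COMSTAR workflow for exposing a ZFS volume as a SCSI logical unit on a Solaris host can be wrapped in a few commands; the zvol path below is hypothetical and the GUID parsing is a sketch, not part of the audited configuration:

    # Illustrative COMSTAR workflow on Solaris; the zvol path is hypothetical
    # and error handling is omitted.
    import re
    import subprocess

    def sh(*cmd: str) -> str:
        return subprocess.run(cmd, check=True, capture_output=True,
                              text=True).stdout

    sh("svcadm", "enable", "stmf")                                # enable the STMF service
    out = sh("stmfadm", "create-lu", "/dev/zvol/rdsk/tank/lun0")  # back an LU with a zvol
    guid = re.search(r"[0-9A-Fa-f]{32}", out).group(0)            # GUID printed by stmfadm
    sh("stmfadm", "add-view", guid)                               # export the LU to all initiators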

See Also

Disclosure Statement

TPC Benchmark, TPC-H, QphH, QthH, QppH are trademarks of the Transaction Processing Performance Council (TPC). Results as of 11/25/13, prices are in USD. SPARC T5-4 www.tpc.org/3293; HP ProLiant DL980 G7 www.tpc.org/3285.

Thursday Sep 26, 2013

SPARC T5-8 Delivers World Record Single Server SPECjEnterprise2010 Benchmark, Utilizes Virtualized Environment

Oracle produced a world record single-server SPECjEnterprise2010 benchmark result of 36,571.36 SPECjEnterprise2010 EjOPS using one of Oracle's SPARC T5-8 servers for both the application and the database tier. Oracle VM Server for SPARC was used to virtualize the system to achieve this result.

  • The 8-chip SPARC T5 processor based server is 3.3x faster than the 8-chip IBM Power 780 server (POWER7+ processor based).

  • The SPARC T5-8 has 4.4x better price/performance than the IBM Power 780, a POWER7+ processor based server (based on hardware plus software configuration costs). The price/performance of the SPARC T5-8 server is $40.68 compared to the IBM Power 780 at $177.41. The IBM Power 780, a POWER7+ based system, has 1.2x better performance per core, but this did not reduce the total software and hardware cost to the customer. As shown by this comparison, performance-per-core is a poor predictor of characteristics relevant to customers. The SPARC T5-8 virtualized price/performance is also better than that of the low-end IBM PowerLinux 7R2 at $62.26.

  • The SPARC T5-8 server ran the Oracle Solaris 11.1 operating system and used Oracle VM Server for SPARC to consolidate ten Oracle WebLogic application server instances and one database server instance to achieve this result.

  • This result demonstrated sub-second average response times for all SPECjEnterprise2010 transactions and represents JEE 5.0 transactions generated by 299,000 users.

  • The SPARC T5-8 server requires only 8 rack units, the same as the space of the IBM Power 780. In this configuration IBM has a hardware core density of 4 cores per rack unit which contrasts with the 16 cores per rack unit for the SPARC T5-8 server. This again demonstrates why performance-per-core is a poor predictor of characteristics relevant to customers.

  • The application server used Oracle Fusion Middleware components including the Oracle WebLogic 12.1 application server and Java HotSpot(TM) 64-Bit Server VM on Solaris, version 1.7.0_25. The database server was configured with Oracle Database 12c Release 1.

  • The SPARC T5-8 server is 2.8x faster than a non-virtualized IBM POWER7+ based server result (one server for application and one server for database), the IBM PowerLinux 7R2 achieved 13,161.07 SPECjEnterprise2010 EjOPS.

Performance Landscape

SPECjEnterprise2010 Performance Chart
Only Three Virtualized Results (App+DB on 1 Server) as of 9/23/2013
Submitter | EjOPS* | App Chips | DB Chips | Java EE Server & DB Server
Oracle | 36,571.36 | 5 | 3 | 1 x SPARC T5-8 (8 chips, 128 cores, 3.6 GHz SPARC T5); Oracle WebLogic 12c (12.1.2); Oracle Database 12c (12.1.0.1)
Oracle | 27,843.57 | 4 | 4 | 1 x SPARC T5-8 (8 chips, 128 cores, 3.6 GHz SPARC T5); Oracle WebLogic 12c (12.1.1); Oracle Database 11g (11.2.0.3)
IBM | 10,902.30 | 4 | 4 | 1 x IBM Power 780 (8 chips, 32 cores, 4.42 GHz POWER7+); WebSphere Application Server V8.5; IBM DB2 Universal Database 10.1

* SPECjEnterprise2010 EjOPS (bigger is better)

Complete benchmark results are at the SPEC website, SPECjEnterprise2010 Results.

Configuration Summary

Oracle Summary

Application and Database Server:

1 x SPARC T5-8 server, with
8 x 3.6 GHz SPARC T5 processors
2 TB memory
9 x 10 GbE dual-port NIC
6 x 8 Gb dual-port HBA
Oracle Solaris 11.1 SRU 10.5
Oracle VM Server for SPARC
Oracle WebLogic Server 12c (12.1.2)
Java HotSpot(TM) 64-Bit Server VM on Solaris, version 1.7.0_25
Oracle Database 12c (12.1.0.1)

Storage Servers:

6 x Sun Server X3-2L (12-Drive), with
2 x 2.4 GHz Intel Xeon
16 GB memory
1 x 8 Gb FC HBA
4 x Sun Flash Accelerator F40 PCI-E Card
Oracle Solaris 11.1

2 x Sun Storage 2540-M2 Array
12 x 600 GB 15K RPM SAS HDD

Switch Hardware:

1 x Sun Network 10 GbE 72-port Top of Rack (ToR) Switch

IBM Summary

Application and Database Server:

1 x IBM Power 780 server, with
8 x 4.42 GHz POWER7+ processors
786 GB memory
6 x 10 GbE dual-port NIC
3 x 8 Gb four-port HBA
IBM AIX V7.1 TL2
IBM WebSphere Application Server V8.5
IBM J9 VM (build 2.6, JRE 1.7.0 IBM J9 AIX ppc-32)
IBM DB2 10.1
IBM InfoSphere Optim pureQuery Runtime v3.1.1

Storage:

2 x DS5324 Disk System with
48 x 146 GB 15K E-DDM Disks

1 x v7000 Disk Controller with
16 x 400 GB SSD Disks

Benchmark Description

SPECjEnterprise2010 is the third generation of the SPEC organization's J2EE end-to-end industry standard benchmark application. The new SPECjEnterprise2010 benchmark has been re-designed and developed to cover the Java EE 5 specification's significantly expanded and simplified programming model, highlighting the major features used by developers in the industry today. This provides a real world workload driving the Application Server's implementation of the Java EE specification to its maximum potential and allowing maximum stressing of the underlying hardware and software systems:
  • The web zone, servlets, and web services
  • The EJB zone
  • JPA 1.0 Persistence Model
  • JMS and Message Driven Beans
  • Transaction management
  • Database connectivity
Moreover, SPECjEnterprise2010 also heavily exercises all parts of the underlying infrastructure that make up the application environment, including hardware, JVM software, database software, JDBC drivers, and the system network.

The primary metric of the SPECjEnterprise2010 benchmark is jEnterprise Operations Per Second (SPECjEnterprise2010 EjOPS). The primary metric for the SPECjEnterprise2010 benchmark is calculated by adding the metrics of the Dealership Management Application in the Dealer Domain and the Manufacturing Application in the Manufacturing Domain. There is NO price/performance metric in this benchmark.

Key Points and Best Practices

  • Ten Oracle WebLogic server instances on the SPARC T5-8 server were hosted in 10 separate Oracle Solaris Zones within a separate guest domain on 80 cores (5 cpu chips).
  • The database ran in a separate guest domain consisting of 47 cores (3 cpu chips). One core was reserved for the primary domain.
  • The Oracle WebLogic application servers were executed in the FX scheduling class to improve performance by reducing the frequency of context switches.
  • The Oracle log writer process was run in the FX scheduling class at processor priority 60 to use the Critical Thread feature. (A minimal sketch of these placements follows this list.)
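
On Oracle Solaris, scheduling-class placement of this kind is done with priocntl(1). The following Python sketch is illustrative only; the process IDs are hypothetical, and the flag values mirror the settings described above (FX class, priority 60 for the log writer):

    # Illustrative: move running processes into the Solaris FX (fixed-priority)
    # scheduling class via priocntl(1). PIDs below are hypothetical.
    import subprocess

    def set_fx(pid: int, priority: int, upri_limit: int = 60) -> None:
        subprocess.run(
            ["priocntl", "-s", "-c", "FX",   # set scheduling class to FX
             "-m", str(upri_limit),          # user priority limit
             "-p", str(priority),            # fixed priority within FX
             "-i", "pid", str(pid)],
            check=True)

    set_fx(12345, priority=0)    # a WebLogic server JVM
    set_fx(23456, priority=60)   # the Oracle log writer (Critical Thread)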

See Also

Disclosure Statement

SPEC and the benchmark name SPECjEnterprise are registered trademarks of the Standard Performance Evaluation Corporation. Results from www.spec.org as of 9/23/2013. SPARC T5-8, 36,571.36 SPECjEnterprise2010 EjOPS (using Oracle VM for SPARC and 5+3 split); SPARC T5-8, 27,843.57 SPECjEnterprise2010 EjOPS (using Oracle Zones and 4+4 split); IBM Power 780, 10,902.30 SPECjEnterprise2010 EjOPS; IBM PowerLinux 7R2, 13,161.07 SPECjEnterprise2010 EjOPS. SPARC T5-8 server total hardware plus software list price is $1,487,792 from http://www.oracle.com as of 9/20/2013. IBM Power 780 server total hardware plus software cost of $1,934,162 based on public pricing from http://www.ibm.com as of 5/22/2013. IBM PowerLinux 7R2 server total hardware plus software cost of $819,451 based on whywebsphere.com/2013/04/29/weblogic-12c-on-oracle-sparc-t5-8-delivers-half-the-transactions-per-core-at-double-the-cost-of-the-websphere-on-ibm-power7/ retrieved 9/20/2013.

Wednesday Sep 25, 2013

SPARC T5-8 Delivers World Record Oracle OLAP Perf Version 3 Benchmark Result on Oracle Database 12c

Oracle's SPARC T5-8 server delivered world record query performance for systems running Oracle Database 12c for the Oracle OLAP Perf Version 3 benchmark.

  • The query throughput on the SPARC T5-8 server is 1.7x higher than that of an 8-chip Intel Xeon E7-8870 server. Both systems had sub-second average response times.

  • The SPARC T5-8 server with the Oracle Database demonstrated the ability to support at least 700 concurrent users querying OLAP cubes (with no think time), processing 2.33 million analytic queries per hour with an average response time of less than 1 second per query. This performance was enabled by keeping the entire cube in-memory utilizing the 4 TB of memory on the SPARC T5-8 server.

  • Assuming a 60 second think time between query requests, the SPARC T5-8 server can support approximately 39,450 concurrent users with the same sub-second response time (see the back-of-the-envelope check after this list).

  • The workload uses a set of realistic Business Intelligence (BI) queries that run against an OLAP cube based on a 4 billion row fact table of sales data. The 4 billion rows are partitioned by month spanning 10 years.

  • The combination of Oracle Database 12c with the Oracle OLAP option running on a SPARC T5-8 server supports live data updates occurring concurrently with user queries, with minimal impact on query execution.
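
The think-time scaling above is just Little's law: concurrent users ≈ query throughput × (think time + response time). A back-of-the-envelope Python check, where the 0.98-second response time is our assumption, chosen to match the reported figure:

    def supported_users(queries_per_hour: float, think_s: float,
                        resp_s: float) -> float:
        """Little's law: concurrency = arrival rate x time per user cycle."""
        return queries_per_hour / 3600.0 * (think_s + resp_s)

    qph = 2_329_000                          # reported SPARC T5-8 throughput
    print(supported_users(qph, 60.0, 0.98))  # ~39,450 concurrent users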

Performance Landscape

Oracle OLAP Perf Version 3 Benchmark
Oracle cube based on 4 billion fact table rows
10 years of data partitioned by month
System | Queries/hour | Users (0 sec think time) | Users (60 sec think time) | Average Response Time
SPARC T5-8 | 2,329,000 | 700 | 39,450 | <1 sec
8-chip Intel Xeon E7-8870 | 1,354,000 | 120 | 22,675 | <1 sec

Configuration Summary

SPARC T5-8:

1 x SPARC T5-8 server with
8 x SPARC T5 processors, 3.6 GHz
4 TB memory
Data Storage and Redo Storage
Flash Storage
Oracle Solaris 11.1 (11.1.8.2.0)
Oracle Database 12c Release 1 (12.1.0.1) with Oracle OLAP option

Sun Server X2-8:

1 x Sun Server X2-8 with
8 x Intel Xeon E7-8870 processors, 2.4 GHz
1 TB memory
Data Storage and Redo Storage
Flash Storage
Oracle Solaris 10 10/12
Oracle Database 12c Release 1 (12.1.0.1) with Oracle OLAP option

Benchmark Description

The Oracle OLAP Perf Version 3 benchmark is a workload designed to demonstrate and stress the ability of the OLAP option to deliver fast query, near real-time updates, and rich calculations using a multi-dimensional model in the context of an Oracle data warehouse.

The bulk of the benchmark entails running a number of concurrent users, each issuing typical multidimensional queries against an Oracle cube. The cube has four dimensions: time, product, customer, and channel. Each query user issues approximately 150 different queries. One query chain may ask for total sales in a particular region (e.g., South America) for a particular time period (e.g., Q4 of 2010), followed by additional queries that drill down into sales for individual countries (e.g., Chile, Peru, etc.), with further queries drilling down into individual stores, etc. Another query chain may ask for yearly comparisons of total sales for some product category (e.g., major household appliances) and then issue further queries drilling down into particular products (e.g., refrigerators, stoves, etc.), particular regions, particular customers, etc.

While the core of every OLAP Perf benchmark is real world query performance, the benchmark itself offers numerous execution options such as varying data set sizes, number of users, numbers of queries for any given user, and cube update frequency. Version 3 of the benchmark is executed with a much larger number of query streams than previous versions and uses a cube designed for near real-time updates. The results produced by version 3 of the benchmark are not directly comparable to results produced by previous versions of the benchmark.

The near real-time update capability is implemented along the following lines. A large Oracle cube, H, is built from a 4 billion row star schema, containing data up until the end of last business day. A second small cube, D, is then created which will contain all of today's new data coming in from outside the world. It will be updated every L minutes with the data coming in within the last L minutes. A third cube, R, joins cubes H and D for reporting purposes much like a view might join data from two tables. Calculations are installed into cube R. The use of a reporting cube which draws data from different storage cubes is a common practice.

Query users are never locked out of query operations while new data is added to the update cube. The point of the demonstration is to show that an Oracle OLAP system can be designed which results in data being no more than L minutes out of date, where L may be as low as just a few minutes. This is what is meant by near real-time analytics.
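
The benchmark implements this with Oracle OLAP cubes, but the pattern itself is generic; a toy Python sketch (all names are ours) of a reporting layer over a historical store plus a frequently refreshed delta store:

    from collections import Counter

    class ReportingCube:
        """Toy model of the H/D/R pattern described above."""

        def __init__(self) -> None:
            self.history = Counter()  # cube H: data through end of last business day
            self.delta = Counter()    # cube D: today's incoming data

        def load_delta(self, rows) -> None:
            """Every L minutes: fold the newest rows into cube D."""
            for key, measure in rows:
                self.delta[key] += measure

        def query(self, key) -> float:
            """Cube R: reports over H and D combined; readers are never blocked."""
            return self.history[key] + self.delta[key]

        def roll_over(self) -> None:
            """End of day: merge D into H and start a fresh delta."""
            self.history.update(self.delta)
            self.delta.clear()

    cube = ReportingCube()
    cube.load_delta([(("2010-Q4", "Chile"), 120.0)])
    print(cube.query(("2010-Q4", "Chile")))  # visible within L minutes of arrival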

Key Points and Best Practices

  • Building and querying cubes with the Oracle OLAP option requires a large temporary tablespace. Normally temporary tablespaces would reside on disk storage. However, because the SPARC T5-8 server used in this benchmark had 4 TB of main memory, it was possible to use main memory for the OLAP temporary tablespace. This was accomplished by using a temporary, memory-based file system (TMPFS) for the temporary tablespace datafiles. (See the sketch following this list.)

  • Since typical business intelligence users are often likely to issue similar queries, either with the same or different constants in the where clauses, setting the init.ora parameter "cursor_sharing" to "force" provides for additional query throughput and a larger number of potential users.

  • Assuming the normal Oracle Database initialization parameters (e.g., SGA, PGA, processes, etc.) are appropriately set, out-of-the-box performance for the Oracle OLAP workload should be close to what is reported here. Additional performance resulted from using memory for the OLAP temporary tablespace and from setting "cursor_sharing" to force.

  • Oracle OLAP Cube update performance was optimized by running update processes in the FX class with a priority greater than 0.

  • The maximum lag time between updates to the source fact table and data availability to query users (what was referred to as L in the benchmark description) was less than 3 minutes for the benchmark environment on the SPARC T5-8 server.
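
For illustration, the two tunings called out above (a memory-backed temporary tablespace and forced cursor sharing) might be applied along these lines; the mount point, sizes, and credentials are hypothetical, and the cx_Oracle driver is assumed:

    # Illustrative only: tmpfs-backed OLAP temporary tablespace plus
    # cursor_sharing=FORCE. Paths, sizes, and credentials are hypothetical.
    import subprocess
    import cx_Oracle

    # 1. Create a memory-based file system for the temporary tablespace datafiles.
    subprocess.run(["mount", "-F", "tmpfs", "-o", "size=512g", "swap", "/oratmp"],
                   check=True)

    conn = cx_Oracle.connect("sys", "password", "dbhost/orcl",
                             mode=cx_Oracle.SYSDBA)
    cur = conn.cursor()
    cur.execute("""
        CREATE TEMPORARY TABLESPACE olap_temp
        TEMPFILE '/oratmp/olap_temp01.dbf' SIZE 400G""")

    # 2. Let queries differing only in literals share a single cursor.
    cur.execute("ALTER SYSTEM SET cursor_sharing = 'FORCE'")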

See Also

Disclosure Statement

Copyright 2013, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 09/22/2013.

Sun Server X4-2 Delivers Single App Server, 2-Chip x86 World Record SPECjEnterprise2010

Oracle's Sun Server X4-2 and Sun Server X4-2L servers, using the Intel Xeon E5-2697 v2 processor, produced a world record x86 two-chip single application server SPECjEnterprise2010 benchmark result of 11,259.88 SPECjEnterprise2010 EjOPS. The Sun Server X4-2 ran the application tier and the Sun Server X4-2L was used for the database tier.

  • The 2-socket Sun Server X4-2 demonstrated 16% better performance when compared to the 2-socket IBM X3650 M4 server result of 9,696.43 SPECjEnterprise2010 EjOPS.

  • This result used Oracle WebLogic Server 12c, Java HotSpot(TM) 64-Bit Server 1.7.0_40, Oracle Database 12c, and Oracle Linux.

Performance Landscape

Complete benchmark results are at the SPEC website, SPECjEnterprise2010 Results. The table below shows the top single application server, two-chip x86 results.

SPECjEnterprise2010 Performance Chart
as of 9/22/2013
Submitter | EjOPS* | Application Server | Database Server
Oracle | 11,259.88 | 1x Sun Server X4-2 (2x 2.7 GHz Intel Xeon E5-2697 v2); Oracle WebLogic 12c (12.1.2) | 1x Sun Server X4-2L (2x 2.7 GHz Intel Xeon E5-2697 v2); Oracle Database 12c (12.1.0.1)
IBM | 9,696.43 | 1x IBM X3650 M4 (2x 2.9 GHz Intel Xeon E5-2690); WebSphere Application Server V8.5 | 1x IBM X3650 M4 (2x 2.9 GHz Intel Xeon E5-2690); IBM DB2 10.1
Oracle | 8,310.19 | 1x Sun Server X3-2 (2x 2.9 GHz Intel Xeon E5-2690); Oracle WebLogic 11g (10.3.6) | 1x Sun Server X3-2L (2x 2.9 GHz Intel Xeon E5-2690); Oracle Database 11g (11.2.0.3)

* SPECjEnterprise2010 EjOPS, bigger is better.

Configuration Summary

Application Server:

1 x Sun Server X4-2
2 x 2.7 GHz Intel Xeon processor E5-2697 v2
256 GB memory
4 x 10 GbE NIC
Oracle Linux 5 Update 9 (kernel-2.6.39-400.124.1.el5uek)
Oracle WebLogic Server 12c (12.1.2)
Java HotSpot(TM) 64-Bit Server VM on Linux, version 1.7.0_40 (Java SE 7 Update 40)

Database Server:

1 x Sun Server X4-2L
2 x 2.7 GHz Intel Xeon E5-2697 v2
256 GB memory
1 x 10 GbE NIC
2 x FC HBA
3 x Sun StorageTek 2540 M2
Oracle Linux 5 Update 9 (kernel-2.6.39-400.124.1.el5uek)
Oracle Database 12c Enterprise Edition Release 12.1.0.1

Benchmark Description

SPECjEnterprise2010 is the third generation of the SPEC organization's J2EE end-to-end industry standard benchmark application. The SPECjEnterprise2010 benchmark has been designed and developed to cover the Java EE 5 specification's significantly expanded and simplified programming model, highlighting the major features used by developers in the industry today. This provides a real world workload driving the Application Server's implementation of the Java EE specification to its maximum potential and allowing maximum stressing of the underlying hardware and software systems.

The workload consists of an end to end web based order processing domain, an RMI and Web Services driven manufacturing domain and a supply chain model utilizing document based Web Services. The application is a collection of Java classes, Java Servlets, Java Server Pages, Enterprise Java Beans, Java Persistence Entities (pojo's) and Message Driven Beans.

The SPECjEnterprise2010 benchmark heavily exercises all parts of the underlying infrastructure that make up the application environment, including hardware, JVM software, database software, JDBC drivers, and the system network.

The primary metric of the SPECjEnterprise2010 benchmark is jEnterprise Operations Per Second ("SPECjEnterprise2010 EjOPS"). This metric is calculated by adding the metrics of the Dealership Management Application in the Dealer Domain and the Manufacturing Application in the Manufacturing Domain. There is no price/performance metric in this benchmark.

Key Points and Best Practices

  • Four Oracle WebLogic server instances were started using numactl, binding two instances per chip (see the sketch after this list).
  • Two Oracle database listener processes were started and each was bound to a separate chip.
  • Additional tuning information is in the report at http://spec.org.
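
A minimal sketch of that placement, assuming the Linux numactl utility and hypothetical WebLogic domain paths:

    # Illustrative: start four WebLogic instances, binding two to each of the
    # two chips (NUMA nodes 0 and 1). Domain paths are hypothetical.
    import subprocess

    procs = []
    for i, name in enumerate(["server1", "server2", "server3", "server4"]):
        node = i // 2  # instances 0-1 on chip 0, instances 2-3 on chip 1
        procs.append(subprocess.Popen(
            ["numactl", f"--cpunodebind={node}", f"--membind={node}",
             f"/opt/domains/{name}/startWebLogic.sh"]))
    for p in procs:
        p.wait()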

See Also

Disclosure Statement

SPEC and the benchmark name SPECjEnterprise are registered trademarks of the Standard Performance Evaluation Corporation. Sun Server X4-2, 11,259.88 SPECjEnterprise2010 EjOPS; Sun Server X3-2, 8,310.19 SPECjEnterprise2010 EjOPS; IBM System X3650 M4, 9,696.43 SPECjEnterprise2010 EjOPS. Results from www.spec.org as of 9/22/2013.

Tuesday Sep 10, 2013

Oracle ZFS Storage ZS3-4 Delivers World Record SPC-2 Performance

The Oracle Storage ZS3-4 storage system delivered a world record performance result for the SPC-2 benchmark along with excellent price-performance.

  • The Oracle Storage ZS3-4 storage system delivered an overall score of 17,244.22 SPC-2 MBPS™ and a SPC-2 price-performance of $22.53 on the SPC-2 benchmark.

  • This represents more than a 1.6x generational improvement in performance and more than a 1.5x generational improvement in price-performance over Oracle's Sun ZFS Storage 7420 SPC-2 result.

  • The Oracle ZFS Storage ZS3-4 storage system has 6.8x better overall throughput and nearly 1.2x better price-performance than the IBM DS3524 Express Turbo, which holds IBM's best overall price-performance score on the SPC-2 benchmark.

  • The Oracle ZFS Storage ZS3-4 storage system has over 1.1x better overall throughput and 5.8x better price-performance than the IBM DS8870, which holds IBM's best overall performance score on the SPC-2 benchmark.

  • The Oracle ZFS Storage ZS3-4 storage system has over 1.3x better overall throughput and 3.9x better price-performance than the HP StorageWorks P9500 XP Disk Array on the SPC-2 benchmark.

Performance Landscape

SPC-2 Performance Chart (in decreasing performance order)

System | SPC-2 MB/s | $/SPC-2 MB/s | ASU Capacity (GB) | TSC Price | Data Protection Level | Date | Results Identifier
Oracle ZFS Storage ZS3-4 | 17,244.22 | $22.53 | 31,611 | $388,472 | Mirroring | 09/10/13 | B00067
Fujitsu DX8700 S2 | 16,039 | $79.51 | 71,404 | $1,275,163 | Mirroring | 12/03/12 | B00063
IBM DS8870 | 15,424 | $131.21 | 30,924 | $2,023,742 | RAID-5 | 10/03/12 | B00062
IBM SAN VC v6.4 | 14,581 | $129.14 | 74,492 | $1,883,037 | RAID-5 | 08/01/12 | B00061
NEC Storage M700 | 14,409 | $25.13 | 53,550 | $361,613 | Mirroring | 08/19/12 | B00066
Hitachi VSP | 13,148 | $95.38 | 129,112 | $1,254,093 | RAID-5 | 07/27/12 | B00060
HP StorageWorks P9500 | 13,148 | $88.34 | 129,112 | $1,161,504 | RAID-5 | 03/07/12 | B00056
Sun ZFS Storage 7420 | 10,704 | $35.24 | 31,884 | $377,225 | Mirroring | 04/12/12 | B00058
IBM DS8800 | 9,706 | $270.38 | 71,537 | $2,624,257 | RAID-5 | 12/01/10 | B00051
HP XP24000 | 8,725 | $187.45 | 18,401 | $1,635,434 | Mirroring | 09/08/08 | B00035

SPC-2 MB/s = the Performance Metric
$/SPC-2 MB/s = the Price-Performance Metric
ASU Capacity = the Capacity Metric
Data Protection = Data Protection Metric
TSC Price = Total Cost of Ownership Metric
Results Identifier = A unique identification of the result Metric

SPC-2 Price-Performance Chart (in increasing price-performance order)

System | SPC-2 MB/s | $/SPC-2 MB/s | ASU Capacity (GB) | TSC Price | Data Protection Level | Date | Results Identifier
SGI InfiniteStorage 5600 | 8,855.70 | $15.97 | 28,748 | $141,393 | RAID6 | 03/06/13 | B00065
Oracle ZFS Storage ZS3-4 | 17,244.22 | $22.53 | 31,611 | $388,472 | Mirroring | 09/10/13 | B00067
Sun Storage J4200 | 548.80 | $22.92 | 11,995 | $12,580 | Unprotected | 07/10/08 | B00033
NEC Storage M700 | 14,409 | $25.13 | 53,550 | $361,613 | Mirroring | 08/19/12 | B00066
Sun Storage J4400 | 887.44 | $25.63 | 23,965 | $22,742 | Unprotected | 08/15/08 | B00034
Sun StorageTek 2530 | 672.05 | $26.15 | 1,451 | $17,572 | RAID5 | 08/16/07 | B00026
Sun StorageTek 2530 | 663.51 | $26.48 | 854 | $17,572 | Mirroring | 08/16/07 | B00025
Fujitsu ETERNUS DX80 | 1,357.55 | $26.70 | 4,681 | $36,247 | Mirroring | 03/15/10 | B00050
IBM DS3524 Express Turbo | 2,510 | $26.76 | 14,374 | $67,185 | RAID-5 | 12/31/10 | B00053
Fujitsu ETERNUS DX80 S2 | 2,685.50 | $28.48 | 17,231 | $76,475 | Mirroring | 08/19/11 | B00055

SPC-2 MB/s = the Performance Metric
$/SPC-2 MB/s = the Price-Performance Metric
ASU Capacity = the Capacity Metric
Data Protection = Data Protection Metric
TSC Price = Total Cost of Ownership Metric
Results Identifier = A unique identification of the result Metric

Complete SPC-2 benchmark results may be found at http://www.storageperformance.org/results/benchmark_results_spc2.

Configuration Summary

Storage Configuration:

Oracle ZFS Storage ZS3-4 storage system in clustered configuration
2 x Oracle ZFS Storage ZS3-4 controllers, each with
4 x 2.4 GHz 10-core Intel Xeon processors
1024 GB memory
16 x Sun Disk shelves, each with
24 x 300 GB 15K RPM SAS-2 drives

Benchmark Description

SPC Benchmark-2 (SPC-2): Consists of three distinct workloads designed to demonstrate the performance of a storage subsystem during the execution of business critical applications that require the large-scale, sequential movement of data. Those applications are characterized predominately by large I/Os organized into one or more concurrent sequential patterns. A description of each of the three SPC-2 workloads is listed below as well as examples of applications characterized by each workload.

  • Large File Processing: Applications in a wide range of fields, which require simple sequential process of one or more large files such as scientific computing and large-scale financial processing.
  • Large Database Queries: Applications that involve scans or joins of large relational tables, such as those performed for data mining or business intelligence.
  • Video on Demand: Applications that provide individualized video entertainment to a community of subscribers by drawing from a digital film library.

SPC-2 is built to:

  • Provide a level playing field for test sponsors.
  • Produce results that are powerful and yet simple to use.
  • Provide value for engineers as well as IT consumers and solution integrators.
  • Be easy to run, easy to audit/verify, and easy to use for reporting official results.

See Also

Disclosure Statement

SPC-2 and SPC-2 MBPS are registered trademarks of Storage Performance Council (SPC). Results as of September 10, 2013, for more information see www.storageperformance.org. Oracle ZFS Storage ZS3-4 B00067, Fujitsu ET 8700 S2 B00063, IBM DS8870 B00062, IBM S.V.C 6.4 B00061, NEC Storage M700 B00066, Hitachi VSP B00060, HP P9500 XP Disk Array B00056, IBM DS8800 B00051.

Oracle ZFS Storage ZS3-4 Produces Best 2-Node Performance on SPECsfs2008 NFSv3

The Oracle ZFS Storage ZS3-4 storage system delivered world record two-node performance on the SPECsfs2008 NFSv3 benchmark, beating results published on NetApp's dual-controller and four-node high-end FAS6240 storage systems.

  • The Oracle ZFS Storage ZS3-4 storage system delivered a world record two-node result of 450,702 SPECsfs2008_nfs.v3 Ops/sec with an Overall Response Time (ORT) of 0.70 msec on the SPECsfs2008 NFSv3 benchmark.

  • The Oracle ZFS Storage ZS3-4 storage system delivered 2.4x higher throughput than the dual-controller NetApp FAS6240 and 4.5x higher throughput than the dual-controller NetApp FAS3270 on the SPECsfs2008_nfs.v3 benchmark at less than half the list price of either result.

  • The Oracle ZFS Storage ZS3-4 storage system had 42 percent higher throughput than the four-node NetApp FAS6240 on the SPECsfs2008 NFSv3 benchmark.

  • The Oracle ZFS Storage ZS3-4 storage system has a 54 percent better Overall Response Time than the four-node NetApp FAS6240 on the SPECsfs2008 NFSv3 benchmark.

Performance Landscape

Two-node results for SPECsfs2008_nfs.v3 are presented below (in decreasing SPECsfs2008_nfs.v3 Ops/sec order), along with other select results.

Sponsor | System | Nodes | Disks | Throughput (Ops/sec) | Overall Response Time (msec)
Oracle | ZS3-4 | 2 | 464 | 450,702 | 0.70
IBM | SONAS 1.2 | 2 | 1975 | 403,326 | 3.23
NetApp | FAS6240 | 4 | 288 | 260,388 | 1.53
NetApp | FAS6240 | 2 | 288 | 190,675 | 1.17
EMC | VG8 | – | 312 | 135,521 | 1.92
Oracle | 7320 | 2 | 136 | 134,140 | 1.51
EMC | NS-G8 | – | 100 | 110,621 | 2.32
NetApp | FAS3270 | 2 | 360 | 101,183 | 1.66

Throughput SPECsfs2008_nfs.v3 Ops/sec — the Performance Metric
Overall Response Time — the corresponding Response Time Metric
Nodes — the terms Nodes and Controllers are used interchangeably

Complete SPECsfs2008 benchmark results may be found at http://www.spec.org/sfs2008/results/sfs2008.html.

Configuration Summary

Storage Configuration:

Oracle ZFS Storage ZS3-4 storage system in clustered configuration
2 x Oracle ZFS Storage ZS3-4 controllers, each with
8 x 2.4 GHz Intel Xeon E7-4870 processors
2 TB memory
2 x 10GbE NICs
20 x Sun Disk shelves
18 x shelves with 24 x 300 GB 15K RPM SAS-2 drives
2 x shelves with 20 x 300 GB 15K RPM SAS-2 drives and 8 x 73 GB SAS-2 flash-enabled write-cache

Benchmark Description

SPECsfs2008 is the latest version of the Standard Performance Evaluation Corporation (SPEC) benchmark suite measuring file server throughput and response time, providing a standardized method for comparing performance across different vendor platforms. SPECsfs2008 results summarize the server's capabilities with respect to the number of operations that can be handled per second, as well as the overall latency of the operations. The suite is a follow-on to the SFS97_R1 benchmark, adding a CIFS workload, an updated NFSv3 workload, support for additional client platforms, and a new test harness and reporting/submission framework.

See Also

Disclosure Statement

SPEC and SPECsfs are registered trademarks of Standard Performance Evaluation Corporation (SPEC). Results as of September 10, 2013, for more information see www.spec.org. Oracle ZFS Storage ZS3-4 Appliance 450,702 SPECsfs2008_nfs.v3 Ops/sec, 0.70 msec ORT, NetApp Data ONTAP 8.1 Cluster-Mode (4-node FAS6240) 260,388 SPECsfs2008_nfs.v3 Ops/Sec, 1.53 msec ORT, NetApp FAS6240 190,675 SPECsfs2008_nfs.v3 Ops/Sec, 1.17 msec ORT. NetApp FAS3270 101,183 SPECsfs2008_nfs.v3 Ops/Sec, 1.66 msec ORT.

Nodes refer to the item in the SPECsfs2008 disclosed Configuration Bill of Materials that have the Processing Elements that perform the NFS Processing Function. These are the first item listed in each of disclosed Configuration Bill of Materials except for EMC where it is both the first and third items listed, and HP, where it is the second item listed as Blade Servers. The number of nodes is from the QTY disclosed in the Configuration Bill of Materials as described above. Configuration Bill of Materials list price for Oracle result of US$ 423,644. Configuration Bill of Materials list price for NetApp FAS3270 result of US$ 1,215,290. Configuration Bill of Materials list price for NetApp FAS6240 result of US$ 1,028,118. Oracle pricing from https://shop.oracle.com/pls/ostore/f?p=dstore:home:0, traverse to "Storage and Tape" and then to "NAS Storage". NetApp's pricing from http://www.netapp.com/us/media/na-list-usd-netapp-custom-state-new-discounts.html.

Wednesday Jun 12, 2013

SPARC T5-4 Produces World Record Single Server TPC-H @3000GB Benchmark Result

Oracle's SPARC T5-4 server delivered world record single server performance of 409,721 QphH@3000GB with price/performance of $3.94/QphH@3000GB on the TPC-H @3000GB benchmark. This result shows that the 4-chip SPARC T5-4 server is significantly faster than the 8-chip server results from IBM (POWER7 based) and HP (Intel x86 based).

This result demonstrates a complete data warehouse solution that shows the performance both of individual and concurrent query processing streams, faster loading, and refresh of the data during business operations. The SPARC T5-4 server delivers superior performance and cost efficiency when compared to the IBM POWER7 result.

  • The SPARC T5-4 server with four SPARC T5 processors is 2.1 times faster than the IBM Power 780 server with eight POWER7 processors and 2.5 times faster than the HP ProLiant DL980 G7 server with eight x86 processors on the TPC-H @3000GB benchmark. The SPARC T5-4 server also delivered better performance per core than these eight processor systems from IBM and HP.

  • The SPARC T5-4 server with four SPARC T5 processors is 2.1 times faster than the IBM Power 780 server with eight POWER7 processors on the TPC-H @3000GB benchmark.

  • The SPARC T5-4 server delivered a 38% lower $/QphH@3000GB than the IBM Power 780 server on the TPC-H @3000GB benchmark.

  • The SPARC T5-4 server took 2 hours, 6 minutes, 4 seconds for data loading while the IBM Power 780 server took 2.8 times longer.

  • The SPARC T5-4 server executed the first refresh function (RF1) in 19.4 seconds, the IBM Power 780 server took 7.6 times longer.

  • The SPARC T5-4 server with four SPARC T5 processors is 2.5 times faster than the HP ProLiant DL980 G7 server with the same number of cores on the TPC-H @3000GB benchmark.

  • The SPARC T5-4 server took 2 hours, 6 minutes, 4 seconds for data loading while the HP ProLiant DL980 G7 server took 4.1 times longer.

  • The SPARC T5-4 server executed the first refresh function (RF1) in 19.4 seconds, the HP ProLiant DL980 G7 server took 8.9 times longer.

  • The SPARC T5-4 server delivered 6% better performance than the SPARC Enterprise M9000-64 server and 2.1 times better than the SPARC Enterprise M9000-32 server on the TPC-H @3000GB benchmark.

Performance Landscape

The table lists the leading TPC-H @3000GB results for non-clustered systems.

TPC-H @3000GB, Non-Clustered Systems
System (Processor; P/C/T – Memory) | Composite (QphH) | $/perf ($/QphH) | Power (QppH) | Throughput (QthH) | Database | Available
SPARC T5-4 (3.6 GHz SPARC T5; 4/64/512 – 2048 GB) | 409,721.8 | $3.94 | 345,762.7 | 485,512.1 | Oracle 11g R2 | 09/24/13
SPARC Enterprise M9000 (3.0 GHz SPARC64 VII+; 64/256/256 – 1024 GB) | 386,478.3 | $18.19 | 316,835.8 | 471,428.6 | Oracle 11g R2 | 09/22/11
SPARC T4-4 (3.0 GHz SPARC T4; 4/32/256 – 1024 GB) | 205,792.0 | $4.10 | 190,325.1 | 222,515.9 | Oracle 11g R2 | 05/31/12
SPARC Enterprise M9000 (2.88 GHz SPARC64 VII; 32/128/256 – 512 GB) | 198,907.5 | $15.27 | 182,350.7 | 216,967.7 | Oracle 11g R2 | 12/09/10
IBM Power 780 (4.1 GHz POWER7; 8/32/128 – 1024 GB) | 192,001.1 | $6.37 | 210,368.4 | 175,237.4 | Sybase 15.4 | 11/30/11
HP ProLiant DL980 G7 (2.27 GHz Intel Xeon X7560; 8/64/128 – 512 GB) | 162,601.7 | $2.68 | 185,297.7 | 142,685.6 | SQL Server 2008 | 10/13/10

P/C/T = Processors, Cores, Threads
QphH = the Composite Metric (bigger is better)
$/QphH = the Price/Performance metric in USD (smaller is better)
QppH = the Power Numerical Quantity
QthH = the Throughput Numerical Quantity

The following table lists data load times and refresh function times during the power run.

TPC-H @3000GB, Non-Clustered Systems
Database Load & Database Refresh
System (Processor) | Data Loading (h:m:s) | T5 Advan | RF1 (sec) | T5 Advan | RF2 (sec) | T5 Advan
SPARC T5-4 (3.6 GHz SPARC T5) | 02:06:04 | 1.0x | 19.4 | 1.0x | 22.4 | 1.0x
IBM Power 780 (4.1 GHz POWER7) | 05:51:50 | 2.8x | 147.3 | 7.6x | 133.2 | 5.9x
HP ProLiant DL980 G7 (2.27 GHz Intel Xeon X7560) | 08:35:17 | 4.1x | 173.0 | 8.9x | 126.3 | 5.6x

Data Loading = database load time
RF1 = power test first refresh transaction
RF2 = power test second refresh transaction
T5 Advan = the ratio of time to T5 time

Complete benchmark results found at the TPC benchmark website http://www.tpc.org.

Configuration Summary and Results

Hardware Configuration:

SPARC T5-4 server
4 x SPARC T5 processors (3.6 GHz total of 64 cores, 512 threads)
2 TB memory
2 x internal SAS (2 x 300 GB) disk drives

External Storage:

12 x Sun Storage 2540-M2 array with Sun Storage 2501-M2 expansion trays, each with
24 x 15K RPM 300 GB drives, 2 controllers, 2 GB cache
2 x Brocade 6510 Fibre Channel Switches (48 x 16 Gbs port each)

Software Configuration:

Oracle Solaris 11.1
Oracle Database 11g Release 2 Enterprise Edition

Audited Results:

Database Size: 3000 GB (Scale Factor 3000)
TPC-H Composite: 409,721.8 QphH@3000GB
Price/performance: $3.94/QphH@3000GB
Available: 09/24/2013
Total 3 year Cost: $1,610,564
TPC-H Power: 345,762.7
TPC-H Throughput: 485,512.1
Database Load Time: 2:06:04

Benchmark Description

The TPC-H benchmark is a performance benchmark established by the Transaction Processing Performance Council (TPC) to demonstrate Data Warehousing/Decision Support Systems (DSS). TPC-H measurements are produced for customers to evaluate the performance of various DSS systems. These queries and updates are executed against a standard database under controlled conditions. Performance projections and comparisons between different TPC-H database sizes (100GB, 300GB, 1000GB, 3000GB, 10000GB, 30000GB and 100000GB) are not allowed by the TPC.

TPC-H is a data warehousing-oriented, non-industry-specific benchmark that consists of a large number of complex queries typical of decision support applications. It also includes some insert and delete activity that is intended to simulate loading and purging data from a warehouse. TPC-H measures the combined performance of a particular database manager on a specific computer system.

The main performance metric reported by TPC-H is called the TPC-H Composite Query-per-Hour Performance Metric (QphH@SF, where SF is the number of GB of raw data, referred to as the scale factor). QphH@SF is intended to summarize the ability of the system to process queries in both single and multiple user modes. The benchmark requires reporting of price/performance, which is the ratio of the total HW/SW cost plus 3 years maintenance to the QphH. A secondary metric is the storage efficiency, which is the ratio of total configured disk space in GB to the scale factor.

Key Points and Best Practices

  • Twelve of Oracle's Sun Storage 2540-M2 arrays with Sun Storage 2501-M2 expansion trays were used for the benchmark. Each contains 24 15K RPM drives and is connected to a single dual port 16Gb FC HBA using 2 ports through a Brocade 6510 Fibre Channel switch.

  • The SPARC T5-4 server achieved a peak IO rate of 33 GB/sec from the Oracle database configured with this storage.

  • Oracle Solaris 11.1 required very little system tuning.

  • Some vendors argue that storage ratios are a customer concern. However, the storage ratio has more to do with disk layout and the increasing capacity of disks, so it is not an important metric when comparing systems.

  • The SPARC T5-4 server and Oracle Solaris efficiently managed the system load of two thousand Oracle Database parallel processes.

  • Six Sun Storage 2540-M2/2501-M2 arrays were mirrored to another six Sun Storage 2540-M2/2501-M2 arrays on which all of the Oracle database files were placed. IO performance was high and balanced across all the arrays.

  • The TPC-H Refresh Function (RF) simulates the periodic refresh portion of a data warehouse by adding new sales data and deleting old sales data. Parallel DML (parallel insert and delete in this case) and database log performance are key for this function, and the SPARC T5-4 server outperformed both the IBM POWER7 based server and the HP ProLiant DL980 G7 server (see the RF columns above and the sketch below).
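
In Oracle Database, refresh functions of this kind benefit from enabling parallel DML; a hedged sketch follows, in which the table names, staging objects, and parallel degree are illustrative and the cx_Oracle driver is assumed:

    # Illustrative RF1/RF2-style refresh using parallel DML; names are made up.
    import cx_Oracle

    conn = cx_Oracle.connect("tpch", "password", "dbhost/orcl")
    cur = conn.cursor()
    cur.execute("ALTER SESSION ENABLE PARALLEL DML")

    # RF1-style: append new sales rows in parallel.
    cur.execute("""
        INSERT /*+ APPEND PARALLEL(orders, 64) */ INTO orders
        SELECT * FROM orders_staging""")

    # RF2-style: delete the corresponding old sales rows in parallel.
    cur.execute("""
        DELETE /*+ PARALLEL(orders, 64) */ FROM orders
        WHERE o_orderkey IN (SELECT o_orderkey FROM orders_delete_keys)""")

    conn.commit()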

See Also

Disclosure Statement

TPC-H, QphH, $/QphH are trademarks of Transaction Processing Performance Council (TPC). For more information, see www.tpc.org, results as of 6/7/13. Prices are in USD. SPARC T5-4 www.tpc.org/3288; SPARC T4-4 www.tpc.org/3278; SPARC Enterprise M9000 www.tpc.org/3262; SPARC Enterprise M9000 www.tpc.org/3258; IBM Power 780 www.tpc.org/3277; HP ProLiant DL980 www.tpc.org/3285. 

Wednesday May 01, 2013

SPARC T5-8 Delivers Best Single System SPECjEnterprise2010 Benchmark, Beats IBM

Oracle produced a world record single-server SPECjEnterprise2010 benchmark result of 27,843.57 SPECjEnterprise2010 EjOPS using one of Oracle's SPARC T5-8 servers for both the application and the database tier. This result directly compares the 8-chip SPARC T5-8 server (8 SPARC T5 processors) to the 8-chip IBM Power 780 server (8 POWER7+ processors).

  • The 8-chip SPARC T5 processor based server is 2.6x faster than the 8-chip IBM POWER7+ processor based server.

  • Both Oracle and IBM used virtualization to provide 4-chips for application and 4-chips for database.

  • The server cost/performance for the SPARC T5 processor based server was 6.9x better than the server cost/performance of the IBM POWER7+ processor based server. The cost/performance of the SPARC T5-8 server is $10.72 compared to the IBM Power 780 at $73.83.

  • The total configuration cost/performance (hardware+software) for the SPARC T5 processor based server was 3.6x better than the IBM POWER7+ processor based server. The cost/performance of the SPARC T5-8 server is $56.21 compared to the IBM Power 780 at $199.42. The IBM system had 1.6x better performance per core, but this did not reduce the total software and hardware cost to the customer. As shown by this comparison, performance-per-core is a poor predictor of characteristics relevant to customers.

  • The total IBM hardware plus software cost was $2,174,152 versus the total Oracle hardware plus software cost of $1,565,092. At this price IBM could only provide 768 GB of memory while Oracle was able to deliver 2 TB in the SPARC T5-8 server.

  • The SPARC T5-8 server requires only 8 rack units, the same as the space of the IBM Power 780. In this configuration IBM has a hardware core density of 4 cores per rack unit which contrasts with the 16 cores per rack unit for the SPARC T5-8 server. This again demonstrates why performance-per-core is a poor predictor of characteristics relevant to customers.

  • The virtualized SPARC T5 processor based server ran the application tier servers on 4 chips using Oracle Solaris Zones and the database tier in a 4-chip Oracle Solaris Zone. The virtualized IBM POWER7+ processor based server ran the application in a 4-chip LPAR and the database in a 4-chip LPAR.

  • The SPARC T5-8 server ran the Oracle Solaris 11.1 operating system and used Oracle Solaris Zones to consolidate eight Oracle WebLogic application server instances and one database server instance to achieve this result. The IBM system used LPARS and AIX V7.1.

  • This result demonstrated less than 1 second average response times for all SPECjEnterprise2010 transactions and represents JEE 5.0 transactions generated by 227,500 users.

  • The application server used Oracle Fusion Middleware components including the Oracle WebLogic 12.1 application server and Java HotSpot(TM) 64-Bit Server VM on Solaris, version 1.7.0_15. The database server was configured with Oracle Database 11g Release 2.

  • IBM has a non-virtualized result (one server for application and one server for database). The IBM PowerLinux 7R2 achieved 13,161.07 SPECjEnterprise2010 EjOPS which means it was 2.1x slower than the SPARC T5-8 server. The total configuration cost/performance (hardware+software) for the SPARC T5 processor based server was 11% better than the IBM POWER7+ processor based server. The cost/performance of the SPARC T5-8 server is $56.21 compared to the IBM PowerLinux 7R2 at $62.26. As shown by this comparison, performance-per-core is a poor predictor of characteristics relevant to customers.

Performance Landscape

Complete benchmark results are at the SPEC website, SPECjEnterprise2010 Results.

SPECjEnterprise2010 Performance Chart
Only Two Virtualized Results (App+DB on 1 Server) as of 5/1/2013
Submitter | EjOPS* | Java EE Server & DB Server
Oracle | 27,843.57 | 1 x SPARC T5-8 (8 chips, 128 cores, 3.6 GHz SPARC T5); Oracle WebLogic 12c (12.1.1); Oracle Database 11g (11.2.0.3)
IBM | 10,902.30 | 1 x IBM Power 780 (8 chips, 32 cores, 4.42 GHz POWER7+); WebSphere Application Server V8.5; IBM DB2 Universal Database 10.1

* SPECjEnterprise2010 EjOPS (bigger is better)

Configuration Summary

Oracle Summary

Application and Database Server:

1 x SPARC T5-8 server, with
8 x 3.6 GHz SPARC T5 processors
2 TB memory
5 x 10 GbE dual-port NIC
6 x 8 Gb dual-port HBA
Oracle Solaris 11.1 SRU 4.5
Oracle WebLogic Server 12c (12.1.1)
Java HotSpot(TM) 64-Bit Server VM on Solaris, version 1.7.0_15
Oracle Database 11g (11.2.0.3)

Storage Servers:

6 x Sun Server X3-2L (12-Drive), with
2 x 2.4 GHz Intel Xeon
16 GB memory
1 x 8 Gb FC HBA
4 x Sun Flash Accelerator F40 PCI-E Card
Oracle Solaris 11.1

2 x Sun Storage 2540-M2 Array
12 x 600 GB 15K RPM SAS HDD

Switch Hardware:

1 x Sun Network 10 GbE 72-port Top of Rack (ToR) Switch

IBM Summary

Application and Database Server:

1 x IBM Power 780 server, with
8 x 4.42 GHz POWER7+ processors
786 GB memory
6 x 10 GbE dual-port NIC
3 x 8 Gb four-port HBA
IBM AIX V7.1 TL2
IBM WebSphere Application Server V8.5
IBM J9 VM (build 2.6, JRE 1.7.0 IBM J9 AIX ppc-32)
IBM DB2 10.1
IBM InfoSphere Optim pureQuery Runtime v3.1.1

Storage:

2 x DS5324 Disk System with
48 x 146GB 15K E-DDM Disks

1 x v7000 Disk Controller with
16 x 400GB SSD Disks

Benchmark Description

SPECjEnterprise2010 is the third generation of the SPEC organization's J2EE end-to-end industry standard benchmark application. The new SPECjEnterprise2010 benchmark has been re-designed and developed to cover the Java EE 5 specification's significantly expanded and simplified programming model, highlighting the major features used by developers in the industry today. This provides a real world workload driving the Application Server's implementation of the Java EE specification to its maximum potential and allowing maximum stressing of the underlying hardware and software systems:
  • The web zone, servlets, and web services
  • The EJB zone
  • JPA 1.0 Persistence Model
  • JMS and Message Driven Beans
  • Transaction management
  • Database connectivity
Moreover, SPECjEnterprise2010 also heavily exercises all parts of the underlying infrastructure that make up the application environment, including hardware, JVM software, database software, JDBC drivers, and the system network.

The primary metric of the SPECjEnterprise2010 benchmark is jEnterprise Operations Per Second (SPECjEnterprise2010 EjOPS). The primary metric for the SPECjEnterprise2010 benchmark is calculated by adding the metrics of the Dealership Management Application in the Dealer Domain and the Manufacturing Application in the Manufacturing Domain. There is NO price/performance metric in this benchmark.

Key Points and Best Practices

  • Eight Oracle WebLogic server instances on the SPARC T5-8 server were hosted in 8 separate Oracle Solaris Zones to demonstrate consolidation of multiple application servers. The 8 zones were bound to 4 resource pools using 64 cores (4 cpu chips).
  • The database ran in a separate Oracle Solaris Zone bound to a resource pool consisting of 64 cores (4 cpu chips). The database shadow processes were run in the FX scheduling class and bound to one of four cpu chips using the plgrp command.
  • The Oracle WebLogic application servers were executed in the FX scheduling class to improve performance by reducing the frequency of context switches.
  • The Oracle log writer process was run in the FX scheduling class at processor priority 60 to use the Critical Thread feature.

See Also

Disclosure Statement

SPEC and the benchmark name SPECjEnterprise are registered trademarks of the Standard Performance Evaluation Corporation. Results from www.spec.org as of 5/1/2013. SPARC T5-8, 27,843.57 SPECjEnterprise2010 EjOPS; IBM Power 780, 10,902.30 SPECjEnterprise2010 EjOPS; IBM PowerLinux 7R2, 13,161.07 SPECjEnterprise2010 EjOPS. Oracle server only hardware list price is $298,494 and total hardware plus software list price is $1,565,092 from http://www.oracle.com as of  5/22/2013. IBM server only hardware list price is $804,931 and total hardware plus software cost of $2,174,152 based on public pricing from http://www.ibm.com as of 5/22/2013. IBM PowerLinux 7R2 server total hardware plus software cost of $819,451 based on public pricing from http://www.ibm.com as of 5/22/2013.

Tuesday Mar 26, 2013

SPARC T5-8 Produces TPC-C Benchmark Single-System World Record Performance

Oracle's SPARC T5-8 server equipped with eight 3.6 GHz SPARC T5 processors obtained a result of 8,552,523 tpmC on the TPC-C benchmark. This result is a world record for single servers. Oracle demonstrated this world record database performance running Oracle Database 11g Release 2 Enterprise Edition with Partitioning.

  • The SPARC T5-8 server delivered a single system TPC-C world record of 8,552,523 tpmC with a price performance of $0.55/tpmC using Oracle Database 11g Release 2. This configuration is available 09/25/13.

  • The SPARC T5-8 server delivers 2.8x better performance than the 4-processor IBM x3850 X5 system equipped with Intel Xeon processors.

  • The SPARC T5-8 server delivers 1.7x the performance compared to the next best eight processor result.

  • The SPARC T5-8 server delivers 2.4x the performance per chip compared to the IBM Power 780 3-node cluster result.

  • The SPARC T5-8 server delivers 1.8x the performance per chip compared to the IBM Power 780 non-clustered result.

  • The SPARC T5-8 server delivers 1.4x the performance per chip compared to the IBM Flex x240 Xeon result.

  • The SPARC T5-8 server delivers 1.7x the performance per chip compared to the Sun Server X2-8 system equipped with Intel Xeon processors.

  • In a separate IO-intensive workload, the SPARC T5-8 server demonstrated over 3.1 million 4 KB IOPS at 76% idle, showing its ability to process a large IO load with plenty of processing headroom.

  • This result shows that Oracle's integrated hardware and software stacks provide industry-leading performance.

  • The Oracle solution utilized Oracle Solaris 11.1 and Oracle Database 11g Enterprise Edition with Partitioning, demonstrating the stability and performance of this highly secure operating environment while producing the world record TPC-C benchmark result.

Performance Landscape

Select TPC-C results (sorted by tpmC, bigger is better)

System | p/c/t | tpmC | Price/tpmC | Avail | Database | Memory Size
IBM Power 780 Cluster | 24/192/768 | 10,366,254 | 1.38 USD | 10/13/2010 | IBM DB2 9.7 | 6 TB
SPARC T5-8 | 8/128/1024 | 8,552,523 | 0.55 USD | 9/25/2013 | Oracle 11g R2 | 4 TB
IBM Power 595 | 32/64/128 | 6,085,166 | 2.81 USD | 12/10/2008 | IBM DB2 9.5 | 4 TB
Sun Server X2-8 | 8/80/160 | 5,055,888 | 0.89 USD | 7/10/2012 | Oracle 11g R2 | 4 TB
IBM x3850 X5 | 4/40/80 | 3,014,684 | 0.59 USD | 7/11/2011 | IBM DB2 9.7 | 3 TB
IBM Flex x240 | 2/16/32 | 1,503,544 | 0.53 USD | 8/16/2012 | IBM DB2 9.7 | 768 GB
IBM Power 780 | 2/8/32 | 1,200,011 | 0.69 USD | 10/13/2010 | IBM DB2 9.5 | 512 GB

p/c/t - processors, cores, threads
Avail - availability date

Oracle and IBM TPC-C Response times

System | tpmC | New Order 90th% Response Time (sec) | New Order Average Response Time (sec)
IBM Power 780 Cluster | 10,366,254 | 2.100 | 1.137
SPARC T5-8 | 8,552,523 | 0.410 | 0.234
IBM Power 595 | 6,085,166 | 1.690 | 1.220
IBM Power 780 | 1,200,011 | 0.694 | 0.403

Oracle uses Response Time New Order Average and Response Time New Order 90th% for comparison between Oracle and IBM.

Graphs of Oracle's and IBM's Response Time New Order Average and Response Time New Order 90th% can be found in the full disclosure reports on TPC's website TPC-C Official Result Page.

Configuration Summary and Results

Hardware Configuration:

Server
SPARC T5-8
8 x 3.6 GHz SPARC T5
4 TB memory
2 x 600 GB 10K RPM SAS2 internal disks
12 x 8 Gbs FC HBA

Data Storage
54 x Sun Server X3-2L systems configured as COMSTAR heads, each with
2 x 2.4 GHz Intel Xeon E5-2609 processors
16 GB memory
4 x Sun Flash Accelerator F40 PCIe Cards (400 GB each)
12 x 3 TB 7.2K RPM 3.5" SAS disks
2 x 600 GB 10K RPM SAS2 disks
2 x Brocade 6510 switches

Redo Storage
2 x Sun Server X3-2L systems configured as COMSTAR heads, each with
2 x 2.4 GHz Intel Xeon E5-2609 processors
16 GB memory
12 x 3 TB 7.2K RPM 3.5" SAS disks
2 x 600 GB 10K RPM SAS2 disks

Clients
16 x Sun Server X3-2 servers, each with
2 x 2.9 GHz Intel Xeon E5-2690 processors
64 GB memory
2 x 600 GB 10K RPM SAS2 disks

Software Configuration:

Oracle Solaris 11.1 SRU 4.5 (for SPARC T5-8)
Oracle Solaris 11.1 (for COMSTAR systems)
Oracle Database 11g Release 2 Enterprise Edition with Partitioning
Oracle iPlanet Web Server 7.0 U5
Oracle Tuxedo CFS-R

Results:

System: SPARC T5-8
tpmC: 8,552,523
Price/tpmC: 0.55 USD
Available: 9/25/2013
Database: Oracle Database 11g
Cluster: no
Response Time New Order Average: 0.234 seconds

Benchmark Description

TPC-C is an OLTP system benchmark. It simulates a complete environment where a population of terminal operators executes transactions against a database. The benchmark is centered around the principal activities (transactions) of an order-entry environment. These transactions include entering and delivering orders, recording payments, checking the status of orders, and monitoring the level of stock at the warehouses.
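
Only completed New-Order transactions count toward tpmC, even though the simulated operators also execute Payment, Order-Status, Delivery, and Stock-Level transactions in a weighted mix. A toy Python sketch of that accounting (the mix reflects the commonly cited TPC-C specification minimums and is illustrative):

    import random

    # Approximate TPC-C mix: the four secondary transactions at their spec
    # minimums, with New-Order taking the remainder.
    MIX = {"new_order": 0.45, "payment": 0.43, "order_status": 0.04,
           "delivery": 0.04, "stock_level": 0.04}

    def toy_tpmc(total_txns: int, minutes: float) -> float:
        """Only completed New-Order transactions count toward tpmC."""
        kinds = random.choices(list(MIX), weights=list(MIX.values()), k=total_txns)
        return kinds.count("new_order") / minutes

    print(f"{toy_tpmc(1_000_000, 60.0):,.0f} tpmC (toy simulation)")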

Key Points and Best Practices

  • Oracle Database 11g Release 2 Enterprise Edition with Partitioning scales easily to this high level of performance.

  • COMSTAR (Common Multiprotocol SCSI Target) is the software framework that enables an Oracle Solaris host to serve as a SCSI target platform. COMSTAR uses a modular approach to break the large task of handling all the different pieces of a SCSI target subsystem into independent functional modules that are glued together by the SCSI Target Mode Framework (STMF). The modules implementing functionality at the SCSI level (disk, tape, medium changer, etc.) are not required to know about the underlying transport, and the modules implementing the transport protocol (FC, iSCSI, etc.) are not aware of the SCSI-level functionality of the packets they are transporting. The framework hides the details of allocation, execution context, and cleanup of SCSI commands and associated resources, which simplifies the task of writing SCSI or transport modules.

  • Oracle iPlanet Web Server middleware is used for the client tier of the benchmark. Each web server instance supports more than a quarter-million users while satisfying the response time requirement from the TPC-C benchmark.

See Also

Disclosure Statement

TPC Benchmark C, tpmC, and TPC-C are trademarks of the Transaction Processing Performance Council (TPC). SPARC T5-8 (8/128/1024) with Oracle Database 11g Release 2 Enterprise Edition with Partitioning, 8,552,523 tpmC, $0.55 USD/tpmC, available 9/25/2013. IBM Power 780 Cluster (24/192/768) with DB2 ESE 9.7, 10,366,254 tpmC, $1.38 USD/tpmC, available 10/13/2010. IBM x3850 X5 (4/40/80) with DB2 ESE 9.7, 3,014,684 tpmC, $0.59 USD/tpmC, available 7/11/2011. IBM x3850 X5 (4/32/64) with DB2 ESE 9.7, 2,308,099 tpmC, $0.60 USD/tpmC, available 5/20/2011. IBM Flex x240 (2/16/32) with DB2 ESE 9.7, 1,503,544 tpmC, $0.53 USD/tpmC, available 8/16/2012. IBM Power 780 (2/8/32) with IBM DB2 9.5, 1,200,011 tpmC, $0.69 USD/tpmC, available 10/13/2010. Source: http://www.tpc.org/tpcc, results as of 3/26/2013.

SPARC T5-8 Delivers SPECjEnterprise2010 Benchmark World Record Performance

Oracle produced a world record SPECjEnterprise2010 benchmark result of 57,422.17 SPECjEnterprise2010 EjOPS using Oracle's SPARC T5-8 server in the application tier and another SPARC T5-8 server for the database tier.

  • The SPARC T5-8 server demonstrated 3.4x better performance compared to the 8-socket IBM Power 780 server result of 16,646.34 SPECjEnterprise2010 EjOPS. The application-server hardware list cost of the SPARC T5-8 configuration is 3.7x lower than that of the IBM configuration.

  • The SPARC T5 processor demonstrated 1.7x better performance per core compared to the POWER7 processor used in the IBM Power 780 SPECjEnterprise2010 result.

  • The SPARC T5-8 server demonstrated 2.2x better performance compared to the Cisco UCS B440 M2 Blade Server result of 26,118.67 SPECjEnterprise2010 EjOPS.

  • The SPARC T5-8 servers used in the application and database tiers ran the Oracle Solaris 11.1 operating system.

  • The SPARC T5-8 server for the application tier used Oracle Solaris Zones to consolidate sixteen Oracle WebLogic Server instances to achieve this result.

  • This result demonstrated less than 1 second response time for all SPECjEnterprise2010 transactions while sustaining a load of Java EE 5 transactions equivalent to 468,000 users.

  • The SPARC T5-8 application server used Oracle Fusion Middleware components including the Oracle WebLogic 12.1 application server and Oracle JDK 7 Update 15. The SPARC T5-8 database server was configured with Oracle Database 11g Release 2.

  • This result used six Sun Server X3-2L systems each configured with 4 x 400 GB Sun Flash Accelerator F40 PCIe Card devices as storage servers for the database files.

  • This result represents the best performance/socket for a single system in the application tier of 7,177.77 SPECjEnterprise2010 EjOPS per socket.

  • A single SPARC T5-8 server in the application tier, producing 57,422.17 SPECjEnterprise2010 EjOPS, can replace four SPARC T4-4 servers that together obtained 40,104.86 SPECjEnterprise2010 EjOPS, or six SPARC T3-4 servers, each of which obtained 9,456.28 SPECjEnterprise2010 EjOPS.

  • Oracle Fusion Middleware provides a family of complete, integrated, hot pluggable and best-of-breed products known for enabling enterprise customers to create and run agile and intelligent business applications. Oracle WebLogic Server's on-going, record-setting Java application server performance demonstrates why so many customers rely on Oracle Fusion Middleware as their foundation for innovation.

Performance Landscape

Complete benchmark results are at the SPEC website, SPECjEnterprise2010 Results.

SPECjEnterprise2010 Performance Chart
as of 3/26/2013
Oracle: 57,422.17 EjOPS*
  Java EE Server: 1 x SPARC T5-8 (8 chips, 128 cores, 3.6 GHz SPARC T5), Oracle WebLogic 12c (12.1.1)
  DB Server: 1 x SPARC T5-8 (8 chips, 128 cores, 3.6 GHz SPARC T5), Oracle Database 11g (11.2.0.3)

Oracle: 40,104.86 EjOPS*
  Java EE Server: 4 x SPARC T4-4 (4 chips, 32 cores, 3.0 GHz SPARC T4), Oracle WebLogic 11g (10.3.5)
  DB Server: 2 x SPARC T4-4 (4 chips, 32 cores, 3.0 GHz SPARC T4), Oracle Database 11g (11.2.0.2)

Oracle: 27,150.05 EjOPS*
  Java EE Server: 1 x Sun Server X2-8 (8 x 2.4 GHz Intel Xeon E7-8870), Oracle WebLogic 12c
  DB Server: 1 x Sun Server X2-4 (4 x 2.4 GHz Intel Xeon E7-4870), Oracle Database 11g (11.2.0.2)

Cisco: 26,118.67 EjOPS*
  Java EE Server: 2 x Cisco UCS B440 M2 (4 chips, 40 cores, 2.4 GHz Xeon E7-4870), Oracle WebLogic 11g (10.3.5)
  DB Server: 1 x Cisco UCS C460 M2 (4 chips, 40 cores, 2.4 GHz Xeon E7-4870), Oracle Database 11g (11.2.0.2)

IBM: 16,646.34 EjOPS*
  Java EE Server: 1 x IBM Power 780 (8 chips, 64 cores, 3.86 GHz POWER7), WebSphere Application Server V7.0
  DB Server: 1 x IBM Power 750 Express (4 chips, 32 cores, 3.55 GHz POWER7), IBM DB2 Universal Database 9.7

* SPECjEnterprise2010 EjOPS (bigger is better)

Configuration Summary

Application Server:

1 x SPARC T5-8 server, with
8 x 3.6 GHz SPARC T5 processors
2 TB memory
8 x 10 GbE dual-port NIC
Oracle Solaris 11.1 SRU 4.5
Oracle WebLogic Server 12c (12.1.1)
Oracle JDK 7 Update 15

Database Server:

1 x SPARC T5-8 server, with
8 x 3.6 GHz SPARC T5 processors
2 TB memory
5 x 10 GbE dual-port NIC
6 x 8 Gb FC dual-port HBA
Oracle Solaris 11.1 SRU 4.5
Oracle Database 11g Enterprise Edition Release 11.2.0.3

Storage Servers:

6 x Sun Server X3-2L (12-Drive), with
2 x 2.4 GHz Intel Xeon
16 GB memory
1 x 8 Gb FC HBA
4 x Sun Flash Accelerator F40 PCI-E Card
Oracle Solaris 11.1

2 x Sun Storage 2540-M2 Array
12 x 600 GB 15K RPM SAS HDD

Switch Hardware:

1 x Sun Network 10 GbE 72-port Top of Rack (ToR) Switch

Benchmark Description

SPECjEnterprise2010 is the third generation of the SPEC organization's J2EE end-to-end industry standard benchmark application. The SPECjEnterprise2010 benchmark has been re-designed and developed to cover the Java EE 5 specification's significantly expanded and simplified programming model, highlighting the major features used by developers in the industry today. This provides a real-world workload that drives the application server's implementation of the Java EE specification to its maximum potential and allows maximum stressing of the underlying hardware and software systems. The benchmark exercises:
  • The web zone, servlets, and web services
  • The EJB zone
  • JPA 1.0 Persistence Model
  • JMS and Message Driven Beans
  • Transaction management
  • Database connectivity
Moreover, SPECjEnterprise2010 also heavily exercises all parts of the underlying infrastructure that make up the application environment, including hardware, JVM software, database software, JDBC drivers, and the system network.

The primary metric of the SPECjEnterprise2010 benchmark is jEnterprise Operations Per Second (SPECjEnterprise2010 EjOPS), calculated by adding the metrics of the Dealership Management Application in the Dealer Domain and the Manufacturing Application in the Manufacturing Domain. There is NO price/performance metric in this benchmark.

Key Points and Best Practices

  • Sixteen Oracle WebLogic server instances on the SPARC T5-8 server were hosted in 16 separate Oracle Solaris Zones to demonstrate consolidation of multiple application servers.
  • Each Oracle Solaris Zone was bound to a separate processor set, each containing a total of 58 hardware strands. This was done to improve performance by using the physical memory closest to the processors, reducing memory access latency. The default set was used for network and disk interrupt handling.
  • The Oracle WebLogic application servers were executed in the FX scheduling class to improve performance by reducing the frequency of context switches.
  • The Oracle database processes were run in 8 processor sets using psrset(1M) and executed in the FX scheduling class. This improved performance by reducing memory access latency and reducing context switches.
  • The Oracle log writer process was run in a separate processor set containing a single core and was run in the RT scheduling class. This ensured that the log writer had the most efficient use of CPU resources. (A command-level sketch of these processor set and scheduling class techniques follows this list.)
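
For readers who want to experiment with the same processor set and scheduling class techniques, the hedged sketch below drives the standard Solaris psrset(1M) and priocntl(1) utilities from Python. The CPU IDs and the PID are hypothetical placeholders, and the exact output text of psrset is assumed; consult the man pages before applying this to a live system.

```python
# Sketch: create a processor set, bind a process to it, and move it into
# the fixed-priority (FX) scheduling class, as described in the bullets
# above. CPU IDs and PID are placeholders; run as root on Oracle Solaris.
import re
import subprocess

def create_pset(cpus):
    """Create a processor set from a list of CPU IDs; return its set ID."""
    out = subprocess.run(["psrset", "-c"] + [str(c) for c in cpus],
                         capture_output=True, text=True, check=True)
    # psrset prints a message such as "created processor set 1" (assumed).
    return int(re.search(r"\d+", out.stdout).group())

def bind_and_set_fx(pset_id, pid):
    # Bind the process (and its LWPs) to the processor set ...
    subprocess.run(["psrset", "-b", str(pset_id), str(pid)], check=True)
    # ... and place it in the FX scheduling class to reduce context switches.
    subprocess.run(["priocntl", "-s", "-c", "FX", "-i", "pid", str(pid)],
                   check=True)

pset = create_pset(range(0, 58))   # 58 hardware strands, as in the result
bind_and_set_fx(pset, 12345)       # 12345: placeholder application PID
```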

See Also

Disclosure Statement

SPEC and the benchmark name SPECjEnterprise are registered trademarks of the Standard Performance Evaluation Corporation. Results from www.spec.org as of 3/26/2013. SPARC T5-8, 57,422.17 SPECjEnterprise2010 EjOPS; SPARC T4-4, 40,104.86 SPECjEnterprise2010 EjOPS; Sun Server X2-8, 27,150.05 SPECjEnterprise2010 EjOPS; Cisco UCS B440 M2, 26,118.67 SPECjEnterprise2010 EjOPS; IBM Power 780, 16,646.34 SPECjEnterprise2010 EjOPS. SPARC T3-4 9456.28 SPECjEnterprise2010 EjOPS.

SPARC T5-8 (SPARC T5-8 Server base package, 8x SPARC T5 16-core processors, 128x 16GB-1066 DIMMs, 2x 600GB 10K RPM 2.5-inch SAS-2 HDD, 4x Power Cables) List Price $268,742. IBM Power 780 (IBM Power 780: 9179 Model MHB, 8x 3.8GHz 16-core, 64x one processor activation, 4x CEC Enclosure with IBM Bezel, I/O Backplane and System Midplane, 16x 0/32GB DDR3 Memory (4x8GB) DIMMs-1066MHz Power7 CoD Memory, 12x Activation of 1 GB DDR3 Power7 Memory, 5x Activation of 100GB DDR3 Power7 Memory, 1x Disk/Media Backplane, 2x 146.8GB SAS 15K RPM 2.5-inch HDD (AIX/Linux only), 4x AC Power Supply 1725W) List Price $992,023. Source: Oracle.com and IBM.com, collected 03/18/2013.

SPARC T5 Systems Deliver SPEC CPU2006 Rate Benchmark Multiple World Records

Oracle's SPARC T5 processor based systems delivered world record performance on the SPEC CPU2006 rate benchmarks. This was accomplished with Oracle Solaris 11.1 and Oracle Solaris Studio 12.3 software.

SPARC T5-8

  • The SPARC T5-8 server delivered world record SPEC CPU2006 rate benchmark results for systems with eight processors.

  • The SPARC T5-8 server achieved scores of 3750 SPECint_rate2006, 3490 SPECint_rate_base2006, 3020 SPECfp_rate2006, and 2770 SPECfp_rate_base2006.

  • The SPARC T5-8 server beat the 8 processor IBM Power 760 with POWER7+ processors by 1.7x on the SPECint_rate2006 benchmark and 2.2x on the SPECfp_rate2006 benchmark.

  • The SPARC T5-8 server beat the 8 processor IBM Power 780 with POWER7 processors by 35% on the SPECint_rate2006 benchmark and 14% on the SPECfp_rate2006 benchmark.

  • The SPARC T5-8 server beat the 8 processor HP DL980 G7 with Intel Xeon E7-4870 processors by 1.7x on the SPECint_rate2006 benchmark and 2.1x on the SPECfp_rate2006 benchmark.

SPARC T5-1B

  • The SPARC T5-1B server module delivered world record SPEC CPU2006 rate benchmark results for systems with one processor.

  • The SPARC T5-1B server module achieved scores of 467 SPECint_rate2006, 436 SPECint_rate_base2006, 369 SPECfp_rate2006, and 350 SPECfp_rate_base2006.

  • The SPARC T5-1B server module beat the 1 processor IBM Power 710 Express with a POWER7 processor by 62% on the SPECint_rate2006 benchmark and 49% on the SPECfp_rate2006 benchmark.

  • The SPARC T5-1B server module beat the 1 processor NEC Express5800/R120d-1M with an Intel Xeon E5-2690 processor by 31% on the SPECint_rate2006 benchmark. The SPARC T5-1B server module beat the 1 processor Huawei RH2288 V2 with an Intel Xeon E5-2690 processor by 44% on the SPECfp_rate2006 benchmark.

  • The SPARC T5-1B server module beat the 1 processor Supermicro A+ 1012G-MTF with an AMD Opteron 6386 SE processor by 51% on the SPECint_rate2006 benchmark and 65% on the SPECfp_rate2006 benchmark.

Performance Landscape

Complete benchmark results are at the SPEC website, SPEC CPU2006 Results. The tables below provide the new Oracle results, as well as select results from other vendors.

SPEC CPU2006 Rate Results – Eight Processors
System Processor ch/co/th * Peak Base
SPECint_rate2006
SPARC T5-8 SPARC T5, 3.6 GHz 8/128/1024 3750 3490
IBM Power 780 POWER7, 3.92 GHz 8/64/256 2770 2420
HP DL980 G7 Xeon E7-4870, 2.4 GHz 8/80/160 2180 2070
IBM Power 760 POWER7+, 3.42 GHz 8/48/192 2170 1480
Dell PowerEdge C6145 Opteron 6180 SE, 2.5 GHz 8/96/96 1670 1440
SPECfp_rate2006
SPARC T5-8 SPARC T5, 3.6 GHz 8/128/1024 3020 2770
IBM Power 780 POWER7, 3.92 GHz 8/64/256 2640 2410
HP DL980 G7 Xeon E7-4870, 2.4 GHz 8/80/160 1430 1380
IBM Power 760 POWER7+, 3.42 GHz 8/48/192 1400 1130
Dell PowerEdge C6145 Opteron 6180 SE, 2.5 GHz 8/96/96 1310 1200

* ch/co/th — chips / cores / threads enabled

SPEC CPU2006 Rate Results – One Processor
System Processor ch/co/th * Peak Base
SPECint_rate2006
SPARC T5-1B SPARC T5, 3.6 GHz 1/16/128 467 436
NEC Express5800/R120d-1M Xeon E5-2690, 2.9 GHz 1/8/16 357 343
Supermicro A+ 1012G-MTF Opteron 6386 SE, 2.8 GHz 1/16/16 309 269
IBM Power 710 Express POWER7, 3.556 GHz 1/8/32 289 255
SPECfp_rate2006
SPARC T5-1B SPARC T5, 3.6 GHz 1/16/128 369 350
Huawei RH2288 V2 Xeon E5-2690, 2.9 GHz 1/8/16 257 250
IBM Power 710 Express POWER7, 3.556 GHz 1/8/32 248 229
Supermicro A+ 1012G-MTF Opteron 6386 SE, 2.8 GHz 1/16/16 223 199

* ch/co/th — chips / cores / threads enabled

Configuration Summary

Systems Under Test:

SPARC T5-8
8 x 3.6 GHz SPARC T5 processors
4 TB memory (128 x 32 GB dimms)
2 TB on 8 x 600 GB 10K RPM SAS disks, arranged as 4 x 2-way mirrors
Oracle Solaris 11.1 (SRU 4.6)
Oracle Solaris Studio 12.3 1/13 PSE

SPARC T5-1B
1 x 3.6 GHz SPARC T5 processor
256 GB memory (16 x 16 GB dimms)
157 GB on 2 x 300 GB 10K RPM SAS disks (mirrored)
Oracle Solaris 11.1 (SRU 3.4)
Oracle Solaris Studio 12.3 1/13 PSE

Benchmark Description

SPEC CPU2006 is SPEC's most popular benchmark. It measures:

  • Speed — single copy performance of chip, memory, compiler
  • Rate — multiple copy (throughput)

The benchmark is also divided into integer intensive applications and floating point intensive applications:

  • integer: 12 benchmarks derived from real applications such as perl, gcc, XML processing, and pathfinding
  • floating point: 17 benchmarks derived from real applications, including chemistry, physics, genetics, and weather.

It is also divided depending upon the amount of optimization allowed:

  • base: optimization is consistent per compiled language, all benchmarks must be compiled with the same flags per language.
  • peak: specific compiler optimization is allowed per application.

The overall metrics for the benchmark which are commonly used are:

  • SPECint_rate2006, SPECint_rate_base2006: integer, rate
  • SPECfp_rate2006, SPECfp_rate_base2006: floating point, rate
  • SPECint2006, SPECint_base2006: integer, speed
  • SPECfp2006, SPECfp_base2006: floating point, speed

See Also

Disclosure Statement

SPEC and the benchmark names SPECfp and SPECint are registered trademarks of the Standard Performance Evaluation Corporation. Results as of March 26, 2013 from www.spec.org and this report. SPARC T5-8: 3750 SPECint_rate2006, 3490 SPECint_rate_base2006, 3020 SPECfp_rate2006, 2770 SPECfp_rate_base2006; SPARC T5-1B: 467 SPECint_rate2006, 436 SPECint_rate_base2006, 369 SPECfp_rate2006, 350 SPECfp_rate_base2006.

SPARC T5-2 Scores Siebel CRM Benchmark World Record

Oracle set a new world record for the Siebel Platform Sizing and Performance Program (PSPP) benchmark using Oracle's SPARC T5-2 servers for the application server with Oracle's Siebel CRM 8.1.1.4 Industry Applications and Oracle Database 11g Release 2 running on Oracle Solaris.

  • The SPARC T5-2 servers running the application tier achieved 40,000 users with sub-second response time and with throughput of 333,339 business transactions per hour on the Siebel PSPP benchmark.

  • The SPARC T5-2 servers in the application tier delivered 2 times better performance on a per chip basis compared to earlier published SPARC T4 numbers.

  • The Siebel 8.1.1.4 PSPP workload includes Siebel Call Center and Order Management System.

  • The SPARC T5-2 server used Oracle Solaris Zones which provide flexible, scalable and manageable virtualization to scale the application within and across multiple nodes.

Performance Landscape

Application Server Transactions/hour Users Users/Core Call Center Resp (sec) Order Mgmt Resp (sec)
2 x SPARC T5-2 (2 x SPARC T5 3.6 GHz) 333,339 40,000 625 0.110 0.608
3 x SPARC T4-2 (2 x SPARC T4 2.85 GHz) 239,748 29,000 604 0.165 0.925
2 x IBM Power 750 (POWER7 3.55 GHz, 16 active cores) 176,185 21,000 656 0.052 0.250

Oracle:
Call Center + Order Management
Transactions: 273,786 + 59,553
Users: 28,000 + 12,000

IBM:
Call Center + Order Management
Transactions: 144,457 + 31,728
Users: 14,700 + 6,300

Configuration Summary

Application Server Configuration:

2 x SPARC T5-2 servers, each with
2 x SPARC T5 processors, 3.6 GHz
512 GB memory
6 x 300 GB SAS internal disks
Oracle Solaris 10 8/11
Siebel CRM 8.1.1.4 SIA

Web Server Configuration:

1 x SPARC T4-1 server
1 x SPARC T4 processor, 2.85 GHz
128 GB memory
Oracle Solaris 10 8/11
iPlanet Web Server 7

Database Server Configuration:

1 x SPARC T4-2 server
2 x SPARC T4 processors, 2.85 GHz
256 GB memory
Flash Storage
Oracle Solaris 10 8/11
Oracle Database 11g Release 2 (11.2.0.2)

Benchmark Description

Siebel PSPP benchmark includes Call Center and Order Management:

  • Siebel Financial Services Call Center – Provides the most complete solution for sales and service, allowing customer service and telesales representatives to provide superior customer support, improve customer loyalty, and increase revenues through cross-selling and up-selling.

    High-level description of the use cases tested: Incoming Call Creates Opportunity, Quote and Order, and Incoming Call Creates Service Request. Three complex business transactions are executed simultaneously for a specific number of concurrent users. The ratios of these 3 scenarios were 30%, 40%, and 30% respectively, together totaling 70% of all transactions simulated in this benchmark. Between each user operation and the next, the think time averaged approximately 10, 13, and 35 seconds respectively.

  • Siebel Order Management – Oracle's Siebel Order Management allows employees such as salespeople and call center agents to create and manage quotes and orders through their entire life cycle. Siebel Order Management can be tightly integrated with back-office applications allowing users to perform tasks such as checking credit, confirming availability, and monitoring the fulfillment process.

    High-level description of the use cases tested: Order & Order Items Creation and Order Updates. Two complex Order Management transactions were executed simultaneously for a specific number of concurrent users, concurrently with the three Call Center scenarios above. The ratio of these 2 scenarios was 50% each, together totaling 30% of all transactions simulated in this benchmark. Between each user operation and the next, the think time averaged approximately 20 and 67 seconds respectively.

Key Points and Best Practices

  • No processor cores or cache were activated or deactivated on the SPARC T-Series systems to achieve special benchmark effects.

See Also

Disclosure Statement

Copyright 2013, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 26 March 2013.

SPARC T5-8 Delivers Oracle OLAP World Record Performance

Oracle's SPARC T5-8 server delivered world record query performance with near real-time analytic capability using the Oracle OLAP Perf Version 3 workload running Oracle Database 11g Release 2 on Oracle Solaris 11.

  • The maximum query throughput on the SPARC T5-8 server is 1.6x higher than that of the 8-chip Intel Xeon E7-8870 server. Both systems had sub-second response time.

  • The SPARC T5-8 server with the Oracle Database demonstrated the ability to support at least 600 concurrent users querying OLAP cubes (with no think time), processing 2.93 million analytic queries per hour with an average response time of 0.66 seconds per query. This performance was enabled by keeping the entire cube in-memory utilizing the 4 TB of memory on the SPARC T5-8 server.

  • Assuming a 60 second think time between query requests, the SPARC T5-8 server can support approximately 49,450 concurrent users with the same 0.66 sec response time.

  • The SPARC T5-8 server delivered 4.3 times the maximum query throughput of a SPARC T4-4 server.

  • The workload uses a set of realistic BI queries that run against an OLAP cube based on a 4 billion row fact table of sales data. The 4 billion rows are partitioned by month spanning 10 years.

  • The combination of the Oracle Database with the Oracle OLAP option running on a SPARC T5-8 server supports live data updates occurring concurrently with user queries, with minimal impact on query execution.

Performance Landscape

Oracle OLAP Perf Version 3 Benchmark
Oracle cube based on 4 billion fact table rows
10 years of data partitioned by month
System Queries/hour Users (0 sec think time) Users (60 sec think time) Average Response Time (sec)
SPARC T5-8 2,934,000 600 49,450 0.66
8-chip Intel Xeon E7-8870 1,823,000 120 30,500 0.19
SPARC T4-4 686,500 150 11,580 0.71

Configuration Summary and Results

SPARC T5-8 Hardware Configuration:

1 x SPARC T5-8 server with
8 x SPARC T5 processors, 3.6 GHz
4 TB memory
Data Storage and Redo Storage
1 x Sun Storage F5100 Flash Array (with 80 FMODs)
Oracle Solaris 11.1
Oracle Database 11g Release 2 (11.2.0.3) with Oracle OLAP option

Sun Server X2-8 Hardware Configuration:

1 x Sun Server X2-8 with
8 x Intel Xeon E7-8870 processors, 2.4 GHz
512 GB memory
Data Storage and Redo Storage
3 x StorageTek 2540/2501 array pairs
Oracle Solaris 10 10/12
Oracle Database 11g Release 2 (11.2.0.2) with Oracle OLAP option

SPARC T4-4 Hardware Configuration:

1 x SPARC T4-4 server with
4 x SPARC T4 processors, 3.0 GHz
1 TB memory
Data Storage
1 x Sun Fire X4275 (using COMSTAR)
2 x Sun Storage F5100 Flash Array (each with 80 FMODs)
Redo Storage
1 x Sun Fire X4275 (using COMSTAR with 8 HDD)
Oracle Solaris 11 11/11
Oracle Database 11g Release 2 (11.2.0.3) with Oracle OLAP option

Benchmark Description

The Oracle OLAP Perf Version 3 benchmark is a workload designed to demonstrate and stress the ability of the OLAP Option to deliver fast query, near real-time updates, and rich calculations using a multi-dimensional model in the context of Oracle data warehousing.

The bulk of the benchmark entails running a number of concurrent users, each issuing typical multidimensional queries against an Oracle cube. The cube has four dimensions: time, product, customer, and channel. Each query user issues approximately 150 different queries. One query chain may ask for total sales in a particular region (e.g. South America) for a particular time period (e.g. Q4 of 2010), followed by additional queries which drill down into sales for individual countries (e.g. Chile, Peru, etc.), with further queries drilling down into individual stores, etc. Another query chain may ask for yearly comparisons of total sales for some product category (e.g. major household appliances) and then issue further queries drilling down into particular products (e.g. refrigerators, stoves, etc.), particular regions, particular customers, etc.

While the core of every OLAP Perf benchmark is real world query performance, the benchmark itself offers numerous execution options such as varying data set sizes, number of users, numbers of queries for any given user and cube update frequency. Version 3 of the benchmark is executed with a much larger number of query streams than previous versions and used a cube designed for near real-time updates. The results produced by version 3 of the benchmark are not directly comparable to results produced by previous versions of the benchmark.

The near real-time update capability is implemented along the following lines. A large Oracle cube, H, is built from a 4 billion row star schema, containing data up until the end of last business day. A second small cube, D, is then created which will contain all of today's new data coming in from outside the world. It will be updated every L minutes with the data coming in within the last L minutes. A third cube, R, joins cubes H and D for reporting purposes much like a view might join data from two tables. Calculations are installed into cube R. The use of a reporting cube which draws data from different storage cubes is a common practice.

Query users are never locked out of query operations while new data is added to the update cube. The point of the demonstration is to show that an Oracle OLAP system can be designed which results in data being no more than L minutes out of date, where L may be as low as just a few minutes. This is what is meant by near real-time analytics.
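
As a rough illustration of this H/D/R arrangement, the sketch below models the reporting cube as a union over a large historical store and a small delta store refreshed on a short cycle. It is a data-structure analogy only, not Oracle OLAP code; all names and the refresh mechanics are invented for illustration.

```python
# Toy model of the H (historical), D (delta), R (reporting) cube design.
# Not Oracle OLAP code; a structural analogy only.
class Cube:
    def __init__(self):
        self.rows = []
    def load(self, rows):
        self.rows.extend(rows)
    def total_sales(self):
        return sum(r["sales"] for r in self.rows)

H = Cube()                      # built once from the 4-billion-row star schema
D = Cube()                      # refreshed every L minutes with new facts

def refresh_delta(new_rows):
    """Runs every L minutes; query users are never locked out by this."""
    D.load(new_rows)

class ReportingCube:
    """R: joins H and D for queries, much like a view over two tables."""
    def total_sales(self):
        return H.total_sales() + D.total_sales()

R = ReportingCube()
H.load([{"sales": 100}, {"sales": 250}])
refresh_delta([{"sales": 7}])   # today's data, at most L minutes old
print(R.total_sales())          # 357 -- includes the near real-time rows
```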

Key Points and Best Practices

  • Update performance of the D cube was optimized by running update processes in the FX class with a priority greater than 0. The maximum lag time between updates to the source fact table and data availability to query users (what was referred to as L in the benchmark description) was less than 3 minutes for the benchmark environment on the SPARC T5-8 server.

  • Building and querying cubes with the Oracle OLAP option requires a large temporary tablespace. Normally temporary tablespaces would reside on disk storage. However, because the SPARC T5-8 server used in this benchmark had 4 TB of main memory, it was possible to use main memory for the OLAP temporary tablespace. This was done by using files in /tmp for the temporary tablespace datafiles.

  • Since typical BI users are often likely to issue similar queries, either with the same or different constants in the where clauses, setting the init.ora parameter "cursor_sharing" to "force" provides additional query throughput and supports a larger number of potential users.

  • Assuming the normal Oracle initialization parameters (e.g. SGA, PGA, processes, etc.) are appropriately set, out-of-the-box performance for the OLAP Perf workload should be close to what is reported here. Additional performance resulted from (a) using memory for the OLAP temporary tablespace and (b) setting "cursor_sharing" to force.

  • For a given number of query users with zero think time, the main measured metrics are the average query response time and the query throughput. A derived metric is the maximum number of users the system can support, with the same response time, assuming some non-zero think time. The calculation of this maximum is from the well-known "response-time law"

      N = (rt + tt) * tp

    where rt is the average response time, tt is the think time and tp is the measured throughput.

    Setting tt to 60 seconds, rt to 0.66 seconds and tp to 815 queries/sec (2,934,000 queries/hour), the above formula shows that the SPARC T5-8 server will support 49,450 concurrent users with a think time of 60 seconds and an average response time of 0.66 seconds.

    For more information about the "response-time law" see chapter 3 from the book "Quantitative System Performance" cited below.
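
The arithmetic is easy to verify. Below is a minimal Python check, using only numbers stated above; it is not part of the original report.

```python
# Response-time law: N = (rt + tt) * tp
# From the text: rt = 0.66 s, tt = 60 s, tp = 2,934,000 queries/hour.
tp = 2_934_000 / 3600        # ~815 queries/sec
n = (0.66 + 60) * tp
print(round(n))              # ~49,440, matching the ~49,450 quoted above
```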

See Also

Disclosure Statement

Copyright 2013, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 03/26/2013.

SPARC T5-2 Obtains Oracle Internet Directory Benchmark World Record Performance

Oracle's SPARC T5-2 server running Oracle Internet Directory (OID, Oracle's LDAP Directory Server) on Oracle Solaris 11 achieved a record result for LDAP searches/second with 1000 clients.

  • The SPARC T5-2 server running Oracle Internet Directory on Oracle Solaris 11 achieved a result of 944,624 LDAP searches/sec with an average latency of 1.05 ms with 1000 clients.

  • The SPARC T5-2 server running Oracle Internet Directory demonstrated 2.7x better throughput and 39% better latency than a similarly configured OID environment on a SPARC T4 server.

  • The SPARC T5-2 server running Oracle Internet Directory demonstrated 39% better throughput and latency for LDAP searches in a core-to-core comparison against an x86 system configured with two Intel Xeon X5675 processors.

  • Oracle Internet Directory achieved near-linear scaling on the SPARC T5-2 server, from 68,399 LDAP searches/sec with 2 cores to 944,624 LDAP searches/sec with 32 cores.

  • Oracle Internet Directory on the SPARC T5-2 server achieved up to 12,453 LDAP modify operations/sec with an average latency of 3.9 msec for 50 clients.

Performance Landscape

Oracle Internet Directory Tests
System c/c/th Search Modify Add
ops/sec lat (msec) ops/sec lat (msec) ops/sec lat (msec)
SPARC T5-2 2/32/256 944,624 1.05 12,453 3.9 888 17.9
SPARC T4-4 4/32/256 682,000 1.46 12,000 4.0 835 19.0

In order to compare the SPARC T5-2 to a 12-core x86 system, only 1 processor and 12 cores were used in the SPARC T5-2.

Oracle Internet Directory Tests – Comparing Against x86
System c/c/th Search Compare Authentication
ops/sec lat (msec) ops/sec lat (msec) ops/sec lat (msec)
SPARC T5-2 1/12/96 417,000 1.19 274,185 1.82 149,623 3.30
x86 2 x Intel X5675 2/12/24 299,000 1.66 202,433 2.46 119,198 4.19

Scaling runs were also made on the SPARC T5-2 server; a quick check of the scaling efficiency follows the table.

Scaling of Search Tests – SPARC T5-2
Cores Clients ops/sec Latency (msec)
32 1000 944,624 1.05
24 1000 823,741 1.21
16 500 560,709 0.88
8 500 270,601 1.84
4 100 145,879 0.68
2 100 68,399 1.46
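
The near-linear scaling claim can be sanity-checked directly from this table. The short Python computation below, using only the table's numbers, estimates the 2-to-32-core scaling efficiency.

```python
# Throughput scaling from 2 cores to 32 cores (numbers from the table above).
speedup = 944_624 / 68_399    # ~13.8x more throughput
ideal = 32 / 2                # 16x more cores
print(f"{speedup:.1f}x speedup, {speedup / ideal:.0%} of linear")  # ~86%
```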

Configuration Summary

System Under Test:

SPARC T5-2
2 x SPARC T5 processors, 3.6 GHz
512 GB memory
4 x 300 GB internal disks
Flash Storage (used for database and log files)
1 x Sun Storage 2540-M2 (used for redo logs)
Oracle Solaris 11.1
Oracle Internet Directory 11g Release 1 PS6 (11.1.1.7.0)
Oracle Database 11g Enterprise Edition 11.2.0.3 (64-bit)

Benchmark Description

Oracle Internet Directory (OID) is Oracle's LDAPv3 Directory Server. The throughput for five key operations is measured: Search, Compare, Modify, Mix, and Add.

LDAP Search Operations Test

This test scenario involved concurrent clients binding once to OID and then performing repeated LDAP Search operations. The salient characteristics of this test scenario are as follows:

  • SLAMD SearchRate job was used.
  • The BaseDN of the search is the root of the DIT, the scope is SUBTREE, the search filter is of the form UID=, and DN and UID are the requested attributes.
  • Each LDAP search operation matches a single entry.
  • The total number of concurrent clients was 1000, distributed across two client nodes.
  • Each client binds to OID once and performs repeated LDAP Search operations, each search resulting in the lookup of a unique entry, such that no client looks up the same entry twice, no two clients look up the same entry, and all entries are searched in random order.
  • In one run of the test, random entries from the 50 million entries are looked up in as many LDAP Search operations. (A minimal client sketch in the spirit of this test follows the list.)
  • Test job was run for 60 minutes.
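
A minimal client in the spirit of the SearchRate job is sketched below using the open-source ldap3 Python library. The host, credentials, and base DN are hypothetical placeholders; the benchmark itself used the SLAMD load generator, not this script.

```python
# Minimal SearchRate-style client sketch (ldap3 library).
# Host, bind DN, password, and base DN are hypothetical placeholders.
from ldap3 import Server, Connection, SUBTREE

server = Server("oid-host.example.com", port=389)
conn = Connection(server, user="cn=orcladmin", password="secret",
                  auto_bind=True)            # bind once, as in the test

for i in range(1000):                        # repeated searches on one bind
    conn.search(search_base="dc=example,dc=com",   # root of the DIT
                search_filter=f"(uid=user{i})",    # matches a single entry
                search_scope=SUBTREE,
                attributes=["uid"])          # the DN is always returned
conn.unbind()
```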

LDAP Compare Operations Test

This test scenario involved concurrent clients binding once to OID and then performing repeated LDAP Compare operations on the userpassword attribute. The salient characteristics of this test scenario are as follows:

  • SLAMD CompareRate job was used.
  • Each LDAP Compare operation matches the user password of a user.
  • The total number of concurrent clients was 1000, distributed across two client nodes.
  • Each client binds to OID once and performs repeated LDAP compare operations.
  • In one run of the test, random entries from the 50 million entries are compared in as many LDAP Compare operations.
  • Test job was run for 60 minutes.

LDAP Modify Operations Test

This test scenario consisted of concurrent clients binding once to OID and then performing repeated LDAP Modify operations. The salient characteristics of this test scenario are as follows:

  • SLAMD LDAP modrate job was used.
  • A total of 50 concurrent LDAP clients were used.
  • Each client updates a unique entry each time, and a total of 50 million entries are updated.
  • Test job was run for 60 minutes.
  • Value length was set to 11.
  • Attribute that is being modified is not indexed.

LDAP Mixed Load Test

The test scenario involved both the LDAP search and LDAP modify clients enumerated above.

  • The ratio involved 60% LDAP search clients, 30% LDAP bind clients, and 10% LDAP modify clients.
  • A total of 1000 concurrent LDAP clients were used and were distributed on 2 client nodes.
  • Test job was run for 60 minutes.

LDAP Add Load Test

The test scenario involved concurrent clients adding new entries as follows.

  • The SLAMD standard AddRate job was used.
  • A total of 500,000 entries were added.
  • A total of 16 concurrent LDAP clients were used.
  • SLAMD adds inetOrgPerson objectclass entries with 21 attributes (including operational attributes).

See Also

Disclosure Statement

Copyright 2013, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 26 March 2013.

Thursday Nov 08, 2012

SPARC T4-4 Delivers World Record Performance on Oracle OLAP Perf Version 2 Benchmark

Oracle's SPARC T4-4 server delivered world record performance with subsecond response time on the Oracle OLAP Perf Version 2 benchmark using Oracle Database 11g Release 2 running on Oracle Solaris 11.

  • The SPARC T4-4 server achieved throughput of 430,000 cube-queries/hour with an average response time of 0.85 seconds and the median response time of 0.43 seconds. This was achieved by using only 60% of the available CPU resources leaving plenty of headroom for future growth.

Performance Landscape

Oracle OLAP Perf Version 2 Benchmark
4 Billion Fact Table Rows
System Queries/hour Users* Average Response Time (sec) Median Response Time (sec)
SPARC T4-4 430,000 7,300 0.85 0.43

* Users - the supported number of users with a given think time of 60 seconds

Configuration Summary and Results

Hardware Configuration:

SPARC T4-4 server with
4 x SPARC T4 processors, 3.0 GHz
1 TB memory
Data Storage
1 x Sun Fire X4275 (using COMSTAR)
2 x Sun Storage F5100 Flash Array (each with 80 FMODs)
Redo Storage
1 x Sun Fire X4275 (using COMSTAR with 8 HDD)

Software Configuration:

Oracle Solaris 11 11/11
Oracle Database 11g Release 2 (11.2.0.3) with Oracle OLAP option

Benchmark Description

The Oracle OLAP Perf Version 2 benchmark is a workload designed to demonstrate and stress the Oracle OLAP product's core features of fast query, fast update, and rich calculations on a multi-dimensional model to support enhanced Data Warehousing.

The bulk of the benchmark entails running a number of concurrent users, each issuing typical multidimensional queries against an Oracle OLAP cube. The cube has four dimensions: time, product, customer, and channel. Each query user issues approximately 150 different queries. One query chain may ask for total sales in a particular region (e.g. South America) for a particular time period (e.g. Q4 of 2010), followed by additional queries which drill down into sales for individual countries (e.g. Chile, Peru, etc.), with further queries drilling down into individual stores, etc. Another query chain may ask for yearly comparisons of total sales for some product category (e.g. major household appliances) and then issue further queries drilling down into particular products (e.g. refrigerators, stoves, etc.), particular regions, particular customers, etc.

Results from version 2 of the benchmark are not comparable with version 1. The primary difference is the type of queries along with the query mix.

Key Points and Best Practices

  • Since typical BI users are often likely to issue similar queries, with different constants in the where clauses, setting the init.ora parameter "cursor_sharing" to "force" will provide additional query throughput and a larger number of potential users. Except for this setting, together with making full use of available memory, out-of-the-box performance for the OLAP Perf workload should provide results similar to what is reported here.

  • For a given number of query users with zero think time, the main measured metrics are the average query response time, the median query response time, and the query throughput. A derived metric is the maximum number of users the system can support, achieving the measured response time, assuming some non-zero think time. The calculation of the maximum number of users follows from the well-known response-time law

      N = (rt + tt) * tp

    where rt is the average response time, tt is the think time and tp is the measured throughput.

    Setting tt to 60 seconds, rt to 0.85 seconds and tp to 119.44 queries/sec (430,000 queries/hour), the above formula shows that the T4-4 server will support 7,300 concurrent users with a think time of 60 seconds and an average response time of 0.85 seconds.

    For more information see chapter 3 from the book "Quantitative System Performance" cited below.
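
As with the T5-8 OLAP result earlier, a two-line Python check (numbers from the text) reproduces the figure.

```python
# Response-time law: N = (rt + tt) * tp, with tp = 430,000 queries/hour.
tp = 430_000 / 3600                 # ~119.44 queries/sec
print(round((0.85 + 60) * tp))      # ~7,270, in line with the 7,300 quoted above
```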

See Also

Disclosure Statement

Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 11/2/2012.

Tuesday Oct 02, 2012

Performance of Oracle Business Intelligence Benchmark on SPARC T4-4

Oracle's SPARC T4-4 server configured with four SPARC T4 3.0 GHz processors delivered 25,000 concurrent users on the Oracle Business Intelligence Enterprise Edition (BI EE) 11g benchmark using Oracle Database 11g Release 2 running on Oracle Solaris 10.

  • A SPARC T4-4 server running Oracle Business Intelligence Enterprise Edition 11g achieved 25,000 concurrent users with an average response time of 0.36 seconds with Oracle BI server cache set to ON.

  • The benchmark data clearly shows that the underlying hardware, the SPARC T4 server, and the Oracle BI EE 11g (11.1.1.6.0 64-bit) platform scale within a single system, supporting 25,000 concurrent users while executing 415 transactions/sec.

  • The benchmark demonstrated the scalability of Oracle Business Intelligence Enterprise Edition 11g 11.1.1.6.0, which was deployed in a vertical scale-out fashion on a single SPARC T4-4 server.

  • Oracle Internet Directory configured on SPARC T4 server provided authentication for the 25,000 Oracle BI EE users with sub-second response time.

  • A SPARC T4-4 with internal Solid State Drive (SSD) using the ZFS file system showed significant I/O performance improvement over traditional disk for the Web Catalog activity. In addition, ZFS helped get past the UFS limitation of 32767 sub-directories in a Web Catalog directory.

  • The multi-threaded 64-bit Oracle Business Intelligence Enterprise Edition 11g and SPARC T4-4 server proved to be a successful combination by providing sub-second response times for the end user transactions, consuming only half of the available CPU resources at 25,000 concurrent users, leaving plenty of head room for increased load.

  • The Oracle Business Intelligence on SPARC T4-4 server benchmark results demonstrate that comprehensive BI functionality built on a unified infrastructure with a unified business model yields best-in-class scalability, reliability and performance.

  • Oracle BI EE 11g is a newer version of the Business Intelligence Suite with richer and superior functionality. Results produced with the Oracle BI EE 11g benchmark are not comparable to results with the Oracle BI EE 10g benchmark. Oracle BI EE 11g is a more difficult benchmark to run, exercising more features of Oracle BI.

Performance Landscape

Results for the Oracle BI EE 11g version of the benchmark. Results are not comparable to the Oracle BI EE 10g version of the benchmark.

Oracle BI EE 11g Benchmark
System Number of Users Response Time (sec)
1 x SPARC T4-4 (4 x SPARC T4 3.0 GHz) 25,000 0.36

Results for the Oracle BI EE 10g version of the benchmark. Results are not comparable to the Oracle BI EE 11g version of the benchmark.

Oracle BI EE 10g Benchmark
System Number of Users
2 x SPARC T5440 (4 x SPARC T2+ 1.6 GHz) 50,000
1 x SPARC T5440 (4 x SPARC T2+ 1.6 GHz) 28,000

Configuration Summary

Hardware Configuration:

SPARC T4-4 server
4 x SPARC T4 processors, 3.0 GHz
128 GB memory
4 x 300 GB internal SSD

Storage Configuration:

Sun ZFS Storage 7120
16 x 146 GB disks

Software Configuration:

Oracle Solaris 10 8/11
Oracle Solaris Studio 12.1
Oracle Business Intelligence Enterprise Edition 11g (11.1.1.6.0)
Oracle WebLogic Server 10.3.5
Oracle Internet Directory 11.1.1.6.0
Oracle Database 11g Release 2

Benchmark Description

Oracle Business Intelligence Enterprise Edition (Oracle BI EE) delivers a robust set of reporting, ad-hoc query and analysis, OLAP, dashboard, and scorecard functionality with a rich end-user experience that includes visualization, collaboration, and more.

The Oracle BI EE benchmark test used five different business user roles - Marketing Executive, Sales Representative, Sales Manager, Sales Vice-President, and Service Manager. These roles included a maximum of 5 different pre-built dashboards. Each dashboard page had an average of 5 reports in the form of a mix of charts, tables and pivot tables, returning anywhere from 50 rows to approximately 500 rows of aggregated data. The test scenario also included drill-down into multiple levels from a table or chart within a dashboard.

The benchmark test scenario uses a typical business user sequence of dashboard navigation, report viewing, and drill down. For example, a Service Manager logs into the system and navigates to his own set of dashboards using the Service Manager role. The BI user selects the Service Effectiveness dashboard, which shows him four distinct reports: Service Request Trend, First Time Fix Rate, Activity Problem Areas, and Cost Per Completed Service Call, spanning 2002 to 2005. The user then proceeds to view the Customer Satisfaction dashboard, which also contains a set of 4 related reports, and drills down on some of the reports to see the detail data. The BI user continues to view more dashboards, for example Customer Satisfaction and Service Request Overview. After navigating through those dashboards, the user logs out of the application. The benchmark test is executed against a full production version of the Oracle Business Intelligence 11g Applications with a fully populated underlying database schema. The business processes in the test scenario closely represent a real-world customer scenario.

See Also

Disclosure Statement

Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 30 September 2012.

World Record Siebel PSPP Benchmark on SPARC T4 Servers

Oracle's SPARC T4 servers set a new World Record for Oracle's Siebel Platform Sizing and Performance Program (PSPP) benchmark suite. The result used Oracle's Siebel Customer Relationship Management (CRM) Industry Applications Release 8.1.1.4 and Oracle Database 11g Release 2 running Oracle Solaris on three SPARC T4-2 and two SPARC T4-1 servers.

  • Running the Siebel PSPP 8.1.1.4 workload, which includes Siebel Call Center and Order Management, the SPARC T4 servers demonstrated the impressive throughput of the SPARC T4 processor by achieving 29,000 users.

  • This is the first Siebel PSPP 8.1.1.4 benchmark supporting 29,000 concurrent users with a rate of 239,748 Business Transactions/hour.

  • The benchmark demonstrates vertical and horizontal scalability of Siebel CRM Release 8.1.1.4 on SPARC T4 servers.

Performance Landscape

Systems: 1 x SPARC T4-1 (1 x SPARC T4 2.85 GHz) – Web; 3 x SPARC T4-2 (2 x SPARC T4 2.85 GHz) – App/Gateway; 1 x SPARC T4-1 (1 x SPARC T4 2.85 GHz) – DB
Txn/hr: 239,748   Users: 29,000   Response Times (sec): Call Center 0.165, Order Management 0.925

Oracle:
Call Center + Order Management
Transactions: 197,128 + 42,620
Users: 20,300 + 8,700

Configuration Summary

Web Server Configuration:

1 x SPARC T4-1 server
1 x SPARC T4 processor, 2.85 GHz
128 GB memory
Oracle Solaris 10 8/11
iPlanet Web Server 7

Application Server Configuration:

3 x SPARC T4-2 servers, each with
2 x SPARC T4 processors, 2.85 GHz
256 GB memory
3 x 300 GB SAS internal disks
Oracle Solaris 10 8/11
Siebel CRM 8.1.1.4 SIA

Database Server Configuration:

1 x SPARC T4-1 server
1 x SPARC T4 processor, 2.85 GHz
128 GB memory
Oracle Solaris 11 11/11
Oracle Database 11g Release 2 (11.2.0.2)

Storage Configuration:

1 x Sun Storage F5100 Flash Array
80 x 24 GB flash modules

Benchmark Description

Siebel 8.1 PSPP benchmark includes Call Center and Order Management:

  • Siebel Financial Services Call Center – Provides the most complete solution for sales and service, allowing customer service and telesales representatives to provide superior customer support, improve customer loyalty, and increase revenues through cross-selling and up-selling.

    High-level description of the use cases tested: Incoming Call Creates Opportunity, Quote and Order, and Incoming Call Creates Service Request. Three complex business transactions are executed simultaneously for a specific number of concurrent users. The ratios of these 3 scenarios were 30%, 40%, and 30% respectively, together totaling 70% of all transactions simulated in this benchmark. Between each user operation and the next, the think time averaged approximately 10, 13, and 35 seconds respectively.

  • Siebel Order Management – Oracle's Siebel Order Management allows employees such as salespeople and call center agents to create and manage quotes and orders through their entire life cycle. Siebel Order Management can be tightly integrated with back-office applications allowing users to perform tasks such as checking credit, confirming availability, and monitoring the fulfillment process.

    High-level description of the use cases tested: Order & Order Items Creation and Order Updates. Two complex Order Management transactions were executed simultaneously for a specific number of concurrent users, concurrently with the three Call Center scenarios above. The ratio of these 2 scenarios was 50% each, together totaling 30% of all transactions simulated in this benchmark. Between each user operation and the next, the think time averaged approximately 20 and 67 seconds respectively.

Key Points and Best Practices

  • No processor cores or cache were activated or deactivated on the SPARC T-Series systems to achieve special benchmark effects.

See Also

Disclosure Statement

Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 30 September 2012.

Monday Oct 01, 2012

World Record Performance on PeopleSoft Enterprise Financials Benchmark on SPARC T4-2

Oracle's SPARC T4-2 server achieved world record performance on Oracle's PeopleSoft Enterprise Financials 9.1 benchmark, executing 20 million journal lines in 8.92 minutes with Oracle Database 11g Release 2 running on Oracle Solaris 11. This is the first result published on this version of the benchmark.

  • The SPARC T4-2 server was able to process 20 million general ledger journal lines through edit and post batch jobs in 8.92 minutes on this benchmark, which reflects a large customer environment utilizing a back-end database of nearly 500 GB.

  • This benchmark demonstrates that the SPARC T4-2 server with PeopleSoft Financials 9.1 can easily process 100 million journal lines in less than 1 hour (see the short calculation after this list).

  • The SPARC T4-2 server delivered more than 146 MB/sec of IO throughput with Oracle Database 11g running on Oracle Solaris 11.
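
The 100-million-line claim in the bullets above follows from simple arithmetic; here is a short Python check using only the published numbers.

```python
# 20 million journal lines processed in 8.92 minutes (from the result above).
lines_per_minute = 20_000_000 / 8.92        # ~2.24 million lines/minute
lines_per_hour = lines_per_minute * 60      # ~134.5 million lines/hour
print(f"{lines_per_hour / 1e6:.0f}M lines/hour")  # so 100M lines fits well under 1 hour
```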

Performance Landscape

Results are presented for PeopleSoft Financials Benchmark 9.1. Results obtained with PeopleSoft Financials Benchmark 9.1 are not comparable to the previous version of the benchmark, PeopleSoft Financials Benchmark 9.0, due to a significant change in the data model; version 9.1 supports batch only.

PeopleSoft Financials Benchmark, Version 9.1
Solution Under Test Batch (min)
SPARC T4-2 (2 x SPARC T4, 2.85 GHz) 8.92

Results from PeopleSoft Financials Benchmark 9.0.

PeopleSoft Financials Benchmark, Version 9.0
Solution Under Test Batch (min) Batch with Online (min)
SPARC Enterprise M4000 (Web/App), SPARC Enterprise M5000 (DB)   33.09   34.72
SPARC T3-1 (Web/App), SPARC Enterprise M5000 (DB)   35.82   37.01

Configuration Summary

Hardware Configuration:

1 x SPARC T4-2 server
2 x SPARC T4 processors, 2.85 GHz
128 GB memory

Storage Configuration:

1 x Sun Storage F5100 Flash Array (for database and redo logs)
2 x Sun Storage 2540-M2 arrays and 2 x Sun Storage 2501-M2 arrays (for backup)

Software Configuration:

Oracle Solaris 11 11/11 SRU 7.5
Oracle Database 11g Release 2 (11.2.0.3)
PeopleSoft Financials 9.1 Feature Pack 2
PeopleSoft Supply Chain Management 9.1 Feature Pack 2
PeopleSoft PeopleTools 8.52 latest patch - 8.52.03
Oracle WebLogic Server 10.3.5
Java Platform, Standard Edition Development Kit 6 Update 32

Benchmark Description

The PeopleSoft Enterprise Financials 9.1 benchmark emulates a large enterprise that processes and validates a large number of financial journal transactions before posting the journal entry to the ledger. The validation process certifies that the journal entries are accurate, ensuring that ChartField values are valid, debits and credits equal out, and inter/intra-units are balanced. Once validated, the entries are processed, ensuring that each journal line posts to the correct target ledger, and the journal status is then changed to posted. In this benchmark, the Journal Edit & Post is set up to edit and post both Inter-Unit and Regular multi-currency journals. The benchmark processes 20 million journal lines using AppEngine for edits and COBOL for post processes.

See Also

Disclosure Statement

Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 1 October 2012.

Tuesday May 01, 2012

SPARC T4 Servers Running Oracle Solaris 11 and Oracle RAC Deliver World Record on PeopleSoft HRMS 9.1

Oracle's SPARC T4-4 server running Oracle's PeopleSoft HRMS Self-Service 9.1 benchmark achieved world record performance with 18,000 interactive users. This was accomplished in a high availability configuration using Oracle Real Application Clusters (RAC) 11g Release 2 software for the database tier running on Oracle Solaris 11. The benchmark configuration included the SPARC T4-4 server for the application tier, a SPARC T4-2 server for the web tier, and two SPARC T4-2 servers for the database tier.

  • The combination of SPARC T4 servers running the PeopleSoft HRSS 9.1 benchmark supports 4.5x the number of users of an IBM pSeries 570 running PeopleSoft HRSS 8.9, with an average response time 40 percent better than the IBM result.

  • This result was obtained with two SPARC T4-2 servers running the database service using Oracle Real Application Clusters 11g Release 2 software in a high availability configuration.

  • The two SPARC T4-2 servers in the database tier used Oracle Solaris 11, and Oracle RAC 11g Release 2 software with database shared disk storage managed by Oracle Automatic Storage Management (ASM).

  • The average CPU utilization on one SPARC T4-4 server in the application tier handling 18,000 users is 54 percent, showing significant headroom for growth.

  • The SPARC T4 server for the application tier used Oracle Solaris Containers on Oracle Solaris 10, which provides a flexible, scalable and manageable virtualized environment.

  • The PeopleSoft HRMS Self-Service benchmark demonstrates that Oracle hardware and software, engineered to work together, deliver better performance than Oracle software running on IBM hardware.

Performance Landscape

PeopleSoft HRMS Self-Service 9.1 Benchmark
Systems: SPARC T4-2 (web), SPARC T4-4 (app), 2 x SPARC T4-2 (db)
Processors: 2 x SPARC T4, 2.85 GHz; 4 x SPARC T4, 3.0 GHz; 2 x (2 x SPARC T4, 2.85 GHz)
Users: 18,000   Ave Response - Search: 1.048 sec   Ave Response - Save: 0.742 sec

Systems: SPARC T4-2 (web), SPARC T4-4 (app), SPARC T4-4 (db)
Processors: 2 x SPARC T4, 2.85 GHz; 4 x SPARC T4, 3.0 GHz; 4 x SPARC T4, 3.0 GHz
Users: 15,000   Ave Response - Search: 1.01 sec   Ave Response - Save: 0.63 sec

PeopleSoft HRMS Self-Service 8.9 Benchmark
Systems: IBM Power 570 (web/app), IBM Power 570 (db)
Processors: 12 x POWER5, 1.9 GHz; 4 x POWER5, 1.9 GHz
Users: 4,000   Ave Response - Search: 1.74 sec   Ave Response - Save: 1.25 sec

Systems: IBM p690 (web), IBM p690 (app), IBM p690 (db)
Processors: 4 x POWER4, 1.9 GHz; 12 x POWER4, 1.9 GHz; 6 x 4392 MIPS/Gen1
Users: 4,000   Ave Response - Search: 1.35 sec   Ave Response - Save: 1.01 sec

The main differences between version 9.1 and version 8.9 of the benchmark are:

  • the database expanded from 100K employees and 20K managers to 500K employees and 100K managers,
  • the manager data was expanded,
  • a new transaction, "Employee Add Profile," was added; the percentage of users executing it is less than 2%, and the transaction has a heavier footprint,
  • version 9.1 has a different benchmark metric (Average Response Search/Save time for x number of users) versus single user search/save time,
  • newer versions of the PeopleSoft application and PeopleTools software are used.

Configuration Summary

Application Server:

1 x SPARC T4-4 server
4 x SPARC T4 processors 3.0 GHz
512 GB main memory
5 x 300 GB SAS internal disks,
2 x 100 GB internal SSDs
1 x 300 GB internal SSD
Oracle Solaris 10 8/11
PeopleSoft PeopleTools 8.51.02
PeopleSoft HCM 9.1
Oracle Tuxedo, Version 10.3.0.0, 64-bit, Patch Level 031
Java HotSpot(TM) 64-Bit Server VM on Solaris, version 1.6.0_20

Web Server:

1 x SPARC T4-2 server
2 x SPARC T4 processors 2.85 GHz
256 GB main memory
2 x 300 GB SAS internal disks
1 x 100 GB internal SSD
Oracle Solaris 10 8/11
PeopleSoft PeopleTools 8.51.02
Oracle WebLogic Server 11g (10.3.3)
Java HotSpot(TM) 64-Bit Server VM on Solaris, version 1.6.0_20

Database Server:

2 x SPARC T4-2 servers, each with
2 x SPARC T4 processors 2.85 GHz
128 GB main memory
3 x 300 GB SAS internal disks
Oracle Solaris 11 11/11
Oracle Database 11g Release 2
Oracle Real Application Clusters

Database Storage:

Data
1 x Sun Storage F5100 Flash Array (80 flash modules)
1 x COMSTAR Sun Fire X4470 M2 server
4 x Intel Xeon X7550 processors 2.0 GHz
128 GB main memory
Oracle Solaris 11 11/11
Redo
2 x COMSTAR Sun Fire X4275 servers, each with
1 x Intel Xeon E5540 processor 2.53 GHz
6 GB main memory
12 x 2 TB SAS disks
Oracle Solaris 11 Express 2010.11

Connectivity:

1 x 8-port 10GbE switch
1 x 24-port 1GbE switch
1 x 32-port Brocade FC switch

Benchmark Description

The purpose of the PeopleSoft HRMS Self-Service 9.1 benchmark is to measure comparative online performance of the selected processes in PeopleSoft Enterprise HCM 9.1 with Oracle Database 11g. The benchmark kit is an Oracle standard benchmark kit run by all platform vendors to measure performance. It is an OLTP benchmark with no dependency on remote COBOL calls and no batch workload; the database SQL statements are moderately complex. The results are certified by Oracle and a white paper is published.

PeopleSoft defines a business transaction as a series of HTML pages that guide a user through a particular scenario. Users are defined as corporate Employees, Managers and HR administrators. The benchmark consists of 14 scenarios which emulate users performing typical HCM transactions such as viewing paychecks, promoting and hiring employees, updating employee profiles and other typical HCM application transactions.

All of these transactions are well defined in the PeopleSoft HR Self-Service 9.1 benchmark kit. This benchmark metric is the Weighted Average Response search/save time for all users.

Key Points and Best Practices

  • The combined processing power of two SPARC T4-2 servers running the highly available Oracle RAC database can provide greater throughput and Oracle RAC scalability than is available from a single server.

  • All database data files, recovery files, and Oracle Clusterware files were created with the Oracle Automatic Storage Management (Oracle ASM) volume manager and file system, which delivered performance equivalent to conventional volume managers, file systems, and raw devices, with the added benefit of the ease of management provided by the Oracle ASM integrated storage management solution.

  • Five Oracle PeopleSoft Domains with 200 application servers (40 per Domain) were hosted in each of two separate Oracle Solaris Containers on the SPARC T4-4 server, for a total of 10 Domains and 400 application server processes, demonstrating consolidation of multiple application servers, ease of administration, and load balancing.

  • Each Oracle Solaris Container was bound to a separate processor set, each containing 124 virtual processors. The default set (composed of 4 virtual processors from each of the first and third processor sockets, 8 virtual processors in total) was used for network and disk interrupt handling. This was done to improve performance by using the physical memory closest to the processors to reduce memory access latency, and by offloading I/O interrupt handling to the default set's virtual processors, freeing up processing resources for the application server virtual processors.

See Also

Disclosure Statement

Oracle's PeopleSoft HRMS 9.1 benchmark, www.oracle.com/us/solutions/benchmark/apps-benchmark/peoplesoft-167486.html, results 5/1/2012.

About

BestPerf is the source of Oracle performance expertise. In this blog, Oracle's Strategic Applications Engineering group explores Oracle's performance results and shares best practices learned from working on Enterprise-wide Applications.
