Tuesday Oct 02, 2012

SPARC T4-4 Delivers World Record First Result on PeopleSoft Combined Benchmark

Oracle's SPARC T4-4 servers, running Oracle's PeopleSoft HCM 9.1 combined online and batch benchmark, achieved a world record 18,000 concurrent users while executing a PeopleSoft Payroll batch job for 500,000 employees in 43.32 minutes and keeping online user response times under 2 seconds.

  • This world record is the first to run online and batch workloads concurrently.

  • This result was obtained with a SPARC T4-4 server running Oracle Database 11g Release 2, a SPARC T4-4 server running the PeopleSoft HCM 9.1 application server, and a SPARC T4-2 server running Oracle WebLogic Server in the web tier.

  • The SPARC T4-4 server running the application tier used Oracle Solaris Zones which provide a flexible, scalable and manageable virtualization environment.

  • The average CPU utilization on the SPARC T4-2 server in the web tier was 17%, on the SPARC T4-4 server in the application tier it was 59%, and on the SPARC T4-4 server in the database tier it was 35% (online and batch), leaving significant headroom for additional processing across all three tiers.

  • The SPARC T4-4 server used for the database tier hosted Oracle Database 11g Release 2 using Oracle Automatic Storage Management (ASM) for database file management, with I/O performance equivalent to raw devices.

  • This is the first three-tier, mixed-workload (online and batch) PeopleSoft benchmark to also process the PeopleSoft Payroll batch workload.
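For scale, the headline payroll number implies a simple throughput figure. The sketch below uses only the figures quoted above; it is a quick back-of-the-envelope check, not part of the benchmark kit:

```python
# Payroll batch throughput implied by the published result.
employees = 500_000      # employees in the payroll batch (from the summary above)
batch_minutes = 43.32    # elapsed batch time reported above

throughput = employees / batch_minutes   # employees processed per minute
print(f"{throughput:,.0f} employees/minute")   # roughly 11,542
```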

Performance Landscape

PeopleSoft HR Self-Service and Payroll Benchmark
Systems              Users    Ave Response  Ave Response  Batch Time  Streams
                              Search (sec)  Save (sec)    (min)
SPARC T4-2 (web)
SPARC T4-4 (app)     18,000   0.944         0.503         43.32       64
SPARC T4-4 (db)

Configuration Summary

Application Configuration:

1 x SPARC T4-4 server with
4 x SPARC T4 processors, 3.0 GHz
512 GB memory
1 x 600 GB SAS internal disk
4 x 300 GB SAS internal disks
1 x 100 GB and 2 x 300 GB internal SSDs
2 x 10 GbE HBA
Oracle Solaris 11 11/11
PeopleTools 8.52
PeopleSoft HCM 9.1
Oracle Tuxedo, Version 10.3.0.0, 64-bit, Patch Level 031
Java Platform, Standard Edition Development Kit 6 Update 32

Database Configuration:

1 x SPARC T4-4 server with
4 x SPARC T4 processors, 3.0 GHz
256 GB memory
1 x 600 GB SAS internal disk
2 x 300 GB SAS internal disks
Oracle Solaris 11 11/11
Oracle Database 11g Release 2
PeopleTools 8.52
Oracle Tuxedo, Version 10.3.0.0, 64-bit, Patch Level 031

Web Tier Configuration:

1 x SPARC T4-2 server with
2 x SPARC T4 processors, 2.85 GHz
256 GB memory
2 x 300 GB SAS internal disks
1 x 300 GB internal SSD
1 x 100 GB internal SSD
Oracle Solaris 11 11/11
PeopleTools 8.52
Oracle WebLogic Server 10.3.4
Java Platform, Standard Edition Development Kit 6 Update 32

Storage Configuration:

1 x Sun Server X2-4 as a COMSTAR head for data
4 x Intel Xeon X7550, 2.0 GHz
128 GB memory
1 x Sun Storage F5100 Flash Array (80 flash modules)
1 x Sun Storage F5100 Flash Array (40 flash modules)

1 x Sun Fire X4275 as a COMSTAR head for redo logs
12 x 2 TB SAS disks with Niwot RAID controller

Benchmark Description

This benchmark combines PeopleSoft HCM 9.1 HR Self Service online and PeopleSoft Payroll batch workloads to run on a unified database deployed on Oracle Database 11g Release 2.

The PeopleSoft HRSS benchmark kit is an Oracle standard benchmark kit run by all platform vendors to measure performance. It is an OLTP benchmark with moderately complex database SQL. The results are certified by Oracle and a white paper is published.

PeopleSoft HR SS defines a business transaction as a series of HTML pages that guide a user through a particular scenario. Users are defined as corporate Employees, Managers and HR administrators. The benchmark consists of 14 scenarios which emulate users performing typical HCM transactions such as viewing paychecks, promoting and hiring employees, updating employee profiles and other typical HCM application transactions.

All of these transactions are well defined in the PeopleSoft HR Self-Service 9.1 benchmark kit. The benchmark metric is the weighted average search/save response time across all of the transactions.
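As an illustration of how such a metric is computed, the sketch below implements a generic weighted average of response times. The transaction mix (weights and timings) is hypothetical, not taken from the benchmark kit:

```python
def weighted_avg_response(samples):
    """samples: list of (weight, response_time_sec) pairs."""
    total_weight = sum(w for w, _ in samples)
    return sum(w * t for w, t in samples) / total_weight

# Hypothetical transaction mix; weights and timings are illustrative only.
mix = [(0.40, 0.85), (0.35, 1.02), (0.25, 1.10)]
print(f"weighted average response: {weighted_avg_response(mix):.3f} sec")
```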

The PeopleSoft 9.1 Payroll (North America) benchmark demonstrates system performance for a range of processing volumes in a specific configuration. This workload represents the large batch runs typical of an ERP environment during a mass update. The benchmark measures the run times of five application business processes for a database representing a large organization: Paysheet Creation, Payroll Calculation, Payroll Confirmation, Print Advice Forms, and Create Direct Deposit File. The benchmark metric is the cumulative elapsed time taken to complete the Paysheet Creation, Payroll Calculation and Payroll Confirmation business application processes.

The benchmark metrics are taken for each respective benchmark while running simultaneously on the same database back-end. Specifically, the payroll batch processes are started when the online workload reaches steady state (the maximum number of online users) and overlap with online transactions for the duration of the steady state.

Key Points and Best Practices

  • Two Oracle PeopleSoft Domain sets with 200 application servers each on a SPARC T4-4 server were hosted in two separate Oracle Solaris Zones to demonstrate consolidation of multiple application servers, ease of administration and performance tuning.

  • Each Oracle Solaris Zone was bound to a separate processor set, each containing 15 cores (120 hardware threads per set, 240 in total). The default set (one core each from the first and third processor sockets, 16 threads in total) was used for network and disk interrupt handling. This improved performance by reducing memory access latency (using the physical memory closest to the processors) and by offloading I/O interrupt handling to the default set's threads, freeing up CPU resources for the application server threads and balancing the application workload across the 240 threads.
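The thread arithmetic in this layout can be checked against the SPARC T4 topology (8 cores per socket, 8 hardware threads per core, 4 sockets in a SPARC T4-4). The sketch below just verifies that the accounting adds up:

```python
# SPARC T4 topology: 8 cores per socket, 8 hardware threads per core.
THREADS_PER_CORE = 8
SOCKETS, CORES_PER_SOCKET = 4, 8       # SPARC T4-4

pset_cores = 15                        # cores per processor set (from the text above)
psets = 2                              # one processor set per Oracle Solaris Zone
default_set_cores = 2                  # 1 core each from the first and third sockets

app_threads = psets * pset_cores * THREADS_PER_CORE       # 240 threads for app servers
default_threads = default_set_cores * THREADS_PER_CORE    # 16 threads for interrupts
total = SOCKETS * CORES_PER_SOCKET * THREADS_PER_CORE     # 256 threads in the box
assert app_threads + default_threads == total             # every thread accounted for
```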

Disclosure Statement

Oracle's PeopleSoft HR and Payroll combined benchmark, www.oracle.com/us/solutions/benchmark/apps-benchmark/peoplesoft-167486.html, results 09/30/2012.

Monday Oct 01, 2012

World Record Batch Rate on Oracle JD Edwards Consolidated Workload with SPARC T4-2

Oracle produced a world record batch throughput for single-system results on Oracle's JD Edwards EnterpriseOne Day-in-the-Life benchmark, using Oracle's SPARC T4-2 server running Oracle Solaris Containers and consolidating JD Edwards EnterpriseOne, Oracle WebLogic servers and Oracle Database 11g Release 2. The workload includes both online and batch components.

  • The SPARC T4-2 server delivered a result of 8,000 online users while concurrently executing a mix of JD Edwards EnterpriseOne Long and Short batch processes at 95.5 UBEs/min (Universal Batch Engines per minute).

  • In order to obtain this record benchmark result, the JD Edwards EnterpriseOne, Oracle WebLogic and Oracle Database 11g Release 2 servers were each run in a separate Oracle Solaris Container, which enabled optimal distribution of system resources and performance, together with scalable and manageable virtualization.

  • One SPARC T4-2 server running Oracle Solaris Containers and consolidating JD Edwards EnterpriseOne, Oracle WebLogic servers and the Oracle Database 11g Release 2 utilized only 55% of the available CPU power.

  • The Oracle Database server, in a Shared Server configuration, allowed optimized CPU resource utilization and significant memory savings on the SPARC T4-2 server without sacrificing performance.

  • This SPARC T4-2 configuration achieved 33% more users/core, 47% more UBEs/min and 78% more users/rack unit than the IBM Power 770 server.

  • The SPARC T4-2 server with 2 processors ran the JD Edwards "Day-in-the-Life" benchmark and supported 8,000 concurrent online users while concurrently executing mixed batch workloads at 95.5 UBEs per minute. The IBM Power 770 server, with twice as many processors, supported 12,000 concurrent online users while concurrently executing mixed batch workloads at only 65 UBEs per minute.

  • This benchmark demonstrates more than 2x cost savings by consolidating the complete solution on a single SPARC T4-2 server, compared to earlier published results of 10,000 users and 67 UBEs per minute on two SPARC T4-2 servers and a SPARC T4-1 server.

  • The Oracle DB server used mirrored (RAID 1) volumes for the database providing high availability for the data without impacting performance.

Performance Landscape

JD Edwards EnterpriseOne Day in the Life (DIL) Benchmark
Consolidated Online with Batch Workload

System                                         Rack Units  Batch Rate  Online  Users   Users   Version
                                               (U)         (UBEs/m)    Users   /U      /Core
SPARC T4-2 (2 x SPARC T4, 2.85 GHz)            3           95.5        8,000   2,667   500     9.0.2
IBM Power 770 (4 x POWER7, 3.3 GHz, 32 cores)  8           65          12,000  1,500   375     9.0.2

Batch Rate (UBEs/m) — Batch transaction rate in UBEs per minute
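The per-core and per-rack-unit claims can be re-derived from the table values (the SPARC T4-2's 16 cores follow from the 500 users/core figure). The sketch below is only a sanity check of that arithmetic:

```python
# Re-deriving the comparison ratios from the table values above.
t4_users, t4_ube, t4_units, t4_cores = 8_000, 95.5, 3, 16      # SPARC T4-2
ibm_users, ibm_ube, ibm_units, ibm_cores = 12_000, 65, 8, 32   # IBM Power 770

users_per_core_gain = (t4_users / t4_cores) / (ibm_users / ibm_cores) - 1  # ~33%
ube_gain = t4_ube / ibm_ube - 1                                            # ~47%
users_per_u_gain = (t4_users / t4_units) / (ibm_users / ibm_units) - 1     # ~78%
```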

Configuration Summary

Hardware Configuration:

1 x SPARC T4-2 server with
2 x SPARC T4 processors, 2.85 GHz
256 GB memory
4 x 300 GB 10K RPM SAS internal disk
2 x 300 GB internal SSD
2 x Sun Storage F5100 Flash Arrays

Software Configuration:

Oracle Solaris 10
Oracle Solaris Containers
JD Edwards EnterpriseOne 9.0.2
JD Edwards EnterpriseOne Tools (8.98.4.2)
Oracle WebLogic Server 11g (10.3.4)
Oracle HTTP Server 11g
Oracle Database 11g Release 2 (11.2.0.1)

Benchmark Description

JD Edwards EnterpriseOne is an integrated applications suite of Enterprise Resource Planning (ERP) software. Oracle offers 70 JD Edwards EnterpriseOne application modules to support a diverse set of business operations.

Oracle's Day in the Life (DIL) kit is a suite of scripts that exercises the most common transactions of JD Edwards EnterpriseOne applications, including business processes such as payroll, sales order, purchase order, work order, and manufacturing processes such as ship confirmation. These are labeled by industry acronyms such as SCM, CRM, HCM, SRM and FMS. The kit's scripts execute transactions typical of a mid-sized manufacturing company.

  • The workload consists of online transactions and the UBE (Universal Batch Engine) workload of 61 short and 4 long UBEs.

  • LoadRunner runs the DIL workload, collects the users' transaction response times and reports the key metric of Combined Weighted Average Transaction Response time.

  • The UBE process workload runs from the JD Edwards Enterprise Application server.

    • Oracle's UBE processes come in three flavors:

      • Short UBEs (< 1 minute) engage in Business Report and Summary Analysis,

      • Mid UBEs (> 1 minute) create a large report of Account, Balance, and Full Address,

      • Long UBEs (> 2 minutes) simulate Payroll, Sales Order, and night-only jobs.

    • The UBE workload generates large numbers of PDF reports and log files.

    • The UBE queues are categorized as QBATCHD, a single-threaded queue for large and medium UBEs, and QPROCESS, a queue in which short UBEs run concurrently.

Oracle's UBE performance metric is the maximum number of concurrent UBE processes at a given transaction rate, measured in UBEs/minute.

Key Points and Best Practices

Two JD Edwards EnterpriseOne Application Servers, two Oracle WebLogic Server 11g instances coupled with two Oracle Web Tier HTTP server instances, and one Oracle Database 11g Release 2 database were hosted on a single SPARC T4-2 server in separate Oracle Solaris Containers bound to four processor sets, demonstrating consolidation of multiple applications, web servers and the database with optimal resource utilization.

  • Interrupt fencing was configured on all Oracle Solaris Containers to channel the interrupts to processors other than the processor sets used for the JD Edwards Application server, Oracle WebLogic servers and the database server.

  • An Oracle WebLogic vertical cluster was configured in each web server Container, with twelve managed instances each, to load-balance users' requests and to provide an infrastructure that enables scaling to a high number of users with ease of deployment and high availability.

  • The database log writer was run in the real-time (RT) scheduling class and bound to a processor set.

  • The database redo logs were configured on the raw disk partitions.

  • The Oracle Solaris Container running the Enterprise Application server completed 61 short UBEs and 4 long UBEs concurrently as the mixed-size batch workload.

  • The mixed-size UBEs ran concurrently from the Enterprise Application server along with the 8,000 online users driven by LoadRunner.

Disclosure Statement

Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 09/30/2012.

Oracle TimesTen In-Memory Database Performance on SPARC T4-2

The Oracle TimesTen In-Memory Database is optimized to run on Oracle's SPARC T4 processor platforms with Oracle Solaris 11, providing unsurpassed scalability, performance, upgradability, protection of investment and return on investment. The following results demonstrate the value of combining Oracle TimesTen In-Memory Database with SPARC T4 servers and Oracle Solaris 11:

On a Mobile Call Processing test, the 2-socket SPARC T4-2 server outperforms:

  • Oracle's SPARC Enterprise M4000 server (4 x 2.66 GHz SPARC64 VII+) by 34%.

  • Oracle's SPARC T3-4 (4 x 1.65 GHz SPARC T3) by 2.7x, or 5.4x per processor.

Utilizing the TimesTen Performance Throughput Benchmark (TPTBM), the SPARC T4-2 server protects investments with:

  • 2.1x the overall performance of a 4-socket SPARC Enterprise M4000 server in read-only mode and 1.5x the performance in update-only testing. This is 4.2x more performance per processor than the SPARC64 VII+ 2.66 GHz based system.

  • 10x more performance per processor than the SPARC T2+ 1.4 GHz server.

  • 1.6x better performance per processor than the SPARC T3 1.65 GHz based server.

In replication testing, the two-socket SPARC T4-2 server is over 3x faster than the four-socket SPARC Enterprise T5440 server in both the asynchronous replication environment and highly available 2-Safe replication. This testing emphasizes parallel replication between systems.

Performance Landscape

Mobile Call Processing Test Performance

System      Processor               Sockets  Cores  Tps      Tps/Socket
SPARC T4-2  SPARC T4, 2.85 GHz      2        16     218,400  109,200
M4000       SPARC64 VII+, 2.66 GHz  4        16     162,900  40,725
SPARC T3-4  SPARC T3, 1.65 GHz      4        64     80,400   20,100

TimesTen Performance Throughput Benchmark (TPTBM) Read-Only

System      Processor               Sockets  Cores  Tps   Tps/Socket
SPARC T4-2  SPARC T4, 2.85 GHz      2        16     6.5M  3.3M
SPARC T3-4  SPARC T3, 1.65 GHz      4        64     7.9M  2.0M
M4000       SPARC64 VII+, 2.66 GHz  4        16     3.1M  0.8M
T5440       SPARC T2+, 1.4 GHz      4        32     3.1M  0.8M

TimesTen Performance Throughput Benchmark (TPTBM) Update-Only

System      Processor               Sockets  Cores  Tps      Tps/Socket
SPARC T4-2  SPARC T4, 2.85 GHz      2        16     547,800  273,900
M4000       SPARC64 VII+, 2.66 GHz  4        16     363,800  90,950
SPARC T3-4  SPARC T3, 1.65 GHz      4        64     240,250  60,125

TimesTen Replication Tests

System       Processor           Sockets  Cores  Asynchronous  2-Safe
SPARC T4-2   SPARC T4, 2.85 GHz  2        16     38,024        13,701
SPARC T5440  SPARC T2+, 1.4 GHz  4        32     11,621        4,615
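The per-processor comparisons quoted earlier can be re-derived from the read-only TPTBM table. The sketch below is only a sanity check of the 4.2x and 1.6x figures:

```python
# Re-deriving the per-processor (per-socket) read-only comparisons from
# the TPTBM table above.
readonly = {                 # system: (Tps, sockets)
    "SPARC T4-2": (6.5e6, 2),
    "SPARC T3-4": (7.9e6, 4),
    "M4000": (3.1e6, 4),
}
per_socket = {name: tps / sockets for name, (tps, sockets) in readonly.items()}

vs_m4000 = per_socket["SPARC T4-2"] / per_socket["M4000"]    # ~4.2x per processor
vs_t3 = per_socket["SPARC T4-2"] / per_socket["SPARC T3-4"]  # ~1.6x per processor
```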

Configuration Summary

Hardware Configurations:

SPARC T4-2 server
2 x SPARC T4 processors, 2.85 GHz
256 GB memory
1 x 8 Gb/s FC QLogic HBA
1 x 6 Gb/s SAS HBA
4 x 300 GB internal disks
Sun Storage F5100 Flash Array (40 x 24 GB flash modules)
1 x Sun Fire X4275 server configured as COMSTAR head

SPARC T3-4 server
4 x SPARC T3 processors, 1.65 GHz
512 GB memory
1 x 8 Gb/s FC QLogic HBA
8 x 146 GB internal disks
1 x Sun Fire X4275 server configured as COMSTAR head

SPARC Enterprise M4000 server
4 x SPARC64 VII+ processors, 2.66 GHz
128 GB memory
1 x 8 Gb/s FC QLogic HBA
1 x 6 Gb/s SAS HBA
2 x 146 GB internal disks
Sun Storage F5100 Flash Array (40 x 24 GB flash modules)
1 x Sun Fire X4275 server configured as COMSTAR head

Software Configuration:

Oracle Solaris 11 11/11
Oracle TimesTen 11.2.2.4

Benchmark Descriptions

TimesTen Performance Throughput Benchmark (TPTBM) is shipped with TimesTen and measures the total throughput of the system. The workload can test read-only, update-only, delete and insert operations as required.

Mobile Call Processing is a customer-based workload for processing calls made by mobile phone subscribers. The workload has a mixture of read-only, update, and insert-only transactions. Peak throughput is measured by running multiple concurrent processes that execute the transactions until performance peaks as the available resources saturate.

The parallel replication test uses both asynchronous and 2-Safe replication methods. For asynchronous replication, transactions are processed in batches to maximize the throughput capabilities of the replication server and network. In 2-Safe replication, also known as no-data-loss or high-availability replication, transactions are replicated between servers immediately, emphasizing low latency. For both environments, performance is measured as the maximum transactions per second across all concurrent processes for a given number of parallel replication servers.

Disclosure Statement

Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 1 October 2012.

World Record Performance on PeopleSoft Enterprise Financials Benchmark on SPARC T4-2

Oracle's SPARC T4-2 server achieved world record performance on Oracle's PeopleSoft Enterprise Financials 9.1, executing 20 million journal lines in 8.92 minutes on Oracle Database 11g Release 2 running on Oracle Solaris 11. This is the first result published on this version of the benchmark.

  • The SPARC T4-2 server processed 20 million general ledger journal lines through the edit and post batch jobs in 8.92 minutes, on a benchmark that reflects a large customer environment with a back-end database of nearly 500 GB.

  • This benchmark demonstrates that the SPARC T4-2 server with PeopleSoft Financials 9.1 can easily process 100 million journal lines in less than 1 hour.

  • The SPARC T4-2 server delivered more than 146 MB/sec of I/O throughput with Oracle Database 11g running on Oracle Solaris 11.
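The "100 million journal lines in less than 1 hour" claim follows by simple extrapolation from the measured rate, assuming the rate holds at the larger volume. The sketch below works the arithmetic:

```python
# Extrapolating the measured journal-processing rate.
lines, minutes = 20_000_000, 8.92          # measured result from above
rate = lines / minutes                     # ~2.24 million journal lines/minute
minutes_for_100m = 100_000_000 / rate      # 44.6 minutes, i.e. under one hour
```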

Performance Landscape

Results are presented for PeopleSoft Financials Benchmark 9.1. Results obtained with PeopleSoft Financials Benchmark 9.1 are not comparable to results from the previous version, PeopleSoft Financials Benchmark 9.0, due to a significant change in the data model; version 9.1 also supports batch workloads only.

PeopleSoft Financials Benchmark, Version 9.1
Solution Under Test                  Batch (min)
SPARC T4-2 (2 x SPARC T4, 2.85 GHz)  8.92

Results from PeopleSoft Financials Benchmark 9.0.

PeopleSoft Financials Benchmark, Version 9.0
Solution Under Test               Batch (min)  Batch with Online (min)
SPARC Enterprise M4000 (Web/App)
SPARC Enterprise M5000 (DB)       33.09        34.72
SPARC T3-1 (Web/App)
SPARC Enterprise M5000 (DB)       35.82        37.01

Configuration Summary

Hardware Configuration:

1 x SPARC T4-2 server
2 x SPARC T4 processors, 2.85 GHz
128 GB memory

Storage Configuration:

1 x Sun Storage F5100 Flash Array (for database and redo logs)
2 x Sun Storage 2540-M2 arrays and 2 x Sun Storage 2501-M2 arrays (for backup)

Software Configuration:

Oracle Solaris 11 11/11 SRU 7.5
Oracle Database 11g Release 2 (11.2.0.3)
PeopleSoft Financials 9.1 Feature Pack 2
PeopleSoft Supply Chain Management 9.1 Feature Pack 2
PeopleSoft PeopleTools 8.52 (latest patch, 8.52.03)
Oracle WebLogic Server 10.3.5
Java Platform, Standard Edition Development Kit 6 Update 32

Benchmark Description

The PeopleSoft Enterprise Financials 9.1 benchmark emulates a large enterprise that processes and validates a large number of financial journal transactions before posting the journal entry to the ledger. The validation process certifies that the journal entries are accurate, ensuring that ChartFields values are valid, debits and credits equal out, and inter/intra-units are balanced. Once validated, the entries are processed, ensuring that each journal line posts to the correct target ledger, and then changes the journal status to posted. In this benchmark, the Journal Edit & Post is set up to edit and post both Inter-Unit and Regular multi-currency journals. The benchmark processes 20 million journal lines using AppEngine for edits and Cobol for post processes.

Disclosure Statement

Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 1 October 2012.

Tuesday Aug 28, 2012

SPARC T4-2 Produces World Record Oracle Essbase Aggregate Storage Benchmark Result

Significance of Results

Oracle's SPARC T4-2 server configured with a Sun Storage F5100 Flash Array and running Oracle Solaris 10 with Oracle Database 11g has achieved exceptional performance on the Oracle Essbase Aggregate Storage Option benchmark. The benchmark has upwards of 1 billion records, 15 dimensions and millions of members. Oracle Essbase is a multi-dimensional online analytical processing (OLAP) server that is well suited to the SPARC T4 servers.

  • The SPARC T4-2 server (2 CPUs) running Oracle Essbase 11.1.2.2.100 outperformed previously published results on Oracle's SPARC Enterprise M5000 server (4 CPUs) running Oracle Essbase 11.1.1.3 on Oracle Solaris 10 by 80% on Data Loading, 32% on Default Aggregation and 2x on Usage Based Aggregation.

  • The SPARC T4-2 server with a Sun Storage F5100 Flash Array and Oracle Essbase running on Oracle Solaris 10 achieves sub-second query response times for 20,000 users on a 15-dimension database.

  • The SPARC T4-2 server configured with Oracle Essbase was able to aggregate and store values in the database for a 15 dimension cube in 398 minutes with 16 threads and in 484 minutes with 8 threads.

  • The Sun Storage F5100 Flash Array provides more than a 20% improvement out-of-the-box compared to a mid-size fiber channel disk array for default aggregation and user-based aggregation.

  • The Sun Storage F5100 Flash Array with Oracle Essbase provides the best combination for large Oracle Essbase databases leveraging Oracle Solaris ZFS and taking advantage of high bandwidth for faster load and aggregation.

  • Oracle Fusion Middleware provides a family of complete, integrated, hot pluggable and best-of-breed products known for enabling enterprise customers to create and run agile and intelligent business applications. Oracle Essbase's performance demonstrates why so many customers rely on Oracle Fusion Middleware as their foundation for innovation.

Performance Landscape

System                               Data Size            Database Load  Default Aggregation  Usage Based Aggregation
                                     (millions of items)  (minutes)      (minutes)            (minutes)
SPARC T4-2, 2 x SPARC T4 2.85 GHz    1000                 149            398*                 55
Sun M5000, 4 x SPARC64 VII 2.53 GHz  1000                 269            526                  115
Sun M5000, 4 x SPARC64 VII 2.4 GHz   400                  120            448                  18

* – 398 mins with CALCPARALLEL set to 16; 484 mins with CALCPARALLEL threads set to 8
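The headline improvements can be re-derived from the 1000-million-item rows of the table above (SPARC T4-2 vs. the 2.53 GHz M5000). The sketch below is only a sanity check of that arithmetic:

```python
# Elapsed times (minutes) from the 1000-million-item rows of the table.
t4 = {"load": 149, "default_agg": 398, "usage_agg": 55}       # SPARC T4-2
m5000 = {"load": 269, "default_agg": 526, "usage_agg": 115}   # M5000, 2.53 GHz

speedup = {step: m5000[step] / t4[step] for step in t4}
# load ~1.8x (80% faster), default aggregation ~1.3x (32%), usage-based ~2.1x
```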

Configuration Summary

Hardware Configuration:

1 x SPARC T4-2
2 x 2.85 GHz SPARC T4 processors
128 GB memory
2 x 300 GB 10000 RPM SAS internal disks

Storage Configuration:

1 x Sun Storage F5100 Flash Array
40 x 24 GB flash modules
SAS HBA with 2 SAS channels
Data storage scheme: striped (RAID 0)
Oracle Solaris ZFS

Software Configuration:

Oracle Solaris 10 8/11
Installer V 11.1.2.2.100
Oracle Essbase Client v 11.1.2.2.100
Oracle Essbase v 11.1.2.2.100
Oracle Essbase Administration services 64-bit
Oracle Database 11g Release 2 (11.2.0.3)
HP's Mercury Interactive QuickTest Professional 9.5.0

Benchmark Description

The objective of the Oracle Essbase Aggregate Storage Option benchmark is to showcase the ability of Oracle Essbase to scale in terms of user population and data volume for large enterprise deployments. Typical administrative and end-user operations for OLAP applications were simulated to produce benchmark results.

The benchmark test results include:

  • Database Load: Time elapsed to build a database including outline and data load.
  • Default Aggregation: Time elapsed to build aggregation.
  • User Based Aggregation: Time elapsed to build the aggregate views proposed as a result of tracked retrieval queries.

Summary of the data used for this benchmark:

  • 40 flat files, each of size 1.2 GB, 49.4 GB in total
  • 10 million rows per file, 1 billion rows total
  • 28 columns of data per row
  • Database outline has 15 dimensions (five of them are attribute dimensions)
  • Customer dimension has 13.3 million members
  • 3 rule files

Key Points and Best Practices

  • The Sun Storage F5100 Flash Array has been used to accelerate the application performance.

  • Setting data load threads (DLTHREADSPREPARE) to 64 and Load Buffer to 6 improved data loading by about 9%.

  • Factors influencing aggregation materialization performance are "Aggregate Storage Cache" and "Number of Threads" (CALCPARALLEL) for parallel view materialization. The optimal values for this workload on the SPARC T4-2 server were:

      Aggregate Storage Cache: 32 GB
      CALCPARALLEL: 16


Disclosure Statement

Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 28 August 2012.

Tuesday May 01, 2012

SPARC T4 Servers Running Oracle Solaris 11 and Oracle RAC Deliver World Record on PeopleSoft HRMS 9.1

Oracle's SPARC T4-4 server running Oracle's PeopleSoft HRMS Self-Service 9.1 benchmark achieved world record performance with 18,000 interactive users. This was accomplished in a high-availability configuration using Oracle Real Application Clusters (RAC) 11g Release 2 software for the database tier running on Oracle Solaris 11. The benchmark configuration included the SPARC T4-4 server for the application tier, a SPARC T4-2 server for the web tier and two SPARC T4-2 servers for the database tier.

  • The combination of SPARC T4 servers running the PeopleSoft HRSS 9.1 benchmark supports 4.5x the number of users of an IBM pSeries 570 running PeopleSoft HRSS 8.9, with average response times 40 percent better than IBM's.

  • This result was obtained with two SPARC T4-2 servers running the database service using Oracle Real Application Clusters 11g Release 2 software in a high availability configuration.

  • The two SPARC T4-2 servers in the database tier used Oracle Solaris 11, and Oracle RAC 11g Release 2 software with database shared disk storage managed by Oracle Automatic Storage Management (ASM).

  • The average CPU utilization on one SPARC T4-4 server in the application tier handling 18,000 users is 54 percent, showing significant headroom for growth.

  • The SPARC T4 server for the application tier used Oracle Solaris Containers on Oracle Solaris 10, which provides a flexible, scalable and manageable virtualized environment.

  • The PeopleSoft HRMS Self-Service benchmark demonstrates better performance from Oracle hardware and software, engineered to work together, than from Oracle software on IBM hardware.

Performance Landscape

PeopleSoft HRMS Self-Service 9.1 Benchmark
Systems                  Processors                     Users   Ave Response  Ave Response
                                                                Search (sec)  Save (sec)
SPARC T4-2 (web)         2 x SPARC T4, 2.85 GHz
SPARC T4-4 (app)         4 x SPARC T4, 3.0 GHz          18,000  1.048         0.742
2 x SPARC T4-2 (db)      2 x (2 x SPARC T4, 2.85 GHz)

SPARC T4-2 (web)         2 x SPARC T4, 2.85 GHz
SPARC T4-4 (app)         4 x SPARC T4, 3.0 GHz          15,000  1.01          0.63
SPARC T4-4 (db)          4 x SPARC T4, 3.0 GHz

PeopleSoft HRMS Self-Service 8.9 Benchmark
IBM Power 570 (web/app)  12 x POWER5, 1.9 GHz           4,000   1.74          1.25
IBM Power 570 (db)       4 x POWER5, 1.9 GHz

IBM p690 (web)           4 x POWER4, 1.9 GHz
IBM p690 (app)           12 x POWER4, 1.9 GHz           4,000   1.35          1.01
IBM p690 (db)            6 x 4392 MIPS/Gen1

The main differences between version 9.1 and version 8.9 of the benchmark are:

  • the database expanded from 100K employees and 20K managers to 500K employees and 100K managers,
  • the manager data was expanded,
  • a new transaction, "Employee Add Profile," was added; fewer than 2% of users execute it, and the transaction has a heavier footprint,
  • version 9.1 has a different benchmark metric (average search/save response time for a given number of users) versus the single-user search/save time of 8.9,
  • newer versions of the PeopleSoft application and PeopleTools software are used.

Configuration Summary

Application Server:

1 x SPARC T4-4 server
4 x SPARC T4 processors 3.0 GHz
512 GB main memory
5 x 300 GB SAS internal disks,
2 x 100 GB internal SSDs
1 x 300 GB internal SSD
Oracle Solaris 10 8/11
PeopleSoft PeopleTools 8.51.02
PeopleSoft HCM 9.1
Oracle Tuxedo, Version 10.3.0.0, 64-bit, Patch Level 031
Java HotSpot(TM) 64-Bit Server VM on Solaris, version 1.6.0_20

Web Server:

1 x SPARC T4-2 server
2 x SPARC T4 processors 2.85 GHz
256 GB main memory
2 x 300 GB SAS internal disks
1 x 100 GB internal SSD
Oracle Solaris 10 8/11
PeopleSoft PeopleTools 8.51.02
Oracle WebLogic Server 11g (10.3.3)
Java HotSpot(TM) 64-Bit Server VM on Solaris, version 1.6.0_20

Database Server:

2 x SPARC T4-2 servers, each with
2 x SPARC T4 processors 2.85 GHz
128 GB main memory
3 x 300 GB SAS internal disks
Oracle Solaris 11 11/11
Oracle Database 11g Release 2
Oracle Real Application Clusters

Database Storage:

Data
1 x Sun Storage F5100 Flash Array (80 flash modules)
1 x COMSTAR Sun Fire X4470 M2 server
4 x Intel Xeon X7550 processors 2.0 GHz
128 GB main memory
Oracle Solaris 11 11/11
Redo
2 x COMSTAR Sun Fire X4275 servers, each with
1 x Intel Xeon E5540 processor 2.53 GHz
6 GB main memory
12 x 2 TB SAS disks
Oracle Solaris 11 Express 2010.11

Connectivity:

1 x 8-port 10GbE switch
1 x 24-port 1GbE switch
1 x 32-port Brocade FC switch

Benchmark Description

The purpose of the PeopleSoft HRMS Self-Service 9.1 benchmark is to measure comparative online performance of the selected processes in PeopleSoft Enterprise HCM 9.1 with Oracle Database 11g. The benchmark kit is an Oracle standard benchmark kit run by all platform vendors to measure performance. It is an OLTP benchmark with no dependency on remote COBOL calls and no batch workload; database SQL is moderately complex. The results are certified by Oracle and a white paper is published.

PeopleSoft defines a business transaction as a series of HTML pages that guide a user through a particular scenario. Users are defined as corporate Employees, Managers and HR administrators. The benchmark consists of 14 scenarios which emulate users performing typical HCM transactions such as viewing paychecks, promoting and hiring employees, updating employee profiles and other typical HCM application transactions.

All of these transactions are well defined in the PeopleSoft HR Self-Service 9.1 benchmark kit. The benchmark metric is the weighted average search/save response time across all users.

Key Points and Best Practices

  • The combined processing power of two SPARC T4-2 servers running the highly available Oracle RAC database can provide greater throughput and Oracle RAC scalability than is available from a single server.

  • All database data files, recovery files and Oracle Clusterware files were created with the Oracle Automatic Storage Management (Oracle ASM) volume manager and file system, which delivered performance equivalent to conventional volume managers, file systems and raw devices, with the added benefit of the ease of management provided by the integrated Oracle ASM storage management solution.

  • Five Oracle PeopleSoft Domains with 200 application servers (40 per Domain) were hosted in each of two separate Oracle Solaris Containers on the SPARC T4-4 server, for a total of 10 Domains and 400 application server processes, to demonstrate consolidation of multiple application servers, ease of administration and load balancing.

  • Each Oracle Solaris Container was bound to a separate processor set of 124 virtual processors. The default set (8 virtual processors in total, 4 each from the first and third processor sockets) handled network and disk interrupts. This improved performance by reducing memory access latency (using the physical memory closest to the processors) and by offloading I/O interrupt handling to the default set, freeing processing resources for the application server virtual processors.

See Also

Disclosure Statement

Oracle's PeopleSoft HRMS 9.1 benchmark, www.oracle.com/us/solutions/benchmark/apps-benchmark/peoplesoft-167486.html, results 5/1/2012.

Thursday Apr 19, 2012

Sun ZFS Storage 7420 Appliance Delivers 2-Node World Record SPECsfs2008 NFS Benchmark

Oracle's Sun ZFS Storage 7420 appliance delivered world record two-node performance on the SPECsfs2008 NFS benchmark, beating results published on NetApp's dual-controller and 4-node high-end FAS6240 storage systems.

  • The Sun ZFS Storage 7420 appliance delivered a world record two-node result of 267,928 SPECsfs2008_nfs.v3 Ops/sec with an Overall Response Time (ORT) of 1.31 msec on the SPECsfs2008 NFS benchmark.

  • The Sun ZFS Storage 7420 appliance delivered 1.4x higher throughput than the dual-controller NetApp FAS6240 and 2.6x higher throughput than the dual-controller NetApp FAS3270 on the SPECsfs2008_nfs.v3 benchmark at less than half the list price of either result.

  • The Sun ZFS Storage 7420 appliance required 10 percent less rack space than the dual-controller NetApp FAS6240.

  • The Sun ZFS Storage 7420 appliance had 3 percent higher throughput than the 4-node NetApp FAS6240 on the SPECsfs2008_nfs.v3 benchmark.

  • The Sun ZFS Storage 7420 appliance required 25 percent less rack space than the 4-node NetApp FAS6240.

  • The Sun ZFS Storage 7420 appliance has 14 percent better Overall Response Time than the 4-node NetApp FAS6240 on the SPECsfs2008_nfs.v3 benchmark.
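The comparison claims above can be checked directly against the published values in the Performance Landscape table; a quick arithmetic sketch:

```python
# Sanity-check the SPECsfs2008 comparison claims from the published values.

def ratio(a, b):
    """Throughput ratio rounded to one decimal, as quoted in 'Nx higher' claims."""
    return round(a / b, 1)

def pct_less(a, b):
    """Percent reduction of a relative to b, rounded to the nearest percent."""
    return round((b - a) / b * 100)

ops_7420, ort_7420, ru_7420 = 267_928, 1.31, 54
ops_6240_2n, ru_6240_2n = 190_675, 60
ops_6240_4n, ort_6240_4n, ru_6240_4n = 260_388, 1.53, 72
ops_3270 = 101_183

print(ratio(ops_7420, ops_6240_2n))               # 1.4x vs. dual-controller FAS6240
print(ratio(ops_7420, ops_3270))                  # 2.6x vs. dual-controller FAS3270
print(round((ops_7420 / ops_6240_4n - 1) * 100))  # 3% higher vs. 4-node FAS6240
print(pct_less(ru_7420, ru_6240_2n))              # 10% less rack space (2-node)
print(pct_less(ru_7420, ru_6240_4n))              # 25% less rack space (4-node)
print(pct_less(ort_7420, ort_6240_4n))            # 14% better ORT
```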

Performance Landscape

SPECsfs2008_nfs.v3 Performance Chart (in decreasing SPECsfs2008_nfs.v3 Ops/sec order)

Sponsor  System  Throughput (Ops/sec)  Overall Response Time (msec)  Nodes  Memory incl. Flash (GB)  Disks  Rack Units (Controllers + Disks)
Oracle 7420 267,928 1.31 2 6,728 280 54
NetApp FAS6240 260,388 1.53 4 2,256 288 72
NetApp FAS6240 190,675 1.17 2 1,128 288 60
EMC VG8 135,521 1.92 280 312
Oracle 7320 134,140 1.51 2 4,968 136 26
EMC NS-G8 110,621 2.32 264 100
NetApp FAS3270 101,183 1.66 2 40 360 66

Throughput SPECsfs2008_nfs.v3 Ops/sec — the Performance Metric
Overall Response Time — the corresponding Response Time Metric
Nodes — Nodes and Controllers are being used interchangeably

Complete SPECsfs2008 benchmark results may be found at http://www.spec.org/sfs2008/results/sfs2008.html.

Configuration Summary

Storage Configuration:

Sun ZFS Storage 7420 appliance in clustered configuration
2 x Sun ZFS Storage 7420 controllers, each with
4 x 2.4 GHz Intel Xeon E7-4870 processors
1 TB memory
4 x 512 GB SSD flash-enabled read-cache
2 x 10GbE NICs
12 x Sun Disk shelves
10 x shelves with 24 x 300 GB 15K RPM SAS-2 drives
2 x shelves with 20 x 300 GB 15K RPM SAS-2 drives and 4 x 73 GB SAS-2 flash-enabled write-cache

Server Configuration:

4 x Sun Fire X4270 M2 servers, each with
2 x 3.3 GHz Intel Xeon E5680 processors
144 GB memory
1 x 10 GbE NIC
Oracle Solaris 10 9/10

Switches:

1 x 24-port 10Gb Ethernet Switch

Benchmark Description

SPECsfs2008 is the latest version of the Standard Performance Evaluation Corporation (SPEC) benchmark suite measuring file server throughput and response time, providing a standardized method for comparing performance across different vendor platforms. SPECsfs2008 results summarize the server's capabilities with respect to the number of operations that can be handled per second, as well as the overall latency of the operations. The suite is a follow-on to the SFS97_R1 benchmark, adding a CIFS workload, an updated NFSv3 workload, support for additional client platforms, and a new test harness and reporting/submission framework.

See Also

Disclosure Statement

SPEC and SPECsfs are registered trademarks of Standard Performance Evaluation Corporation (SPEC). Results as of April 18, 2012, for more information see www.spec.org. Sun ZFS Storage 7420 Appliance 267,928 SPECsfs2008_nfs.v3 Ops/sec, 1.31 msec ORT, NetApp Data ONTAP 8.1 Cluster-Mode (4-node FAS6240) 260,388 SPECsfs2008_nfs.v3 Ops/Sec, 1.53 msec ORT, NetApp FAS6240 190,675 SPECsfs2008_nfs.v3 Ops/Sec, 1.17 msec ORT. NetApp FAS3270 101,183 SPECsfs2008_nfs.v3 Ops/Sec, 1.66 msec ORT.

Nodes refers to the items in the SPECsfs2008 disclosed Configuration Bill of Materials that contain the Processing Elements performing the NFS processing function. These are the first item listed in each disclosed Configuration Bill of Materials, except for EMC, where they are the first and third items listed, and HP, where they are the second item, listed as Blade Servers. The number of nodes is taken from the QTY disclosed in the Configuration Bill of Materials as described above. Configuration Bill of Materials list price for the Oracle result: US$ 423,644. Configuration Bill of Materials list price for the NetApp FAS3270 result: US$ 1,215,290. Configuration Bill of Materials list price for the NetApp FAS6240 result: US$ 1,028,118. Oracle pricing from https://shop.oracle.com/pls/ostore/f?p=dstore:home:0, traverse to "Storage and Tape" and then to "NAS Storage". NetApp pricing from http://www.netapp.com/us/media/na-list-usd-netapp-custom-state-new-discounts.html.

Sunday Apr 15, 2012

Sun ZFS Storage 7420 Appliance Delivers Top High-End Price/Performance Result for SPC-2 Benchmark

Oracle's Sun ZFS Storage 7420 appliance delivered leading high-end price/performance on the SPC Benchmark 2 (SPC-2).

  • The Sun ZFS Storage 7420 appliance delivered a result of 10,704 SPC-2 MB/s at $35.24 $/SPC-2 MB/s on the SPC-2 benchmark.

  • The Sun ZFS Storage 7420 appliance beats the IBM DS8800 result by over 10% on SPC-2 MB/s and has 7.7x better $/SPC-2 MB/s.

  • The Sun ZFS Storage 7420 appliance achieved the best price/performance for the top 18 posted unique performance results on the SPC-2 benchmark.

Performance Landscape

SPC-2 Performance Chart (in decreasing performance order)

System  SPC-2 MB/s  $/SPC-2 MB/s  ASU Capacity (GB)  TSC Price  Data Protection Level  Date  Results Identifier
HP StorageWorks P9500 13,148 $88.34 129,112 $1,161,504 RAID-5 03/07/12 B00056
Sun ZFS Storage 7420 10,704 $35.24 31,884 $377,225 Mirroring 04/12/12 B00058
IBM DS8800 9,706 $270.38 71,537 $2,624,257 RAID-5 12/01/10 B00051
HP XP24000 8,725 $187.45 18,401 $1,635,434 Mirroring 09/08/08 B00035
Hitachi Storage Platform V 8,725 $187.49 18,401 $1,635,770 Mirroring 09/08/08 B00036
TMS RamSan-630 8,323 $49.37 8,117 $410,927 RAID-5 05/10/11 B00054
IBM XIV 7,468 $152.34 154,619 $1,137,641 RAID-1 10/19/11 BE00001
IBM DS8700 7,247 $277.22 32,642 $2,009,007 RAID-5 11/30/09 B00049
IBM SAN Vol Ctlr 4.2 7,084 $463.66 101,155 $3,284,767 RAID-5 07/12/07 B00024
Fujitsu ETERNUS DX440 S2 5,768 $66.50 42,133 $383,576 Mirroring 04/12/12 B00057
IBM DS5300 5,634 $74.13 16,383 $417,648 RAID-5 10/21/09 B00045
Sun Storage 6780 5,634 $47.03 16,383 $264,999 RAID-5 10/28/09 B00047
IBM DS5300 5,544 $75.33 14,043 $417,648 RAID-6 10/21/09 B00046
Sun Storage 6780 5,544 $47.80 14,043 $264,999 RAID-6 10/28/09 B00048
IBM DS5300 4,818 $93.80 16,383 $451,986 RAID-5 09/25/08 B00037
Sun Storage 6780 4,818 $53.61 16,383 $258,329 RAID-5 02/02/09 B00039
IBM DS5300 4,676 $96.67 14,043 $451,986 RAID-6 09/25/08 B00038
Sun Storage 6780 4,676 $55.25 14,043 $258,329 RAID-6 02/03/09 B00040
IBM SAN Vol Ctlr 4.1 4,544 $400.78 51,265 $1,821,301 RAID-5 09/12/06 B00011
IBM SAN Vol Ctlr 3.1 3,518 $563.93 20,616 $1,983,785 Mirroring 12/14/05 B00001
Fujitsu ETERNUS8000 1100 3,481 $238.93 4,570 $831,649 Mirroring 03/08/07 B00019
IBM DS8300 3,218 $539.38 15,393 $1,735,473 Mirroring 12/14/05 B00006
IBM Storwize V7000 3,133 $71.32 29,914 $223,422 RAID-5 12/13/10 B00052

SPC-2 MB/s = the Performance Metric
$/SPC-2 MB/s = the Price/Performance Metric
ASU Capacity = the Capacity Metric
Data Protection = Data Protection Metric
TSC Price = Tested Storage Configuration price, the pricing metric
Results Identifier = a unique identification of the result

Complete SPC-2 benchmark results may be found at http://www.storageperformance.org.

Configuration Summary

Storage Configuration:

Sun ZFS Storage 7420 appliance in clustered configuration
2 x Sun ZFS Storage 7420 controllers, each with
4 x 2.0 GHz Intel Xeon X7550 processors
512 GB memory, 64 x 8 GB 1066 MHz DDR3 DIMMs
16 x Sun Disk shelves, each with
24 x 300 GB 15K RPM SAS-2 drives

Server Configuration:

1 x Sun Fire X4470 server, with
4 x 2.4 GHz Intel Xeon E7-4870 processors
512 GB memory
8 x 8 Gb FC connections to the Sun ZFS Storage 7420 appliance
Oracle Solaris 11 11/11

2 x Sun Fire X4470 servers, each with
4 x 2.4 GHz Intel Xeon E7-4870 processors
256 GB memory
8 x 8 Gb FC connections to the Sun ZFS Storage 7420 appliance
Oracle Solaris 11 11/11

Benchmark Description

SPC Benchmark-2 (SPC-2): Consists of three distinct workloads designed to demonstrate the performance of a storage subsystem during the execution of business critical applications that require the large-scale, sequential movement of data. Those applications are characterized predominately by large I/Os organized into one or more concurrent sequential patterns. A description of each of the three SPC-2 workloads is listed below as well as examples of applications characterized by each workload.

  • Large File Processing: Applications in a wide range of fields, which require simple sequential process of one or more large files such as scientific computing and large-scale financial processing.
  • Large Database Queries: Applications that involve scans or joins of large relational tables, such as those performed for data mining or business intelligence.
  • Video on Demand: Applications that provide individualized video entertainment to a community of subscribers by drawing from a digital film library.

SPC-2 is built to:

  • Provide a level playing field for test sponsors.
  • Produce results that are powerful and yet simple to use.
  • Provide value for engineers as well as IT consumers and solution integrators.
  • Is easy to run, easy to audit/verify, and easy to use to report official results.

See Also

Disclosure Statement

SPC-2, SPC-2 MB/s, $/SPC-2 MB/s are registered trademarks of Storage Performance Council (SPC). Results as of April 12, 2012, for more information see www.storageperformance.org. Sun ZFS Storage 7420 Appliance http://www.storageperformance.org/results/benchmark_results_spc2#b00058; IBM DS8800 http://www.storageperformance.org/results/benchmark_results_spc2#b00051.

Thursday Apr 12, 2012

Sun Fire X4270 M3 SAP Enhancement Package 4 for SAP ERP 6.0 (Unicode) Two-Tier Standard Sales and Distribution (SD) Benchmark

Oracle's Sun Fire X4270 M3 server (now known as Sun Server X3-2L) achieved 8,320 SAP SD Benchmark users running SAP enhancement package 4 for SAP ERP 6.0 with unicode software using Oracle Database 11g and Oracle Solaris 10.

  • The Sun Fire X4270 M3 server using Oracle Database 11g and Oracle Solaris 10 beat both the IBM Flex System x240 and the IBM System x3650 M4 servers running DB2 9.7 and Windows Server 2008 R2 Enterprise Edition.

  • The Sun Fire X4270 M3 server running Oracle Database 11g and Oracle Solaris 10 beat the HP ProLiant BL460c Gen8 server using SQL Server 2008 and Windows Server 2008 R2 Enterprise Edition by 6%.

  • The Sun Fire X4270 M3 server using Oracle Database 11g and Oracle Solaris 10 beat Cisco UCS C240 M3 server running SQL Server 2008 and Windows Server 2008 R2 Datacenter Edition by 9%.

  • The Sun Fire X4270 M3 server running Oracle Database 11g and Oracle Solaris 10 beat the Fujitsu PRIMERGY RX300 S7 server using SQL Server 2008 and Windows Server 2008 R2 Enterprise Edition by 10%.
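The percentage leads quoted above come straight from the SAP SD user counts, and SAPS/Proc is SAPS divided by the two processors in each system; a quick check:

```python
# Verify the SAP SD comparison percentages from the published user counts.

def lead_pct(winner, other):
    """Percentage lead of winner over other, rounded to the nearest percent."""
    return round((winner / other - 1) * 100)

sun_users = 8_320
print(lead_pct(sun_users, 7_865))  # 6% over HP ProLiant BL460c Gen8
print(lead_pct(sun_users, 7_635))  # 9% over Cisco UCS C240 M3
print(lead_pct(sun_users, 7_570))  # 10% over Fujitsu PRIMERGY RX300 S7

print(45_570 // 2)  # 22,785 SAPS/Proc for the 2-processor Sun Fire X4270 M3
```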

Performance Landscape

SAP-SD 2-Tier Performance Table (in decreasing performance order).

SAP ERP 6.0 Enhancement Pack 4 (Unicode) Results
(benchmark version from January 2009 to April 2012)

System (each: 2x Intel Xeon E5-2690 @2.90GHz, 128 GB) | OS / Database | Users | SAP ERP/ECC Release | SAPS | SAPS/Proc | Date
Sun Fire X4270 M3 | Oracle Solaris 10 / Oracle Database 11g | 8,320 | 2009 6.0 EP4 (Unicode) | 45,570 | 22,785 | 10-Apr-12
IBM Flex System x240 | Windows Server 2008 R2 EE / DB2 9.7 | 7,960 | 2009 6.0 EP4 (Unicode) | 43,520 | 21,760 | 11-Apr-12
HP ProLiant BL460c Gen8 | Windows Server 2008 R2 EE / SQL Server 2008 | 7,865 | 2009 6.0 EP4 (Unicode) | 42,920 | 21,460 | 29-Mar-12
IBM System x3650 M4 | Windows Server 2008 R2 EE / DB2 9.7 | 7,855 | 2009 6.0 EP4 (Unicode) | 42,880 | 21,440 | 06-Mar-12
Cisco UCS C240 M3 | Windows Server 2008 R2 DE / SQL Server 2008 | 7,635 | 2009 6.0 EP4 (Unicode) | 41,800 | 20,900 | 06-Mar-12
Fujitsu PRIMERGY RX300 S7 | Windows Server 2008 R2 EE / SQL Server 2008 | 7,570 | 2009 6.0 EP4 (Unicode) | 41,320 | 20,660 | 06-Mar-12

Complete benchmark results may be found at the SAP benchmark website http://www.sap.com/benchmark.

Configuration and Results Summary

Hardware Configuration:

Sun Fire X4270 M3
2 x 2.90 GHz Intel Xeon E5-2690 processors
128 GB memory
Sun StorageTek 6540 with 4 x 16 x 300 GB 15K RPM drives, 4 Gb FC-AL

Software Configuration:

Oracle Solaris 10
Oracle Database 11g
SAP enhancement package 4 for SAP ERP 6.0 (Unicode)

Certified Results (published by SAP):

Number of benchmark users: 8,320
Average dialog response time: 0.95 seconds
Throughput:
  Fully processed order lines: 911,330
  Dialog steps/hour: 2,734,000
SAPS: 45,570
SAP Certification: 2012014

Benchmark Description

The SAP Standard Application SD (Sales and Distribution) Benchmark is a two-tier ERP business test that is indicative of full business workloads of complete order processing and invoice processing, and demonstrates the ability to run both the application and database software on a single system. The SAP Standard Application SD Benchmark represents the critical tasks performed in real-world ERP business environments.

SAP is one of the premier world-wide ERP application providers, and maintains a suite of benchmark tests to demonstrate the performance of competitive systems on the various SAP products.

See Also

Disclosure Statement

Two-tier SAP Sales and Distribution (SD) standard SAP SD benchmark based on SAP enhancement package 4 for SAP ERP 6.0 (Unicode) application benchmark as of 04/11/12: Sun Fire X4270 M3 (2 processors, 16 cores, 32 threads) 8,320 SAP SD Users, 2 x 2.90 GHz Intel Xeon E5-2690, 128 GB memory, Oracle 11g, Solaris 10, Cert# 2012014. IBM Flex System x240 (2 processors, 16 cores, 32 threads) 7,960 SAP SD Users, 2 x 2.90 GHz Intel Xeon E5-2690, 128 GB memory, DB2 9.7, Windows Server 2008 R2 EE, Cert# 2012016. IBM System x3650 M4 (2 processors, 16 cores, 32 threads) 7,855 SAP SD Users, 2 x 2.90 GHz Intel Xeon E5-2690, 128 GB memory, DB2 9.7, Windows Server 2008 R2 EE, Cert# 2012010. Cisco UCS C240 M3 (2 processors, 16 cores, 32 threads) 7,635 SAP SD Users, 2 x 2.90 GHz Intel Xeon E5-2690, 128 GB memory, SQL Server 2008, Windows Server 2008 R2 DE, Cert# 2012011. Fujitsu PRIMERGY RX300 S7 (2 processors, 16 cores, 32 threads) 7,570 SAP SD Users, 2 x 2.90 GHz Intel Xeon E5-2690, 128 GB memory, SQL Server 2008, Windows Server 2008 R2 EE, Cert# 2012008. HP ProLiant DL380p Gen8 (2 processors, 16 cores, 32 threads) 7,865 SAP SD Users, 2 x 2.90 GHz Intel Xeon E5-2690, 128 GB memory, SQL Server 2008, Windows Server 2008 R2 EE, Cert# 2012012.

SAP, R/3, reg TM of SAP AG in Germany and other countries. More info www.sap.com/benchmark

Tuesday Apr 10, 2012

World Record Oracle E-Business Suite 12.1.3 Standard Extra-Large Payroll (Batch) Benchmark on Sun Server X3-2L

Oracle's Sun Server X3-2L (formerly Sun Fire X4270 M3) server set a world record running the Oracle E-Business Suite 12.1.3 Standard Extra-Large Payroll (Batch) benchmark.

  • This is the first published result using Oracle E-Business 12.1.3.

  • The Sun Server X3-2L ran the Extra-Large Payroll workload in 19 minutes.

Performance Landscape

This is the first published result for the Payroll Extra-Large model on the Oracle E-Business Suite 12.1.3 benchmark.

Batch Workload: Payroll Extra-Large Model
System Employees/Hr Elapsed Time
Sun Server X3-2L 789,515 19 minutes
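The Employees/Hr metric is the payroll batch size divided by elapsed hours; working backwards from the published figures shows the Extra-Large batch covers roughly 250,000 employees:

```python
# Employees/Hr = batch size / elapsed hours, so the batch size can be
# recovered from the published throughput and elapsed time.
rate_per_hour = 789_515
elapsed_minutes = 19
batch_size = rate_per_hour * elapsed_minutes / 60
print(round(batch_size))  # ~250,000 employees
```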

Configuration Summary

Hardware Configuration:

Sun Server X3-2L
2 x Intel Xeon E5-2690, 2.9 GHz
128 GB memory
8 x 100 GB SSD for data
1 x 300 GB SSD for log

Software Configuration:

Oracle Linux 5.7
Oracle E-Business Suite R12 (12.1.3)
Oracle Database 11g (11.2.0.3)

Benchmark Description

The Oracle E-Business Suite Standard R12 Benchmark combines online transaction execution by simulated users with concurrent batch processing to model a typical scenario for a global enterprise. This benchmark ran one batch component, Payroll, in the Extra-Large size. The goal of the benchmark run is to achieve the best batch payroll performance using the X-Large configuration.

Results can be published in four sizes, using one or more online/batch modules:

  • X-large: maximum online users running all business flows between 10,000 and 20,000; 750,000 order to cash lines per hour and 250,000 payroll checks per hour.
    • Order to Cash Online -- 2400 users
      • The percentage across the 5 transactions in Order Management module is:
        • Insert Manual Invoice -- 16.66%
        • Insert Order -- 32.33%
        • Order Pick Release -- 16.66%
        • Ship Confirm -- 16.66%
        • Order Summary Report -- 16.66%
    • HR Self-Service -- 4000 users
    • Customer Support Flow -- 8000 users
    • Procure to Pay -- 2000 users
  • Large: 10,000 online users; 100,000 order to cash lines per hour and 100,000 payroll checks per hour.
  • Medium: up to 3000 online users; 50,000 order to cash lines per hour and 10,000 payroll checks per hour.
  • Small: up to 1000 online users; 10,000 order to cash lines per hour and 5,000 payroll checks per hour.

See Also

Disclosure Statement

Oracle E-Business X-Large Batch-Payroll benchmark, Sun Server X3-2L, 2.90 GHz, 2 chips, 16 cores, 32 threads, 128 GB memory, elapsed time 19.0 minutes, 789,515 Employees/Hr, Oracle Linux 5.7, Oracle E-Business Suite 12.1.3, Oracle Database 11g Release 2. Results as of 7/10/2012.

SPEC CPU2006 Results on Oracle's Sun x86 Servers

Oracle's new Sun x86 servers delivered world records on the benchmarks SPECfp2006 and SPECint_rate2006 for two processor servers. This was accomplished with Oracle Solaris 11 and Oracle Solaris Studio 12.3 software.

  • The Sun Fire X4170 M3 server (now known as Sun Server X3-2) achieved a world record result on the SPECfp2006 benchmark with a score of 96.8.

  • The Sun Blade X6270 M3 server module (now known as Sun Blade X3-2B) produced the best integer throughput performance of all two-socket servers, with a SPECint_rate2006 score of 705.

  • The Sun x86 servers with Intel Xeon E5-2690 2.9 GHz processors delivered a cross-generational performance improvement of up to 1.8x over the previous-generation Sun x86 M2 servers.

Performance Landscape

Complete benchmark results are at the SPEC website, SPEC CPU2006 Results. The tables below provide the new Oracle results as well as select results from other vendors.

SPECint2006
System Processor c/c/t * Peak Base O/S Compiler
Fujitsu PRIMERGY BX924 S3 Intel E5-2690, 2.9 GHz 2/16/16 60.8 56.0 RHEL 6.2 Intel 12.1.2.273
Sun Fire X4170 M3 Intel E5-2690, 2.9 GHz 2/16/32 58.5 54.3 Oracle Linux 6.1 Intel 12.1.0.225
Sun Fire X4270 M2 Intel X5690, 3.47 GHz 2/12/12 46.2 43.9 Oracle Linux 5.5 Intel 12.0.1.116

SPECfp2006
System Processor c/c/t * Peak Base O/S Compiler
Sun Fire X4170 M3 Intel E5-2690, 2.9 GHz 2/16/32 96.8 86.4 Oracle Solaris 11 Studio 12.3
Sun Blade X6270 M3 Intel E5-2690, 2.9 GHz 2/16/32 96.0 85.2 Oracle Solaris 11 Studio 12.3
Sun Fire X4270 M3 Intel E5-2690, 2.9 GHz 2/16/32 95.9 85.1 Oracle Solaris 11 Studio 12.3
Fujitsu CELSIUS R920 Intel E5-2687, 2.9 GHz 2/16/16 93.8 87.6 RHEL 6.1 Intel 12.1.2.273
Sun Fire X4270 M2 Intel X5690, 3.47 GHz 2/12/24 64.2 59.2 Oracle Solaris 10 Studio 12.2

Only 2-chip server systems are listed below; workstations are excluded.

SPECint_rate2006
System  Processor  Base Copies  c/c/t *  Peak  Base  O/S  Compiler
Sun Blade X6270 M3 Intel E5-2690, 2.9 GHz 32 2/16/32 705 632 Oracle Solaris 11 Studio 12.3
Sun Fire X4270 M3 Intel E5-2690, 2.9 GHz 32 2/16/32 705 630 Oracle Solaris 11 Studio 12.3
Sun Fire X4170 M3 Intel E5-2690, 2.9 GHz 32 2/16/32 702 628 Oracle Solaris 11 Studio 12.3
Cisco UCS C220 M3 Intel E5-2690, 2.9 GHz 32 2/16/32 697 671 RHEL 6.2 Intel 12.1.0.225
Sun Blade X6270 M2 Intel X5690, 3.47 GHz 24 2/12/24 410 386 Oracle Linux 5.5 Intel 12.0.1.116

SPECfp_rate2006
System  Processor  Base Copies  c/c/t *  Peak  Base  O/S  Compiler
Cisco UCS C240 M3 Intel E5-2690, 2.9 GHz 32 2/16/32 510 496 RHEL 6.2 Intel 12.1.2.273
Sun Fire X4270 M3 Intel E5-2690, 2.9 GHz 64 2/16/32 497 461 Oracle Solaris 11 Studio 12.3
Sun Blade X6270 M3 Intel E5-2690, 2.9 GHz 32 2/16/32 497 460 Oracle Solaris 11 Studio 12.3
Sun Fire X4170 M3 Intel E5-2690, 2.9 GHz 64 2/16/32 495 464 Oracle Solaris 11 Studio 12.3
Sun Fire X4270 M2 Intel X5690, 3.47 GHz 24 2/12/24 273 265 Oracle Linux 5.5 Intel 12.0.1.116

* c/c/t — chips / cores / threads enabled
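The "up to 1.8x" cross-generational figure can be reproduced from the peak scores in the tables above; SPECfp_rate2006 (X4270 M3 vs. X4270 M2) shows the largest gain:

```python
# Cross-generational speedups from the published peak scores (M3 vs. M2).
gains = {
    "SPECfp2006":       96.8 / 64.2,
    "SPECint_rate2006": 705 / 410,
    "SPECfp_rate2006":  497 / 273,
}
for metric, g in gains.items():
    print(f"{metric}: {g:.2f}x")
print(round(max(gains.values()), 1))  # 1.8
```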

Configuration Summary and Results

Hardware Configuration:

Sun Fire X4170 M3 server
2 x 2.90 GHz Intel Xeon E5-2690 processors
128 GB memory (16 x 8 GB 2Rx4 PC3-12800R-11, ECC)

Sun Fire X4270 M3 server
2 x 2.90 GHz Intel Xeon E5-2690 processors
128 GB memory (16 x 8 GB 2Rx4 PC3-12800R-11, ECC)

Sun Blade X6270 M3 server module
2 x 2.90 GHz Intel Xeon E5-2690 processors
128 GB memory (16 x 8 GB 2Rx4 PC3-12800R-11, ECC)

Software Configuration:

Oracle Solaris 11 11/11 (SRU2)
Oracle Solaris Studio 12.3 (patch update 1 nightly build 120313)
Oracle Linux Server Release 6.1
Intel C++ Studio XE 12.1.0.225
SPEC CPU2006 V1.2

Benchmark Description

SPEC CPU2006 is SPEC's most popular benchmark. It measures:

  • Speed — single copy performance of chip, memory, compiler
  • Rate — multiple copy (throughput)

The benchmark is also divided into integer intensive applications and floating point intensive applications:

  • integer: 12 benchmarks derived from real applications such as perl, gcc, XML processing, and pathfinding
  • floating point: 17 benchmarks derived from real applications, including chemistry, physics, genetics, and weather.

It is also divided depending upon the amount of optimization allowed:

  • base: optimization is consistent per compiled language; all benchmarks must be compiled with the same flags per language.
  • peak: specific compiler optimization is allowed per application.

The overall metrics for the benchmark which are commonly used are:

  • SPECint_rate2006, SPECint_rate_base2006: integer, rate
  • SPECfp_rate2006, SPECfp_rate_base2006: floating point, rate
  • SPECint2006, SPECint_base2006: integer, speed
  • SPECfp2006, SPECfp_base2006: floating point, speed
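Each overall metric is a geometric mean of the per-benchmark ratios against SPEC's reference machine. A minimal sketch of that calculation (the ratios below are made up for illustration, not actual CPU2006 data):

```python
from math import prod

def spec_metric(ratios):
    """Geometric mean of per-benchmark runtime ratios (system vs. reference)."""
    return prod(ratios) ** (1 / len(ratios))

# Hypothetical per-benchmark ratios, for illustration only.
ratios = [40.1, 55.3, 48.7, 60.2]
print(round(spec_metric(ratios), 1))
```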

See here for additional information.

See Also

Disclosure Statement

SPEC and the benchmark names SPECfp and SPECint are registered trademarks of the Standard Performance Evaluation Corporation. Results as of 10 April 2012 from www.spec.org and this report.

SPEC CPU2006 Results on Oracle's Netra Server X3-2

Oracle's Netra Server X3-2 (formerly Sun Netra X4270 M3), equipped with the new Intel Xeon processor E5-2658, is up to 2.5x faster than the previous generation Netra systems on SPEC CPU2006 workloads.

Performance Landscape

Complete benchmark results are at the SPEC website, SPEC CPU2006 results. The tables below provide the new Oracle results and previous generation results.

SPECint2006
System Processor c/c/t * Peak Base O/S Compiler
Netra Server X3-2 Intel E5-2658, 2.1 GHz 2/16/32 38.5 36.0 Oracle Linux 6.1 Intel 12.1.0.225
Sun Netra X4270 Intel L5518, 2.13 GHz 2/8/16 27.9 25.0 Oracle Linux 5.4 Intel 11.1
Sun Netra X4250 Intel L5408, 2.13 GHz 2/8/8 20.3 17.9 SLES 10 SP1 Intel 11.0

SPECfp2006
System Processor c/c/t * Peak Base O/S Compiler
Netra Server X3-2 Intel E5-2658, 2.1 GHz 2/16/32 65.3 61.6 Oracle Linux 6.1 Intel 12.1.0.225
Sun Netra X4270 Intel L5518, 2.13 GHz 2/8/16 32.5 29.4 Oracle Linux 5.4 Intel 11.1
Sun Netra X4250 Intel L5408, 2.13 GHz 2/8/8 18.5 17.7 SLES 10 SP1 Intel 11.0

SPECint_rate2006
System  Processor  Base Copies  c/c/t *  Peak  Base  O/S  Compiler
Netra Server X3-2 Intel E5-2658, 2.1 GHz 32 2/16/32 477 455 Oracle Linux 6.1 Intel 12.1.0.225
Sun Netra X4270 Intel L5518, 2.13 GHz 16 2/8/16 201 189 Oracle Linux 5.4 Intel 11.1
Sun Netra X4250 Intel L5408, 2.13 GHz 8 2/8/8 103 82.0 SLES 10 SP1 Intel 11.0

SPECfp_rate2006
System  Processor  Base Copies  c/c/t *  Peak  Base  O/S  Compiler
Netra Server X3-2 Intel E5-2658, 2.1 GHz 32 2/16/32 392 383 Oracle Linux 6.1 Intel 12.1.0.225
Sun Netra X4270 Intel L5518, 2.13 GHz 16 2/8/16 155 153 Oracle Linux 5.4 Intel 11.1
Sun Netra X4250 Intel L5408, 2.13 GHz 8 2/8/8 55.9 52.3 SLES 10 SP1 Intel 11.0

* c/c/t — chips / cores / threads enabled
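The "up to 2.5x" claim against the previous-generation Sun Netra X4270 follows from the peak scores in the tables above, with SPECfp_rate2006 showing the largest gain:

```python
# Netra Server X3-2 vs. previous-generation Sun Netra X4270, peak scores.
prev = {"SPECint2006": 27.9, "SPECfp2006": 32.5,
        "SPECint_rate2006": 201, "SPECfp_rate2006": 155}
new  = {"SPECint2006": 38.5, "SPECfp2006": 65.3,
        "SPECint_rate2006": 477, "SPECfp_rate2006": 392}

speedups = {k: new[k] / prev[k] for k in prev}
best = max(speedups, key=speedups.get)
print(best, round(speedups[best], 1))  # SPECfp_rate2006 2.5
```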

Configuration Summary

Hardware Configuration:

Netra Server X3-2
2 x 2.10 GHz Intel Xeon E5-2658 processors
128 GB memory (16 x 8 GB 2Rx4 PC3-12800R-11, ECC)

Software Configuration:

Oracle Linux Server Release 6.1
Intel C++ Studio XE 12.1.0.225
SPEC CPU2006 V1.2

Benchmark Description

SPEC CPU2006 is SPEC's most popular benchmark. It measures:

  • Speed — single copy performance of chip, memory, compiler
  • Rate — multiple copy (throughput)

The benchmark is also divided into integer intensive applications and floating point intensive applications:

  • integer: 12 benchmarks derived from real applications such as perl, gcc, XML processing, and pathfinding
  • floating point: 17 benchmarks derived from real applications, including chemistry, physics, genetics, and weather.

It is also divided depending upon the amount of optimization allowed:

  • base: optimization is consistent per compiled language; all benchmarks must be compiled with the same flags per language.
  • peak: specific compiler optimization is allowed per application.

The overall metrics for the benchmark which are commonly used are:

  • SPECint_rate2006, SPECint_rate_base2006: integer, rate
  • SPECfp_rate2006, SPECfp_rate_base2006: floating point, rate
  • SPECint2006, SPECint_base2006: integer, speed
  • SPECfp2006, SPECfp_base2006: floating point, speed

See here for additional information.

See Also

Disclosure Statement

SPEC and the benchmark names SPECfp and SPECint are registered trademarks of the Standard Performance Evaluation Corporation. Results as of 10 July 2012 from www.spec.org and this report.

Thursday Mar 29, 2012

Sun Server X2-8 (formerly Sun Fire X4800 M2) Delivers World Record TPC-C for x86 Systems

Oracle's Sun Server X2-8 (formerly Sun Fire X4800 M2 server) equipped with eight 2.4 GHz Intel Xeon Processor E7-8870 chips obtained a result of 5,055,888 tpmC on the TPC-C benchmark. This result is a world record for x86 servers. Oracle demonstrated this world record database performance running Oracle Database 11g Release 2 Enterprise Edition with Partitioning.

  • The Sun Server X2-8 delivered a new x86 TPC-C world record of 5,055,888 tpmC with a price performance of $0.89/tpmC using Oracle Database 11g Release 2. This configuration is available 7/10/12.

  • The Sun Server X2-8 delivers 3.0x better performance than the next 8-processor result, an IBM System p 570 equipped with POWER6 processors.

  • The Sun Server X2-8 has 3.1x better price/performance than the 8-processor 4.7 GHz POWER6 IBM System p 570.

  • The Sun Server X2-8 has 1.6x better performance than the 4-processor IBM x3850 X5 system equipped with Intel Xeon processors.

  • This is the first TPC-C result on any system using eight Intel Xeon Processor E7-8800 Series chips.

  • The Sun Server X2-8 is the first x86 system to exceed 5 million tpmC.

  • The Oracle solution used the Oracle Linux operating system and Oracle Database 11g Release 2 Enterprise Edition with Partitioning to produce the x86 world record TPC-C benchmark performance.

Performance Landscape

Select TPC-C results (sorted by tpmC, bigger is better)

System  p/c/t  tpmC  Price/tpmC  Avail  Database  Memory Size
Sun Server X2-8 8/80/160 5,055,888 0.89 USD 7/10/2012 Oracle 11g R2 4 TB
IBM x3850 X5 4/40/80 3,014,684 0.59 USD 7/11/2011 DB2 ESE 9.7 3 TB
IBM x3850 X5 4/32/64 2,308,099 0.60 USD 5/20/2011 DB2 ESE 9.7 1.5 TB
IBM System p 570 8/16/32 1,616,162 3.54 USD 11/21/2007 DB2 9.0 2 TB

p/c/t - processors, cores, threads
Avail - availability date
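Since price/tpmC is defined as total system price divided by tpmC, the approximate system prices can be recovered from the table values (approximate because $/tpmC is rounded to cents):

```python
# Recover approximate total system prices from tpmC and $/tpmC.
results = {
    "Sun Server X2-8":  (5_055_888, 0.89),
    "IBM x3850 X5":     (3_014_684, 0.59),
    "IBM System p 570": (1_616_162, 3.54),
}
for system, (tpmc, price_per_tpmc) in results.items():
    print(f"{system}: ~${tpmc * price_per_tpmc / 1e6:.1f}M total system price")
```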

Oracle and IBM TPC-C Response times

System  tpmC  New Order 90th% Response Time (sec)  New Order Average Response Time (sec)

Sun Server X2-8 5,055,888 0.210 0.166
IBM x3850 X5 3,014,684 0.500 0.272
Ratios - Oracle Better 1.6x 1.4x 1.3x

Oracle uses average new order response time for comparison between Oracle and IBM.

Graphs of Oracle's and IBM's response times for New-Order can be found in the full disclosure reports on TPC's website TPC-C Official Result Page.

Configuration Summary and Results

Hardware Configuration:

Server
Sun Server X2-8
8 x 2.4 GHz Intel Xeon Processor E7-8870
4 TB memory
8 x 300 GB 10K RPM SAS internal disks
8 x Dual port 8 Gbs FC HBA

Data Storage
10 x Sun Fire X4270 M2 servers configured as COMSTAR heads, each with
1 x 3.06 GHz Intel Xeon X5675 processor
8 GB memory
10 x 2 TB 7.2K RPM 3.5" SAS disks
2 x Sun Storage F5100 Flash Array storage (1.92 TB each)
1 x Brocade 5300 switch

Redo Storage
2 x Sun Fire X4270 M2 servers configured as COMSTAR heads, each with
1 x 3.06 GHz Intel Xeon X5675 processor
8 GB memory
11 x 2 TB 7.2K RPM 3.5" SAS disks

Clients
8 x Sun Fire X4170 M2 servers, each with
2 x 3.06 GHz Intel Xeon X5675 processors
48 GB memory
2 x 300 GB 10K RPM SAS disks

Software Configuration:

Oracle Linux (Sun Fire X4800 M2)
Oracle Solaris 11 Express (COMSTAR for Sun Fire X4270 M2)
Oracle Solaris 10 9/10 (Sun Fire X4170 M2)
Oracle Database 11g Release 2 Enterprise Edition with Partitioning
Oracle iPlanet Web Server 7.0 U5
Tuxedo CFS-R Tier 1

Results:

System: Sun Server X2-8
tpmC: 5,055,888
Price/tpmC: 0.89 USD
Available: 7/10/2012
Database: Oracle Database 11g
Cluster: no
New Order Average Response: 0.166 seconds

Benchmark Description

TPC-C is an OLTP system benchmark. It simulates a complete environment where a population of terminal operators executes transactions against a database. The benchmark is centered around the principal activities (transactions) of an order-entry environment. These transactions include entering and delivering orders, recording payments, checking the status of orders, and monitoring the level of stock at the warehouses.

Key Points and Best Practices

  • Oracle Database 11g Release 2 Enterprise Edition with Partitioning scales easily to this high level of performance.

  • COMSTAR (Common Multiprotocol SCSI Target) is the software framework that enables an Oracle Solaris host to serve as a SCSI target platform. COMSTAR uses a modular approach, breaking the large task of handling all the different pieces of a SCSI target subsystem into independent functional modules glued together by the SCSI Target Mode Framework (STMF). The modules implementing SCSI-level functionality (disk, tape, medium changer, etc.) do not need to know about the underlying transport, and the modules implementing the transport protocol (FC, iSCSI, etc.) are unaware of the SCSI-level functionality of the packets they transport. The framework hides the details of allocation, provides execution context and cleanup for SCSI commands and associated resources, and simplifies the task of writing SCSI or transport modules.

  • Oracle iPlanet Web Server middleware is used for the client tier of the benchmark. Each web server instance supports more than a quarter-million users while satisfying the response time requirement from the TPC-C benchmark.

See Also

Disclosure Statement

TPC Benchmark C, tpmC, and TPC-C are trademarks of the Transaction Processing Performance Council (TPC). Sun Server X2-8 (8/80/160) with Oracle Database 11g Release 2 Enterprise Edition with Partitioning, 5,055,888 tpmC, $0.89 USD/tpmC, available 7/10/2012. IBM x3850 X5 (4/40/80) with DB2 ESE 9.7, 3,014,684 tpmC, $0.59 USD/tpmC, available 7/11/2011. IBM x3850 X5 (4/32/64) with DB2 ESE 9.7, 2,308,099 tpmC, $0.60 USD/tpmC, available 5/20/2011. IBM System p 570 (8/16/32) with DB2 9.0, 1,616,162 tpmC, $3.54 USD/tpmC, available 11/21/2007. Source: http://www.tpc.org/tpcc, results as of 7/15/2011.

Sun Server X2-8 (formerly Sun Fire X4800 M2) Posts World Record x86 SPECjEnterprise2010 Result

Oracle's Sun Server X2-8 (formerly Sun Fire X4800 M2) using the Intel Xeon E7-8870 processor and Sun Server X2-4 using the Intel Xeon E7-4870 processor, produced a world record single application server SPECjEnterprise2010 benchmark result of 27,150.05 SPECjEnterprise2010 EjOPS. The Sun Server X2-8 ran the application tier and the Sun Server X2-4 was used for the database tier.

  • The Sun Server X2-8 demonstrated 63% better performance than the IBM Power 780 server result of 16,646.34 SPECjEnterprise2010 EjOPS.

  • The Sun Server X2-8 demonstrated 4% better performance than the Cisco UCS B440 M2 result; both results used the same total number of processors.

  • This result used Oracle WebLogic Server 12c, Java HotSpot(TM) 64-Bit Server 1.7.0_02, and Oracle Database 11g.

  • This result was produced using Oracle Linux.
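The percentage claims above follow directly from the published EjOPS figures and can be checked in a few lines (a sanity check, not part of the official result):

```python
# Verify the headline speedups from the published SPECjEnterprise2010 EjOPS.
oracle_x2_8 = 27150.05
ibm_power_780 = 16646.34
cisco_b440_m2 = 26118.67

def pct_better(a: float, b: float) -> int:
    """Percentage by which a exceeds b, rounded to the nearest point."""
    return round((a / b - 1) * 100)

print(pct_better(oracle_x2_8, ibm_power_780))   # 63
print(pct_better(oracle_x2_8, cisco_b440_m2))   # 4
```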

Performance Landscape

Complete benchmark results are at the SPEC website, SPECjEnterprise2010 Results. The table below compares against the best results from IBM and Cisco.

SPECjEnterprise2010 Performance Chart
as of 7/11/2012

Submitter   EjOPS*      Application Server                   Database Server
Oracle      27,150.05   1x Sun Server X2-8                   1x Sun Server X2-4
                        8x 2.4 GHz Intel Xeon E7-8870        4x 2.4 GHz Intel Xeon E7-4870
                        Oracle WebLogic 12c                  Oracle Database 11g (11.2.0.2)
Cisco       26,118.67   2x UCS B440 M2 Blade Server          1x UCS C460 M2 Blade Server
                        4x 2.4 GHz Intel Xeon E7-4870        4x 2.4 GHz Intel Xeon E7-4870
                        Oracle WebLogic 11g (10.3.5)         Oracle Database 11g (11.2.0.2)
IBM         16,646.34   1x IBM Power 780                     1x IBM Power 750 Express
                        8x 3.86 GHz POWER7                   4x 3.55 GHz POWER7
                        WebSphere Application Server V7      IBM DB2 9.7 Workgroup Server Edition FP3a

* SPECjEnterprise2010 EjOPS, bigger is better.

Configuration Summary

Application Server:

1 x Sun Server X2-8

8 x 2.4 GHz Intel Xeon processor E7-8870
256 GB memory
4 x 10 GbE NIC
2 x FC HBA
Oracle Linux 5 Update 6
Oracle WebLogic Server Standard Edition Release 12.1.1
Java HotSpot(TM) 64-Bit Server VM on Linux, version 1.7.0_02 (Java SE 7 Update 2)

Database Server:

1 x Sun Server X2-4
4 x 2.4 GHz Intel Xeon E7-4870
512 GB memory
4 x 10 GbE NIC
2 x FC HBA
2 x Sun StorageTek 2540 M2
4 x Sun Fire X4270 M2
4 x Sun Storage F5100 Flash Array
Oracle Linux 5 Update 6
Oracle Database 11g Enterprise Edition Release 11.2.0.2

Benchmark Description

SPECjEnterprise2010 is the third generation of the SPEC organization's J2EE end-to-end industry standard benchmark application. The SPECjEnterprise2010 benchmark has been designed and developed to cover the Java EE 5 specification's significantly expanded and simplified programming model, highlighting the major features used by developers in the industry today. This provides a real-world workload driving the application server's implementation of the Java EE specification to its maximum potential and allowing maximum stressing of the underlying hardware and software systems.

The workload consists of an end-to-end web-based order processing domain, an RMI- and Web Services-driven manufacturing domain, and a supply chain model utilizing document-based Web Services. The application is a collection of Java classes, Java Servlets, JavaServer Pages, Enterprise JavaBeans, Java Persistence entities (POJOs), and Message-Driven Beans.

The SPECjEnterprise2010 benchmark heavily exercises all parts of the underlying infrastructure that make up the application environment, including hardware, JVM software, database software, JDBC drivers, and the system network.

The primary metric of the SPECjEnterprise2010 benchmark is jEnterprise Operations Per Second ("SPECjEnterprise2010 EjOPS"). This metric is calculated by adding the metrics of the Dealership Management Application in the Dealer Domain and the Manufacturing Application in the Manufacturing Domain. There is no price/performance metric in this benchmark.
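A hedged sketch of how the composite metric combines the two domains; the split used here is illustrative, not taken from any published report:

```python
# SPECjEnterprise2010 EjOPS = Dealer domain metric + Manufacturing domain metric.
def ejops(dealer_ops_per_sec: float, manufacturing_ops_per_sec: float) -> float:
    """Composite metric: sum of the two domain throughputs."""
    return dealer_ops_per_sec + manufacturing_ops_per_sec

# Illustrative split summing to this report's 27,150.05 EjOPS.
print(ejops(18000.0, 9150.05))
```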

Key Points and Best Practices

  • Sixteen Oracle WebLogic server instances were started using numactl, binding 2 instances per chip.
  • Eight Oracle database listener processes were started, binding 2 instances per chip using taskset.
  • Additional tuning information is in the report at http://spec.org.
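The same binding idea can be sketched from userland; on Linux, Python exposes the affinity call that taskset uses under the hood. This is a minimal illustration, not the benchmark's actual startup scripts:

```python
# Pin the current process to a subset of logical CPUs, analogous to what
# taskset/numactl do for the WebLogic instances and database listeners above.
import os

def bind_to_cpus(cpus):
    """Restrict this process (pid 0 = self) to the given CPUs; Linux only."""
    os.sched_setaffinity(0, set(cpus))
    return os.sched_getaffinity(0)

if __name__ == "__main__":
    # e.g. bind to CPU 0, standing in for "2 instances per chip"
    print(bind_to_cpus([0]))
```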

See Also

Disclosure Statement

SPEC and the benchmark name SPECjEnterprise are registered trademarks of the Standard Performance Evaluation Corporation. Sun Server X2-8, 27,150.05 SPECjEnterprise2010 EjOPS; IBM Power 780, 16,646.34 SPECjEnterprise2010 EjOPS; Cisco UCS B440 M2, 26,118.67 SPECjEnterprise2010 EjOPS. Results from www.spec.org as of 7/11/2012.

Monday Feb 27, 2012

Sun ZFS Storage 7320 Appliance 33% Faster Than NetApp FAS3270 on SPECsfs2008

Oracle's Sun ZFS Storage 7320 appliance delivered outstanding performance on the SPECsfs2008 NFS benchmark, beating results published on NetApp's fastest midrange platform, the NetApp FAS3270, and the EMC Gateway NS-G8 Server Failover Cluster.

  • The Sun ZFS Storage 7320 appliance delivered 134,140 SPECsfs2008_nfs.v3 Ops/sec with an Overall Response Time (ORT) of 1.51 msec on the SPECsfs2008 NFS benchmark.

  • The Sun ZFS Storage 7320 appliance has 33% higher throughput than the NetApp FAS3270 on the SPECsfs2008 NFS benchmark.

  • The Sun ZFS Storage 7320 appliance required less than half the rack space of the NetApp FAS3270.

  • The Sun ZFS Storage 7320 appliance has 9% better Overall Response Time than the NetApp FAS3270 on the SPECsfs2008 NFS benchmark.
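The throughput and latency claims above can be verified from the published SPECsfs2008 numbers (a quick arithmetic check, not part of the official submission):

```python
# Check the throughput and latency claims from the published SPECsfs2008 data.
sun_ops, netapp_ops = 134140, 101183    # SPECsfs2008_nfs.v3 Ops/sec
sun_ort, netapp_ort = 1.51, 1.66        # Overall Response Time, msec

throughput_gain = round((sun_ops / netapp_ops - 1) * 100)   # 33 (% higher)
ort_gain = round((1 - sun_ort / netapp_ort) * 100)          # 9 (% better)
print(throughput_gain, ort_gain)
```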

Performance Landscape

SPECsfs2008_nfs.v3 Performance Chart (in decreasing SPECsfs2008_nfs.v3 Ops/sec order)

Sponsor   System    Throughput   Overall Response   Memory   Disks   Exported        Rack Units
                    (Ops/sec)    Time (msec)        (GB)             Capacity (TB)   Controllers+Disks
EMC       VG8       135,521      1.92               280      312     19.2
Oracle    7320      134,140      1.51               288      136     37.0            26
EMC       NS-G8     110,621      2.32               264      100     17.6
NetApp    FAS3270   101,183      1.66               40       360     110.1           66

Throughput SPECsfs2008_nfs.v3 Ops/sec = the Performance Metric
Overall Response Time = the corresponding Response Time Metric

Complete SPECsfs2008 benchmark results may be found at http://www.spec.org/sfs2008/results/sfs2008.html.

Configuration Summary

Storage Configuration:

Sun ZFS Storage 7320 appliance in clustered configuration
2 x Sun ZFS Storage 7320 controllers, each with
2 x 2.4 GHz Intel Xeon E5620 processors
144 GB memory
4 x 512 GB SSD flash-enabled read-cache
6 x Sun Disk shelves
4 x shelves with 24 x 300 GB 15K RPM SAS-2 drives
2 x shelves with 20 x 300 GB 15K RPM SAS-2 drives and 4 x 73 GB SAS-2 flash-enabled write-cache

Server Configuration:

3 x Sun Fire X4270 M2 servers, each with
2 x 2.4 GHz Intel Xeon E5620 processors
12 GB memory
1 x 10 GbE connection to the Sun ZFS Storage 7320 appliance
Oracle Solaris 10 8/11

Benchmark Description

SPECsfs2008 is the latest version of the Standard Performance Evaluation Corporation (SPEC) benchmark suite measuring file server throughput and response time, providing a standardized method for comparing performance across different vendor platforms. SPECsfs2008 results summarize the server's capabilities with respect to the number of operations that can be handled per second, as well as the overall latency of the operations. The suite is a follow-on to the SFS97_R1 benchmark, adding a CIFS workload, an updated NFSv3 workload, support for additional client platforms, and a new test harness and reporting/submission framework.

See Also

Disclosure Statement

SPEC and SPECsfs are registered trademarks of Standard Performance Evaluation Corporation (SPEC). Results as of February 22, 2012, for more information see www.spec.org. Sun ZFS Storage 7320 Appliance 134,140 SPECsfs2008_nfs.v3 Ops/sec, 1.51 msec ORT, NetApp FAS3270 101,183 SPECsfs2008_nfs.v3 Ops/Sec, 1.66 msec ORT, EMC Celerra Gateway NS-G8 Server Failover Cluster, 3 Datamovers (1 stdby) / Symmetrix V-Max 110,621 SPECsfs2008_nfs.v3 Ops/Sec, 2.32 msec ORT.

About

BestPerf is the source of Oracle performance expertise. In this blog, Oracle's Strategic Applications Engineering group explores Oracle's performance results and shares best practices learned from working on Enterprise-wide Applications.
