Wednesday Nov 04, 2009

New TPC-C World Record Sun/Oracle

TPC-C Sun SPARC Enterprise T5440 with Oracle RAC World Record Database Result

Sun and Oracle demonstrate the World's fastest database performance. Sun Microsystems, using 12 Sun SPARC Enterprise T5440 servers, 60 Sun Storage F5100 Flash Arrays, and Oracle 11g Enterprise Edition with Real Application Clusters and Partitioning, delivered a world-record TPC-C benchmark result.

  • The 12-node Sun SPARC Enterprise T5440 server cluster delivered a world record TPC-C benchmark result of 7,646,486.7 tpmC at $2.36/tpmC (USD) using Oracle 11g R1, on a configuration available 3/19/10.

  • The 12-node Sun SPARC Enterprise T5440 server cluster beats the performance of the IBM Power 595 (5GHz) with IBM DB2 9.5 database by 26% and has 16% better price/performance on the TPC-C benchmark.

  • The complete Oracle/Sun solution delivered 10.7x better computational density than the IBM configuration (computational density = performance per rack).

  • The complete Oracle/Sun solution used 8 times fewer racks than the IBM configuration.

  • The complete Oracle/Sun solution has 5.9x better power/performance than the IBM configuration.

  • The 12-node Sun SPARC Enterprise T5440 server cluster beats the performance of the HP Superdome (1.6GHz Itanium2) by 87% and has 19% better price/performance on the TPC-C benchmark.

  • The Oracle/Sun solution utilized Sun FlashFire technology to deliver this result. The Sun Storage F5100 flash array was used for database storage.

  • Oracle 11g Enterprise Edition with Real Application Clusters and Partitioning scales and effectively uses all of the nodes in this configuration to produce the world record performance.

  • This result showed Sun and Oracle's integrated hardware and software stacks provide industry-leading performance.

More information on this benchmark will be posted in the next several days.

Performance Landscape

TPC-C results (sorted by tpmC, bigger is better)


System                           tpmC       Price/tpmC  Avail     Database        Cluster  Racks  w/KtpmC
12 x Sun SPARC Enterprise T5440  7,646,487  2.36 USD    03/19/10  Oracle 11g RAC  Y        9      9.6
IBM Power 595                    6,085,166  2.81 USD    12/10/08  IBM DB2 9.5     N        76     56.4
HP Integrity Superdome           4,092,799  2.93 USD    08/06/07  Oracle 10g R2   N        46     (to be added)

Avail - Availability date
w/KtpmC - Watts per 1000 tpmC
Racks - clients, servers, storage, infrastructure
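
The headline ratios in the bullets above can be derived directly from this table. A back-of-the-envelope check (Python used purely for the arithmetic; this is not part of the audited result):

    sun = dict(tpmc=7_646_487, price=2.36, racks=9,  w_per_ktpmc=9.6)
    ibm = dict(tpmc=6_085_166, price=2.81, racks=76, w_per_ktpmc=56.4)
    hp  = dict(tpmc=4_092_799, price=2.93)

    print(f"tpmC vs IBM:       {sun['tpmc'] / ibm['tpmc'] - 1:.0%}")    # ~26% faster
    print(f"price/perf vs IBM: {1 - sun['price'] / ibm['price']:.0%}")  # ~16% better
    print(f"density vs IBM:    {(sun['tpmc'] / sun['racks']) / (ibm['tpmc'] / ibm['racks']):.1f}x")  # ~10.6x (quoted above as 10.7x)
    print(f"racks vs IBM:      {ibm['racks'] / sun['racks']:.1f}x fewer")       # ~8.4x
    print(f"power/perf vs IBM: {ibm['w_per_ktpmc'] / sun['w_per_ktpmc']:.1f}x better")  # ~5.9x
    print(f"tpmC vs HP:        {sun['tpmc'] / hp['tpmc'] - 1:.0%}")     # ~87% faster
    print(f"price/perf vs HP:  {1 - sun['price'] / hp['price']:.0%}")   # ~19% better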

Sun and IBM TPC-C Response times


System                            tpmC       New Order 90th%      New Order Average
                                             Response Time (sec)  Response Time (sec)
12 x Sun SPARC Enterprise T5440   7,646,487  0.170                0.168
IBM Power 595                     6,085,166  1.69                 1.22
Response Time Ratio - Sun Better             9.9x                 7.3x

Sun uses the 7.3x average response time comparison to highlight the differences between Sun's solution and IBM's, although note that Sun is nearly 10x faster on New Order transactions at the 90th percentile.

It is also interesting to note that none of Sun's response times, average or 90th percentile, for any transaction exceeds 0.25 seconds, while IBM does not have even one interactive transaction, not even the menu, below 0.50 seconds. Graphs of Sun's and IBM's response times for New Order can be found in the full disclosure reports on the TPC website's TPC-C Official Result Page.
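
The ratio row in the table above is simply IBM's response time divided by Sun's:

    sun_90th, sun_avg = 0.170, 0.168    # seconds, from the table above
    ibm_90th, ibm_avg = 1.69, 1.22
    print(f"{ibm_90th / sun_90th:.1f}x")  # 9.9x - New Order 90th percentile
    print(f"{ibm_avg / sun_avg:.1f}x")    # 7.3x - New Order average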

Results and Configuration Summary

Hardware Configuration:

    9 racks used to hold

    Servers:
      12 x Sun SPARC Enterprise T5440
      4 x 1.6 GHz UltraSPARC T2 Plus
      512 GB memory
      10 GbE network for cluster
    Storage:
      60 x Sun Storage F5100 Flash Array
      61 x Sun Fire X4275, Comstar SAS target emulation
      24 x Sun StorageTek 6140 (16 x 300 GB SAS 15K RPM)
      6 x Sun Storage J4400
      3 x 80-port Brocade FC switches
    Clients:
      24 x Sun Fire X4170, each with
      2 x 2.53 GHz X5540
      48 GB memory

Software Configuration:

    Solaris 10 10/09
    OpenSolaris 6/09 (COMSTAR) for Sun Fire X4275
    Oracle 11g Enterprise Edition with Real Application Clusters and Partitioning
    Tuxedo CFS-R Tier 1
    Sun Web Server 7.0 Update 5

Benchmark Description

TPC-C is an OLTP system benchmark. It simulates a complete environment where a population of terminal operators executes transactions against a database. The benchmark is centered around the principal activities (transactions) of an order-entry environment. These transactions include entering and delivering orders, recording payments, checking the status of orders, and monitoring the level of stock at the warehouses.
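
For readers unfamiliar with the workload, the sketch below shows the general shape of the New Order transaction in Python, DB-API style with qmark placeholders. Table and column names follow the public TPC-C schema; this is purely illustrative and is not the audited implementation behind this result.

    # Illustrative sketch of a TPC-C New Order transaction (not the
    # audited implementation; error/rollback handling omitted).
    def new_order(conn, w_id, d_id, c_id, items):
        cur = conn.cursor()
        # Allocate the next order id for this district.
        cur.execute("SELECT d_next_o_id FROM district "
                    "WHERE d_w_id = ? AND d_id = ? FOR UPDATE", (w_id, d_id))
        (o_id,) = cur.fetchone()
        cur.execute("UPDATE district SET d_next_o_id = d_next_o_id + 1 "
                    "WHERE d_w_id = ? AND d_id = ?", (w_id, d_id))
        cur.execute("INSERT INTO orders (o_id, o_d_id, o_w_id, o_c_id) "
                    "VALUES (?, ?, ?, ?)", (o_id, d_id, w_id, c_id))
        # One order line per item; each decrements warehouse stock.
        for ol_number, (i_id, qty) in enumerate(items, start=1):
            cur.execute("UPDATE stock SET s_quantity = s_quantity - ? "
                        "WHERE s_i_id = ? AND s_w_id = ?", (qty, i_id, w_id))
            cur.execute("INSERT INTO order_line (ol_o_id, ol_d_id, ol_w_id, "
                        "ol_number, ol_i_id, ol_quantity) "
                        "VALUES (?, ?, ?, ?, ?, ?)",
                        (o_id, d_id, w_id, ol_number, i_id, qty))
        conn.commit()  # response time is measured for the whole transaction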

See Also

Disclosure Statement

TPC Benchmark C, tpmC, and TPC-C are trademarks of the Transaction Processing Performance Council (TPC). 12-node Sun SPARC Enterprise T5440 Cluster (1.6GHz UltraSPARC T2 Plus, 4 processor) with Oracle 11g Enterprise Edition with Real Application Clusters and Partitioning, 7,646,486.7 tpmC, $2.36/tpmC. Available 3/19/10. IBM Power 595 (5GHz Power6, 32 chips, 64 cores, 128 threads) with IBM DB2 9.5, 6,085,166 tpmC, $2.81/tpmC, available 12/10/08. HP Integrity Superdome (1.6GHz Itanium2, 64 processors, 128 cores, 256 threads) with Oracle 10g Enterprise Edition, 4,092,799 tpmC, $2.93/tpmC. Available 8/06/07. Source: www.tpc.org, results as of 11/5/09.

Wednesday Oct 14, 2009

Oracle Open World (OOW) BestPerf Index 14 October 2009

Here is a BestPerf blog index to a variety of benchmarks announced at Oracle Open World and others talked about at the conference.


Oracle OpenWorld

CMT Servers

Oct 11, 2009 * TPC-C World Record Sun - Oracle *
Oct 13, 2009 Sun T5440 Oracle BI EE Sun T5440 World Record
Oct 13, 2009 SPECweb2005 Sun T5440 World Record, Solaris Containers and Sun Storage F5100
Sep 01, 2009 String Searching - Sun T5240 & T5440 Outperform IBM Cell Broadband Engine
Aug 27, 2009 Sun T5240 Beats 4-Chip IBM Power 570 POWER6 System on SPECjbb2005
Aug 26, 2009 Sun T5220 Sets Single Chip World Record on SPECjbb2005
Aug 12, 2009 SPECmail2009 on Sun T5240 and Sun Java System Messaging Server 6.3
Jul 23, 2009 World Record Performance of Sun CMT Servers
Jul 22, 2009 Why does 1.6 beat 4.7?
Jul 21, 2009 Zeus ZXTM Traffic Manager World Record on Sun T5240
Jul 21, 2009 Sun T5440 World Record SAP-SD 4-Processor Two-tier SAP ERP 6.0 EP4 (Unicode)

SPARC64 Servers

Oct 13, 2009 SAP 2-tier SD Benchmark on Sun M9000/32 SPARC64 VII
Oct 13, 2009 Oracle PeopleSoft Payroll Sun M4000 and Sun Storage F5100 World Record Performance
Oct 12, 2009 Best Practices: M4000 Sun Storage F5100 is a good option for Peoplesoft Payroll
Oct 13, 2009 Oracle Hyperion Sun M5000 and Sun Storage 7410
Oct 13, 2009 SPEC CPU2006 Results on M-Series Servers, New SPARC64 VII

X86 Servers

Oct 13, 2009 SAP 2-tier SD-Parallel on Sun Blade X6270 1-node, 2-node and 4-node
Aug 28, 2009 Sun X4270 World Record SAP-SD 2-Processor Two-tier SAP ERP 6.0 EP 4 (Unicode)
Oct 02, 2009 Sun X4270 VMware VMmark benchmark achieves excellent result
Sep 22, 2009 Sun X4270 Virtualized for Two-tier SAP ERP 6.0 EP4 (Unicode) Standard Sales and Distribution Benchmark

HPC Benchmarks

Oct 13, 2009 Halliburton ProMAX Oil & Gas Appl on Sun 6048/X6275 Cluster and Oracle Database
Oct 13, 2009 MCAE ABAQUS faster on Sun F5100 and Sun X4270 - Single Node World Record
Oct 12, 2009 MCAE ANSYS faster on Sun F5100 and Sun X4270
Oct 12, 2009 MCAE MSC/NASTRAN faster on Sun F5100 and Fire X4270
Oct 13, 2009 CP2K Life Sciences, Ab-initio Chem - Sun C48 with Sun Blade X6275 - QDR InfiniBand
Oct 09, 2009 X6275 Cluster Demonstrates Performance and Scalability on WRF 2.5km CONUS Dataset

Specific Storage Benchmarks

Oct 12, 2009 SPC-2 Sun Storage 6180 RAID 5 & RAID 6 Over 70% Better $/Performance than IBM
Oct 12, 2009 SPC-1 Sun Storage 6180 Over 70% Better $/Performance than IBM
Oct 12, 2009 1.6 Million 4K IOPS in 1RU on Sun Storage F5100 Flash Array

Additional CMT Server Benchmarks

Jul 21, 2009 1.6 GHz SPEC CPU2006 - Rate Benchmarks
Jul 21, 2009 Sun Blade T6320 World Record SPECjbb2005 performance
Jul 21, 2009 Sun T5440 SPECjbb2005 Beats IBM POWER6 Chip-to-Chip

Tuesday Oct 13, 2009

Oracle Hyperion Sun M5000 and Sun Storage 7410

The Sun SPARC Enterprise M5000 server with SPARC64 VII processors (configured with 4 CPUs) and the Sun Storage 7410 Unified Storage System achieved exceptional performance for Oracle Hyperion Essbase 11.1.1.3 and Oracle 11g database, with hundreds of GB of data, a 15-dimension database, and millions of members, running on free and open Solaris 10. Oracle Hyperion is a component of Oracle Fusion Middleware.

  • The Sun Storage 7410 Unified Storage System provides more than 20% improvement out of the box compared to a mid-size fibre channel disk array for default aggregation and user-based aggregation.

  • The Sun SPARC Enterprise M5000 server with Sun Storage 7410 Unified Storage System and Oracle Hyperion Essbase 11.1.1.3 running on Solaris 10 OS provides < 1sec query response times for 20K users in a 15 dimension database.

  • The Sun Storage 7410 Unified Storage System and Oracle Hyperion Essbase provide the best combination for large Essbase databases, leveraging ZFS and taking advantage of high bandwidth for faster loads and aggregations.

  • Oracle Fusion Middleware provides a family of complete, integrated, hot pluggable and best-of-breed products known for enabling enterprise customers to create and run agile and intelligent business applications. Oracle Hyperion's performance demonstrates why so many customers rely on Oracle Fusion Middleware as their foundation for innovation.

Performance Landscape

System        Processor               OS          Storage              Data Load  Default Agg  User Agg
Sun SE M5000  4 x 2.4GHz SPARC64 VII  Solaris 10  Sun Storage 7410     120 min    448 min      17.5 min
Sun SE M5000  4 x 2.4GHz SPARC64 VII  Solaris 10  Sun StorageTek 6140  128 min    526 min      24.7 min

Results and Configuration Summary

Hardware Configuration:

    1 x Sun SPARC Enterprise M5000 (2.4 GHz/32GB)
    1 x Sun StorageTek 6140 (32 x 146GB)
    1 x Sun Storage 7410 (24TB disk)

Software Configuration:

    Solaris 10 5/09
    Installer V 11.1.1.3
    Oracle Hyperion Essbase Client v 11.1.1.3
    Oracle Hyperion Essbase v 11.1.1.3
    Oracle Hyperion Essbase Administration Services 64-bit
    Oracle Weblogic 9.2MP3 -- 64 bit
    Oracle Fusion Middleware
    Sun's JDK 1.5 Update 19 -- 64-bit
    Oracle RDBMS 11.1.0.7 64-bit
    HP's Mercury Interactive QuickTest Professional 9.0

Benchmark Description

Oracle Hyperion is an OLAP-based analytics application used to analyze highly dimensional, detailed business data and plans, for example "what-if" analysis to look into the future, multi-user scenario modeling, planning, and customer buying patterns.

The objective of the benchmark is to collect data for the following Oracle Hyperion Essbase benchmark key performance indicators (KPI):
  • Database build time: Time elapsed to build a database including outline and data load.

  • Database Aggregation build time: Time elapsed to build aggregation.

  • Analytic Query Time: With the user load increasing through 500, 1,000, 2,000, 10,000, and 20,000 users, track the time required to process each query and thus the average analytic query time.

  • Analytic Queries per Minute: Number of queries handled by the Essbase server per minute. Also track resource usage, i.e., CPU and memory.

The benchmark is based on the data set used by Product Assurance for 2005 Essbase 7.x testing.

    40 flat files of 1.2 GB each, 49.4 GB in total
    10 million rows per file, 400 million rows total
    28 columns of data per row
    Database outline has 15 dimensions (five of them are attribute dimensions)
    Customer dimension has 13.3 million members

Key Points and Best Practices

  • The Sun Storage 7410 was configured with iSCSI.

See Also

Disclosure Statement

Oracle Hyperion Enterprise, www.oracle.com/solutions/mid/oracle-hyperion-enterprise.html, results 10/13/2009.

Sun T5440 Oracle BI EE Sun SPARC Enterprise T5440 World Record

The Oracle BI EE workload, a component of Oracle Fusion Middleware, was run on two Sun SPARC Enterprise T5440 servers and achieved world record performance.
  • Two Sun SPARC Enterprise T5440 servers with four 1.6 GHz UltraSPARC T2 Plus processors delivered the best performance of 50K concurrent users on the Oracle BI EE 10.1.3.4 benchmark with Oracle 11g database running on free and open Solaris 10.

  • The two-node Sun SPARC Enterprise T5440 configuration with Oracle BI EE running on Solaris 10 using 8 Solaris Containers shows 1.8x scaling over Sun's previous one-node SPARC Enterprise T5440 result with 4 Solaris Containers.

  • The two-node SPARC Enterprise T5440 configuration demonstrated the performance and scalability of the UltraSPARC T2 Plus processor, servicing 50K users with a 0.2776 sec response time.

  • The Sun SPARC Enterprise T5220 server was used as an NFS server with 4 internal SSDs and the ZFS file system which showed significant I/O performance improvement over traditional disk for Business Intelligence Web Catalog activity.

  • Oracle Fusion Middleware provides a family of complete, integrated, hot pluggable and best-of-breed products known for enabling enterprise customers to create and run agile and intelligent business applications. Oracle BI EE performance demonstrates why so many customers rely on Oracle Fusion Middleware as their foundation for innovation.

  • IBM has not published any POWER6 processor based results on this important benchmark.

Performance Landscape

System                          Chips  GHz  Processor Type      Users
2 x Sun SPARC Enterprise T5440  8      1.6  UltraSPARC T2 Plus  50,000
1 x Sun SPARC Enterprise T5440  4      1.6  UltraSPARC T2 Plus  28,000
5 x Sun Fire T2000              1      1.2  UltraSPARC T1       10,000

Results and Configuration Summary

Hardware Configuration:

    2 x Sun SPARC Enterprise T5440 (1.6GHz/128GB)
    1 x Sun SPARC Enterprise T5220 (1.2GHz/64GB) and 4 SSDs (used as NFS server)

Software Configuration:

    Solaris 10 5/09
    Oracle BI EE 10.1.3.4
    Oracle Fusion Middleware
    Oracle 11gR1

Benchmark Description

The objective of this benchmark is to highlight how Oracle BI EE can support pervasive deployments in large enterprises, using minimal hardware, by simulating an organization that needs to support more than 25,000 active concurrent users, each operating in mixed mode: ad-hoc reporting, application development, and report viewing.

The user population was divided into a mix of administrative users and business users. A maximum of 28,000 concurrent users were actively interacting and working in the system during the steady-state period. The tests executed 580 transactions per second, with think times of 60 seconds per user, between requests. In the test scenario 95% of the workload consisted of business users viewing reports and navigating within dashboards. The remaining 5% of the concurrent users, categorized as administrative users, were doing application development.

The benchmark scenario used a typical business user sequence of dashboard navigation, report viewing, and drill down. For example, a Service Manager logs into the system and navigates to his own set of dashboards, viz. "Service Manager". The user then selects the "Service Effectiveness" dashboard, which shows him four distinct reports: "Service Request Trend", "First Time Fix Rate", "Activity Problem Areas", and "Cost Per Completed Service Call, 2002 till 2005". The user then proceeds to view the "Customer Satisfaction" dashboard, which also contains a set of 4 related reports. He then proceeds to drill down on some of the reports to see the detail data. Then the user proceeds to more dashboards, for example "Customer Satisfaction" and "Service Request Overview". After navigating through these dashboards, he logs out of the application.

This benchmark did not use a synthetic database schema. The benchmark tests were run on a full production version of the Oracle Business Intelligence Applications with a fully populated underlying database schema. The business processes in the test scenario closely represent a true customer scenario.

See Also

Disclosure Statement

Oracle BI EE benchmark results 10/13/2009, see

CP2K Life Sciences, Ab-initio Dynamics - Sun Blade 6048 Chassis with Sun Blade X6275 - Scalability and Throughput with Quad Data Rate InfiniBand

Significance of Results

Clusters of Sun Blade X6275 and X6270 server modules were used to run benchmarks using the CP2K ab-initio dynamics application software.

  • For the X6270 cluster with Dual Data Rate (DDR) InfiniBand, scaling slows dramatically beyond 16 nodes, whereas for the X6275 cluster with Quad Data Rate (QDR) InfiniBand scaling continues to 72 nodes.
  • For 64 nodes, the speed of the Sun Blade X6275 cluster with QDR InfiniBand was 2.7X that of a Sun Blade X6270 cluster with DDR InfiniBand.

Ab-initio dynamics simulation is important to materials science research.  Dynamics simulation is used to determine the trajectories of atoms or molecules over time.

Performance Landscape

The CP2K Bulk Water Benchmarks web page plots the performance of CP2K ab-initio dynamics benchmarks that have from 32 to 512 water molecules for a cluster that comprises two 2.66GHz Xeon E5430 quad core CPUs per node and that uses Dual Data Rate InfiniBand.

The following table reports the execution time for the 512 water molecule benchmark when executed on the Sun Blade X6275 cluster having Quad Data Rate InfiniBand and on the Sun Blade X6270 cluster having Dual Data Rate InfiniBand. Each node of either Sun Blade cluster comprises two 2.93GHz Intel Xeon X5570 quad core CPUs. In the following table, the performance is expressed in terms of the "wall clock" time in seconds required to execute ten steps of the ab-initio dynamics simulation for 512 water molecules. A smaller number implies better performance.

Number    X6275 QDR InfiniBand    X6270 DDR InfiniBand
of Nodes  (seconds for 10 steps)  (seconds for 10 steps)
96        -                       1184.36
72        564.16                  -
64        598.41                  1591.35
32        706.82                  1436.49
24        950.02                  1752.20
16        1227.73                 2119.50
12        1440.16                 1739.26
8         1876.95                 2120.73
4         3408.39                 3705.44
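
The 2.7X figure quoted in the summary comes straight from the 64-node row of this table:

    qdr_64, ddr_64 = 598.41, 1591.35    # seconds for 10 steps at 64 nodes
    print(f"{ddr_64 / qdr_64:.1f}x")    # ~2.7x QDR advantage over DDR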

Results and Configuration Summary

Hardware Configuration:

    Sun Blade[tm] 6048 Modular System with 3 shelves, each shelf with
      12 x Sun Blade X6275, each blade with
        2 x (2 x 2.93 GHz Intel QC Xeon X5570 processors)
        2 x (24 GB memory)
        Hyper-Threading (HT) off, Turbo Mode on
    QDR InfiniBand
    96 x Sun Blade X6270, each blade with
      2 x 2.93 GHz Intel QC Xeon X5570 processors
      24 GB memory
      Hyper-Threading (HT) off, Turbo Mode off
    DDR InfiniBand
Software Configuration:
    SUSE Linux Enterprise Server 10 SP2 kernel version 2.6.16.60-0.31_lustre.1.8.0.1-smp
    OpenMPI 1.3.2
    Sun Studio 12 f90 compiler, ScaLAPACK, BLACS and Performance Libraries
    FFTW (Fastest Fourier Transform in the West) 3.2.1

Benchmark Description

CP2K is a parallel ab-initio dynamics code that is designed to perform atomistic and molecular simulations of solid state, liquid, molecular, and biological systems. It provides a general framework for different methods, such as density functional theory (DFT) using a mixed Gaussian and plane waves approach (GPW), and classical pair and many-body potentials.

Ab-initio dynamics simulation is widely used in materials science research. CP2K is a public-domain ab-initio dynamics software application.

Key Points and Best Practices

  • QDR InfiniBand scales better than DDR InfiniBand.
  • The Intel QC X5570 processors include a turbo boost feature coupled with a speed-step option in the CPU section of the Advanced BIOS settings. Under specific circumstances, this can provide CPU overclocking, which increases the processor frequency from 2.93GHz to 3.2GHz. This feature was enabled for the X6275 and disabled for the X6270 when generating the results reported here.

See Also

Disclosure Statement

CP2K, see http://cp2k.berlios.de/ for more information, results as of 10/13/2009.

Halliburton ProMAX Oil & Gas Application Fast on Sun 6048/X6275 Cluster

Significance of Results

The ProMAX family of seismic data processing tools is the most widely used Oil and Gas Industry seismic processing application. ProMAX is used for multiple applications, from field processing and quality control, to interpretive project-oriented reprocessing at oil companies and production processing at service companies. ProMAX is integrated with Halliburton's OpenWorks Geoscience Oracle Database to access prestack seismic data and populate the database with seismic images. This shows the powerful combination of scientific computing merged with commercial database technology.

A cluster of 48 Sun Blade X6275 server modules in a Sun Blade 6048 Modular System was configured with QDR InfiniBand and a Lustre file system to demonstrate performance on ProMAX.

  • The 3D Prestack Kirchhoff Time Migration showed excellent scalability while utilizing the QDR InfiniBand Lustre file system.

    • 70808 Traces : 144x improvement going from 1 to 72 nodes
    • 283232 Traces : 98x improvement going from 1 to 96 nodes

  • The super linear scalability is attributed in part to data caching effects.

  • The performance of the current production release of the ProMAX 3D Kirchhoff Time Migration was improved by a factor of up to 1.7x through recompilation of the source code using the Intel 11.1 compilers.

High Performance ProMAX allows Halliburton's GeoProbe interpretation application to perform migrations "on the fly" while pulling additional mapping data, well logs, and reservoir data from the OpenWorks Oracle Database.

  • Improves velocity modeling throughput for performing iterative Kirchhoff Migrations

  • Sun Grid Engine can be used to optimize the throughput of multiple migrations and maximize the return on investment of a Sun Blade 6048 Modular System.

Enabling hyperthreading and running 16 threads per node can benefit current and potential ProMAX users running on a Sun Blade X6275 configuration. In the tests run with the code rebuild, hyperthreading outperformed non-hyperthreading by as much as 27%.

Performance Landscape

Note: Results are all run with 16 Threads per Node with HyperThreading Enabled.


ProMAX 3D Prestack Kirchhoff Time Migration - SMP Threads and PVM Mode
Execution times in seconds

                     70808 Traces                      283232 Traces
Nodes  Procs  Current Release  Code Rebuild  Current Release  Code Rebuild
96     192    -                -             18               13
72     144    3                2             22               16
48     96     4                3             38               23
24     48     11               7             76               48
16     32     23               14            117              72
12     24     37               23            165              100
8      16     62               35            258              150
4      8      129              78            514              343
1      2      486              288           2022             1278
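
The Scalability table below is derived from these times as speedup(N) = time(1 node) / time(N nodes). A spot check for the 70808-trace current release run, which shows the super-linear behavior:

    t_1node, t_72nodes = 486, 3             # seconds, 70808 traces, current release
    print(f"{t_1node / t_72nodes:.0f}x")    # 162x on 72 nodes (ideal linear would be 72x)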


ProMAX 3D Prestack Kirchhoff Time Migration - Scalability
Speedup relative to the 1-node run

                     70808 Traces                      283232 Traces
Nodes  Procs  Current Release  Code Rebuild  Current Release  Code Rebuild
96     192    -                -             112              98
72     144    162              144           92               80
48     96     121              96            53               55
24     48     44               41            26               26
16     32     21               20            17               18
12     24     13               12            12               13
8      16     8                8             8                8
4      8      4                4             4                4
1      2      1                1             1                1



ProMAX 3D Prestack Kirchhoff Time Migration - 283232 Traces
Hyperthreading Performance Comparison, times in seconds

                     Current Release                   Code Rebuild
Nodes  Procs  8 Threads/Node  16 Threads/Node  8 Threads/Node  16 Threads/Node
24     48     95              76               59              48
16     32     144             117              91              72
12     24     196             165              122             100
8      16     310             258              190             150
4      8      629             514              390             343
1      2      2518            2022             1554            1278
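
The "as much as 27%" hyperthreading claim above can be checked against this table; the advantage is the 8-thread time divided by the 16-thread time:

    # code rebuild, 283232 traces: nodes -> (8 thr/node, 16 thr/node) seconds
    times = {24: (59, 48), 16: (91, 72), 8: (190, 150)}
    for nodes, (t8, t16) in times.items():
        print(f"{nodes} nodes: {t8 / t16 - 1:.0%}")   # 23%, 26%, 27%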

Results and Configuration Summary

Hardware Configuration:
    Sun Blade 6048 Modular System with
      48 x Sun Blade X6275 (Vayu) Server Modules, each with
        4 x 2.93 GHz Intel Xeon QC X5570 processors
        12 x 4 GB memory at 1333 MHz
        2 x 24 GB Internal Flash
    QDR InfiniBand
    Lustre 1.8.0.1 File System

Software Configuration:

    OS: 64-bit SUSE Linux Enterprise Server SLES 10 SP 2
    PVM: Parallel Virtual Machine
    Resource Management: Sun Grid Engine
    Compiler: GNU C++ 4.1.2, Intel 11.1 Compilers
    Database: OpenWorks Database requires Oracle 10g Enterprise Edition
    Additional Libraries: pthreads 2.4, Java 1.6.0_01, BLAS, Stanford Exploration Project Libraries

Benchmark Description

This benchmark compares the current production release of ProMAX built with the GNU C++ and Fortran compilers to builds with the Intel Fortran and C++ compilers. Two different problem sizes were evaluated with the ProMAX 3D Prestack Kirchhoff Time Migration:
  • 70808 traces with 8 msec sample interval and trace length of 4992 msec
  • 283232 traces with 8 msec sample interval and trace length of 4992 msec

The ProMAX processing parameters used for this benchmark:
  • Input data set = Shots
    Minimum output inline = 65
    Maximum output inline = 85
    Inline output sampling interval = 1
    Minimum output xline = 1
    Maximum output xline = 200
    Xline output sampling interval = 1
    Antialias inline spacing = 15
    Antialias xline spacing = 15
    Stretch Mute Aperture Limit with Maximum Stretch = 15
    Image Gather Type = Full Offset Image Traces
    No Block Moveout
    Number of Alias Bands = 10
    3D Amplitude Phase Correction
    No compression
    Maximum Number of Cache Blocks = 500000

The compiler flags used for the various builds:
  • The Current Production Release Code was built with GNU Fortran and C++ flags.
    -O3 -m64 -march=x86-64 -mieee-fp -mfpmath=sse -msse2 -fforce-addr -fno-inline-functions

  • The application was rebuilt with the Intel Fortran and C++ flags.
    -xSSE4.2 -O3 -ipo -no-prec-div -static -m64 -ftz -fast-transcendentals -fp-speculation=fast

Key Points and Best Practices

Super linear scalability of the 70808 trace case for the larger node runs can be attributed to the fact that the dataset decomposition fits in cache which is shared by multiple threads per core.

See Also

Disclosure Statement

The following are trademarks or registered trademarks of Halliburton/Landmark Graphics: ProMAX, GeoProbe, OpenWorks.

Monday Oct 12, 2009

MCAE ABAQUS faster on Sun F5100 and Sun X4270 - Single Node World Record

The Sun Storage F5100 Flash Array can substantially improve performance over internal hard disk drives as shown by the I/O intensive ABAQUS MCAE application Standard benchmark tests on a Sun Fire X4270 server.

The I/O intensive ABAQUS "Standard" benchmark test cases were run on a single Sun Fire X4270 server. Data is presented for runs at both 8 and 16 thread counts.

The ABAQUS "Standard" module is an MCAE application based on the finite element method of analysis (FEA). This computer-based numerical method inherently involves a substantial I/O component. The purpose was to evaluate the performance of the Sun Storage F5100 Flash Array relative to high performance 15K RPM internal striped HDDs.

  • The Sun Storage F5100 Flash Array outperformed the high performance 15K RPM SAS drives on the "S4b" test case by 14%.

  • The Sun Fire X4270 server coupled with a Sun Storage F5100 Flash Array established the world record performance on a single node for the four test cases S2A, S4B, S4D and S6.

Performance Landscape

ABAQUS "Standard" Benchmark Test S4B: Advantage of Sun Storage F5100

Results are total elapsed run times in seconds

Threads  4 x 15K RPM 72 GB SAS HDD  Sun F5100 (r/w buff 4096,  Sun F5100 Performance
         striped HW RAID0           striped)                   Advantage
8        1504                       1318                       14%
16       1811                       1649                       10%
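
The advantage column above is the HDD run time divided by the F5100 run time:

    hdd   = {8: 1504, 16: 1811}   # seconds, 4 x 15K RPM SAS, HW RAID0
    f5100 = {8: 1318, 16: 1649}   # seconds, Sun Storage F5100
    for threads in (8, 16):
        print(f"{threads} threads: {hdd[threads] / f5100[threads] - 1:.0%}")  # 14%, 10%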

ABAQUS Standard Server Benchmark Subset: Single Node Record Performance

Results are total elapsed run times in seconds

Platform       Cores  S2a  S4b   S4d   S6
X4270 w/F5100  8      302  1192  779   1237
HP BL460c G6   8      324  1309  843   1322
X4270 w/F5100  4      552  1970  1181  1706
HP BL460c G6   4      561  2062  1234  1812

Results and Configuration Summary

Hardware Configuration:
    Sun Fire X4270
      2 x 2.93 GHz QC Intel Xeon X5570 processors
      Hyperthreading enabled
      24 GB memory
      4 x 72 GB 15K RPM striped (HW RAID0) SAS disks
    Sun Storage F5100 Flash Array
      20 x 24 GB flash modules
      Intel controller

Software Configuration:

    O/S: 64-bit SUSE Linux Enterprise Server 10 SP 2
    Application: ABAQUS V6.9-1 Standard Module
    Benchmark: ABAQUS Standard Benchmark Test Suite

Benchmark Description

Abaqus/Standard Benchmark Problems

These problems provide an estimate of the performance that can be expected when running Abaqus/Standard or similar commercially available MCAE (FEA) codes like ANSYS and MSC/Nastran on different computers. The jobs are representative of those typically analyzed by Abaqus/Standard and other MCAE applications. These analyses include linear statics, nonlinear statics, and natural frequency extraction.

Please go here for a more complete description of the tests.

Key Points and Best Practices

  • The memory requirements for the test cases in the ABAQUS Standard benchmark test suite are rather substantial, with some of the test cases requiring slightly over 20GB of memory. There are two memory limits: a minimum, below which time-consuming out-of-core I/O is used, and a maximum memory limit that minimizes I/O operations. These memory limits are given in the ABAQUS output and can be established before making a full execution in a preliminary diagnostic mode run.
  • Based on the maximum physical memory on a platform, the user can stipulate the maximum portion of this memory that can be allocated to the ABAQUS job. This is done in the "abaqus_v6.env" file, which resides either in the subdirectory from which the job was launched or in the ABAQUS "site" subdirectory under the home installation directory (a minimal sketch of this file follows this list).
  • Sometimes, when running multiple cores on a single node, it is preferable from a performance standpoint to run in "smp" shared memory mode. This is specified using the "THREADS" option, as opposed to the "MPI" option, on the "mpi_mode" line in the abaqus_v6.env file. The test case considered here illustrates this point.
  • The test cases for the ABAQUS standard module all have a substantial I/O component, where 15% to 25% of the total run times are associated with I/O activity (primarily scratch files). Performance will be enhanced by using the fastest available drives, by striping more than one of them together, or by using a high performance disk storage system with high performance interconnects. On Linux, advantage can be taken of excess memory that can be used to cache and accelerate I/O.
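
A minimal sketch of the abaqus_v6.env settings discussed above (the env file uses Python syntax; the values shown are hypothetical and should be checked against the ABAQUS 6.9 documentation for your installation):

    # abaqus_v6.env - hypothetical fragment, not the benchmark's actual file
    memory = "20 gb"      # upper limit on memory ABAQUS may allocate to the job
    mpi_mode = THREADS    # run in "smp" shared-memory mode rather than MPI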

See Also

Disclosure Statement

The following are trademarks or registered trademarks of Abaqus, Inc. or its subsidiaries in the United States and/or other countries: Abaqus, Abaqus/Standard, Abaqus/Explicit. All information on the ABAQUS website is Copyrighted 2004-2009 by Dassault Systemes. Results from http://www.simulia.com/support/v69/v69_performance.php as of October 12, 2009.

MCAE ANSYS faster on Sun F5100 and Sun X4270

Significance of Results

The Sun Storage F5100 Flash Array can greatly improve performance over internal hard disk drives as shown by the I/O intensive ANSYS MCAE application BMD benchmark tests on a Sun Fire X4270 server.

Select ANSYS 12 BMD benchmarks were run on a single Sun Fire X4270 server. These I/O intensive test cases were run to compare the performance of conventional high performance disk to Sun FlashFire technology.

The ANSYS 12.0 module is an MCAE application based on the finite element method of analysis (FEA). This computer-based numerical method inherently involves a substantial I/O component. The purpose was to evaluate the performance of the Sun Storage F5100 Flash Array relative to high performance 15K RPM internal striped HDDs.

  • The Sun Storage F5100 Flash Array outperformed the high performance 15K RPM SAS drives on the "BMD-4" test case by 67% in the 8-core/8-thread server configuration.

  • The Sun Storage F5100 Flash Array outperformed the high performance 15K RPM SAS drives on the "BMD-7" test case by 18% in the 8-core/16-thread server configuration.

Performance Landscape

ANSYS 12 "BMD" Test Suite on Single X4270 (24GB mem.) - SMP Mode

Results are total elapsed run times in seconds

Test Case  SMP  4 x 15K RPM 72 GB SAS HDD  Sun F5100 (r/w buff 4096,  Sun F5100 Performance
                striped HW RAID0           striped)                   Advantage
bmd-4      8    523                        314                        67%
bmd-7      16   357                        303                        18%

Results and Configuration Summary

Hardware Configuration:
    Sun Fire X4270
      2 x 2.93 GHz QC Intel Xeon X5570 processors
      Hyperthreading enabled
      24 GB memory
      4 x 72 GB 15K RPM striped (HW RAID0) SAS disks
    Sun Storage F5100 Flash Array
      20 x 24 GB flash modules
      Intel controller

Software Configuration:

    O/S: 64-bit SUSE Linux Enterprise Server 10 SP 2
    Application: ANSYS Multiphysics 12.0
    Benchmark: ANSYS 12 "BMD" Benchmark Test Suite

Benchmark Description

ANSYS is a general purpose engineering analysis MCAE application that is based on the Finite Element Method. It performs both structural (stress) analysis and thermal analysis. These analyses may be either static or transient dynamic and can be linear or nonlinear as far as material behavior or deformations are concerned. Ansys provides a number of benchmark tests which exercise the capabilities of the software.

Please go here for a more complete description of the tests.

Key Points and Best Practices

Performance Considerations

The performance of ANSYS (an I/O-intensive MCAE application) can be increased by reducing the I/O demands of the application, by increasing server memory, or by using SSDs to increase bandwidth and reduce latency. The most I/O intensive case in the ANSYS distributed "BMD" test suite is BMD-4, particularly at the (maximum) 8-core level for a single node.


  • Ansys now takes full advantage of inexpensive RAID0 disk arrays and delivers sustained I/O rates.

  • Large memory can cache file accesses but often the size of ANSYS files grows much larger than the available physical memory so that system file caching is not able to hide the I/O cost.
  • For fast ANSYS runs the recommended configuration is a RAID 0 setup using 4 or more disks and a fast RAID controller. These fast I/O configurations are inexpensive to put together for systems and can achieve I/O rates in excess of 200 MB/sec.
  • SSD drives have much lower seek times, use less power, and tend to be about 2X faster than the fastest rotating disks for sustained throughput. The observed speed of a RAID 0 configuration of SSD drives for ANSYS simulations has been nearly as fast as I/O that is cached by large memory systems. SSD drives then may be the most affordable way to extend the capacity of a system to jobs that are too large to run in-core without incurring the performance penalty usually associated with I/O demands.

More About The ANSYS BMD "Distributed" Benchmarks

ANSYS is a general purpose engineering analysis MCAE application that is based on the Finite Element Method. It performs both structural (stress) analysis and thermal analysis. These analyses may be either static or transient dynamic and can be linear or nonlinear as far as material behavior or deformations are concerned.

In the most recent release of the ANSYS benchmarks there are now two test suites: The SMP "BM" suite designed to run on a single node with multi processors and the DMP "BMD" suite intended to run on multi node clusters but which can also run on a single node in SMP mode as in this study.

  • The test cases from both ANSYS test suites all have a substantial I/O component, where 15% to 20% of the total run times are associated with I/O activity (primarily scratch files). Performance will be enhanced by using the fastest available drives, by striping more than one of them together, or by using a high performance disk storage system with high performance interconnects. When running with the SX64 build, employing a ZFS file system might be a good idea.
  • The ANSYS test cases don't scale very well (BMD better than BM); at best up to 8 cores.
  • The memory requirements for the test cases in the ANSYS BMD suite are greater than for the standard benchmark test suite. The requirements for the standard suite are modest, less than 3GB.

See Also

MCAE, SSD, HPC, ANSYS, Linux, SuSE, Performance, X64, Intel

Disclosure Statement

The following are trademarks or registered trademarks of ANSYS, Inc., ANSYS Multiphysics TM. All information on the ANSYS website is Copyrighted by ANSYS, Inc. Results from http://www.ansys.com/services/ss-intel-bench120.htm as of October 12, 2009.

MCAE MSC/NASTRAN faster on Sun F5100 and Fire X4270

Significance of Results

The Sun Storage F5100 Flash Array can double performance over internal hard disk drives as shown by the I/O intensive MSC/Nastran MCAE application MDR3 benchmark tests on a Sun Fire X4270 server.

The MD Nastran MDR3 benchmarks were run on a single Sun Fire X4270 server. The I/O intensive test cases were run at different core levels from one up to the maximum of 8 available cores in SMP mode.

The MSC/Nastran MD 2008 R3 module is an MCAE application based on the finite element method of analysis (FEA). This computer-based numerical method inherently involves a substantial I/O component. The purpose was to evaluate the performance of the Sun Storage F5100 Flash Array relative to high performance 15K RPM internal striped HDDs.

  • The Sun Storage F5100 Flash Array outperformed the high performance 15K RPM SAS drives on the "xx0cmd2" test case by 107% in the 8-core server configuration.

  • The Sun Storage F5100 Flash Array outperformed the high performance 15K RPM SAS drives on the "xl0tdf1" test case by 85% in the 8-core server configuration.

The MD Nastran MDR3 test suite was designed to include some very I/O intensive test cases, albeit some are not very scalable. These cases are called "xx0wmd0" and "xx0xst0". Both were run and results are presented using a single-core server configuration.

  • The Sun Storage F5100 Flash Array outperformed the high performance 15K RPM SAS drives on the "xx0xst0" test case by 33% in the single-core server configuration.

  • The Sun Storage F5100 Flash Array outperformed the high performance 15K RPM SAS drives on the "xx0wmd0" test case by 20% in the single-core server configuration.

Performance Landscape

MD Nastran MDR3 Benchmark Tests

Results in seconds

Test Case  DMP  4 x 15K RPM 72 GB SAS HDD  Sun F5100 (r/w buff 4096,  Sun F5100 Performance
                striped HW RAID0           striped)                   Advantage
xx0cmd2    8    959                        463                        107%
xl0tdf1    8    1104                       596                        85%
xx0xst0    1    1307                       980                        33%
xx0wmd0    1    20250                      16806                      20%

Results and Configuration Summary

Hardware Configuration:
    Sun Fire X4270
      2 x 2.93 GHz QC Intel Xeon X5570 processors
      24 GB memory
      4 x 72 GB 15K RPM striped (HW RAID0) SAS disks
    Sun Storage F5100 Flash Array
      20 x 24 GB flash modules
      Intel controller

Software Configuration:

    O/S: 64-bit SUSE Linux Enterprise Server 10 SP 2
    Application: MSC/NASTRAN MD 2008 R3
    Benchmark: MDR3 Benchmark Test Suite
    HP MPI: 02.03.00.00 [7585] Linux x86-64

Benchmark Description

The benchmark tests are representative of typical MSC/Nastran applications including both SMP and DMP runs involving linear statics, nonlinear statics, and natural frequency extraction.

The MD (Multi Discipline) Nastran 2008 application performs both structural (stress) analysis and thermal analysis. These analyses may be either static or transient dynamic and can be linear or nonlinear as far as material behavior and/or deformations are concerned. The new release includes the MARC module for general purpose nonlinear analyses and the Dytran module that employs an explicit solver to analyze crash and high velocity impact conditions.

Please go here for a more complete description of the tests.

Key Points and Best Practices

  • Based on the maximum physical memory on a platform, the user can stipulate the maximum portion of this memory that can be allocated to the Nastran job. This is done on the command line with the mem= option. On Linux-based systems, where the platform has a large amount of memory and where the model does not have large scratch I/O requirements, the memory can be allocated to a tmpfs scratch space file system. On Solaris x64 systems, advantage can be taken of ZFS for higher I/O performance.

  • The MD Nastran MDR3 test cases don't scale very well; a few not at all, and the rest up to 8 cores at best.

  • The test cases for the MSC/Nastran module all have a substantial I/O component, where 15% to 25% of the total run times are associated with I/O activity (primarily scratch files). The required scratch file size ranges from less than 1 GB up to about 140 GB. Performance will be enhanced by using the fastest available drives, by striping more than one of them together, or by using a high performance disk storage system (further enhanced, as indicated here, by implementing a Lustre-based I/O system). High performance interconnects, such as InfiniBand for inter-node cluster message passing as well as for I/O transfer from the storage system, can also enhance performance substantially.

See Also

Disclosure Statement

MSC.Software is a registered trademark of MSC. All information on the MSC.Software website is copyrighted. MD Nastran MDR3 results from http://www.mscsoftware.com and this report as of October 12, 2009.

SPC-2 Sun Storage 6180 Array RAID 5 & RAID 6 Over 70% Better Price Performance than IBM

Significance of Results

Results on the Sun Storage 6180 Array with 8Gb connectivity are presented for the SPC-2 benchmark using RAID 5 and RAID 6.
  • The Sun Storage 6180 Array outperforms the IBM DS5020 by 77% in price performance for SPC-2 benchmark using RAID 5 data protection.

  • The Sun Storage 6180 Array outperforms the IBM DS5020 by 91% in price performance for SPC-2 benchmark using RAID 6 data protection.

  • The Sun Storage 6180 Array is 50% faster than the previous generation, the Sun Storage 6140 Array and IBM DS4700 on the SPC-2 benchmark using RAID 5 data protection.

Performance Landscape

SPC-2 Performance Chart (in increasing price-performance order)

Sponsor  System  SPC-2 MBPS  $/SPC-2 MBPS  ASU Capacity (GB)  TSC Price  Data Protection Level  Date      Results Identifier
Sun      SS6180  1,286.74    $45.47        3,504.693          $58,512    RAID 6                 10/08/09  B00044
IBM      DS5020  1,286.74    $87.04        3,504.693          $112,002   RAID 6                 10/08/09  B00042
Sun      SS6180  1,244.89    $42.53        3,504.693          $52,951    RAID 5                 10/08/09  B00043
IBM      DS5020  1,244.89    $75.30        3,504.693          $93,742    RAID 5                 10/08/09  B00041
Sun      J4400   887.44      $25.63        23,965.918         $22,742    Unprotected            08/15/08  B00034
IBM      DS4700  823.62      $106.73       1,748.874          $87,903    RAID 5                 04/01/08  B00028
Sun      ST6140  790.67      $67.82        1,675.037          $53,622    RAID 5                 02/13/07  B00017
Sun      ST2540  735.62      $37.32        2,177.548          $27,451    RAID 5                 04/10/07  B00021
IBM      DS3400  731.25      $34.36        1,165.933          $25,123    RAID 5                 02/27/08  B00027
Sun      ST2530  672.05      $26.15        1,451.699          $17,572    RAID 5                 08/16/07  B00026
Sun      J4200   548.80      $22.92        11,995.295         $12,580    Unprotected            07/10/08  B00033

SPC-2 MBPS = the Performance Metric
$/SPC-2 MBPS = the Price/Performance Metric
ASU Capacity = the Capacity Metric
Data Protection = Data Protection Metric
TSC Price = Total Cost of Ownership Metric
Results Identifier = A unique identification of the result Metric

Complete SPC-2 benchmark results may be found at http://www.storageperformance.org.
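
The price/performance advantages quoted above follow from the $/SPC-2 MBPS column (lower is better), computed relative to the Sun figure:

    print(f"RAID 5: {75.30 / 42.53 - 1:.0%}")   # IBM DS5020 vs Sun SS6180, ~77%
    print(f"RAID 6: {87.04 / 45.47 - 1:.0%}")   # IBM DS5020 vs Sun SS6180, ~91%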

Results and Configuration Summary

Storage Configuration:

    30 x 146.8GB 15K RPM drives (for RAID 5)
    36 x 146.8GB 15K RPM drives (for RAID 6)
    4 x QLogic HBAs

Server Configuration:

    IBM system x3850 M2

Software Configuration:

    MS Win 2003 Server SP2
    SPC-2 benchmark kit

Benchmark Description

The SPC Benchmark-2™ (SPC-2) is a series of related benchmark performance tests that simulate the sequential component of demands placed upon on-line, non-volatile storage in server class computer systems. SPC-2 provides measurements in support of real world environments characterized by:
  • Large numbers of concurrent sequential transfers.
  • Demanding data rate requirements, including requirements for real time processing.
  • Diverse application techniques for sequential processing.
  • Substantial storage capacity requirements.
  • Data persistence requirements to ensure preservation of data without corruption or loss.

Key Points and Best Practices

  • This benchmark was performed using RAID 5 and RAID 6 protection.
  • The controller stripe size was set to 512k.
  • No volume manager was used.

See Also

Disclosure Statement

SPC-2, SPC-2 MBPS, and $/SPC-2 MBPS are trademarks of the Storage Performance Council (SPC). More info at www.storageperformance.org. Sun Storage 6180 Array 1,286.74 SPC-2 MBPS, $/SPC-2 MBPS $45.47, ASU Capacity 3,504.693 GB, Protect RAID 6, Cost $58,512.00, Ident. B00044. Sun Storage 6180 Array 1,244.89 SPC-2 MBPS, $/SPC-2 MBPS $42.53, ASU Capacity 3,504.693 GB, Protect RAID 5, Cost $52,951.00, Ident. B00043.

SPC-1 Sun Storage 6180 Array Over 70% Better Price Performance than IBM

Significance of Results

Results on the Sun Storage 6180 Array with 8Gb connectivity are presented for the SPC-1 benchmark.
  • The Sun Storage 6180 Array outperforms the IBM DS5020 by 72% in price performance on the SPC-1 benchmark.

  • The Sun Storage 6180 Array is 50% faster than the previous generation, Sun Storage 6140 Array and IBM DS4700 on the SPC-1 benchmark.

  • The Sun Storage 6180 Array betters the HDS 2100 by 27% in price performance on the SPC-1 benchmark.

  • The Sun Storage 6180 Array has 16% better IOPS/Drive performance than the HDS 2100 on the SPC-1 benchmark.

Performance Landscape

SPC-1 Performance Chart (in increasing price-performance order)

Sponsor  System         SPC-1 IOPS  $/SPC-1 IOPS  ASU Capacity (GB)  TSC Price  Data Protection Level  Date      Results Identifier
HDS      AMS 2300       42,502.61   $6.96         7,955.000          $295,740   Mirroring              3/24/09   A00077
HDS      AMS 2100       31,498.58   $5.85         3,967.500          $187,321   Mirroring              3/24/09   A00076
Sun      SS6180 (8Gb)   26,090.03   $4.70         5,145.060          $122,623   Mirroring              10/09/09  A00084
IBM      DS5020 (8Gb)   26,090.03   $8.08         5,145.060          $210,782   Mirroring              8/25/09   A00081
Fujitsu  DX80           19,492.86   $3.45         5,355.400          $67,296    Mirroring              9/14/09   A00082
Sun      STK6140 (4Gb)  17,395.53   $4.93         1,963.269          $85,823    Mirroring              10/16/06  A00048
IBM      DS4700 (4Gb)   17,195.84   $11.67        1,963.270          $200,666   Mirroring              8/21/06   A00046

SPC-1 IOPS = the Performance Metric
$/SPC-1 IOPS = the Price/Performance Metric
ASU Capacity = the Capacity Metric
Data Protection = Data Protection Metric
TSC Price = Total Cost of Ownership Metric
Results Identifier = A unique identification of the result Metric

Complete SPC-1 benchmark results may be found at http://www.storageperformance.org.
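
The 72% price/performance advantage quoted above follows from the $/SPC-1 IOPS column (lower is better):

    print(f"{8.08 / 4.70 - 1:.0%}")   # IBM DS5020 vs Sun SS6180, ~72%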

Results and Configuration Summary

Storage Configuration:

    80 x 146.8GB 15K RPM drives
    8 x QLogic HBAs

Server Configuration:

    IBM system x3850 M2

Software Configuration:

    MS Windows 2003 Server SP2
    SPC-1 benchmark kit

Benchmark Description

SPC Benchmark-1 (SPC-1) is the first industry standard storage benchmark and is the most comprehensive performance analysis environment ever constructed for storage subsystems. The I/O workload in SPC-1 is characterized by predominately random I/O operations, as typified by multi-user OLTP, database, and email server environments. SPC-1 uses a highly efficient multi-threaded workload generator to thoroughly analyze direct attach or network storage subsystems. The SPC-1 benchmark enables companies to rapidly produce valid performance and price/performance results using a variety of host platforms and storage network topologies.

SPC-1 is built to:

  • Provide a level playing field for test sponsors.
  • Produce results that are powerful and yet simple to use.
  • Provide value for engineers as well as IT consumers and solution integrators.
  • Be easy to run, easy to audit/verify, and easy to use to report official results.

Key Points and Best Practices

See Also

Disclosure Statement

SPC-1, SPC-1 IOPS, and $/SPC-1 IOPS are registered trademarks of the Storage Performance Council (SPC). More info at www.storageperformance.org. Sun Storage 6180 Array 26,090.03 SPC-1 IOPS, ASU Capacity 5,145.060GB, $/SPC-1 IOPS $4.70, Data Protection Mirroring, Cost $122,623, Ident. A00084.


Sunday Oct 11, 2009

1.6 Million 4K IOPS in 1RU on Sun Storage F5100 Flash Array

The Sun Storage F5100 Flash Array is a high performance high density solid state flash array delivering over 1.6M IOPS (4K IO) and 12.8GB/sec throughput (1M reads). The Flash Array is designed to accelerate IO-intensive applications, such as databases, at a fraction of the power, space, and cost of traditional hard disk drives. It is based on enterprise-class SLC flash technology, with advanced wear-leveling, integrated backup protection, solid state robustness, and 3M hours MTBF reliability.

  • The Sun Storage F5100 Flash Array demonstrates breakthrough performance of 1.6M IOPS for 4K random reads
  • The Sun Storage F5100 Flash Array can also perform 1.2M IOPS for 4K random writes
  • The Sun Storage F5100 Flash Array has unprecedented throughput of 12.8 GB/sec.

Performance Landscape

Results were obtained using four hosts.

Bandwidth and IOPS Measurements

Test                                     80 Modules   40 Modules  20 Modules  1 Module
Random 4K Read                           1,591K IOPS  796K IOPS   397K IOPS   21K IOPS
Maximum Delivered Random 4K Write        1,217K IOPS  610K IOPS   304K IOPS   15K IOPS
Maximum Delivered 50-50 4K Read/Write    850K IOPS    426K IOPS   213K IOPS   11K IOPS
Sequential Read (1M)                     12.8 GB/sec  6.4 GB/sec  3.2 GB/sec  265 MB/sec
Maximum Delivered Sequential Write (1M)  9.7 GB/sec   4.8 GB/sec  2.4 GB/sec  118 MB/sec
Sustained Random 4K Write (*)            172K IOPS    -           -           9K IOPS

(*) Maximum Delivered values measured over a 1 minute period. Sustained write performance measured over a 1 hour period and differs from maximum delivered performance. Over time, wear-leveling and erase operations are required and impact write performance levels.

Latency Measurements

The Sun Storage F5100 Flash Array is tuned for 4 KB or larger IO sizes; the write service time for IOs smaller than 4 KB can be 10 times greater than shown in the table below. Note also that the service times shown below include both the latency and the time to transfer the data; the transfer time becomes the dominant portion of the service time for IOs over 64 KB in size.

Transfer Size  Read Service Time (ms)  Write Service Time (ms)
4 KB           0.41                    0.28
8 KB           0.42                    0.35
16 KB          0.45                    0.72
32 KB          0.51                    0.77
64 KB          0.63                    1.52
128 KB         0.87                    2.99
256 KB         1.34                    6.03
512 KB         2.29                    12.14
1024 KB        4.19                    23.79

- Latencies are measured application latencies via the vdbench tool.
- Note that the F5100 Flash Array is a 4KB sector device. Doing IOs of less than 4KB in size, or not aligned on 4KB boundaries, can result in significant performance degradation on write operations.
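
As a rough decomposition (an assumption for illustration, not a published specification), the read service times above fit a fixed-latency-plus-transfer model: the 4 KB and 1024 KB rows imply a per-stream transfer rate close to the single-module 265 MB/sec sequential read figure in the bandwidth table.

    extra_kb = 1024 - 4       # additional data moved vs. the 4 KB read
    extra_ms = 4.19 - 0.41    # additional read service time (ms)
    print(f"{extra_kb / extra_ms:.0f} MB/sec implied")  # ~270 MB/sec (1 KB/ms = 1 MB/sec)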

Results and Configuration Summary

Storage:

    Sun Storage F5100 Flash Array
      80 Flash Modules
      16 ports
      4 domains (20 Flash Modules each)
      CAM zoning - 5 Flash Modules per port

Servers:

    4 x Sun SPARC Enterprise T5240
    4 HBAs per server, firmware version 01.27.03.00-IT

Software:

    OpenSolaris 2009.06 or Solaris 10 10/09 (MPT driver enhancements)
    Vdbench 5.0
    Required Flash Array Patches SPARC, ses/sgen patch 138128-01 or later & mpt patch 141736-05
    Required Flash Array Patches x86, ses/sgen patch 138129-01 or later & mpt patch 141737-05

Benchmark Description

Sun measured a wide variety of IO performance metrics on the Sun Storage F5100 Flash Array using Vdbench 5.0 measuring 100% Random Read, 100% Random Write, 100% Sequential Read, 100% Sequential Write, and 50-50 read/write. This demonstrates the maximum performance and throughput of the storage system.

Vdbench profile parmfile.txt here

Vdbench is publicly available for download at: http://vdbench.org
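
For reference, a hypothetical vdbench parmfile along these lines (device paths are placeholders; the actual parmfile.txt was linked above), reflecting the 32 outstanding IOs per Flash Module noted in the key points below:

    # one storage definition per Flash Module (placeholder device path)
    sd=sd1,lun=/dev/rdsk/c1t0d0s2,threads=32
    # 4K random read workload across all storage definitions
    wd=rd4k,sd=sd*,xfersize=4k,rdpct=100,seekpct=100
    # run at maximum IO rate
    rd=run1,wd=rd4k,iorate=max,elapsed=600,interval=10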

Key Points and Best Practices

  • Drive each Flash Module with 32 outstanding IOs, as shown in the benchmark profile above.
  • LSI HBA firmware level should be at Phase 15 maxq.
  • Either use single-port LSI HBAs or use only 1 port per HBA.
  • SPARC platforms align with the 4K boundary size set by the Flash Array. x86/Windows platforms don't necessarily have this alignment built in and can show lower performance.

See Also

Disclosure Statement

Sun Storage F5100 Flash Array delivered 1.6M 4K read IOPS and 12.8 GB/sec sequential read. Vdbench 5.0 (http://vdbench.org) was used for the test. Results as of September 12, 2009.

TPC-C World Record Sun - Oracle

TPC-C Sun SPARC Enterprise T5440 with Oracle RAC World Record Database Result

Sun and Oracle demonstrate the World's fastest database performance. Sun Microsystems, using 12 Sun SPARC Enterprise T5440 servers, 60 Sun Storage F5100 Flash Arrays, and Oracle 11g Enterprise Edition with Real Application Clusters and Partitioning, delivered a world-record TPC-C benchmark result.

  • The 12-node Sun SPARC Enterprise T5440 server cluster delivered a world record TPC-C benchmark result of 7,646,486.7 tpmC at $2.36/tpmC (USD) using Oracle 11g R1, on a configuration available 3/19/10.

  • The 12-node Sun SPARC Enterprise T5440 server cluster beats the performance of the IBM Power 595 (5GHz) with IBM DB2 9.5 database by 26% and has 16% better price/performance on the TPC-C benchmark.

  • The complete Oracle/Sun solution delivered 10.7x better computational density than the IBM configuration (computational density = performance per rack).

  • The complete Oracle/Sun solution used 8 times fewer racks than the IBM configuration.

  • The complete Oracle/Sun solution has 5.9x better power/performance than the IBM configuration.

  • The 12-node Sun SPARC Enterprise T5440 server cluster beats the performance of the HP Superdome (1.6GHz Itanium2) by 87% and has 19% better price/performance on the TPC-C benchmark.

  • The Oracle/Sun solution utilized Sun FlashFire technology to deliver this result. The Sun Storage F5100 flash array was used for database storage.

  • Oracle 11g Enterprise Edition with Real Application Clusters and Partitioning scales and effectively uses all of the nodes in this configuration to produce the world record performance.

  • This result showed Sun and Oracle's integrated hardware and software stacks provide industry-leading performance.

More information on this benchmark will be posted in the next several days.

Performance Landscape

TPC-C results (sorted by tpmC, bigger is better)


System                           tpmC       Price/tpmC  Avail     Database        Cluster  Racks  w/KtpmC
12 x Sun SPARC Enterprise T5440  7,646,487  2.36 USD    03/19/10  Oracle 11g RAC  Y        9      9.6
IBM Power 595                    6,085,166  2.81 USD    12/10/08  IBM DB2 9.5     N        76     56.4
Bull Escala PL6460R              6,085,166  2.81 USD    12/15/08  IBM DB2 9.5     N        71     56.4
HP Integrity Superdome           4,092,799  2.93 USD    08/06/07  Oracle 10g R2   N        46     (to be added)

Avail - Availability date
w/KtpmC - Watts per 1000 tpmC
Racks - clients, servers, storage, infrastructure

Results and Configuration Summary

Hardware Configuration:

    9 racks used to hold

    Servers:
      12 x Sun SPARC Enterprise T5440
      4 x 1.6 GHz UltraSPARC T2 Plus
      512 GB memory
      10 GbE network for cluster
    Storage:
      60 x Sun Storage F5100 Flash Array
      61 x Sun Fire X4275, Comstar SAS target emulation
      24 x Sun StorageTek 6140 (16 x 300 GB SAS 15K RPM)
      6 x Sun Storage J4400
      3 x 80-port Brocade FC switches
    Clients:
      24 x Sun Fire X4170, each with
      2 x 2.53 GHz X5540
      48 GB memory

Software Configuration:

    Solaris 10 10/09
    OpenSolaris 6/09 (COMSTAR) for Sun Fire X4275
    Oracle 11g Enterprise Edition with Real Application Clusters and Partitioning
    Tuxedo CFS-R Tier 1
    Sun Web Server 7.0 Update 5

Benchmark Description

TPC-C is an OLTP system benchmark. It simulates a complete environment where a population of terminal operators executes transactions against a database. The benchmark is centered around the principal activities (transactions) of an order-entry environment. These transactions include entering and delivering orders, recording payments, checking the status of orders, and monitoring the level of stock at the warehouses.

POSTSCRIPT: Here are some comments on IBM's grasping-at-straws perf/core attacks on the TPC-C result:
c0t0d0s0 blog: "IBM's Reaction to Sun & Oracle TPC-C"

See Also

Disclosure Statement

TPC Benchmark C, tpmC, and TPC-C are trademarks of the Transaction Processing Performance Council (TPC). 12-node Sun SPARC Enterprise T5440 Cluster (1.6GHz UltraSPARC T2 Plus, 4 processor) with Oracle 11g Enterprise Edition with Real Application Clusters and Partitioning, 7,646,486.7 tpmC, $2.36/tpmC. Available 3/19/10. IBM Power 595 (5GHz Power6, 32 chips, 64 cores, 128 threads) with IBM DB2 9.5, 6,085,166 tpmC, $2.81/tpmC, available 12/10/08. HP Integrity Superdome (1.6GHz Itanium2, 64 processors, 128 cores, 256 threads) with Oracle 10g Enterprise Edition, 4,092,799 tpmC, $2.93/tpmC. Available 8/06/07. Source: www.tpc.org, results as of 10/11/09.

Thursday Jul 23, 2009

World Record Performance of Sun CMT Servers

This week, Sun continues to highlight the record-breaking performance of its latest update to the chip multi-threaded (CMT) Sun SPARC Enterprise server family running Solaris. Some of these benchmarks leverage a variety of Sun's unique technologies, including ZFS, SSDs, various storage products, and many more. These benchmarks were blogged about by various members of our team, and the URLs are shown below.

Messages

  • Sun's CMT is the most powerful CPU regardless of architectural/implementation details (#transistors, #cores, threads, MHz, etc.)!
  • Performance tests show that Sun can outperform IBM Power6 by more than 2x on a variety of benchmarks.
  • Performance tests show Sun's new 1.6GHz CMT systems can be 20% faster than Sun's previous generation 1.4GHz processors, given Sun's continual advancements in both hardware and software.

Benchmark Results Recently Blogged

Sun T5440 Oracle BI EE World Record Performance
http://blogs.sun.com/BestPerf/entry/sun_t5440_oracle_bi_ee

Sun T5440 World Record SAP-SD 4-Processor Two-tier SAP ERP 6.0 EP 4 (Unicode), Beats IBM POWER6 (note1)
http://blogs.sun.com/BestPerf/entry/sun_t5440_world_record_sap

Zeus ZXTM Traffic Manager World Record on Sun T5240
http://blogs.sun.com/BestPerf/entry/top_performance_on_sun_sparc

Sun T5440 SPECjbb2005, Sun 1.6GHz T2 Plus chip is 2.3x IBM 4.7GHz POWER6 chip
http://blogs.sun.com/BestPerf/entry/sun_t5440_specjbb2005_beats_ibm

New SPECjAppServer2004 Performance on the Sun SPARC Enterprise T5440
http://blogs.sun.com/BestPerf/entry/new_specjappserver2004_performance_on_sun

1.6 GHz SPEC CPU2006: World Record 4-chip system, Rate Benchmarks, Beats IBM POWER6
http://blogs.sun.com/BestPerf/entry/1_6_ghz_spec_cpu2006

Sun Blade T6320 World Record 1-chip SPECjbb2005 performance, Sun 1.6GHz T2 Plus chip is 2.6x IBM 4.7GHz POWER6 chip
http://blogs.sun.com/BestPerf/entry/new_specjbb2005_performance_on_the

Comparison Table

Benchmark: Oracle BI EE
Sun CMT: Sun T5440
Tier: Appl, Database
Software: Oracle 11g, Oracle BI EE, ZFS, Solaris
Key Messages:
  • World Record: T5440
  • Achieved 28,000 users
  • Reference

Benchmark: SAP-SD 2-Tier
Sun CMT: Sun T5440
Tier: Appl, Database
Software: SAP ECC 6.0 EP4, Oracle 10g, Solaris
Key Messages:
  • World Record 4-socket: T5440
  • T5440 beats 4-socket IBM 550 5GHz Power6 by 26% (note1)
  • T5440 beats HP DL585 G6 4-socket Opteron (note1)
  • Unicode version

Benchmark: SPECjAppServer2004
Sun CMT: Sun T5440
Tier: Appl, Database
Software: Oracle WebLogic, Oracle 11g, JDK 1.6.0_14, Solaris
Key Messages:
  • World Record Single System (Appl Tier): T5440
  • T5440 is 6.4x faster than IBM Power 570 4.7GHz Power6
  • T5440 is 73% faster than HP DL580 G5 Xeon 6C
  • Oracle Fusion Middleware

Benchmark: SPECjbb2005
Sun CMT: Sun T5440
Tier: Appl
Software: Java HotSpot, OpenSolaris
Key Messages:
  • 1.6GHz US T2 Plus CPU is 2.3x faster than IBM 4.7GHz Power6 CPU
  • 1.6GHz US T2 Plus CPU is 21% faster than previous generation 1.4GHz US T2 Plus CPU
  • Sun T5440 has 2.3x better power/perf than the IBM 570 (8 x 4.7GHz Power6)

Benchmark: SPECjbb2005
Sun CMT: Sun Blade T6320
Tier: Appl
Software: Java HotSpot, OpenSolaris
Key Messages:
  • World Record 1-socket: T6320
  • 1.6GHz US T2 Plus CPU is 2.6x faster than IBM 4.7GHz Power6 CPU
  • T6320 is 3% faster than Fujitsu 3.16GHz Xeon QC

Benchmark: SPEC CPU2006
Sun CMT: Sun T5440, Sun T5240, Sun T5220, Sun T5120, Sun T6320
Tier: all tiers
Software: Sun Studio 12, Solaris, ZFS
Key Messages:
  • World Record 4-socket: T5440
  • 1.6GHz US T2 Plus CPU is 2.6x faster than IBM 4.7GHz Power6 CPU
  • T6320 is 3% faster than Fujitsu 3.16GHz Xeon QC

Benchmark: Zeus ZXTM Traffic Manager
Sun CMT: Sun T5240
Tier: Web
Software: Zeus ZXTM v5.1r1, Solaris
Key Messages:
  • World Record: T5240
  • T5240 beats f5 BIG-IP VIPRION by 34%; 2.6x better $/perf
  • T5240 beats f5 BIG-IP 8800 by 91%; 2.7x better $/perf
  • T5240 beats Citrix 12000 by 2.2x; 3.3x better $/perf
  • No IBM result

Virtualization

Sun's announcement also included updated virtualization software (LDoms 1.1). Downloads are available to existing SPARC Enterprise server customers at: http://www.sun.com/servers/coolthreads/ldoms/index.jsp. Also see the blog posting "LDoms for Dummies" at http://blogs.sun.com/PierreReynes/entry/ldoms_for_dummies

Try & Buy Program

Sun is also offering free 60-day trials on Sun CMT servers through its popular Try and Buy program: http://www.sun.com/tryandbuy.

Benchmark Performance Disclosure Statements (the URLs listed above go into more detail on each of these benchmarks)

Note1: 4-processor world record on the two-tier SAP SD Standard Application Benchmark with 4,720 SD Users, as of July 23, 2009. IBM System 550 (4 processors, 8 cores, 16 threads) 3,752 SAP SD Users, 4x 5 GHz Power6, 64 GB memory, DB2 9.5, AIX 6.1, Cert# 2009023. The T5440 beats HP's new 4-socket Opteron servers (HP DL585 G6 with 4,665 SD Users and HP BL685c G6 with 4,422 SD Users).

Two-tier SAP Sales and Distribution (SD) standard SAP ERP 6.0 2005/EP4 (Unicode) application benchmarks as of 07/21/09: Sun SPARC Enterprise T5440 Server (4 processors, 32 cores, 256 threads) 4,720 SAP SD Users, 4x 1.6 GHz UltraSPARC T2 Plus, 256 GB memory, Oracle 10g, Solaris 10, Cert# 2009026. HP ProLiant DL585 G6 (4 processors, 24 cores, 24 threads) 4,665 SAP SD Users, 4x 2.8 GHz AMD Opteron Processor 8439 SE, 64 GB memory, SQL Server 2008, Windows Server 2008 Enterprise Edition, Cert# 2009025. HP ProLiant BL685c G6 (4 processors, 24 cores, 24 threads) 4,422 SAP SD Users, 4x 2.6 GHz AMD Opteron Processor 8435, 64 GB memory, SQL Server 2008, Windows Server 2008 Enterprise Edition, Cert# 2009021. IBM System 550 (4 processors, 8 cores, 16 threads) 3,752 SAP SD Users, 4x 5 GHz Power6, 64 GB memory, DB2 9.5, AIX 6.1, Cert# 2009023. HP ProLiant DL585 G5 (4 processors, 16 cores, 16 threads) 3,430 SAP SD Users, 4x 3.1 GHz AMD Opteron Processor 8393 SE, 64 GB memory, SQL Server 2008, Windows Server 2008 Enterprise Edition, Cert# 2009008. HP ProLiant BL685 G6 (4 processors, 16 cores, 16 threads) 3,118 SAP SD Users, 4x 2.9 GHz AMD Opteron Processor 8389, 64 GB memory, SQL Server 2008, Windows Server 2008 Enterprise Edition, Cert# 2009007. NEC Express5800 (4 processors, 24 cores, 24 threads) 2,957 SAP SD Users, 4x 2.66 GHz Intel Xeon Processor X7460, 64 GB memory, SQL Server 2008, Windows Server 2008 Enterprise Edition, Cert# 2009018. Dell PowerEdge M905 (4 processors, 16 cores, 16 threads) 2,129 SAP SD Users, 4x 2.7 GHz AMD Opteron Processor 8384, 96 GB memory, SQL Server 2005, Windows Server 2003 Enterprise Edition, Cert# 2009017. Sun Fire X4600M2 (8 processors, 32 cores, 32 threads) 7,825 SAP SD Users, 8x 2.7 GHz AMD Opteron 8384, 128 GB memory, MaxDB 7.6, Solaris 10, Cert# 2008070. IBM System x3650 M2 (2 processors, 8 cores, 16 threads) 5,100 SAP SD Users, 2x 2.93 GHz Intel Xeon X5570, DB2 9.5, Windows Server 2003 Enterprise Edition, Cert# 2008079. HP ProLiant DL380 G6 (2 processors, 8 cores, 16 threads) 4,995 SAP SD Users, 2x 2.93 GHz Intel Xeon X5570, 48 GB memory, SQL Server 2005, Windows Server 2003 Enterprise Edition, Cert# 2008071. SAP and R/3 are registered trademarks of SAP AG in Germany and other countries. More info: www.sap.com/benchmark.

Oracle Business Intelligence Enterprise Edition benchmark, see http://www.oracle.com/solutions/business_intelligence/resource-library-whitepapers.html for more. Results as of 7/20/09.

Zeus is TM of Zeus Technology Limited. Results as of 7/21/2009 on http://www.zeus.com/news/press_articles/zeus-price-performance-press-release.html?gclid=CLn4jLuuk5cCFQsQagod7gTkJA.

SPEC, SPECint, SPECfp reg tm of Standard Performance Evaluation Corporation. Competitive results from www.spec.org as of 16 July 2009. Sun's new results quoted on this page have been submitted to SPEC. Sun Blade T6320 89.2 SPECint_rate_base2006, 96.7 SPECint_rate2006, 64.1 SPECfp_rate_base2006, 68.5 SPECfp_rate2006; Sun SPARC Enterprise T5220/T5120 89.1 SPECint_rate_base2006, 97.0 SPECint_rate2006, 64.1 SPECfp_rate_base2006, 68.5 SPECfp_rate2006; Sun SPARC Enterprise T5240 172 SPECint_rate_base2006, 183 SPECint_rate2006, 124 SPECfp_rate_base2006, 133 SPECfp_rate2006; Sun SPARC Enterprise T5440 338 SPECint_rate_base2006, 360 SPECint_rate2006, 254 SPECfp_rate_base2006, 270 SPECfp_rate2006; Sun Blade T6320 76.4 SPECint_rate_base2006, 85.5 SPECint_rate2006, 58.1 SPECfp_rate_base2006, 62.3 SPECfp_rate2006; Sun SPARC Enterprise T5220/T5120 76.2 SPECint_rate_base2006, 83.9 SPECint_rate2006, 57.9 SPECfp_rate_base2006, 62.3 SPECfp_rate2006; Sun SPARC Enterprise T5240 142 SPECint_rate_base2006, 157 SPECint_rate2006, 111 SPECfp_rate_base2006, 119 SPECfp_rate2006; Sun SPARC Enterprise T5440 270 SPECint_rate_base2006, 301 SPECint_rate2006, 212 SPECfp_rate_base2006, 230 SPECfp_rate2006; IBM p 570 53.2 SPECint_rate_base2006, 60.9 SPECint_rate2006, 51.5 SPECfp_rate_base2006, 58.0 SPECfp_rate2006; IBM Power 520 102 SPECint_rate_base2006, 124 SPECint_rate2006, 88.7 SPECfp_rate_base2006, 105 SPECfp_rate2006; IBM Power 550 215 SPECint_rate_base2006, 263 SPECint_rate2006, 188 SPECfp_rate_base2006, 222 SPECfp_rate2006; HP Integrity BL870c 114 SPECint_rate_base2006; HP Integrity rx7640 87.4 SPECfp_rate_base2006, 90.8 SPECfp_rate2006.

SPEC, SPECjbb reg tm of Standard Performance Evaluation Corporation. Results as of 7/17/2009 on http://www.spec.org. SPECjbb2005, Sun Blade T6320 229576 SPECjbb2005 bops, 28697 SPECjbb2005 bops/JVM; IBM p 570 88089 SPECjbb2005 bops, 88089 SPECjbb2005 bops/JVM; Fujitsu TX100 223691 SPECjbb2005 bops, 111846 SPECjbb2005 bops/JVM; IBM x3350 194256 SPECjbb2005 bops, 97128 SPECjbb2005 bops/JVM; Sun SPARC Enterprise T5120 192055 SPECjbb2005 bops, 24007 SPECjbb2005 bops/JVM.

SPECjAppServer2004, Sun SPARC Enterprise T5440 (4 chips, 32 cores) 7661.16 SPECjAppServer2004 JOPS@Standard; HP DL580 G5 (4 chips, 24 cores) 4410.07 SPECjAppServer2004 JOPS@Standard; HP DL580 G5 (4 chips, 16 cores) 3339.94 SPECjAppServer2004 JOPS@Standard; Two Dell PowerEdge 2950 (4 chips, 16 cores) 4794.33 SPECjAppServer2004 JOPS@Standard; Dell PowerEdge R610 (2 chips, 8 cores) 3975.13 SPECjAppServer2004 JOPS@Standard; Two Dell PowerEdge R610 (4 chips, 16 cores) 7311.50 SPECjAppServer2004 JOPS@Standard; IBM Power 570 (2 chips, 4 cores) 1197.51 SPECjAppServer2004 JOPS@Standard; SPEC, SPECjAppServer reg tm of Standard Performance Evaluation Corporation. Results from http://www.spec.org as of 7/20/09.

SPECjbb2005 Sun SPARC Enterprise T5440 (4 chips, 32 cores) 841380 SPECjbb2005 bops, 26293 SPECjbb2005 bops/JVM. Results submitted to SPEC. HP DL585 G5 (4 chips, 24 cores) 937207 SPECjbb2005 bops, 234302 SPECjbb2005 bops/JVM. IBM Power 570 (8 chips, 16 cores) 798752 SPECjbb2005 bops, 99844 SPECjbb2005 bops/JVM. Sun SPARC Enterprise T5440 (4 chips, 32 cores) 692736 SPECjbb2005 bops, 21648 SPECjbb2005 bops/JVM. SPEC, SPECjbb reg tm of Standard Performance Evaluation Corporation. Results from www.spec.org as of 7/20/09.

IBM p 570 8P 4.7GHz (4 building blocks) power specifications calculated as 80% of maximum input power reported 7/8/09 in “Facts and Features Report”: ftp://ftp.software.ibm.com/common/ssi/pm/br/n/psb01628usen/PSB01628USEN.PDF

About

BestPerf is the source of Oracle performance expertise. In this blog, Oracle's Strategic Applications Engineering group explores Oracle's performance results and shares best practices learned from working on Enterprise-wide Applications.
