Tuesday Oct 13, 2009

Oracle PeopleSoft Payroll (NA) Sun SPARC Enterprise M4000 and Sun Storage F5100 World Record Performance

The Sun SPARC Enterprise M4000 server, combined with Sun FlashFire technology in the form of the Sun Storage F5100 flash array, has produced world-record performance on the PeopleSoft Payroll (North America) 9.0 benchmark.

  • A Sun SPARC Enterprise M4000 server with four new 2.53GHz SPARC64 VII processors and a Sun Storage F5100 flash array is 33% faster than the HP rx6600 (4 x 1.6GHz Itanium2 processors) on the PeopleSoft Payroll (NA) 9.0 benchmark. The Sun solution used the Oracle 11g database running on Solaris 10.

  • The Sun SPARC Enterprise M4000 server with four 2.53GHz SPARC64 VII processors and the Sun Storage F5100 flash array is 35% faster than the 2,027 MIPS IBM Z990 (6 Z990 Gen1 processors) on the PeopleSoft Payroll (NA) 9.0 benchmark, with the Oracle 11g database running on Solaris 10. The IBM result used IBM DB2 for z/OS 8.1 for the database.

  • The Sun SPARC Enterprise M4000 server with four 2.53GHz SPARC64 VII processors and a Sun Storage F5100 flash array processed 250K employee payroll checks using PeopleSoft Payroll (NA) 9.0 and Oracle 11g running on Solaris 10. Four different execution strategies were run, with an average improvement of 25% compared to HP's results on the rx6600. Sun achieved these results with 8 concurrent jobs at only 25% CPU utilization, while HP required 16 concurrent jobs at 88% CPU utilization.

  • The Sun SPARC Enterprise M4000 server combined with Sun FlashFire technology processed 8 sequential jobs and a single run control with a total time of 527.85 minutes, an improvement of 20% compared to HP's time of 633.09 minutes.

  • The Sun SPARC Enterprise M4000 server combined with Sun FlashFire technology demonstrated a speedup of 81% going from 1 to 8 streams on the PeopleSoft Payroll (NA) 9.0 benchmark using the Oracle 11g database.

  • The Sun FlashFire technology dramatically improves I/O performance for the PeopleSoft Payroll benchmark, delivering a significant performance boost over a best-optimized configuration of more than 60 FC disks.

  • The Sun Storage F5100 Flash Array is a high-performance, high-density solid-state flash array that provides a read latency of only 0.5 ms, about 10 times lower than the roughly 5 ms disk latency measured on this benchmark.

  • Sun estimates that the MIPS rating for a Sun SPARC Enterprise M4000 server is over 2742 MIPS.

Performance Landscape

250K Employees

System      Processor                 OS/Database          Time in Minutes              Version
                                                           Run 1     Run 2     Run 3
Sun M4000   4x 2.53GHz SPARC64 VII    Solaris/Oracle 11g   79.35     288.47    527.85   9.0
HP rx6600   4x 1.6GHz Itanium2        HP-UX/Oracle 11g     81.17     350.16    633.25   9.0
IBM Z990    6x Gen1 (2,027 MIPS)      z/OS/DB2             107.34    328.66    544.80   9.0
HP rx6600   4x 1.6GHz Itanium2        HP-UX/Oracle 11g     105.70    369.59    633.09   9.0

Note: IBM benchmark documents show that 6 Gen1 processors are rated at 2,027 MIPS. The configuration contained 13 Gen1 processors, but only 6 were available for testing.

500K Employees

System      Processor                 OS/Database          Time in Minutes              Version
                                                           Run 1     Run 2     Run 3
HP rx7640   8x 1.6GHz Itanium2        HP-UX/Oracle 11g     133.63    712.72    1665.01  9.0

Results and Configuration Summary

Hardware Configuration:

    1 x Sun SPARC Enterprise M4000 (4 x 2.53 GHz/32GB)
    1 x Sun Storage F5100 Flash Array (40 x 24GB FMODs)
    1 x Sun Storage J4200 (12 x 450GB SAS 15K RPM)

Software Configuration:

    Solaris 10 5/09
    Oracle PeopleSoft HCM 9.0
    Oracle PeopleSoft Enterprise (PeopleTools) 8.49
    Micro Focus Server Express 4.0 SP4
    Oracle RDBMS 11.1.0.7 64-bit
    HP's Mercury Interactive QuickTest Professional 9.0

Benchmark Description

The PeopleSoft 9.0 Payroll (North America) benchmark is a performance benchmark established by PeopleSoft to demonstrate system performance for a range of processing volumes in a specific configuration. This information may be used to determine the software, hardware, and network configurations necessary to support processing volumes. This workload represents large batch runs typical of OLTP workloads during a mass update.

The benchmark measures the run times of five application business processes against a database representing a large organization. The five processes are:

  • Paysheet Creation: generates the payroll data worksheet for employees, consisting of standard payroll information for each employee for a given pay cycle.

  • Payroll Calculation: Looks at Paysheets and calculates checks for those employees.

  • Payroll Confirmation: Takes information generated by Payroll Calculation and updates the employees' balances with the calculated amounts.

  • Print Advice Forms: takes the information generated by Payroll Calculation and Confirmation and produces an advice for each employee reporting earnings, taxes, deductions, etc.

  • Create Direct Deposit File: takes the information generated by the above processes and produces an electronic transmittal file used to transfer payroll funds directly into an employee's bank account.

For the benchmark, at least four data points are collected, each with a different number of job streams (parallel jobs). This batch benchmark allows a maximum of eight job streams to be configured to run in parallel.

Key Points and Best Practices

See Also

Disclosure Statement

Oracle PeopleSoft Payroll (NA) 9.0 benchmark, Sun M4000 (4 2.53GHz SPARC64) 79.35 min, IBM Z990 (6 gen1) 107.34 min, HP rx6600 (4 1.6GHz Itanium2) 105.70 min, www.oracle.com/apps_benchmark/html/white-papers-peoplesoft.html Results 10/13/2009.

Monday Oct 12, 2009

SPC-2 Sun Storage 6180 Array RAID 5 & RAID 6 Over 70% Better Price Performance than IBM

Significance of Results

Results on the Sun Storage 6180 Array with 8Gb connectivity are presented for the SPC-2 benchmark using RAID 5 and RAID 6.
  • The Sun Storage 6180 Array outperforms the IBM DS5020 by 77% in price performance for SPC-2 benchmark using RAID 5 data protection.

  • The Sun Storage 6180 Array outperforms the IBM DS5020 by 91% in price performance for SPC-2 benchmark using RAID 6 data protection.

  • The Sun Storage 6180 Array is 50% faster than the previous-generation Sun Storage 6140 Array and the IBM DS4700 on the SPC-2 benchmark using RAID 5 data protection.

Performance Landscape

SPC-2 Performance Chart (in increasing price-performance order)

Sponsor  System   SPC-2 MBPS   $/SPC-2 MBPS   ASU Capacity (GB)   TSC Price   Data Protection Level   Date       Results Identifier
Sun      SS6180   1,286.74     $45.47         3,504.693           $58,512     RAID 6                  10/08/09   B00044
IBM      DS5020   1,286.74     $87.04         3,504.693           $112,002    RAID 6                  10/08/09   B00042
Sun      SS6180   1,244.89     $42.53         3,504.693           $52,951     RAID 5                  10/08/09   B00043
IBM      DS5020   1,244.89     $75.30         3,504.693           $93,742     RAID 5                  10/08/09   B00041
Sun      J4400    887.44       $25.63         23,965.918          $22,742     Unprotected             08/15/08   B00034
IBM      DS4700   823.62       $106.73        1,748.874           $87,903     RAID 5                  04/01/08   B00028
Sun      ST6140   790.67       $67.82         1,675.037           $53,622     RAID 5                  02/13/07   B00017
Sun      ST2540   735.62       $37.32         2,177.548           $27,451     RAID 5                  04/10/07   B00021
IBM      DS3400   731.25       $34.36         1,165.933           $25,123     RAID 5                  02/27/08   B00027
Sun      ST2530   672.05       $26.15         1,451.699           $17,572     RAID 5                  08/16/07   B00026
Sun      J4200    548.80       $22.92         11,995.295          $12,580     Unprotected             07/10/08   B00033

SPC-2 MBPS = the Performance Metric
$/SPC-2 MBPS = the Price/Performance Metric
ASU Capacity = the Capacity Metric
Data Protection = Data Protection Metric
TSC Price = Total Cost of Ownership Metric
Results Identifier = A unique identification of the result

Complete SPC-2 benchmark results may be found at http://www.storageperformance.org.

Results and Configuration Summary

Storage Configuration:

    30 x 146.8GB 15K RPM drives (for RAID 5)
    36 x 146.8GB 15K RPM drives (for RAID 6)
    4 x QLogic HBAs

Server Configuration:

    IBM System x3850 M2

Software Configuration:

    Microsoft Windows Server 2003 SP2
    SPC-2 benchmark kit

Benchmark Description

The SPC Benchmark-2™ (SPC-2) is a series of related benchmark performance tests that simulate the sequential component of demands placed upon on-line, non-volatile storage in server class computer systems. SPC-2 provides measurements in support of real world environments characterized by:
  • Large numbers of concurrent sequential transfers.
  • Demanding data rate requirements, including requirements for real time processing.
  • Diverse application techniques for sequential processing.
  • Substantial storage capacity requirements.
  • Data persistence requirements to ensure preservation of data without corruption or loss.

Key Points and Best Practices

  • This benchmark was performed using RAID 5 and RAID 6 protection.
  • The controller stripe size was set to 512k.
  • No volume manager was used.

See Also

Disclosure Statement

SPC-2, SPC-2 MBPS, and $/SPC-2 MBPS are registered trademarks of the Storage Performance Council (SPC). More info at www.storageperformance.org. Sun Storage 6180 Array 1,286.74 SPC-2 MBPS, $/SPC-2 MBPS $45.47, ASU Capacity 3,504.693 GB, Protect RAID 6, Cost $58,512.00, Ident. B00044. Sun Storage 6180 Array 1,244.89 SPC-2 MBPS, $/SPC-2 MBPS $42.53, ASU Capacity 3,504.693 GB, Protect RAID 5, Cost $52,951.00, Ident. B00043.

SPC-1 Sun Storage 6180 Array Over 70% Better Price Performance than IBM

Significance of Results

Results on the Sun Storage 6180 Array with 8Gb connectivity are presented for the SPC-1 benchmark.
  • The Sun Storage 6180 Array outperforms the IBM DS5020 by 72% in price performance on the SPC-1 benchmark.

  • The Sun Storage 6180 Array is 50% faster than the previous-generation Sun Storage 6140 Array and the IBM DS4700 on the SPC-1 benchmark.

  • The Sun Storage 6180 Array betters the HDS 2100 by 27% in price performance on the SPC-1 benchmark.

  • The Sun Storage 6180 Array has 16% better IOPS/Drive performance than the HDS 2100 on the SPC-1 benchmark.

Performance Landscape

SPC-1 Performance Chart (in increasing price-performance order)

Sponsor  System          SPC-1 IOPS   $/SPC-1 IOPS   ASU Capacity (GB)   TSC Price   Data Protection Level   Date       Results Identifier
HDS      AMS 2300        42,502.61    $6.96          7,955.000           $295,740    Mirroring               3/24/09    A00077
HDS      AMS 2100        31,498.58    $5.85          3,967.500           $187,321    Mirroring               3/24/09    A00076
Sun      SS6180 (8Gb)    26,090.03    $4.70          5,145.060           $122,623    Mirroring               10/09/09   A00084
IBM      DS5020 (8Gb)    26,090.03    $8.08          5,145.060           $210,782    Mirroring               8/25/09    A00081
Fujitsu  DX80            19,492.86    $3.45          5,355.400           $67,296     Mirroring               9/14/09    A00082
Sun      STK6140 (4Gb)   17,395.53    $4.93          1,963.269           $85,823     Mirroring               10/16/06   A00048
IBM      DS4700 (4Gb)    17,195.84    $11.67         1,963.270           $200,666    Mirroring               8/21/06    A00046

SPC-1 IOPS = the Performance Metric
$/SPC-1 IOPS = the Price/Performance Metric
ASU Capacity = the Capacity Metric
Data Protection = Data Protection Metric
TSC Price = Total Cost of Ownership Metric
Results Identifier = A unique identification of the result

Complete SPC-1 benchmark results may be found at http://www.storageperformance.org.

Results and Configuration Summary

Storage Configuration:

    80 x 146.8GB 15K RPM drives
    8 x QLogic HBAs

Server Configuration:

    IBM System x3850 M2

Software Configuration:

    Microsoft Windows Server 2003 SP2
    SPC-1 benchmark kit

Benchmark Description

SPC Benchmark-1 (SPC-1) is the first industry-standard storage benchmark and the most comprehensive performance analysis environment ever constructed for storage subsystems. The I/O workload in SPC-1 is characterized by predominately random I/O operations, as typified by multi-user OLTP, database, and email server environments. SPC-1 uses a highly efficient multi-threaded workload generator to thoroughly analyze direct-attach or network storage subsystems. The SPC-1 benchmark enables companies to rapidly produce valid performance and price/performance results using a variety of host platforms and storage network topologies.

SPC-1 is built to:

  • Provide a level playing field for test sponsors.
  • Produce results that are powerful and yet simple to use.
  • Provide value for engineers as well as IT consumers and solution integrators.
  • Be easy to run, easy to audit/verify, and easy to use to report official results.

Key Points and Best Practices

See Also

Disclosure Statement

SPC-1, SPC-1 IOPS, $/SPC-1 IOPS reg tm of Storage Performance Council (SPC). More info www.storageperformance.org. Sun Storage 6180 Array 26,090.03 SPC-1 IOPS, ASU Capacity 5,145.060GB, $/SPC-1 IOPS $4.70, Data Protection Mirroring, Cost $122,623, Ident. A00084.


Sunday Oct 11, 2009

1.6 Million 4K IOPS in 1RU on Sun Storage F5100 Flash Array

The Sun Storage F5100 Flash Array is a high performance high density solid state flash array delivering over 1.6M IOPS (4K IO) and 12.8GB/sec throughput (1M reads). The Flash Array is designed to accelerate IO-intensive applications, such as databases, at a fraction of the power, space, and cost of traditional hard disk drives. It is based on enterprise-class SLC flash technology, with advanced wear-leveling, integrated backup protection, solid state robustness, and 3M hours MTBF reliability.

  • The Sun Storage F5100 Flash Array demonstrates breakthrough performance of 1.6M IOPS for 4K random reads
  • The Sun Storage F5100 Flash Array can also perform 1.2M IOPS for 4K random writes
  • The Sun Storage F5100 Flash Array has unprecedented throughput of 12.8 GB/sec.

Performance Landscape

Results were obtained using four hosts.

Bandwidth and IOPS Measurements

Test                                       80 Flash Modules   40 Flash Modules   20 Flash Modules   1 Flash Module
Random 4K Read                             1,591K IOPS        796K IOPS          397K IOPS          21K IOPS
Maximum Delivered Random 4K Write          1,217K IOPS        610K IOPS          304K IOPS          15K IOPS
Maximum Delivered 50-50 4K Read/Write      850K IOPS          426K IOPS          213K IOPS          11K IOPS
Sequential Read (1M)                       12.8 GB/sec        6.4 GB/sec         3.2 GB/sec         265 MB/sec
Maximum Delivered Sequential Write (1M)    9.7 GB/sec         4.8 GB/sec         2.4 GB/sec         118 MB/sec
Sustained Random 4K Write*                 172K IOPS          -                  -                  9K IOPS

(*) Maximum Delivered values were measured over a 1-minute period. Sustained write performance was measured over a 1-hour period and differs from maximum delivered performance. Over time, wear-leveling and erase operations are required and impact write performance levels.
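The difference between the maximum delivered and sustained numbers is essentially the length of the measurement interval. As a hedged illustration only (the device path is a placeholder and this is not the benchmark's actual profile), a Vdbench parameter file contrasting the two measurements could look like this:

    ### Hypothetical sketch; the lun is a placeholder, not the benchmark profile.
    sd=sd1,lun=/dev/rdsk/c2t0d0s2
    wd=wm_1dr,sd=sd1,readpct=0,rhpct=0,seekpct=100
    ### Maximum delivered: 1 minute of 4K random writes at maximum rate
    rd=run_max,wd=wm_1dr,el=1m,in=6,forx=(4K),forth=(32),io=max
    ### Sustained: 60 minutes, long enough for wear-leveling and erase overhead to appear
    rd=run_sustained,wd=wm_1dr,el=60m,in=60,forx=(4K),forth=(32),io=max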

Latency Measurements

The Sun Storage F5100 Flash Array is tuned for 4 KB or larger I/O sizes; the write service time for I/Os smaller than 4 KB can be 10 times higher than shown in the table below. Note also that the service times shown below include both the access latency and the time to transfer the data; the transfer time becomes the dominant portion of the service time for I/Os larger than 64 KB (for example, the 4.19 ms read service time at 1024 KB is mostly data transfer).

Transfer Size    Read Service Time (ms)    Write Service Time (ms)
4 KB             0.41                      0.28
8 KB             0.42                      0.35
16 KB            0.45                      0.72
32 KB            0.51                      0.77
64 KB            0.63                      1.52
128 KB           0.87                      2.99
256 KB           1.34                      6.03
512 KB           2.29                      12.14
1024 KB          4.19                      23.79

- Latencies are application-level latencies measured with the Vdbench tool.
- Please note that the F5100 Flash Array is a 4 KB sector device. Doing I/Os smaller than 4 KB, or not aligned on 4 KB boundaries, can result in significant performance degradation on write operations.

Results and Configuration Summary

Storage:

    Sun Storage F5100 Flash Array
      80 Flash Modules
      16 ports
      4 domains (20 Flash Modules per domain)
      CAM zoning - 5 Flash Modules per port

Servers:

    4 x Sun SPARC Enterprise T5240
    4 HBAs per server (16 HBAs total), firmware version 01.27.03.00-IT

Software:

    OpenSolaris 2009.06 or Solaris 10 10/09 (MPT driver enhancements)
    Vdbench 5.0
    Required Flash Array patches (SPARC): ses/sgen patch 138128-01 or later and mpt patch 141736-05
    Required Flash Array patches (x86): ses/sgen patch 138129-01 or later and mpt patch 141737-05

Benchmark Description

Sun measured a wide variety of I/O performance metrics on the Sun Storage F5100 Flash Array using Vdbench 5.0, covering 100% random read, 100% random write, 100% sequential read, 100% sequential write, and 50-50 read/write workloads. This demonstrates the maximum performance and throughput of the storage system.

Vdbench profile parmfile.txt here

Vdbench is publicly available for download at: http://vdbench.org
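The actual parmfile.txt referenced above defines the workloads used for these measurements. Purely as a hedged sketch of its general shape (device paths and the run list are placeholders, modeled on the Sun SSD profile shown later in this document), such a profile defines one sd per Flash Module and drives each with 32 outstanding I/Os:

    ### Hypothetical sketch only, not the actual parmfile.txt; device paths are placeholders.
    ### One sd per Flash Module (up to sd80), each driven with 32 outstanding I/Os.
    sd=sd1,lun=/dev/rdsk/c2t0d0s2
    sd=sd2,lun=/dev/rdsk/c2t1d0s2
    ### ... additional sd entries, one per Flash Module ...
    wd=rm_80dr,sd=sd*,readpct=100,rhpct=0,seekpct=100
    wd=rs_80dr,sd=sd*,readpct=100,rhpct=0,seekpct=0
    ### 4K random reads and 1M sequential reads at maximum rate
    rd=run1_rm_80dr,wd=rm_80dr,el=30m,in=6,forx=(4K),forth=(32),io=max
    rd=run2_rs_80dr,wd=rs_80dr,el=30m,in=6,forx=(1M),forth=(32),io=max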

Key Points and Best Practices

  • Drive each Flash Module with 32 outstanding I/Os, as shown in the benchmark profile above.
  • The LSI HBA firmware level should be at Phase 15 maxq.
  • For LSI HBAs, either use single-port HBAs or use only one port per HBA.
  • SPARC platforms align with the 4K boundary size set by the Flash Array. x86/Windows platforms don't necessarily have this alignment built in and can show lower performance.

See Also

Disclosure Statement

Sun Storage F5100 Flash Array delivered 1.6M 4K read IOPS and 12.8 GB/sec sequential read. Vdbench 5.0 (http://vdbench.org) was used for the test. Results as of September 12, 2009.

Tuesday Jul 14, 2009

Vdbench: Sun StorageTek Vdbench, a storage I/O workload generator.

Vdbench is written in Java (and a little C) and runs on Solaris SPARC and x86, Windows, AIX, Linux, zLinux, HP-UX, and OS X.

I wrote the SPC-1 and SPC-2 workload generators for the Storage Performance Council using the Vdbench base code: http://www.storageperformance.org

Vdbench is a disk and tape I/O workload generator, allowing detailed control over numerous workload parameters (a short illustrative example follows the list below), such as:

Options:

  • For raw disks (and tape) and large disk files:
      - Read vs. write
      - Random vs. sequential or skip-sequential
      - I/O rate
      - Data transfer size
      - Cache hit rates
      - I/O queue depth control
      - Unlimited number of concurrent devices and workloads
      - Compression (tape)

  • For file systems:
      - Number of directories and files
      - File sizes
      - Read vs. write
      - Data transfer size
      - Directory create/delete, file create/delete
      - Unlimited number of concurrent file systems and workloads
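As a quick illustration of the raw-device parameters above, here is a minimal, hedged parameter-file sketch (the lun path, rate, and sizes are placeholders, not taken from any benchmark in this document):

    # Illustrative sketch only; the lun, rate, and sizes are placeholders.
    # One raw device, 70% reads, 30% random (seekpct=30), 8K transfers,
    # 16 outstanding I/Os, capped at 2,000 IOPS for 10 minutes.
    sd=sd1,lun=/dev/rdsk/c1t1d0s2,threads=16
    wd=wd1,sd=sd1,readpct=70,seekpct=30,xfersize=8k
    rd=rd1,wd=wd1,iorate=2000,elapsed=600,interval=5

Running './vdbench -f <parmfile>' against a file like this produces the second-by-second reports described under Reporting below.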

Single host or Multi-host:

All work is centrally controlled, running either on a single host or on multiple hosts concurrently.

Reporting:

Centralized reporting, reporting, and more reporting, all based on the simple idea that you can't understand the performance of a workload unless you can see the detail. If you just look at run totals, you'll miss the fact that for some reason the storage configuration was idle for several seconds or even minutes!

  • Second-by-second detail of Vdbench-accumulated performance statistics for the total workload and for each individual logical device used by Vdbench.
  • For Solaris SPARC and x86: second-by-second detail of Kstat statistics for the total workload and for each physical lun or NFS-mounted device used.
  • All Vdbench reports are HTML files. Just point your browser to the summary.html file in your Vdbench output directory and all the reports link together.
  • Swat (another of my tools) allows you to display performance charts of the data created by Vdbench: just start SPM, then select 'File' > 'Import Vdbench data'.
  • Vdbench will (optionally) automatically call Swat to create JPG files of your performance charts.
  • Vdbench has a GUI that allows you to compare the results of two different Vdbench workload executions. It shows the differences between the two runs in different grades of green, yellow, and red. Green is good, red is bad.

Data Validation:

Data Validation is a highly sophisticated methodology to assure data integrity by always writing unique data contents to each block and then doing a compare after the next read or before the next write. The history tables containing information about what is written where are maintained in memory and optionally in journal files. Journaling allows data to be written to disk in one execution of Vdbench with Data Validation and then continued in a future Vdbench execution, to make sure that after a system shutdown all data is still there. Great for testing mirrors: write some data using journaling, break the mirror, and have Vdbench validate the contents of the mirror.
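A minimal sketch of that mirror-testing workflow, assuming the standard Vdbench command-line options for journaled data validation (-j to write with journaling, -jr to recover the journal and re-validate); parmfile.txt and the output directory names are placeholders:

    # Write data with data validation plus journaling, then break the mirror ...
    ./vdbench -f parmfile.txt -j -o before_break
    # ... later, point the same workload at the surviving mirror side and recover
    # the journal so Vdbench can verify every block it wrote earlier.
    ./vdbench -f parmfile.txt -jr -o after_break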

I/O Replay

A disk I/O workload traced using Swat (another of my tools) can be replayed using Vdbench on any test system against any type of storage. This allows you to trace a production I/O workload, bring the trace data to your lab, and then replay that I/O workload on whatever storage you want. Want to see how the storage performs when the I/O rate doubles? Vdbench Replay will show you. With this you can test your production workload without the hassle of having to get your database software and licenses, your application software, or even your production data onto your test system.

For more detailed information about Vdbench go to http://vdbench.org where you can download the documentation or the latest GA version of Vdbench.

You can find continuing updates about Swat and Vdbench on my blog: http://blogs.sun.com/henk/

Henk Vandenbergh

PS: If you're wondering where the name Vdbench came from: Henk Vandenbergh benchmarking.

Storage performance and workload analysis using Swat.

Swat (Sun StorageTek Workload Analysis Tool) is a host-based, storage-centric Java application that thoroughly captures, summarizes, and analyzes storage workloads for both Solaris and Windows environments.

This tool was written to help Sun’s engineering, sales and service organizations and Sun’s customers understand storage I/O workloads.


Swat can be used for, among many other purposes:

  • Problem analysis
  • Configuration sizing (just buying x GB of storage won't do anymore)
  • Trend analysis: is my workload growing, and can I identify/resolve problems before they happen?

Swat is storage agnostic, so it does not matter what type or brand of storage you are trying to report on. Swat reports the host's view of the storage performance and workload, using the same Kstat (Solaris) data that iostat uses.

Swat consists of several different major functions:

  • Swat Performance Monitor (SPM)
  • Swat Trace Facility (STF)
  • Swat Trace Monitor (STM)
  • Swat Real Time Monitor
  • Swat Local Real Time Monitor
  • Swat Reporter

Swat Performance Monitor (SPM):

Works on Solaris and Windows. An attempt has been made in the current Swat 3.02 to also collect data on AIX and Linux. Swat 3.02 also reports network adapter statistics on Solaris, Windows, and Linux. A Swat Data Collector (agent) runs on some or all of your servers/hosts, collecting I/O performance statistics every 5, 10, or 15 minutes and writing the data to a disk file, one new file per day, automatically switched at midnight.

The data then can be analyzed using the Swat Reporter.

Swat Trace Facility (STF):

For Solaris and Windows. STF collects detailed I/O trace information. This data then goes through a data extraction and analysis phase that generates hundreds or thousands of second-by-second statistics counters. That data can then be analyzed using the Swat Reporter. You typically create this trace for 30 to 60 minutes, at a time when you know you will have a performance problem.

A disk I/O workload traced using Swat can be replayed on any test system against any type of storage using Vdbench (another of my tools, available at http://vdbench.org). This allows you to trace a production I/O workload, bring the trace data to your lab, and then replay that I/O workload on whatever storage you want. Want to see how the storage performs when the I/O rate doubles or triples? Vdbench Replay will show you. With this you can test your production workload without the hassle of having to get your database software and licenses, your application software and licenses, or even your production data.

Note: STF is currently limited to the collection of about 20,000 IOPS. Some development effort is required to handle the current increase in IOPS made possible by Solid State Devices (SSDs).

Note: STF, while collecting the trace data, is the only Swat function that requires root access. This functionality is handled by a single ksh script, which can be run independently (the script uses TNF and adb).

Swat Trace Monitor (STM):

With STF you need to know when the performance problem will occur so that you can schedule the trace data collection. Not every performance problem, however, is predictable. STM runs an in-memory trace and monitors the overall storage performance. Once a certain threshold is reached, for instance a response time greater than 100 milliseconds, the in-memory trace buffer is dumped to disk, and the trace continues collecting data for a number of seconds before terminating.

Swat Real Time Monitor:

When a Data Collector is active on your current or any network-connected host, Swat Real Time Monitor will open a Java socket connection with that host, allowing you to actively monitor the current storage performance either from your local or any of your remote hosts.

Swat Local Real Time Monitor:

Local Real Time Monitor is the quickest way to start using Swat. Just enter './swat -l' and Swat will start a private Data Collector for your local system and then show you exactly what is happening to your current storage workload. No more fiddling around trying to get some useful data out of a pile of iostat output.

Swat Reporter:

The Swat Reporter ties everything together. All data collected by the above Swat functions can be displayed using this powerful GUI reporting and charting function. You can generate hundreds of different performance charts or tabulated reports, giving you an intimate understanding of your storage workload and performance. Swat will even create JPG files for you that can then be included in documents and/or presentations. There is even a batch utility (Swat Batch Reporter) that automates the JPG generation for you. If you want, Swat will even create a script for this batch utility for you.

Some of the many available charts:

  • Response time per controller or device
  • I/O rate per controller or device
  • Read percentage
  • Data transfer size
  • Queue depth
  • Random vs. sequential (STF only)
  • CPU usage
  • Device skew
  • Etc. etc.

Swat has been written in Java. This means that once your data has been collected on its originating system, the data can be displayed and analyzed using the Swat Reporter on ANY Java-enabled system, including any type of laptop.

For more detailed information go to (long URL) where you can download the latest release, Swat 3.02.

You can find continuing updates about Swat and Vdbench on my blog: http://blogs.sun.com/henk/

Henk Vandenbergh

Thursday Jun 25, 2009

Sun SSD Server Platform Bandwidth and IOPS (Speeds & Feeds)

The Sun SSD (32 GB SATA 2.5" SSD) is the world's first enterprise-quality, open-standard Flash design. Built to an industry-standard JEDEC form factor, the module is being made available to developers and the OpenSolaris Storage community to foster Flash innovation. The Sun SSD delivers unprecedented IO performance, saves on power, space, and cooling, and will enable new levels of server optimization and datacenter efficiencies.

  • The Sun SSD demonstrated performance of 98K 4K random read IOPS on a Sun Fire X4450 server running the Solaris operating system.

Performance Landscape

Solaris 10 Results

Test                      Sun Fire X4450    Sun SPARC Enterprise T5240
Random Read (4K)          98.4K IOPS        71.5K IOPS
Random Write (4K)         31.8K IOPS        14.4K IOPS
50-50 Read/Write (4K)     14.9K IOPS        15.7K IOPS
Sequential Read           764 MB/sec        1012 MB/sec
Sequential Write          376 MB/sec        531 MB/sec

Results and Configuration Summary

Storage:

    4 x Sun SSD
    32 GB SATA 2.5" SSD (24 GB usable)
    2.5in drive form factor

Servers:

    Sun SPARC Enterprise T5240 - 4 internal drive slots used (LSI driver)
    Sun Fire X4450 - 4 internal drive slots used (LSI driver)

Software:

    OpenSolaris 2009.06 or Solaris 10 10/09 (MPT driver enhancements)
    Vdbench 5.0

Benchmark Description

Sun measured a wide variety of I/O performance metrics on the Sun SSD using Vdbench 5.0, covering 100% random read, 100% random write, 100% sequential read, 100% sequential write, and 50-50 read/write workloads. This demonstrates the maximum performance and throughput of the storage system.

Vdbench profile:

    wd=wm_80dr,sd=sd*,readpct=0,rhpct=0,seekpct=100
    wd=ws_80dr,sd=sd*,readpct=0,rhpct=0,seekpct=0
    wd=rm_80dr,sd=(sd1-sd80),readpct=100,rhpct=0,seekpct=100
    wd=rs_80dr,sd=(sd1-sd80),readpct=100,rhpct=0,seekpct=0
    wd=rwm_80dr,sd=sd*,readpct=50,rhpct=0,seekpct=100
    rd=default
    ###Random Read and writes tests varying transfer size
    rd=default,el=30m,in=6,forx=(4K),forth=(32),io=max,pause=20
    rd=run1_rm_80dr,wd=rm_80dr
    rd=run2_wm_80dr,wd=wm_80dr
    rd=run3_rwm_80dr,wd=rwm_80dr
    ###Sequential read and Write tests varying transfer size
    rd=default,el=30m,in=6,forx=(512k),forth=(32),io=max,pause=20
    rd=run4_rs_80dr,wd=rs_80dr
    rd=run5_ws_80dr,wd=ws_80dr
Vdbench is publicly available for download at: http://vdbench.org
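For completeness, a profile like the one above is normally run by pointing Vdbench at the parameter file and an output directory. A minimal sketch (file and directory names are placeholders, and the sd= device definitions for the four SSDs, omitted from the excerpt above, must be added first):

    # Placeholder names; add sd= definitions for the four SSDs to the profile first.
    ./vdbench -f parmfile.txt -o ssd_results

The run produces HTML reports in the output directory; start with summary.html, as described in the Vdbench entry above.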

Key Points and Best Practices

  • All measurements were done with the internal HBA and not the internal RAID.

See Also

Disclosure Statement

Sun SSD delivered 71.5K 4K read IOPS and 1012 MB/sec sequential read. Vdbench 5.0 (http://vdbench.org) was used for the test. Results as of June 17, 2009.

About

BestPerf is the source of Oracle performance expertise. In this blog, Oracle's Strategic Applications Engineering group explores Oracle's performance results and shares best practices learned from working on Enterprise-wide Applications.
