OLTP performance of the Sun SPARC Enterprise M9000 on Solaris 10 08/07

I recently published a performance comparison of the Sun Fire E25k and the new Sun SPARC Enterprise M9000. In that article, many readers noticed the following note:
"Oracle OLTP is disappointing on the M9000, with an increase in response time at peak throughput. Upcoming releases of Solaris and Oracle 10g should improve this result."

Critical bug fixes

The reason I wrote this is that I knew Sun engineering was working hard on fixing three key performance bugs specific to database performance on M-series systems. Here is the list of these bugs, successfully fixed in Solaris 10 08/07 (Update 4):

1. Bug 6451741
SPARC64 VI prefetch tuning needs to be completed
Impact: L2 cache efficiency is key to database memory performance. Corrected prefetch values improve memory read and write performance.

2. Bug 6486343
Mutex performance on large M-series systems needs improvement
Impact: The mutex retry and backoff algorithm needed to be retuned for M-series systems because of out-of-order execution and platform-specific branch prediction routines. The fix also improves lock concurrency on hot memory pages (see the illustrative sketch after this list).

3. Bug 6487440
Memory copy operations need tuning on M-series systems
Impact: The least important fix, but still significant for Oracle stored procedures, triggers and constraints.
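
For readers wondering what "retry and backoff" means in practice, here is a minimal sketch of the general technique, in C. This is purely illustrative and is not the Solaris kernel code; the lock type, function names and tunable values (BACKOFF_BASE, BACKOFF_CAP) are all hypothetical. The idea is simply that a contended thread waits an exponentially growing, capped amount of time before retrying the atomic operation, which reduces coherency traffic on hot memory pages.

#include <stdatomic.h>
#include <sched.h>

#define BACKOFF_BASE 8      /* hypothetical initial spin count */
#define BACKOFF_CAP  1024   /* hypothetical maximum spin count */

typedef struct { atomic_flag locked; } toy_mutex_t;

void toy_mutex_lock(toy_mutex_t *m)
{
    unsigned backoff = BACKOFF_BASE;

    /* Retry the atomic test-and-set until the lock is acquired. */
    while (atomic_flag_test_and_set_explicit(&m->locked, memory_order_acquire)) {
        /* Spin for a while before touching the lock word again. */
        for (volatile unsigned i = 0; i < backoff; i++)
            ;
        if (backoff < BACKOFF_CAP)
            backoff *= 2;       /* exponential backoff, capped */
        else
            sched_yield();      /* heavily contended: give up the CPU */
    }
}

void toy_mutex_unlock(toy_mutex_t *m)
{
    atomic_flag_clear_explicit(&m->locked, memory_order_release);
}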

The big question was: how much of an improvement would it bring to OLTP performance?
One thing is sure: your mileage may vary, but on my workload I measured a whopping 1.33x lower response time together with 1.38x faster throughput (compared to Solaris 10 Update 3). It is also interesting to note that all the other workloads tested did not move significantly, as they are not really sensitive to the issues tackled here.

Please find below the corrected comparative charts in throughput and response time, after a reminder on the workloads:


Java workloads

Let's be a little more specific, using four different 100% Java (1.6) workloads:
  1. iGenCPU v3 - Fractal simulation (50% integer / 50% floating point)
  2. iGenRAM v3 - Lotto simulation (memory allocation and search)
  3. iGenBATCH v2 - Oracle 10g batch using partitioning, triggers, stored procedures and sequences
  4. iGenOLTP v4 - Heavy-weight OLTP

Datapoints

The values shown here are peak results, obtained by building the complete scalability curve. The response times mentioned are averages at peak, in milliseconds.
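
To make that methodology concrete, here is a small sketch, with entirely made-up numbers, of how a peak datapoint can be extracted from a scalability curve: run the workload at increasing load levels, keep the point with the highest throughput, and report its average response time. This is my reading of the approach described above, not the actual harness code.

#include <stdio.h>

struct datapoint {
    int    users;        /* injected load level (hypothetical)      */
    double throughput;   /* e.g. TPM measured at this load          */
    double avg_rt_ms;    /* average response time at this load (ms) */
};

int main(void)
{
    /* Hypothetical scalability curve for one workload. */
    struct datapoint curve[] = {
        {  50, 1500, 120 },
        { 100, 3100, 160 },
        { 150, 4000, 300 },
        { 200, 3700, 450 },   /* past the peak: throughput drops, RT climbs */
    };
    struct datapoint peak = curve[0];

    for (unsigned i = 1; i < sizeof curve / sizeof curve[0]; i++)
        if (curve[i].throughput > peak.throughput)
            peak = curve[i];

    printf("peak: %.0f at %d users, average RT %.0f ms\n",
           peak.throughput, peak.users, peak.avg_rt_ms);
    return 0;
}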



Workload        E25k throughput        E25k RT (ms)    M9000 throughput       M9000 RT (ms)
iGenCPU v3      303 fractals/second    105             728 fractals/second    44
iGenRAM v3      2865 lottos/ms         55              4881 lottos/ms         17
iGenBATCH v2    35 TPS                 907             50 TPS                 626
iGenOLTP v4     3938 TPM               271             6194 TPM               264

Since we are trying to compare against the 1.267 frequency factor, let's look at these results with the E25k normalized to a factor of 1.
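
As a quick sanity check, the normalization is just a division of each M9000 peak value by the corresponding E25k value; a minimal sketch follows. The raw peak values above are already rounded, so the last digit of the ratios may differ slightly from the tables below.

#include <stdio.h>

struct result {
    const char *workload;
    double e25k_tput, e25k_rt;    /* peak throughput and RT (ms) on the E25k  */
    double m9000_tput, m9000_rt;  /* peak throughput and RT (ms) on the M9000 */
};

int main(void)
{
    /* Raw peak values from the table above. */
    struct result r[] = {
        { "iGenCPU v3",    303, 105,  728,  44 },
        { "iGenRAM v3",   2865,  55, 4881,  17 },
        { "iGenBATCH v2",   35, 907,   50, 626 },
        { "iGenOLTP v4",  3938, 271, 6194, 264 },
    };

    printf("%-14s %12s %12s\n", "Workload", "Throughput x", "RT x");
    for (unsigned i = 0; i < sizeof r / sizeof r[0]; i++)
        printf("%-14s %12.3f %12.3f\n", r[i].workload,
               r[i].m9000_tput / r[i].e25k_tput,   /* > 1 : higher throughput   */
               r[i].m9000_rt   / r[i].e25k_rt);    /* < 1 : lower response time */
    return 0;
}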

First, here is throughput:

Throughput      E25k    M9000
iGenCPU v3      1       2.403
iGenRAM v3      1       1.704
iGenBATCH v2    1       1.450
iGenOLTP v4     1       1.573
Frequency       1       1.267

Which gives the following chart:

[Chart: normalized throughput, E25k = 1 vs. M9000]


And here is the average response time at peak throughput (still using a base of 1 for the E25k):

RT              E25k    M9000
iGenCPU v3      1       0.419
iGenRAM v3      1       0.301
iGenBATCH v2    1       0.690
iGenOLTP v4     1       0.970


And the chart:

[Chart: normalized response time at peak, E25k = 1 vs. M9000]




These new numbers illustrate how well placed the M-series servers are to replace the current UltraSPARC IV servers, from the smallest Sun Fire V490 to the largest Sun Fire E25k... as long as you run at least Solaris 10 08/07.

See you next time in the wonderful world of benchmarking...



Comments:

i wait to next time..

Posted by Egitim on December 09, 2010 at 11:03 PM PST #
