OLTP performance of the Sun SPARC Enterprise M9000 on Solaris 10 08/07
By mrbenchmark on Nov 14, 2007
I recently published a performance comparison of the Sun Fire E25k and the new Sun SPARC Enterprise M9000.
In that article, many readers noticed the following note:
"Oracle OLTP is disappointing on the M9000 with an increase in response time at peak throughput. Upcoming release of Solaris and Oracle 10g should improve this result"
Critical bug fixes
I wrote that note because I knew that Sun engineering was working hard at fixing three key performance bugs specific to database performance on the M-series systems. Here is the list of bugs that were successfully fixed in Solaris 10 08/07 (Update 4):
1. Bug 6451741
SPARC64 VI prefetch tuning needs to be completed
Impact: L2 cache efficiency is key to database memory performance. Corrected prefetch values improve memory read and write performance.
2. Bug 6486343
Mutex performance on large M-series systems needs improvement
Impact: The mutex retry and backoff algorithm needed to be retuned for M-series systems due to out-of-order execution and platform-specific branch prediction. This fix also improves lock concurrency on hot memory pages.
3. Bug 6487440
Memory copy operations need tuning on M-series systems
Impact: The least important of the three fixes, but still significant for Oracle stored procedures, triggers and constraints.
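The retry-and-backoff idea behind the mutex fix (bug 6486343) can be illustrated in plain Java. This is a toy sketch, not the Solaris kernel mutex; the class name and backoff constants are mine:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Toy retry-and-backoff spin lock. On a failed acquisition attempt, a thread
// waits an exponentially growing number of spin iterations before retrying,
// so contended threads stop hammering the same hot cache line. This is the
// general algorithm the bug fix retuned for M-series, not the actual code.
public class BackoffSpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        int backoff = 4;  // initial spin count (arbitrary illustrative value)
        while (!locked.compareAndSet(false, true)) {
            // Lost the race: back off before retrying, doubling up to a cap.
            for (int i = 0; i < backoff; i++) {
                Thread.yield();
            }
            backoff = Math.min(backoff * 2, 1024);
        }
    }

    public void unlock() {
        locked.set(false);
    }
}
```

The right starting value, growth rate and cap depend on how fast the retry loop actually iterates, which is exactly why out-of-order execution and branch prediction on the SPARC64 VI made retuning necessary.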
The big question was: how much of an improvement would these fixes have on OLTP performance?
Well, one thing is sure: your mileage may vary. But on my workload I measured a whopping 1.33x lower response time together with 1.38x faster throughput (compared to Solaris 10 Update 3). It is also interesting to note that all the other workloads tested did not move significantly, as they are not really sensitive to the issues tackled here.
Please find below the corrected comparative charts for throughput and response time, after a reminder of the workloads:
So let's try to be a little more specific, using the following 100% Java (1.6) workloads:
- iGenCPU v3 - Fractal simulation, 50% integer / 50% floating point
- iGenRAM v3 - Lotto simulation (memory allocation and search)
- iGenBATCH v2 - Oracle 10g batch using partitioning, triggers, stored procedures and sequences
- iGenOLTP v4 - Heavy-weight OLTP
The values shown here are peak results obtained by building the complete scalability curve. The response times mentioned are averages at peak, in milliseconds.
| Workload | E25k throughput | E25k RT (ms) | M9000 throughput | M9000 RT (ms) |
| --- | --- | --- | --- | --- |
| iGenCPU v3 | 303 fractals/second | 105 | 728 fractals/second | 44 |
| iGenRAM v3 | 2865 lottos/ms | 55 | 4881 lottos/ms | 17 |
| iGenBatch v2 | 35 TPS | 907 | 50 TPS | 626 |
| iGenOLTP v4 | 3938 TPM | 271 | 6194 TPM | 264 |
Since we are trying to compare against the 1.267 frequency ratio, let's look at these results normalized to a factor of 1 for the E25k.
First, here is throughput:
Which gives this chart:
And here is the average response time at peak throughput (still using a base of 1 for the E25k):
And the chart:
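The base-1 factors behind these charts are just each M9000 peak value divided by the matching E25k value from the table above. A quick sketch (class and method names are mine, not from the benchmark kit):

```java
// Normalize the peak results to a base of 1 for the E25k: a throughput
// factor above 1 means the M9000 is faster, and a response-time factor
// below 1 means the M9000 responds quicker.
public class NormalizedFactors {
    static double factor(double m9000, double e25k) {
        return m9000 / e25k;
    }

    public static void main(String[] args) {
        // workload: M9000/E25k throughput, then M9000/E25k response time
        System.out.printf("iGenCPU v3   throughput x%.2f  RT x%.2f%n",
                factor(728, 303), factor(44, 105));     // x2.40, x0.42
        System.out.printf("iGenRAM v3   throughput x%.2f  RT x%.2f%n",
                factor(4881, 2865), factor(17, 55));    // x1.70, x0.31
        System.out.printf("iGenBatch v2 throughput x%.2f  RT x%.2f%n",
                factor(50, 35), factor(626, 907));      // x1.43, x0.69
        System.out.printf("iGenOLTP v4  throughput x%.2f  RT x%.2f%n",
                factor(6194, 3938), factor(264, 271));  // x1.57, x0.97
    }
}
```

All four throughput factors come out above the raw 1.267 frequency ratio, and all four response-time factors come out below 1.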
These new numbers illustrate how well placed the M-series servers are to replace the current UltraSPARC-IV servers, from the smallest Sun Fire V490 to the largest Sun Fire E25k... as long as you use at least Solaris 10 08/07.
See you next time in the wonderful world of benchmarking...