Friday Jan 22, 2010

Comparative data of Oracle 10g on SPARC & Solaris 10

Oracle 10g OLTP performance on SPARC chips








A boring ratio

Customers would love to have their performance levels linked to their hardware. But more often than you might think, they migrate from System X (designed ten years ago) to System Y (fresh from the oven) and are surprised by the performance improvements. In the past two years, we have completed many successful migrations from F15k/E25k servers to the new Enterprise Server M9000, and customers have reported great improvements in throughput and response time. But what can you really expect, and what percentage of the improvement is actually due to operating system enhancements? Can the recent small frequency increase on our SPARC64 VII chipset be at all interesting?

The new SPARC64 VII 2.88 GHz available on our M8000 and M9000 flagships brings no architectural change, no additional features and a modest frequency increase from 2.52 GHz to 2.88 GHz - for a ratio of 1.14. We could stop our analysis there and label this change 'marginal' or 'not interesting'. But my initial testing showed a comparative OLTP peak throughput way higher than this frequency-based ratio.



What happened?












A passion for Solaris

Most long-term Sun employees have a passion for Solaris. Solaris is the uncontested Unix leader and includes such a wealth of features that once you are a Solaris addict, it is difficult to fall in love with another operating system. And Oracle's executives made no mistake: Sun has the best UNIX kernel and performance engineers in the world. Without them, Solaris would not scale today to a 512-hardware-thread system (M9000-64).

But of course, Solaris is a moving target. Every release brings its truckload of features, bug fixes and other performance improvements. Here are critical fixes, made between Solaris 10 Update 4 and the brand-new Solaris 10 Update 8, that influence Oracle performance on the M9000:

  • In Solaris 10 Update 5 (05/08), we optimized interrupt management (cr=5017144) and math operations (cr=6491717). We also streamlined CPU yield (cr=6495392) and the cache hierarchy (cr=6495401).

  • In Solaris 10 Update 6 (10/08), we optimized libraries and implemented shared context support for Jupiter (cr=6655597 & 6642758).

  • In Solaris 10 Update 7 (05/09), we enhanced MPxIO as well as the PCI framework (cr=6449810 and others) and improved thread scheduling (cr=6647538). We also enhanced mutex operations (cr=6719447).

  • Finally, in Solaris 10 Update 8, after long customer escalations, we fixed the single-threaded nature of callout processing (cr=6565503-6311743). This is critical for all calls made to nanosleep and usleep; the sketch after this list illustrates the kind of short-sleep pattern that stresses that code path. We also improved the throughput and latency of the very common e1000g driver (cr=6335837 + 5 more) and optimized the mpt driver (cr=6784459). We cleaned up interrupt management (cr=6799018) and optimized bcopy and kcopy operations (cr=6292199). We also improved some single-threaded operations (cr=6755069).
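To make the callout fix concrete, here is a hypothetical Java stress pattern (not taken from any of the escalations above): many threads issuing sub-millisecond timed waits, each of which ends up as an entry in the kernel timer (callout) table. Before Update 8, this kind of load serialized on the single-threaded callout processing.

import java.util.concurrent.locks.LockSupport;

// Hypothetical illustration: many threads doing very short timed waits.
// Each parkNanos() call translates into a kernel timer (callout) entry,
// which is the code path that was single-threaded before S10U8.
public class CalloutStress {
    public static void main(String[] args) throws Exception {
        int nThreads = (args.length > 0) ? Integer.parseInt(args[0]) : 64;
        for (int i = 0; i < nThreads; i++) {
            Thread t = new Thread(new Runnable() {
                public void run() {
                    for (;;) {
                        LockSupport.parkNanos(100000L); // ~100 microseconds
                    }
                }
            });
            t.setDaemon(true);
            t.start();
        }
        Thread.sleep(60000L); // let the stress run for one minute
    }
}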

My initial SPARC64 VII iGenOLTP tests were done with Solaris 10 Update 4. But I could not test the new SPARC64 VII 2.88 GHz with this release because it was not supported! Therefore, I compared the new chip to the SPARC64 VII 2.52 GHz running each of S10U4 and S10U8. We will see below that most of the improvement comes not from the frequency increase but from Solaris itself.






Chips & Chassis

Please find below the key characteristics of the chips we have tested:



Chips          | UltraSPARC IV+ | SPARC64 VI  | SPARC64 VII | SPARC64 VII (+)
---------------|----------------|-------------|-------------|----------------
Manufacturing  | 90 nm          | 90 nm       | 65 nm       | 65 nm
Die size       | 356 sq mm      | 421 sq mm   | 421 sq mm   | 421 sq mm
Transistors    | 295 million    | 540 million | 600 million | 600 million
Cores          | 2              | 2           | 4           | 4
Threads/core   | 1              | 2           | 2           | 2
Total threads  | 2              | 4           | 8           | 8
Frequency      | 1.5 GHz        | 2.28 GHz    | 2.52 GHz    | 2.88 GHz
L1 I-cache     | 64 KB          | 128 KB/core | 512 KB      | 512 KB
L1 D-cache     | 64 KB          | 128 KB/core | 512 KB      | 512 KB
On-chip L2     | 2 MB           | 6 MB        | 6 MB        | 6 MB
Off-chip L3    | 32 MB          | None        | None        | None
Max Watts      | 56 W           | 120 W       | 135 W       | 140 W
Watts/thread   | 28 W           | 30 W        | 17 W        | 17 W



Note on (+): The new SPARC64 VII is not officially labeled with a plus sign, to reflect the absence of new features.





Now, here is our hardware list. Note that, to avoid the need for a huge client system, we ran this iGenOLTP workload in Console/Server mode: the Java processes sending SQL queries via JDBC run directly on the server under test. While this model was unusual ten years ago in the era of Client/Server, it is more and more commonly found today in new customer deployments. A minimal sketch of such a console-mode client is shown below, followed by the hardware list.
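For illustration only, here is roughly what each console-mode client thread does. The connection string, credentials, query and schema are hypothetical placeholders, not the actual iGenOLTP code:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Sketch of a console-mode client: the JDBC client runs on the database
// server itself and connects through a local listener. All names below
// (SID, user, table) are hypothetical.
public class ConsoleModeClient {
    public static void main(String[] args) throws Exception {
        Class.forName("oracle.jdbc.driver.OracleDriver");
        Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@localhost:1521:IGEN", "bench", "bench");
        PreparedStatement ps = con.prepareStatement(
                "SELECT cust_name FROM customers WHERE cust_id = ?");
        ps.setInt(1, 42);
        ResultSet rs = ps.executeQuery();
        while (rs.next()) {
            System.out.println(rs.getString(1));
        }
        rs.close();
        ps.close();
        con.close();
    }
}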



Servers          | E25k                | M9000-32            | M9000-32                | M9000-32
-----------------|---------------------|---------------------|-------------------------|--------------------
Chip             | UltraSPARC IV+      | SPARC64 VI          | SPARC64 VII             | SPARC64 VII+
# chips          | 8                   | 8                   | 8                       | 8
Total cores      | 16                  | 16                  | 32                      | 32
Frequency        | 1.5 GHz             | 2.28 GHz            | 2.52 GHz                | 2.88 GHz
System clock     | 150 MHz             | 960 MHz             | 960 MHz                 | 960 MHz (~)
RAM              | 64 GB               | 64 GB               | 64 GB (*)               | 64 GB (*)
Operating system | Solaris 10 Update 4 | Solaris 10 Update 4 | Solaris 10 Update 4 & 8 | Solaris 10 Update 8






Console system    | Storage
------------------|---------------------------
Sun Fire X4240    | SE9990V [shared]
Opteron quad-core | 64 GB cache
2x 2.33 GHz       | 25 TB
                  | 200 Hitachi HDD, 15k RPM
                  | 8x 2 Gbit/s

Note on (~): While the system clock has not changed, the new M9000 CMUs are equipped with an optimized Memory Access Controller labeled MAC+. The MAC+ chipset is critical for system reliability, in particular for the memory mirroring and memory patrolling features. We have not identified any performance improvements linked to this new feature.

Note on (*): Those domains have 128 GB of total memory. To compare apples to apples, 64 GB of memory is allocated, populated and locked in place with my very own _shmalloc tool; the sketch below illustrates the idea.
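_shmalloc itself is a small native tool and is not reproduced here. As a rough illustration of the allocate-and-populate idea, here is a hypothetical Java analogue (run with -XX:MaxDirectMemorySize sized accordingly). Note that actually locking pages in place requires native calls such as mlock or SHM_LOCK, which pure Java cannot issue:

import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// Rough, hypothetical analogue of _shmalloc: reserve N gigabytes off-heap
// and touch one byte per page so every page is physically backed. The real
// tool is native and also locks the pages in place, which Java cannot do.
public class MemEater {
    private static final int CHUNK = 1 << 30; // 1 GB per direct buffer
    private static final int PAGE  = 8192;    // base page size on SPARC/Solaris

    public static void main(String[] args) throws Exception {
        int gigabytes = (args.length > 0) ? Integer.parseInt(args[0]) : 64;
        List<ByteBuffer> pinned = new ArrayList<ByteBuffer>();
        for (int g = 0; g < gigabytes; g++) {
            ByteBuffer buf = ByteBuffer.allocateDirect(CHUNK);
            for (int off = 0; off < CHUNK; off += PAGE) {
                buf.put(off, (byte) 1); // touch one byte per page
            }
            pinned.add(buf); // hold the reference so the memory stays allocated
        }
        System.out.println(gigabytes + " GB allocated and populated.");
        Thread.sleep(Long.MAX_VALUE); // park forever; kill the JVM to release
    }
}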






Chart

The iGenOLTP v4 workload is a Java-based lightweight OLTP database workload. Simulating a classic order-entry system, it is tested in stream mode (i.e., no wait time between transactions). For this particular exercise, we created a very large database of 8 terabytes in total, stored on the SE9990V using Oracle ASM. We query 100 million customer identifiers on this very large database in order to create an I/O-intensive (but not I/O-bound) workload similar to the largest OLTP installations in the world (example: the E25ks running the bulk of the load of Oracle's internal applications). The exact throughput in transactions per second and the average response times are reported and coalesced for each scalability level, along the lines of the sketch below. For this test, we used Solaris 10 Update 4 & 8, Java version 1.6 build 16, and Oracle Database Server 10.2.0.4.
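For reference, here is a simplified, hypothetical sketch of how throughput and average response time can be derived at one scalability level. The real numbers come from SLAMD's own statistics collection, not from this code, and runOneTransaction() is a placeholder:

// Simplified, hypothetical sketch: time a fixed number of transactions,
// then report transactions per second and average response time.
public class StatsSketch {
    public static void main(String[] args) throws Exception {
        int transactions = 10000;
        long totalLatencyMs = 0;
        long start = System.currentTimeMillis();
        for (int i = 0; i < transactions; i++) {
            long t0 = System.currentTimeMillis();
            runOneTransaction(); // the real JDBC order-entry work goes here
            totalLatencyMs += System.currentTimeMillis() - t0;
        }
        double elapsedSec = (System.currentTimeMillis() - start) / 1000.0;
        System.out.printf("TPS = %.1f, avg RT = %.1f ms%n",
                transactions / elapsedSec,
                (double) totalLatencyMs / transactions);
    }

    private static void runOneTransaction() throws Exception {
        Thread.sleep(5); // placeholder standing in for one database transaction
    }
}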




Performance notes:

  • At peak, the new SPARC64 VII 2.88 GHz produces 1.10x the OLTP throughput of the 2.52 GHz on S10U8.

  • But compared to the 2.52 GHz chips on S10U4, the ratio is 1.54x, and compared to the SPARC64 VI it is 2.38x.

  • For a customer willing to upgrade an E25k equipped with 1.5 GHz chips, the throughput ratio is 4.125! It means that we can easily replace an 8-board E25k with a 2-board M8000 for better throughput and improved response times.

  • Average transaction response times at peak are 126 ms on the UltraSPARC IV+ domain, 87 ms on the SPARC64 VI, 82 ms on the SPARC64 VII 2.52 GHz (U4), 77 ms on the SPARC64 VII 2.52 GHz (U8) and 72 ms on the latest chip.





Conclusion

As expected, the Oracle OLTP improvements due to the new SPARC64 VII chip are modest on the latest Solaris 10. However, all the customers already in production on previous releases of Solaris 10 will see throughput improvements of up to 1.54x. Most likely, this is enough to motivate a refresh of their systems. And all E25k customers now have a very interesting value proposition with our M8000 and M9000 chassis.


See you next time in the wonderful world of benchmarking....



Monday May 11, 2009

Running your Oracle database on internal Solid State Disks: a good idea?





Solid State Disks: a 2009 fashion

This technology is not new: it originates in 1874, when a German physicist named Karl Braun (pictured above) discovered that he could rectify alternating current with a point-contact semiconductor. He went on to build the first CRT oscilloscope and the first prototypes of the cat's-whisker diode, later optimized by G. Marconi and G. Pickard. In 1909, K. Braun shared the Nobel Prize in physics with G. Marconi.

Cat's-whisker diodes are considered the first solid-state devices. But it was only in the 1970s that solid-state disks appeared, in high-end mainframes produced by Amdahl and Cray Research. However, their high cost of fabrication limited their industrialization. Several companies later attempted to introduce the technology to the mass market, including StorageTek, Sharp and M-Systems. But the market was not ready.

Nowadays, SSDs are built on one of two technologies: volatile DRAM or non-volatile NAND flash memory. Key recent announcements from Sun (Amber Road and ZFS), HP (IO Accelerator) and Texas Memory Systems (RamSan-620), as well as lower fabrication costs and larger capacities, are making the NAND-based technology a must-try for every company this year.

This article looks at the Oracle database performance of our new 32 GB SSDs, OEM'd from Intel. These new devices have improved I/O capacity and MTBF thanks to an architecture featuring 10 parallel NAND flash channels. See this announcement for more.

If you dig a little into the question, you will find this whitepaper. However, the 35% performance boost that they measured seems insufficient to justify trashing HDDs for SSDs. In addition, as they compare a different number of HDDs and SSDs, it is very hard to determine the impact of a one-to-one replacement. Let's make our own observations.



Here is a picture of the SSD tested – thanks to Emie for the shot!





Goals

As any DBA knows, it is very difficult to characterize a database workload in general. We are all very familiar with the famous “Your mileage may vary” or “All customer database workloads are different”. And we cannot trust Marketing departments on SSD performance claims, because nobody runs a synthetic I/O generator for a living. What we need to determine is the impact for end users (response time, anyone?) and how capacity planners can benefit from the technology (how about peak throughput?).

My plan is to perform two tests on a Sun Blade X6270 (Nehalem-based) equipped with two Xeon chips and 32 GB of RAM, on one SSD and one HDD, with different expectations:

  1. Create a 16-gigabyte database that will be entirely cached in the Oracle SGA. Will we observe any difference?

  2. Create a 50-gigabyte database of which only about 50% can be cached. We expect a significant performance impact. But how much?


SLAMD and iGenOLTP
The SLAMD Distributed Load Generation Engine (SLAMD) is a Java-based application designed for stress testing and performance analysis of network-based applications. It was originally developed by Sun Microsystems, Inc., but it has been released as an open source application under the Sun Public License, which is an OSI-approved open source license. The main site for obtaining information about SLAMD is http://www.slamd.com/. It is also available as a java.net project.

iGenOLTP is a multi-process and multi-threaded database benchmark. Implemented as a custom Java class for SLAMD, it is a lightweight workload composed of four select statements, one insert and one delete, producing a 90% read / 10% write workload that simulates a global order system. One plausible shape for such a mix is sketched below.
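The real iGenOLTP class is not reproduced here; the following is a hypothetical sketch of how a four-select, one-insert, one-delete pass can approximate a 90/10 read/write ratio. The schema and SQL are invented for illustration:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.Random;

// Hypothetical transaction mix: four selects on every pass, plus an
// insert/delete pair on about one pass in five, i.e. roughly 0.4 writes
// for every 4 reads -- close to a 90% read / 10% write operation mix.
public class OltpMix {
    private final Random rnd = new Random();

    void onePass(Connection con) throws Exception {
        String[] reads = {
            "SELECT cust_name FROM customers   WHERE cust_id  = ?",
            "SELECT addr      FROM addresses   WHERE cust_id  = ?",
            "SELECT order_id  FROM orders      WHERE cust_id  = ?",
            "SELECT qty       FROM order_lines WHERE order_id = ?"
        };
        int id = rnd.nextInt(100000000); // 100 million identifiers
        for (int i = 0; i < reads.length; i++) {
            PreparedStatement ps = con.prepareStatement(reads[i]);
            ps.setInt(1, id);
            ps.executeQuery().close();
            ps.close();
        }
        if (rnd.nextInt(5) == 0) { // ~20% of passes perform the two writes
            PreparedStatement ins = con.prepareStatement(
                "INSERT INTO orders (order_id, cust_id) VALUES (?, ?)");
            ins.setInt(1, id);
            ins.setInt(2, id);
            ins.executeUpdate();
            ins.close();
            PreparedStatement del = con.prepareStatement(
                "DELETE FROM orders WHERE order_id = ?");
            del.setInt(1, id);
            del.executeUpdate();
            del.close();
            con.commit();
        }
    }
}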



Software and Hardware summary

This study uses Solaris 10 Update 6 (released October 31st, 2008), Java 1.7 build 38 (released October 23rd, 2008), SLAMD 1.8.2, iGenOLTP v4 for Oracle and Oracle 10.2.0.2. The hardware tested is a Sun Blade X6270 with 2x Intel Xeon X5560 2.8 GHz and 32 GB of DDR3 RAM. This blade has four standard 2.5-inch disk slots, in which we installed one 32 GB Sun/Intel SSD and one 146 GB 10k RPM Seagate ST914602SS drive with read cache and write cache enabled.

Test 1 – Database mostly in memory

We are creating a 16-gigabyte database (4k block size) on one solid state disk and on one Seagate HDD, each configured in a ZFS pool with the default block size. We limit the ZFS buffer cache to 1 gigabyte and allow an Oracle SGA of 24 gigabytes, so the entire database will be cached. We will feel the SSD impact only on random writes (about 10% of the I/O operations) and sequential writes (Oracle redo log). The test will become CPU bound as we increase concurrency. We are testing from 1 to 20 client threads (i.e., database connections) in streams.


In this case, for throughput [in transactions per second], the difference between HDD and SSD evolves from significant to modest as concurrency increases. Interestingly, it is in the midrange of the scalability curve that we observe the largest gain: 71% more throughput on the SSD (at 4 threads). At 20 threads we are mostly CPU bound, so the impact of the storage type is minimal and the SSD gain in throughput is only 9%.






For response times [in milliseconds], the gain is slightly smaller, with 42% better response times at 4 threads and 8% better at 20 threads.






Test 2 – Database mostly on disk

This time, we are creating a 50-gigabyte database on one SSD and on one HDD, each configured in its own dedicated ZFS pool. Memory usage will be sliced the same way as in test 1, but will not be able to cache more than 50% of the entire database. As a result, we will become I/O bound before we become CPU bound. Please remember that the X6270 is equipped with two eight-thread X5560s - a very decent 16-way database server!

Here are the results:



The largest difference is observed at 12 threads, with more than twice the transactional throughput on the SSD. For response times (below), we observe the SSD to be 57% faster at peak and 59% faster at 8 threads.




In a nutshell

My intent for this test was to show you, for a classic Oracle lightweight OLTP workload,

the good news:

  • When I/O bound, we can replace two Seagate 10k RPM HDDs with one Intel/Sun SSD for similar throughput and twice-as-fast response times.

  • On a one-for-one basis, the response time difference by itself (up to 67%) will make your end users love you instantly!

  • Peak throughput in memory compared to the SSD is very close: at peak, we observed 821 TPS (24 ms RT) in memory and 685 TPS (30 ms RT) on the SSD. Very nice!


and the bad news:

  • When the workload is CPU bound, the impact of replacing your HDD with an SSD is moderate, while you lose a lot of capacity.

  • The cost per gigabyte needs to be carefully calculated to justify the investment. Ask your sales rep for more...


See you next time in the wonderful world of benchmarking....

Monday Aug 20, 2007

Sun SPARC Enterprise M9000 vs Sun Fire E25k - Datapoints

A performance comparison of two high-end UNIX servers using the iGen benchmark suite
[Read More]

Wednesday Nov 08, 2006

Unbreakable Oracle 10g Release 2: What if you have ORA-600 kcratr1_lastbwr?

This is an interesting story that happened yesterday at one of our customer sites. An engineer powered off the wrong rack of equipment, which contained a Sun Fire X4600 running Oracle 10g Release 2. Almost no transactions were being performed at the time, so when the system came up, the customer expected the database to be up and running very quickly.

In reality, this is what happened:

Completed: ALTER DATABASE   MOUNT
Tue Nov  7 11:19:42 2006
ALTER DATABASE OPEN
Tue Nov  7 11:19:42 2006
Beginning crash recovery of 1 threads
 parallel recovery started with 16 processes
Tue Nov  7 11:19:44 2006
Started redo scan
Tue Nov  7 11:19:44 2006
Errors in file /xxx/oracle/oracle/product/10.2.0/db_1/admin/xxx/udump/xxx_ora_947.trc:
ORA-00600: internal error code, arguments: [kcratr1_lastbwr], [], [], [], [], [], [], []
Tue Nov  7 11:19:44 2006
Aborting crash recovery due to error 600
Tue Nov  7 11:19:44 2006
Errors in file /xxx/oracle/oracle/product/10.2.0/db_1/admin/xxxtest/udump/xxxtest_ora_947.trc:
ORA-00600: internal error code, arguments: [kcratr1_lastbwr], [], [], [], [], [], [], []
ORA-600 signalled during: ALTER DATABASE OPEN...

Not too pretty! Checking the ASM configuration and the I/O subsystem showed nothing wrong. So what to do if you do not have a backup handy?

Well, here is the idea... what would we do if we had a backup that was inconsistent?
The recover database command starts an Oracle process that rolls forward all the transactions, stored in the restored archived logs, necessary to make the database consistent again. The recovery process must run up to a point that corresponds to the time just before the error occurred, after which the log sequence must be reset to prevent any further system changes from being applied to the database.

So we tried:

startup mount

ALTER DATABASE   MOUNT
Tue Nov  7 11:54:03 2006
Starting background process ASMB
ASMB started with pid=61, OS id=1070
Starting background process RBAL
RBAL started with pid=67, OS id=1074
Tue Nov  7 11:54:13 2006
SUCCESS: diskgroup xxxTESTDATA was mounted
Tue Nov  7 11:54:17 2006
Setting recovery target incarnation to 2
Tue Nov  7 11:54:17 2006
Successful mount of redo thread 1, with mount id 2364224219
Tue Nov  7 11:54:17 2006
Database mounted in Exclusive Mode
Completed: ALTER DATABASE   MOUNT
Tue Nov  7 11:54:32 2006

recover database


ALTER DATABASE RECOVER  database 
Tue Nov  7 11:54:32 2006
Media Recovery Start
 parallel recovery started with 16 processes
Tue Nov  7 11:54:33 2006
Recovery of Online Redo Log: Thread 1 Group 3 Seq 4 Reading mem 0
  Mem# 0 errs 0: +xxxTESTDATA/xxxtest/onlinelog/group_3.263.605819131
Tue Nov  7 11:59:25 2006
Media Recovery Complete (xxxtest)
Tue Nov  7 11:59:27 2006
Completed: ALTER DATABASE RECOVER  database 


alter database open

Tue Nov  7 12:03:01 2006
alter database open
Tue Nov  7 12:03:01 2006
Beginning crash recovery of 1 threads
 parallel recovery started with 16 processes
Tue Nov  7 12:03:01 2006
Started redo scan
Tue Nov  7 12:03:01 2006
Completed redo scan
 273 redo blocks read, 0 data blocks need recovery
Tue Nov  7 12:03:01 2006
Started redo application at
 Thread 1: logseq 4, block 12858574
Tue Nov  7 12:03:01 2006
Recovery of Online Redo Log: Thread 1 Group 3 Seq 4 Reading mem 0
  Mem# 0 errs 0: +xxxTESTDATA/xxxtest/onlinelog/group_3.263.605819131
Tue Nov  7 12:03:01 2006
Completed redo application
Tue Nov  7 12:03:01 2006
Completed crash recovery at
 Thread 1: logseq 4, block 12858847, scn 824040
 0 data blocks read, 0 data blocks written, 273 redo blocks read
Tue Nov  7 12:03:02 2006
Thread 1 advanced to log sequence 5
Thread 1 opened at log sequence 5
  Current log# 1 seq# 5 mem# 0: +xxxTESTDATA/xxxtest/onlinelog/group_1.261.605819081
Successful open of redo thread 1
Tue Nov  7 12:03:02 2006
MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
Tue Nov  7 12:03:02 2006
SMON: enabling cache recovery
Tue Nov  7 12:03:03 2006
Successfully onlined Undo Tablespace 1.
Tue Nov  7 12:03:03 2006
SMON: enabling tx recovery
Tue Nov  7 12:03:03 2006
Database Characterset is UTF8
replication_dependency_tracking turned off (no async multimaster replication found)
Starting background process QMNC
QMNC started with pid=56, OS id=1128
Tue Nov  7 12:03:05 2006
Completed: alter database open



And we are up and running! The real thing that Oracle should work on is the quality and clarity of their error messages. At this point, they are quite poor...

Unbreakable database, maybe. Automatic (and simple), not yet.
