Running Sysbench Benchmark on MySQL using Solid State Drives


By Roger Bitar, Systems Technical Marketing


Introduction


Sun will soon introduce Solid State Drives (SSDs) to its lineup of systems. SSDs are bound to change the dynamics of the I/O subsystem. A traditional 15K rpm disk can perform around 150 random I/Os per second, whereas a single SSD should be able to perform up to 30,000 random reads per second (roughly 200 times more) while consuming a maximum of 3W, as rated by the manufacturer. In addition, SSDs provide faster access to data, on the order of microseconds, while traditional hard disk drives (HDDs) have access times thousands of times slower, on the order of milliseconds.


Workload


We chose to test MySQL, the open source database (DB), using a simple MySQL benchmark called Sysbench. We populated the Sysbench table with 114 million rows (around 27GB), which fit on a single SSD. We executed read-only queries while varying the buffer size. We mounted the file system in DIRECTIO mode to disable file system caching. We performed the tests with regular HDDs and repeated them with SSDs.
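
The post does not list the exact Sysbench invocation, but with Sysbench 0.4 the prepare and read-only run phases would look roughly like the sketch below; the host name, credentials, thread count, and run length are assumptions, while the table size matches the 114 million rows mentioned above:

# populate the sbtest table (hypothetical host and credentials)
sysbench --test=oltp --mysql-host=dbhost --mysql-user=sbtest --mysql-password=sbtest \
         --mysql-table-engine=innodb --oltp-table-size=114000000 prepare

# run the read-only workload
sysbench --test=oltp --mysql-host=dbhost --mysql-user=sbtest --mysql-password=sbtest \
         --oltp-table-size=114000000 --oltp-read-only=on --num-threads=16 \
         --max-time=600 --max-requests=0 run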

We were interested in measuring the performance while varying the size of the cache available to the MySQL DB. The following innodb_buffer_pool_size values were used: 8GB, 16GB, and 24GB.
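
InnoDB in MySQL 5.1 cannot resize its buffer pool while the server is running, so each value presumably required editing my.cnf and restarting mysqld between runs; a minimal sketch, with an assumed configuration file path:

# in /etc/my.cnf, set the value for the next run, for example:
#   innodb_buffer_pool_size = 16384M
mysqladmin -u root -p shutdown
mysqld_safe --defaults-file=/etc/my.cnf &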


Hardware


For the MySQL DB server, we used a Sun Fire X4150 system with two quad-core Intel Xeon X5355 processors running at 2.66GHz. The system was also equipped with 32GB of RAM and four disk drives, one of which was a 30GB SSD.

The Sysbench benchmark ran on a Sun Fire X4440 system equipped with four quad-core AMD Opteron 8356 processors and 16GB of RAM.


Software


We used OpenSolaris 2008.05 OS, MySQL 5.1.28 (64-bit release), and Sysbench 0.4.8.


Tuning


We followed the guidelines posted in Neelakanth Nadgir's blog. The following parameters were used in the MySQL configuration file my.cnf:




sort_buffer_size = 32768
table_open_cache = 2048
innodb_buffer_pool_size = 8192M
innodb_additional_mem_pool_size = 20M
innodb_log_file_size = 400M
innodb_flush_log_at_trx_commit = 1
innodb_thread_concurrency = 0
innodb_log_buffer_size = 64M
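
The results below were collected with the file system mounted with the forcedirectio option to bypass the file system cache; a minimal sketch of such a UFS mount on Solaris, with a hypothetical device path and mount point:

# device path and mount point are placeholders
mount -F ufs -o forcedirectio /dev/dsk/c1t1d0s0 /dbdata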



Results on Solaris UFS


The following transactions per second (TPS) results were obtained for read-only operations when mounting the file system with the forcedirectio option:



The following latency results were obtained (smaller is better):




Conclusion



  1. SSDs demonstrated a significant advantage (up to 7.25x) for this read-only workload in memory-constrained environments, that is, when the smaller innodb_buffer_pool_size values were used.

  2. SSDs can achieve around 95% of the performance of an almost fully cached DB. This is evident with the 24GB buffer size (which caches about 90% of the DB). That means that in environments where most I/Os are satisfied from disk rather than from system memory, SSDs should be capable of sustaining about the same throughput.

  3. Database transaction latency is far lower (by a factor of 65) when using SSDs compared to HDDs.

  4. The best results with this type of workload are obtained on regular disks along with ample main memory. SSDs come a close second, even when main memory is severely constrained. Throughput is significantly worse when regular disks are combined with insufficient buffer memory.

Comments:

I think your graphs are the same. Did you mis-paste?

Posted by T on November 04, 2008 at 11:27 PM PST #

What kind of file systems were used? I saw UFS mentioned but no detail as to what was used in all tests. I would be interested in seeing SSD+zfs vs SSD+ufs. Are the SSDs in question MLC or SLC (not that there is much difference in reads between the two)?

Posted by brad on January 09, 2009 at 05:31 AM PST #

Brad,

For the systems, please check the Hardware section above. The SSDs used in these tests, as well as all the SSDs that will be made available from Sun, use SLC (single-level cell) flash because it is much more reliable and is rated for enterprise applications. Results of the tests using SSD+zfs can be found at: http://blogs.sun.com/blueprints/entry/running_sysbench_benchmark_on_mysql1

Posted by Roger Bitar on February 24, 2009 at 03:34 AM PST #

Hello,

Could you please give some more information about the test procedure?
i.e. What sysbench parameters did you use?
Was the test run over Ethernet or loopback?

I am also interested in the brand and model of the SSD disks you used.

Regards, Łukasz Osipiuk

Posted by Łukasz Osipiuk on March 24, 2009 at 09:09 PM PDT #

Sysbench (http://sysbench.sourceforge.net/docs/#database_mode) allows various tests to be run. Why was the read-only test chosen?

Not showing results from other tests skews the results. How about also including tests where there were 10% and 100% writes?

Strictly from a throughput perspective, one may draw the conclusion that a bigger buffer pool is the answer. Or do you want to point out that Flash is much less expensive than DRAM...

Posted by Darpan Dinker on April 06, 2009 at 10:40 AM PDT #
