Running Sysbench Benchmark on MySQL using Solid State Drives
By blueprints on Nov 04, 2008
By Roger Bitar, Systems Technical Marketing
Sun will soon introduce Solid State Drives (SSDs) to its lineup of systems. SSDs are bound to change the dynamics of the I/O subsystem. A traditional 15K rpm disk can perform around 150 random I/Os per second, whereas a single SSD should be able to perform up to 30,000 random reads per second while consuming a maximum of 3W, as rated by the manufacturer. In addition, SSDs provide faster access to data (on the order of microseconds), while traditional hard disk drives (HDDs) have access times thousands of times slower (on the order of milliseconds).
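A quick back-of-the-envelope comparison of the manufacturer-rated figures above (the numbers come from the text; the snippet just does the arithmetic):

```shell
# Rated random-read IOPS: one SSD vs one 15K rpm HDD (figures from the text above)
ssd_iops=30000
hdd_iops=150
echo "IOPS advantage: $((ssd_iops / hdd_iops))x"    # prints "IOPS advantage: 200x"
# Access time: microseconds vs milliseconds is roughly a 1000x gap on its own
```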
We chose to test MySQL, the open source database (DB), using a simple MySQL benchmark called Sysbench. We populated the Sysbench table with 114 million rows (around 27GB), which fit on a single SSD. We executed read-only queries while varying the buffer size. We mounted the file system in DIRECTIO mode to disable file system caching. We performed the tests with regular HDDs and repeated them with SSDs.
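The setup just described can be sketched roughly as follows; the device name, mount point, host name, and credentials are placeholders, not values from the original test:

```shell
# Mount the UFS file system with directio to bypass file system caching
# (device and mount point are illustrative)
mount -F ufs -o forcedirectio /dev/dsk/c1t1d0s0 /dbdata

# Populate the Sysbench OLTP table with 114 million rows (sysbench 0.4.x syntax)
sysbench --test=oltp --mysql-host=dbserver --mysql-user=sbtest \
         --mysql-password=secret --oltp-table-size=114000000 prepare

# Run the read-only workload
sysbench --test=oltp --mysql-host=dbserver --mysql-user=sbtest \
         --mysql-password=secret --oltp-table-size=114000000 \
         --oltp-read-only=on --num-threads=16 \
         --max-time=300 --max-requests=0 run
```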
We were interested in measuring the performance while varying the size of the cache available to the MySQL DB. The following innodb_buffer_pool_size values were used: 8GB, 16GB, and 24GB.
For the MySQL DB server, we used a SunFire X4150 system populated with two quad-core Intel Xeon X5355 processors running at 2.66GHz. The system was also configured with 32GB of RAM and 4 disk drives, one of which was a 30GB SSD.
The Sysbench benchmark ran on a SunFire X4440 system equipped with four quad-core AMD Opteron 8356 processors and 16GB of RAM.
We used OpenSolaris 2008.05 OS, MySQL 5.1.28 (64-bit release), and Sysbench 0.4.8.
We followed the guidelines posted in Neelakanth Nadgir's blog. The following parameters were used in the MySQL configuration file my.cnf:
sort_buffer_size = 32768
table_open_cache = 2048
innodb_buffer_pool_size = 8192M
innodb_additional_mem_pool_size = 20M
innodb_log_file_size = 400M
innodb_thread_concurrency = 0
innodb_log_buffer_size = 64M
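In MySQL 5.1, innodb_buffer_pool_size is not dynamic, so moving between the 8GB, 16GB, and 24GB configurations means editing my.cnf and restarting the server. A rough sketch for OpenSolaris, where the SMF service name is an assumption:

```shell
# Edit innodb_buffer_pool_size in my.cnf (8192M, 16384M, or 24576M), then:
svcadm restart mysql            # SMF service name may differ on your install

# Confirm the value the running server actually picked up (reported in bytes)
mysql -u root -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size'"
```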
Results on Solaris UFS
The following transactions per second (TPS) results were obtained for read-only operations when mounting the file system with the forcedirectio option:
The following latency results were obtained, smaller is better:
- SSDs demonstrated a significant advantage (up to 7.25x) for this read-only workload in memory-constrained environments, that is, when using smaller innodb_buffer_pool_size settings.
- SSDs can achieve around 95% of the performance of an almost fully cached DB. This is evident when we used a buffer size of 24GB (or about 90% of the DB size). That means that in environments where most I/Os are satisfied from disk rather than system memory, SSDs should be capable of sustaining about the same throughput.
- Database transaction latency is much lower (by up to 65x) when using SSDs compared to HDDs.
- The best results with this type of workload are obtained on regular disks along with ample main memory. SSDs come a close second, even when main memory is severely constrained. Throughput is significantly worse when regular disks are combined with insufficient buffer memory.