Running Sysbench Benchmark on MySQL using Solid State Drives and ext3 on Linux
By blueprints on Mar 05, 2009
By Roger Bitar, Systems Technical Marketing
As a follow-up to the previous Sysbench benchmark that ran on Solaris UFS, we re-ran the benchmark on Red Hat Enterprise Linux release 5.2 using the ext3 filesystem on the same setup and configuration. Following Allan Packer's recommendation for tuning MySQL on Linux, we added the following parameter to the initial MySQL configuration file, my.cnf:
innodb_flush_method = O_DIRECT
This parameter causes InnoDB to bypass the filesystem cache, avoiding double buffering. It is similar to the forcedirectio mount option on Solaris UFS.
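Putting this together, a minimal my.cnf fragment for such a run might look like the sketch below; the buffer pool value is illustrative, taken from one of the sizes tested, since the benchmark varied it across runs.

```ini
[mysqld]
# Bypass the ext3 page cache and avoid double buffering
innodb_flush_method = O_DIRECT
# Varied across benchmark runs; 24G shown here as one tested value
innodb_buffer_pool_size = 24G
```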
We also used the noop I/O scheduler, as it provided the best results:
# echo noop > /sys/block/sde/queue/scheduler
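For readers who want to reproduce a similar run, a typical sysbench 0.4-era OLTP read-only invocation is sketched below. The table size, thread count, run time, and credentials are illustrative assumptions, not the exact parameters used in this benchmark.

```shell
# Prepare the test table (table size is an illustrative assumption)
sysbench --test=oltp --mysql-user=sbtest --mysql-db=sbtest \
         --mysql-table-engine=innodb --oltp-table-size=10000000 prepare

# Run the read-only workload with 16 client threads (illustrative)
sysbench --test=oltp --mysql-user=sbtest --mysql-db=sbtest \
         --oltp-read-only=on --num-threads=16 \
         --max-time=300 --max-requests=0 run
```

Both commands assume a running MySQL server with a `sbtest` database and matching credentials already in place.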
Results on RHEL 5.2
The following TPS (transactions per second) results were obtained for read-only operations:
The following latency results were obtained, smaller is better:
- SSDs demonstrated a significant advantage (up to 8x) for this read-only workload in memory-constrained environments, i.e., with smaller innodb_buffer_pool_size settings.
- SSDs can achieve around 100% of the performance of an almost fully cached database. This is evident with a buffer pool size of 24GB (about 90% of the database). In environments where most I/Os are satisfied from disk rather than system memory, SSDs should be able to sustain about the same throughput as a cached configuration.
- Database transaction latency is much better (about 14x lower) when using SSDs compared to HDDs.
- The best results with this type of workload are obtained on regular disks along with ample main memory. SSDs come a close second, even when main memory is severely constrained. Throughput is significantly worse when regular disks are combined with insufficient buffer memory.