Tuning MySQL on Linux

In this blog I'm sharing the results of a series of tests designed to explore the impact of various MySQL and, in particular, InnoDB tunables. Performance engineers from Sun have previously blogged on this subject - the main difference in this case is that these latest tests were based on Linux rather than Solaris.

It's worth noting that MySQL throughput doesn't scale linearly as you add large numbers of CPUs. This hasn't been a big issue for most users, since there are ways of deploying MySQL successfully on systems with only modest CPU counts. Technologies that are readily available and widely deployed include replication, which allows horizontal scale-out using query slaves, and memcached, which is very effective at reducing the load on a MySQL server. That said, scalability is likely to become more important as people increasingly deploy systems with quad-core processors, with the result that even two-processor systems will need to scale eight ways to fully utilize the available CPU resources.

The obvious question is whether performance and scalability are going to attract the attention of a joint project involving the performance engineering groups at MySQL and Sun. You bet! Fruitful synergies should be possible as the two companies join forces. And in case you're wondering, Linux will be a major focus, not just Solaris - regard this blog as a small foretaste. Stay tuned in the months to come...

Test Details

On to the numbers. The tests were run on a Sun Fire X4150 server with two quad-core Intel Xeon processors (8 cores in total) and a Sun Fire X4450 server with four quad-core Intel Xeon processors (16 cores in total) running Red Hat Enterprise Linux 5.1. The workload was Sysbench with 10 million rows, representing a database about 2.5Gbytes in size, using the current 64-bit Community version of MySQL, 5.0.51a. My colleague Neel has blogged on the workload and how we used it. The graphs below do not list throughput values, since the goal was only to show relative performance improvements.

The first test varied innodb_thread_concurrency. In MySQL 5.0.7 and earlier, a value greater than 500 was required to allocate an unlimited number of threads. As of MySQL 5.0.18, a value of zero means unlimited threads. In the graph below, a value of zero clearly delivers better throughput beyond 4 threads for the read-only test.
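For reference, here is how this setting might look in a my.cnf fragment. This is an illustrative sketch, not a recommendation - the file path and values are examples only, and the right value depends on your workload:

```ini
# /etc/my.cnf (illustrative fragment)
[mysqld]
# 0 = let InnoDB run an unlimited number of threads concurrently
# (interpretation valid as of MySQL 5.0.18); in these tests this
# gave the best read-only throughput beyond 4 client threads.
innodb_thread_concurrency = 0
```

Because innodb_thread_concurrency is a dynamic variable, it can also be changed at runtime while experimenting, e.g. `SET GLOBAL innodb_thread_concurrency = 8;`, which avoids a server restart between test runs.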

The read-write test, however, benefits from a setting of 8 threads. These graphs show the throughput on the 8-core system, although both the 8- and the 16-core systems showed similar behavior for each of the read-only and the read-write tests.

The following graphs show the effect of increasing the InnoDB buffer cache with the innodb_buffer_pool_size parameter. The first graph shows read-only performance and the second shows read-write performance. As you would expect, throughput increases significantly as the cache increases in size, but eventually reaches a point where no benefit is derived from further increases.
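A corresponding my.cnf sketch is below. The 2048M figure matches the largest cache size tested against the roughly 2.5 Gbyte Sysbench database used here; on your own system, size the buffer cache against your working set and available RAM:

```ini
[mysqld]
# Large enough to hold most of the ~2.5 GB test database in memory;
# beyond this point, further increases showed no benefit in our tests.
innodb_buffer_pool_size = 2048M
```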

Finally, we've seen that throughput is affected by the amount of memory we assign to the InnoDB buffer cache. But since the default Linux file system, ext3, also caches pages, why not let Linux do the caching rather than InnoDB? To test this, we compared throughput with and without Linux file system caching. Setting the innodb_flush_method parameter to O_DIRECT causes MySQL to bypass the file system cache. The results are shown in the graph below.

Clearly the file system cache makes a difference: throughput with the InnoDB buffer cache set to 1024 Mbytes, supported by the file system cache, is as good as throughput with no file system caching and the InnoDB buffer cache set to 2048 Mbytes. But while the Linux file system cache can help protect you somewhat if you undersize your InnoDB buffer cache, for optimal performance it's important to give the InnoDB buffer cache as much memory as it needs. Bypassing the Linux file system cache may not be a good idea unless you have properly sized the InnoDB buffer cache - disk read activity was very high when the buffer cache was too small and the file system cache was being bypassed. We also found that the CPU cost per transaction was higher when the InnoDB buffer cache was too small. That's not surprising, since the code path is longer when MySQL has to go outside the buffer cache to retrieve a block.
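The combination described above can be sketched as a my.cnf fragment. Again the values are illustrative; the key point is that O_DIRECT should only be paired with a buffer cache large enough for the working set:

```ini
[mysqld]
# Bypass the ext3 page cache for InnoDB data files. Only sensible
# once the InnoDB buffer cache is sized to hold the working set -
# an undersized cache with O_DIRECT caused heavy disk reads here.
innodb_flush_method = O_DIRECT
innodb_buffer_pool_size = 2048M
```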

We tested a number of other parameters but found that none were as significant for this workload.

So to summarize, two key parameters to focus on are innodb_buffer_pool_size and innodb_thread_concurrency. Appropriate settings for these parameters are likely to help you ensure optimal throughput from your MySQL server.



Is there not a more interesting observation from these graphs? It seems performance does not scale anywhere near linearly up to 8 threads and seems not to benefit at all from 8 to 16 threads, even on a 16-core machine. The drop above 8 threads is also pretty precipitous in some of these graphs. Obviously context switching between extra threads has an overhead, but some of these graphs show the situation is dire.

I could be misreading these graphs though as the set of innodb_buffer_pool graphs don't seem to jibe with the first set of graphs. Surely at least one pair of curves should match the curve generated in the first two graphs?

Posted by Gerg on February 26, 2008 at 04:43 AM PST #

Hi Gerg,

Yes, I noted at the beginning of the article that MySQL doesn't scale linearly to large numbers of CPUs.

The graphs are from both tested systems, not just the same one, so they aren't always directly comparable.


Posted by Allan Packer on March 02, 2008 at 08:47 AM PST #


I'm a Principal Engineer in the Performance Technologies group at Sun. My current role is team lead for the MySQL Performance & Scalability Project.
