Thursday Sep 17, 2009

querystat - DTrace script to monitor your queries, query cache and server thread pre-emption

I was recently helping some colleagues check what was happening with their MySQL queries, and wrote a DTrace script to do it. Time to share that script.

First of all, a look at some output from the script:

mashie[bash]# ./querystat.d -p `pgrep mysqld`
Tracing started at 2009 Sep 17 16:28:35
2009 Sep 17 16:28:38   throughput 3 queries/sec
2009 Sep 17 16:28:41   throughput 4 queries/sec
2009 Sep 17 16:28:44   throughput 528 queries/sec
2009 Sep 17 16:28:47   throughput 1603 queries/sec
2009 Sep 17 16:28:50   throughput 1676 queries/sec
Tracing ended   at 2009 Sep 17 16:28:51
Average latency, all queries: 107 us
Latency distribution, all queries (us): 
           value  ------------- Distribution ------------- count    
              16 |                                         0        
              32 |@@                                       170      
              64 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@        3728     
             128 |@@@@@                                    533      
             256 |                                         26       
             512 |                                         18       
            1024 |                                         2        
            2048 |                                         1        
            4096 |                                         0        
            8192 |                                         1        
           16384 |                                         1        
           32768 |                                         0        
Query cache statistics:
    count             hit: 6
    count            miss: 4474
    avg latency      miss: 107 (us)
    avg latency       hit: 407 (us)
Latency distribution, for query cache hit (us): 
           value  ------------- Distribution ------------- count    
              64 |                                         0        
             128 |@@@@@@@@@@@@@                            2        
             256 |@@@@@@@                                  1        
             512 |@@@@@@@@@@@@@@@@@@@@                     3        
            1024 |                                         0        
Latency distribution, for query cache miss (us): 
           value  ------------- Distribution ------------- count    
              16 |                                         0        
              32 |@@                                       170      
              64 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@        3728     
             128 |@@@@@                                    531      
             256 |                                         25       
             512 |                                         15       
            1024 |                                         2        
            2048 |                                         1        
            4096 |                                         0        
            8192 |                                         1        
           16384 |                                         1        
           32768 |                                         0        
Average latency when query WAS NOT pre-empted: 73 us
Average latency when query     WAS pre-empted: 127 us
   mysql                                     6
   Xorg                                     18
   sched                                    25
   firefox-bin                              44
   sysbench                               3095

You can see that while the script is running (prior to pressing <Ctrl>-C), we get a throughput count every 3 seconds.

Then we get some totals, some averages, and even some distribution histograms, covering all queries, then with breakdowns on whether we used the query cache, and whether the thread executing the query was pre-empted.

This may be useful for determining things like:

  • Do I have some queries in my workload that consume a lot more CPU than others?
  • Is the query cache helping or hurting?
  • Are my database server threads being pre-empted (kicked off the CPU) by (an)other process(es)?

Things have become easier since I first tried this, when I had to use the PID provider to trace functions in the database server.

If you want to try my DTrace script, get it from here. NOTE: You will need a version of MySQL with DTrace probes for it to work.

Tuesday Apr 21, 2009

Expanding Google's InnoDB Synchronization Improvements to Solaris

There is much excitement today at the launch of MySQL 5.4, so I will relate the story of a project I contributed to that went into this new version.

When we started looking at performance improvements for MySQL, we were interested in "low hanging fruit", or fixes and changes that could reap measurable benefits for users in the short term.

An obvious candidate at that time was the now well-known Google SMP patch. I had seen Mark Callaghan present on this at the MySQL User Conference in 2008, and was interested to investigate.

I was pretty new to InnoDB at that time, and soon discovered that it was possibly experiencing poor scalability around its mutexes and read-write locks. InnoDB had a private implementation of adaptive mutexes and read-write locks, and this was probably not the best implementation on all, or even most, of the platforms MySQL is available on.

Now InnoDB's "private" mutexes and rw-locks were a good way to get spin-locks on all platforms, which may be a win in many cases, but as the Google team had demonstrated, it could be improved on. Indeed, I knew that adaptive spin-locks are available on Solaris, and they offer an extra advantage - if the holder of a lock is found to be off CPU, we don't bother spinning, but instead put the thread wanting the lock straight to sleep.

So, I decided to undertake a couple of performance studies of InnoDB's locking, being:

  1. Apply the Google SMP patch to MySQL 5.1 and test
  2. Modify InnoDB in 5.1 to use POSIX mutexes and RW-locks and test

The second step turned out to be quite complicated. I could not change all of InnoDB's RW-locks to POSIX ones, as the InnoDB synchronization objects offer functionality not available via POSIX. It also meant we would be diverging more significantly from the InnoDB in 5.1, so this option - although it looked promising - was shelved.

This left the Google SMP patch. It also looked promising. It was a less dramatic change, and offered scaling benefits in all the testing I did.

There was one last snag though - the mutex and RW-lock improvements in the Google SMP patch would only be applied if you were building on x86/x64 with GCC 4.1 or later, as they relied on GCC's atomic built-ins.

You can consider that MySQL supports a two-dimensional matrix of platforms: a compiler and an operating system. To make a feature portable across this matrix, you need to find a portable API, write code that is portable, or write code that chooses between different APIs depending on what is available.

Now we definitely wanted to get a similar benefit for InnoDB on SPARC, and not necessarily just with GCC. In any case, GCC did not offer all of the built-in atomics for SPARC at the time. Happily, there are atomic functions available in Solaris that fit the job fine. MySQL 5.4 uses the functions if you build on Solaris without a version of GCC that supports built-in atomics.

Just so you understand though, here is (a simplified version of) what happens when you build MySQL 5.4 on your chosen platform with your chosen compiler:

  • IF (compiler has GCC built-in atomics)
    use GCC built-in atomics
  • ELSE IF (OS has atomic functions)
    use atomic functions
  • ELSE
    use traditional InnoDB synchronization objects, based on pthread_mutex*.


As Neel points out in his blog, it was an exercise we learnt something from, even if we did develop functionality that will not be used. The important thing is we know we have improved the performance of MySQL, by extending the Google SMP improvements to all Solaris users, regardless of chosen compiler.

Thursday Apr 09, 2009

Testing the New Pool-of-Threads Scheduler in MySQL 6.0, Part 2

In my last blog, I introduced my investigation of the "Pool-of-Threads" scheduler in MySQL 6.0. Read on to see where I went next.

I now want to take a different approach to comparing the two schedulers. It is one thing to compare how the schedulers work "flat out" - with a transaction request rate that is limited only by the maximum throughput of the system under test. I would like to instead look at how the two schedulers compare when I drive mysqld at a consistent transaction rate, then vary only the number of connections over which the transaction requests are arriving. I will aim to come up with a transaction rate that sees CPU utilization somewhere in the 40-60% range.

This is more like how real businesses use MySQL every day, as opposed to the type of benchmarking that computer companies usually engage in. This will also allow me to look at how the schedulers run at much higher connection counts - which is where the pool-of-threads scheduler is supposed to shine.

Now, I will let you all know that I first conducted my experiments with mysqld and the load generator (sysbench) on the same system. I was again not sure this would be the best methodology, primarily because I would end up having one operating system instance scheduling, in some cases, a very large number of sysbench threads along with the mysqld threads.

It turned out the results from this mode threw up some issues (like not being able to get my desired throughput with 2048 connections in pool-of-threads mode), so I repeated my experiments - the second set of results have the load generation coming from two remote systems, each with a dedicated 1 Gbit ethernet link to the DB server.

The CPU utilization I have captured was just the %USR plus %SYS for the mysqld process. This makes the two sets of metrics comparable.

Here are my results. First for experiments where sysbench ran on the same host as mysqld:

Then for experiments where sysbench ran on two remote hosts, each with a dedicated Gigabit Ethernet link to the database server:

As you can see, the pool-of-threads model does incur an overhead, both in terms of CPU consumption and response time, at low connections counts. As hoped though, the advantage swings in pool-of-threads' favour. This is particularly noticeable in the case where our clients are remote. It is arguable that an architecture involving many hundreds or thousands of client connections is more likely to have those clients located remote from the DB server.

Now, the first issue I have is that while pool-of-threads starts to win on response time, the response time is still increasing in a similar fashion to thread-per-connection's response time (note - the scale is logarithmic). This is not what I expected, so we have a scalability problem in there somewhere.

The second issue is where I have to confess - I only got one "lucky" run where my target transaction rate was achieved for pool-of-threads at 2048 connections. For many other runs, the target rate could not be achieved, as these raw numbers show:

[table: connections, tps, mysqld avg-resp, 95%-resp]

This indicates we have some sort of bottleneck right at or around the 2048 thread point. This is not what we want with pool-of-threads, so I will continue my investigation.

Wednesday Apr 08, 2009

Testing the New Pool-of-Threads Scheduler in MySQL 6.0

I have recently been investigating a new feature of MySQL 6.0 - the "Pool-of-Threads" scheduler. This feature is a fairly significant change to the way MySQL completes tasks given to it by database clients.

To begin with, be advised that the MySQL database is implemented as a single multi-threaded process. The conventional threading model is that there are a number of "internal" threads doing administrative work (including accepting connections from clients wanting to connect to the database), then one thread for each database connection. That thread is responsible for all communication with that database client connection, and performs the bulk of database operations on behalf of the client.

This architecture exists in other RDBMS implementations. Another common implementation is a collection of processes all cooperating via a region of shared memory, usually with semaphores or other synchronization objects located in that shared memory.

The creation and management of threads can be said to be cheap, in a relative sense - it is usually significantly cheaper to create or destroy a thread than a process. However, these overheads do not come for free. Also, the operations involved in scheduling a thread as opposed to a process are not significantly different: a single operating system instance scheduling several thousand threads on and off the CPUs is not much less work than one scheduling several thousand processes doing the same work.


The theory behind the Pool-of-Threads scheduler is to provide an operating mode which supports a large number of clients that will be maintaining their connections to the database, but will not be sending a constant stream of requests to the database. To support this, the database will maintain a (relatively) small pool of worker threads that take a single request from a client, complete the request, return the results, then return to the pool and wait for another request, which can come from any client. The database's internal threads still exist and operate in the same manner.

In theory, this should mean less work for the operating system to schedule threads that want CPU. On the other hand, it should mean some more overhead for the database, as each worker thread needs to restore the context of a database connection prior to working on each client request.

A smaller pool of threads should also consume less memory, as each thread requires a minimum amount of memory for a thread stack, before we add what is needed to store things like a connection context, or working space to process a request.

You can read more about the different threading models in the MySQL 6.0 Reference Manual.

Testing the Theory

Mark Callaghan of Google has recently had a look at whether this theory holds true. He has published his results under "No new global mutexes! (and how to make the thread/connection pool work)". Mark has identified (via this bug he logged) that the overhead for using Pool-of-Threads seems quite large - up to 63 percent.

So, my first task is to see whether I get the same results. I will note here that I am using Solaris, whereas Mark was no doubt using a Linux distro. We probably have different hardware as well (although both are Intel x86).

Here is what I found when running sysbench read-only (with the sysbench clients on the same host). The "conventional" scheduler inside MySQL is known as the "Thread-per-Connection" scheduler, by the way.

This is in contrast to Mark's results - I am only seeing a loss in throughput of up to 30%.

What about the bigger picture?

These results do show there is a definite reduction in maximum throughput if you use the pool-of-threads scheduler.

I believe it is worth looking at the bigger picture however. To do this, I am going to add in two more test cases:

  • sysbench read-only, with the sysbench client and MySQL database on separate hosts, via a 1 Gb network
  • sysbench read-write, via a 1 Gb network

What I want to see is what sort of impact the pool-of-threads scheduler has for a workload that I expect is still the more common one - where our database server is on a dedicated host, accessed via a network.

As you can see, the impact on throughput is far less significant when the client and server are separated by a network. This is because we have introduced network latency as a component of each transaction and increased the amount of work the server and client need to do - they now need to perform Ethernet driver, IP and TCP tasks.

This reduces the relative overhead - in CPU consumed and latency - introduced by pool-of-threads.

This is a reminder that if you are conducting performance tests on a system prior to implementing or modifying your architecture, you would do well to choose a test architecture and workload that are as close as possible to those you intend to deploy. The same is true if you are trying to extrapolate performance testing someone else has done to your own architecture.

The Converse is Also True

On the other hand, if you are a developer or performance engineer conducting testing in order to test a specific feature or code change, a micro-benchmark or simplified test is more likely to be what you need. Indeed, Mark's use of the "blackhole" storage engine is a good idea to eliminate that processing from each transaction.

In this scenario, if you fail to make the portion of the software you have modified a significant part of the work being done, you run the risk of seeing performance results that are not significantly different, which may lead you to assume your change has negligible impact.

In my next posting, I will compare the two schedulers using a different perspective.

Wednesday Dec 17, 2008

MySQL 5.1 Memory Allocator Bake-Off

After getting sysbench running properly with a scalable memory allocator (see last post), I can now return to what I was originally testing - what memory allocator is best for the 5.1 server (mysqld).

This stems out of studies I have made of some patches that have been released by Google. You can read about the work Google has been doing here.

I decided I wanted to test a number of configurations based on the MySQL community source, 5.1.28-rc, namely:

  • The baseline - no Google SMP patch, default memory allocator (5.1.28-rc)
  • With Google SMP patch, mem0pool enabled, no custom malloc (pool)
  • With Google SMP patch, mem0pool enabled, linked with mtmalloc (pool-mtmalloc)
  • With Google SMP patch, mem0pool disabled, linked with tcmalloc (TCMalloc)
  • With Google SMP patch, mem0pool disabled, linked with umem (umem)
  • With Google SMP patch, mem0pool disabled, linked with mtmalloc (mtmalloc)

Here are some definitions, by the way:

  • mem0pool: InnoDB's internal "memory pools" feature, found in mem0pool.c (NOTE: even if this is enabled, other parts of the server will not use this memory allocator - they will use whatever allocator is linked with mysqld)
  • tcmalloc: the TCMalloc allocator that is built from google-perftools-0.99.2
  • Hoard: the Hoard memory allocator, version 3.7.1
  • umem: the libumem library (included with Solaris)
  • mtmalloc: the mtmalloc library (included with Solaris)

My test setup was a 16-CPU Intel system, running Solaris Nevada build 100. I chose to use only an x86 platform, as I was not able to build tcmalloc on SPARC. I also chose to run with the database in TMPFS, and with an InnoDB buffer pool smaller than the database size. This was to ensure that we would be CPU-bound if possible, rather than slowed by I/O.

Where I needed to build a package (there was no need for mtmalloc or umem), I used GCC 4.3.1, except for Hoard, which seemed to prefer the Sun Studio 11 C compiler (over Sun Studio 12 or GCC).

My test was a sysbench OLTP read-write run, of 10 minutes. Each series of runs at different thread counts is preceded by a database re-build and 20 minute warmup. Here are my throughput results for 1-32 SysBench threads, in transactions per second:

These results show that while the Google SMP changes are a benefit, the disabling of InnoDB's mem0pool does not seem to provide any further benefit for my configuration. My results also show that TCMalloc is not a good allocator for this workload on this platform, and Hoard is particularly bad, with significant negative scaling above 16 threads.

The remaining configurations are pretty similar, with mtmalloc and umem a little ahead at higher thread counts.

Before I get a ton of comments and e-mails, I would like to point out that I did some verification of my TCMalloc builds, as the results I got surprised me. I verified that it was using the supplied assembler for atomic routines, and I built it with optimization (-O3) and without.

I also discovered that TCMalloc was emitting this diagnostic when mysqld was starting up:

src/] uname failed assuming no TLS support (errno=0)

I rectified this with a change in the TCMalloc source, and called this configuration "TCMalloc -O3, TLS". It is shown against the other two configurations below.

I often like to have a look at what the CPU cost of different configurations are. This helps to demonstrate headroom, and whether different throughput results may be due to less efficient code or something else. The chart below lists what I found - note that this is system-wide CPU (user & system) utilization, and I was running my SysBench client on the same system.

Lastly, I did do one other comparison, which was to measure how much each memory allocator affected the virtual size of mysqld. I did not expect much difference, as the most significant consumer - the InnoDB buffer pool - should dominate with large long-lived allocations. This was indeed the case, and memory consumption grew little after the initial start-up of mysqld. The only allocator that then caused any noticeable change was mtmalloc, which for some reason made the heap grow by 35MB over a 5 minute run (it was originally 1430 MB).


Friday Dec 12, 2008

Scalability and Stability for SysBench on Solaris

My mind is playing "Suffering Succotash..."

I have been working on MySQL performance for a while now, and the team I am in have discovered that SysBench could do with a couple of tweaks for Solaris.

Sidebar - sysbench is a simple "OLTP" benchmark which can test multiple databases, including MySQL. Find out all about it here, but go to the download page to get the latest version.

To simulate multiple users sending requests to a database, sysbench uses multiple threads. This leads to two issues we have identified with SysBench on Solaris, namely:

  • The implementation of random() is explicitly identified as unsafe in multi-threaded applications on Solaris. My team has found this is a real issue, with occasional core dumps in our multi-threaded SysBench runs.
  • SysBench does quite a bit of memory allocation, and could do with a more scalable memory allocator.

Neither of these issues is necessarily relevant only to Solaris, by the way.

Luckily there are simple solutions. We can fix the random() issue by using lrand48() - in effect a drop-in replacement. Then we can fix the memory allocator by simply choosing to link with a better allocator on Solaris.

To help with a decision on memory allocator, I ran a few simple tests to check the performance of the two best-known scalable allocators available in Solaris. Here are the results ("libc" is the default memory allocator):


To see the differences more clearly, let's do a relative comparison, using "umem" (A.K.A. libumem) as the reference:

Relative Throughput

So - the default libc allocator delivers around 20% less throughput at 16 or 32 threads. There is very little difference at 1 thread, too (where the default memory allocator should be the one with the lowest synchronization overhead).

Where you see another big difference is CPU cost per transaction:

CPU Cost

I will just point out two other reasons why I would recommend libumem:

I have logged these two issues as sysbench bugs:

However, if you can't wait for the fixes to be released, try these:


Tim Cook's Weblog

The views expressed on this blog are my own and do not necessarily reflect the views of Oracle.

