Thursday Apr 09, 2009

Testing the New Pool-of-Threads Scheduler in MySQL 6.0, Part 2

In my last blog, I introduced my investigation of the "Pool-of-Threads" scheduler in MySQL 6.0. Read on to see where I went next.

I now want to take a different approach to comparing the two schedulers. It is one thing to compare how they perform "flat out" - with a transaction request rate limited only by the maximum throughput of the system under test. Instead, I would like to see how the two schedulers compare when I drive mysqld at a constant transaction rate and vary only the number of connections over which the requests arrive. I will aim for a transaction rate that puts CPU utilization somewhere in the 40-60% range.
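
If your sysbench build supports arrival-rate throttling (the --rate option found in sysbench 1.0), this kind of experiment can be expressed directly. The sketch below is illustrative only - the host name, target rate and connection counts are placeholders, not my actual test parameters:

    # Hold the aggregate transaction arrival rate fixed while varying
    # only the number of client connections. All values are placeholders.
    for conns in 32 128 512 2048; do
        sysbench oltp_read_only --mysql-host=dbserver --mysql-user=sbtest \
            --rate=1000 --threads=$conns --time=300 run
    done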

This is more like how real businesses use MySQL every day, as opposed to the type of benchmarking that computer companies usually engage in. This will also allow me to look at how the schedulers run at much higher connection counts - which is where the pool-of-threads scheduler is supposed to shine.

Now, I will let you all know that I first conducted my experiments with mysqld and the load generator (sysbench) on the same system. I was again not sure this would be the best methodology, primarily because one operating system instance would end up scheduling, in some cases, a very large number of sysbench threads alongside the mysqld threads.

It turned out the results from this mode threw up some issues (like not being able to get my desired throughput with 2048 connections in pool-of-threads mode), so I repeated my experiments - in the second set of results, the load generation comes from two remote systems, each with a dedicated 1 Gbit Ethernet link to the DB server.

The CPU utilization I have captured was just the %USR plus %SYS for the mysqld process. This makes the two sets of metrics comparable.
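
On Solaris, this is easy to capture with microstate accounting via prstat; something like the following (the sampling interval is arbitrary):

    # Per-process microstate accounting: the USR and SYS columns are the
    # percentages I am summing for the mysqld process.
    prstat -m -p $(pgrep -x mysqld) 5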

Here are my results. First for experiments where sysbench ran on the same host as mysqld:

Then for experiments where sysbench ran on two remote hosts, each with a dedicated Gigabit Ethernet link to the database server:

As you can see, the pool-of-threads model does incur an overhead, both in CPU consumption and response time, at low connection counts. As hoped, though, the advantage swings in pool-of-threads' favour at higher connection counts. This is particularly noticeable in the case where our clients are remote. It is arguable that an architecture involving many hundreds or thousands of client connections is more likely to have those clients located remotely from the DB server.

Now, the first issue I have is that while pool-of-threads starts to win on response time, the response time is still increasing in a similar fashion to thread-per-connection's response time (note - the scale is logarithmic). This is not what I expected, so we have a scalability problem in there somewhere.

The second issue is where I have to confess - I only got one "lucky" run where my target transaction rate was achieved for pool-of-threads at 2048 connections. For many other runs, the target rate could not be achieved, as these raw numbers show:

connections    tps        mysqld %usr   mysqld %sys   mysqld %cpu   avg-resp (ms)   95%-resp (ms)
2048            962.22    25.23         14.93         40.16         1943.78         2368.78
2048           1197.00    30.59         11.20         41.79          317.98          435.19
2048            836.50    21.98         11.09         33.07         2259.36         2287.03
2048            963.00    26.49         12.07         38.56         1333.67         1128.93
2048            992.25    25.81         15.08         40.89         1851.17         2280.50
2048            915.71    24.16         15.05         39.21         2220.45         2342.06
2048            919.54    24.25         15.05         39.30         2210.95         2331.45
2048            917.09    24.15         15.05         39.20         2217.86         2321.40
2048            875.09    23.20         13.29         36.49         2188.69         2344.91
2048           1180.62    31.35         14.57         45.92         1439.96         1772.86
2048           1185.80    30.74         14.24         44.98         1185.71         1814.24
2048           1146.90    30.34         15.23         45.57         1602.85         1842.14
2048           1141.47    30.20         15.22         45.42         1612.34         1873.95
2048           1158.74    30.47         12.99         43.46          999.76         1870.35
2048           1177.59    30.67         14.97         45.64         1403.22         1838.84

This indicates we have some sort of bottleneck right at or around the 2048 thread point. This is not what we want with pool-of-threads, so I will continue my investigation.

Wednesday Apr 08, 2009

Testing the New Pool-of-Threads Scheduler in MySQL 6.0

I have recently been investigating a new feature of MySQL 6.0 - the "Pool-of-Threads" scheduler. This feature is a fairly significant change to the way MySQL completes tasks given to it by database clients.

To begin with, be advised that the MySQL database is implemented as a single multi-threaded process. The conventional threading model is that there are a number of "internal" threads doing administrative work (including accepting connections from clients wanting to connect to the database), then one thread for each database connection. That thread is responsible for all communication with that database client connection, and performs the bulk of database operations on behalf of the client.
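
You can watch this model at work from the server side - with thread-per-connection, the thread count tracks the connection count one-for-one:

    # Each client connection gets its own dedicated server thread, so
    # Threads_connected rises and falls with the number of clients.
    mysql -e "SHOW GLOBAL STATUS LIKE 'Threads%'"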

This architecture exists in other RDBMS implementations. Another common implementation is a collection of processes all cooperating via a region of shared memory, usually with semaphores or other synchronization objects located in that shared memory.

The creation and management of threads can be said to be cheap, in a relative sense - it is usually significantly cheaper to create or destroy a thread than a process. However, these overheads are still not free. Also, the operations involved in scheduling a thread as opposed to a process are not significantly different: a single operating system instance scheduling several thousand threads on and off the CPUs is not doing much less work than one scheduling several thousand processes.

Pool-of-Threads

The theory behind the Pool-of-Threads scheduler is to provide an operating mode which supports a large number of clients that will be maintaining their connections to the database, but will not be sending a constant stream of requests to the database. To support this, the database will maintain a (relatively) small pool of worker threads that take a single request from a client, complete the request, return the results, then return to the pool and wait for another request, which can come from any client. The database's internal threads still exist and operate in the same manner.
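
Switching schedulers is a server configuration choice. In MySQL 6.0 it looks something like this (the pool size shown is only an illustrative value):

    # my.cnf (excerpt)
    [mysqld]
    # Replace the default one-thread-per-connection scheduler
    thread_handling  = pool-of-threads
    # Number of worker threads in the pool (illustrative value)
    thread_pool_size = 32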

In theory, this should mean less work for the operating system to schedule threads that want CPU. On the other hand, it should mean some more overhead for the database, as each worker thread needs to restore the context of a database connection prior to working on each client request.

A smaller pool of threads should also consume less memory, as each thread requires a minimum amount of memory for a thread stack, before we add what is needed to store things like a connection context, or working space to process a request.
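
The arithmetic is simple: assuming the common 64-bit default thread_stack of 256 KB, 2048 dedicated connection threads reserve 512 MB for stacks alone, while a pool of 32 workers reserves just 8 MB. You can check the value on your own server:

    # The stack reserved per thread; 256 KB is a common 64-bit default.
    mysql -e "SHOW GLOBAL VARIABLES LIKE 'thread_stack'"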

You can read more about the different threading models in the MySQL 6.0 Reference Manual.

Testing the Theory

Mark Callaghan of Google has recently had a look at whether this theory holds true. He has published his results under "No new global mutexes! (and how to make the thread/connection pool work)". Mark has identified (via this bug he logged) that the overhead for using Pool-of-Threads seems quite large - up to 63 percent.

So, my first task is to see if I get the same results. I will note here that I am using Solaris, whereas Mark was no doubt using a Linux distro. We probably have different hardware as well (although both are Intel x86).
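
For those playing along at home, my runs were of this general shape (sysbench 0.4 OLTP syntax; the table size and thread count are illustrative, not my exact parameters):

    # Create and populate the test table, then run the read-only workload.
    sysbench --test=oltp --mysql-user=sbtest --mysql-db=sbtest \
        --oltp-table-size=1000000 prepare
    sysbench --test=oltp --oltp-read-only=on --max-time=300 --max-requests=0 \
        --mysql-user=sbtest --mysql-db=sbtest --num-threads=32 run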

Here is what I found when running sysbench read-only (with the sysbench clients on the same host). The "conventional" scheduler inside MySQL is known as the "Thread-per-Connection" scheduler, by the way.

This is in contrast to Mark's results - I am only seeing a loss in throughput of up to 30%.

What about the bigger picture?

These results do show there is a definite reduction in maximum throughput if you use the pool-of-threads scheduler.

I believe it is worth looking at the bigger picture however. To do this, I am going to add in two more test cases:

  • sysbench read-only, with the sysbench client and MySQL database on separate hosts, via a 1 Gb network
  • sysbench read-write, via a 1 Gb network

What I want to see is what sort of impact the pool-of-threads scheduler has for a workload that I expect is still the more common one - where our database server is on a dedicated host, accessed via a network.
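
The remote variants change only the client-side flags; roughly (the host name is a placeholder):

    # Read-only over the gigabit link: point sysbench at the remote server.
    sysbench --test=oltp --oltp-read-only=on --mysql-host=dbserver \
        --mysql-user=sbtest --mysql-db=sbtest --num-threads=32 run

    # Read-write: drop the read-only flag so the mixed OLTP workload runs.
    sysbench --test=oltp --mysql-host=dbserver \
        --mysql-user=sbtest --mysql-db=sbtest --num-threads=32 run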

As you can see, the impact on throughput is far less significant when the client and server are separated by a network. This is because we have introduced network latency as a component of each transaction and increased the amount of work the server and client need to do - they now need to perform Ethernet driver, IP and TCP processing.

This reduces the relative overhead - in CPU consumed and latency - introduced by pool-of-threads.

This is a reminder that if you are conducting performance tests on a system prior to implementing or modifying your architecture, you would do well to choose a test architecture and workload as close as possible to the one you intend to deploy. The same is true if you are trying to extrapolate someone else's performance testing to your own architecture.

The Converse is Also True

On the other hand, if you are a developer or performance engineer testing a specific feature or code change, a micro-benchmark or simplified test is more likely to be what you need. Indeed, Mark's use of the "blackhole" storage engine is a good way to eliminate storage-engine processing from each transaction.
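
The blackhole engine accepts writes and discards them, and returns empty result sets for reads, so the storage-engine portion of each transaction drops out of the measurement; for example:

    # A blackhole table: inserts are discarded and selects return no rows,
    # leaving the server layer (parser, optimizer, scheduler) as the work.
    mysql -e "CREATE TABLE test.t_blackhole (
        id INT NOT NULL PRIMARY KEY,
        v  VARCHAR(32)
    ) ENGINE=BLACKHOLE"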

In this scenario, if you fail to make the portion of the software you have modified a significant part of the work being done, you run the risk of seeing performance results that are not significantly different, which may lead you to assume your change has negligible impact.

In my next posting, I will compare the two schedulers using a different perspective.
