InnoDB recovery gets even faster in Plugin 1.1, thanks to native AIO

Note: this article was originally published on http://blogs.innodb.com on April 16, 2010 by Michael Izioumtchenko.

InnoDB Plugin 1.1 doesn’t add any recovery-specific improvements on top of what we already have in Plugin 1.0.7. The details on the latter are available in this blog. Yet, when I tried to recover another big recovery dataset I had created, I got the following results for total recovery time:

  • Plugin 1.0.7: 46min 21s
  • Plugin 1.1: 32min 41s

Plugin 1.1 recovery is about 1.4 times faster. Why would that happen? The numerous concurrency improvements in Plugin 1.1 and MySQL 5.5 can’t really affect recovery. The honor goes to Native Asynchronous IO on Linux. Let’s try without it:

  • Plugin 1.1 with --innodb-use-native-aio=0: 49min 07s

which is about the same as the 1.0.7 time. My numerous other recovery runs showed that random fluctuations account for 2-3min of a 30-45min test.
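
If you want to run the same comparison yourself, the switch can also go into the configuration file instead of the command line. A minimal my.cnf sketch (option name as in MySQL 5.5 with InnoDB Plugin 1.1; the rest of the configuration is elided):

[mysqld]
# Disable Linux native AIO; InnoDB falls back to simulated AIO
innodb_use_native_aio = 0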

Why is native AIO good for you? Why is it better than the simulated AIO we already have? Here’s what Inaam Rana, our IO expert and the author of the AIO patch, says (a code sketch of the difference follows the list):

  • During recovery, redo log application is typically performed by the IO helper threads in the completion routine.
  • With simulated AIO, a helper thread waits for its IO request to complete and then calls the completion routine on it.
  • With native AIO, the thread doesn’t have to wait for any particular IO to complete; instead it picks up whichever request has completed and applies redo to it.
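
To make the contrast concrete, here is a minimal C sketch of the two patterns against the Linux libaio interface (io_setup/io_submit/io_getevents). This is only an illustration of the general technique, not InnoDB’s actual helper-thread code: the "ibdata1" file name, the batch size, and the apply_redo() completion routine are placeholders.

/* build: cc aio_sketch.c -o aio_sketch -laio */
#define _GNU_SOURCE
#include <fcntl.h>
#include <libaio.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define PAGE_SIZE 16384           /* InnoDB default page size */
#define N_PAGES   8               /* reads to keep in flight */

/* placeholder for the completion routine that applies redo to a page */
static void apply_redo(char *page) { (void)page; }

/* simulated-AIO pattern: the helper thread issues an ordinary blocking
   read, so it can do nothing else until this one request completes */
static void simulated_aio_read(int fd, char *page, off_t offset)
{
    pread(fd, page, PAGE_SIZE, offset);   /* thread blocks here */
    apply_redo(page);                     /* then runs the completion routine */
}

/* native-AIO pattern: submit a batch of reads at once, then harvest
   whichever requests have completed, in any order, and apply redo */
static void native_aio_reads(int fd)
{
    io_context_t ctx = 0;
    struct iocb cbs[N_PAGES], *cbp[N_PAGES];
    struct io_event events[N_PAGES];
    int i, done = 0;

    io_setup(N_PAGES, &ctx);
    for (i = 0; i < N_PAGES; i++) {
        void *buf;
        posix_memalign(&buf, 512, PAGE_SIZE);   /* O_DIRECT needs alignment */
        io_prep_pread(&cbs[i], fd, buf, PAGE_SIZE, (long long)i * PAGE_SIZE);
        cbp[i] = &cbs[i];
    }
    io_submit(ctx, N_PAGES, cbp);               /* returns without waiting */

    while (done < N_PAGES) {
        int n = io_getevents(ctx, 1, N_PAGES, events, NULL);
        for (i = 0; i < n; i++)                 /* pick up completed requests */
            apply_redo((char *)events[i].obj->u.c.buf);
        done += n;
    }
    io_destroy(ctx);
}

int main(void)
{
    char *page;
    int fd = open("ibdata1", O_RDONLY | O_DIRECT);  /* placeholder data file */
    if (fd < 0) { perror("open"); return 1; }

    posix_memalign((void **)&page, 512, PAGE_SIZE);
    simulated_aio_read(fd, page, 0);    /* one blocking read at a time */
    native_aio_reads(fd);               /* many reads in flight at once */

    close(fd);
    return 0;
}

In the simulated pattern the thread’s time is serialized between waiting and applying redo; in the native pattern the kernel keeps many reads in flight while the thread spends its time applying redo to whatever has already arrived.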

Read more about native AIO here.

You don’t have to do anything to take advantage of this feature. It is enabled by default and is used wherever it is available, as determined at configure time.
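
If you want to confirm what your server is actually doing, the setting is exposed as a system variable, so a quick check from the mysql client (variable name as in MySQL 5.5 with InnoDB Plugin 1.1) is:

mysql> SHOW GLOBAL VARIABLES LIKE 'innodb_use_native_aio';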

Here are some details about the test environment:

Hardware: HP DL480, 32G RAM, 2×4-core Intel(R) Xeon(R) CPU E5450 @ 3.00GHz, RAID5, about 1T total storage

Dataset: 1757549 dirty pages, 2808364565 bytes (about 2.6G) of redo. For the curious, it was a sysbench table with 400 million rows, and the workload I used was random row updates by a simple perl script. Note that at InnoDB’s 16K page size, those 1757549 dirty pages amount to over 28G, which means I had to use the very abusive settings of innodb_buffer_pool_size=29G and innodb_max_dirty_pages_pct=99, given only 32G of RAM. The recovery was done using the same settings, and in the first few attempts the recovery would fail because of what was eventually diagnosed as bug 53122. As it happens, InnoDB recovery uses some memory outside of the buffer pool, and it wanted more of it than was really necessary.

InnoDB configuration parameters:

--innodb-buffer-pool-size=28g
--innodb-log-file-size=2047m
--innodb-adaptive-flushing=0
--innodb-io-capacity=100
--innodb-additional-mem-pool-size=16m
--innodb-log-buffer-size=16m
--innodb-adaptive-hash-index=0
--innodb-flush-log-at-trx-commit=2
--innodb-max-dirty-pages-pct=99

This is a highly artificial setup that targets maximizing the generation of dirty pages and redo, while using as much memory as possible to hold those dirty pages.
