Friday Jan 10, 2014

Transaction life cycle improvements in 5.7.3

This is part of the ongoing work on improving transaction life cycle management. In 5.7.2 we split the transaction list into two: a read-only transaction list and a read-write transaction list. There was also a third, "virtual" list: the auto-commit non-locking read-only (AC-NL-RO) transaction list. The change in 5.7.2 was that, by default, a transaction was treated as read-only and added to the read-only transaction list; only when it was determined that the transaction was going to do an update was it removed from the read-only list and moved to the read-write transaction list. This initial add to the read-only list forced the acquisition of the trx_sys_t::mutex, and acquiring that mutex during transaction start/begin has a cost. Promoting a transaction from read-only to read-write also requires acquiring the trx_sys_t::mutex, to add it to the read-write transaction list, but that is not too expensive and is unavoidable. There is another transaction list, used for caching user transactions, that we will ignore in this discussion: the per-connection mysql-trx-list. All user transactions (AC-NL-RO, plain read-only, and read-write) are on this list.

The optimization

The optimization in 5.7.3 is to eliminate the explicit read-only transaction list altogether. By default, transactions are not put on any list; only when a transaction is explicitly tagged as read-write is it added to the read-write transaction list. This makes transaction begin mutex-free. Additionally, if the transaction commits as a read-only transaction, we don't need to acquire the trx_sys_t::mutex at commit time to remove it from a read-only list either. This improves performance for read-only transactions, making them more or less equivalent to AC-NL-RO transactions. They will, however, incur some cost compared to AC-NL-RO transactions if they acquire shared locks.
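
As a minimal sketch of what this means from the SQL side (the table name is illustrative; the comments simply summarize the behavior described above):

-- Read-only transaction: per the description above, in 5.7.3 it joins no
-- transaction list at begin and needs no trx_sys_t::mutex at commit.
START TRANSACTION READ ONLY;
SELECT COUNT(*) FROM mydb.t;
COMMIT;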

Additionally, the MVCC view open/close handling for read-only transactions is now equivalent to that of AC-NL-RO transactions, with very low life cycle overhead. This will also help improve performance significantly.

User impact

There is of course the positive performance impact, but there is also a change in visibility semantics. With the elimination of the read-only list, read-only transactions are no longer visible via SHOW ENGINE INNODB STATUS; they are, however, still visible via the INFORMATION SCHEMA.
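
For example, active transactions, including read-only ones that no longer appear in SHOW ENGINE INNODB STATUS, can still be listed from the INNODB_TRX table:

-- trx_is_read_only flags transactions running in read-only mode.
SELECT trx_id, trx_state, trx_started, trx_is_read_only
FROM information_schema.innodb_trx;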

Wednesday Sep 25, 2013

InnoDB Temporary Tables just got faster

It all started with a goal to make InnoDB temporary tables more effective. Temporary table semantics are blessed with some important characteristics that can help us simplify a lot of operations:

  • Temporary tables are not visible across connections.
  • A temporary table's lifetime is limited to the connection's lifetime (unless the user explicitly drops it earlier). Both properties are illustrated by the example below.
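
A quick illustration of these semantics; the table name and definition are made up:

-- Session 1:
CREATE TEMPORARY TABLE t1 (id INT PRIMARY KEY) ENGINE=InnoDB;
INSERT INTO t1 VALUES (1);

-- Session 2 (a different connection):
SELECT * FROM t1;  -- fails: ERROR 1146 (42S02): Table 't1' doesn't exist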

What does this mean for InnoDB?

  • REDO logging can be avoided for temporary tables and related objects since temporary tables do not survive a shutdown or crash.
  • Temporary table definitions can be maintained in-memory without persisting to the disk.
  • Locking constraints can be relaxed since only one client can see these tables.
  • Change buffering can be avoided since the majority of temporary tables are short-lived.

In order to implement these changes in InnoDB we took a somewhat different approach:

  • We introduced a dedicated tablespace for housing temporary tables. In 5.7, all non-compressed temporary tables (irrespective of the innodb-file-per-table setting) are housed in this new temporary tablespace (ibtmp1). Compressed temporary tables continue to be housed in their own independent tablespaces. This new temporary tablespace is re-created on each server restart and can be pre-extended before starting the server; see the configuration example after this list. (For more about the temporary tablespace, check the MySQL documentation.)
  • Dedicated rollback-segments (for housing undo logs) for temporary table objects.
  • All related objects including rollback-segments have been moved to this new temporary tablespace. This helps in maintaining locality and importantly eliminating REDO logging associated with all such objects.
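
As a configuration note: the temporary tablespace can be inspected and pre-extended via the innodb_temp_data_file_path variable; the sizes below are illustrative values, not recommendations:

-- Inspect the current temporary tablespace definition:
SHOW VARIABLES LIKE 'innodb_temp_data_file_path';
-- To pre-extend it, set something like this in my.cnf before startup:
--   innodb_temp_data_file_path = ibtmp1:1G:autoextend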

What have we achieved? (Performance gain w.r.t. 5.6)

  • Create/Drop workload (around 11x performance gain):
    create n tables; drop n tables; create + drop n tables; create + drop n tables with a significant number of keys.
  • Insert/Update/Delete workload (2-4x performance gain):
    • insert workload: insert n entries sorted, n entries reverse sorted, and n entries in random order (3n entries in total).
    • delete workload: first run the insert workload to load the table, then delete n entries using the primary key, delete n entries using a secondary index, and delete the remaining n entries using the primary key so that the table ends up empty.
    • update workload: first run the insert workload to load the table, then update n entries using the primary key, update n entries using a secondary index, and finally update all 3n entries twice (no explicit key specified).

All testing was done using sql-bench (modified to accommodate the temporary-table use case); a sketch of the workload shape follows.
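
For flavor, here is the shape of the create/drop workload as a minimal sketch; the table name and definition are illustrative, not the actual sql-bench schema:

CREATE TEMPORARY TABLE bench_t (
  id INT PRIMARY KEY,
  k  INT,
  v  VARCHAR(64),
  KEY k_idx (k)
) ENGINE=InnoDB;
-- ... the insert/update/delete workloads operate here ...
DROP TEMPORARY TABLE bench_t;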


Summing it up....

The benchmarks above show that InnoDB temporary tables are faster in 5.7 (than in 5.6). We plan to further enhance temporary table performance.

Saturday Sep 21, 2013

InnoDB 5.7 performance improvements

A quick overview of the InnoDB performance improvements for both read-only and read-write loads.

Saturday Sep 29, 2012

Helping to Reduce Page Compression Failures Rate

When InnoDB compresses a page it needs the result to fit into its predetermined compressed page size (specified with KEY_BLOCK_SIZE). When the result does not fit, we call that a compression failure. In this case InnoDB needs to split up the page and try to compress again. Compression failures are therefore bad for performance and should be minimized.

Whether the result of the compression will fit largely depends on the data being compressed, and some tables and/or indexes may contain more compressible data than others. So it would be nice if the compression failure rate, along with other compression stats, could be monitored on a per-table or even a per-index basis, wouldn't it?

This is where the new INFORMATION_SCHEMA table in MySQL 5.6 comes in. INFORMATION_SCHEMA.INNODB_CMP_PER_INDEX provides exactly this helpful information. It contains the following fields:

+-----------------+--------------+------+
| Field           | Type         | Null |
+-----------------+--------------+------+
| database_name   | varchar(192) | NO   |
| table_name      | varchar(192) | NO   |
| index_name      | varchar(192) | NO   |
| compress_ops    | int(11)      | NO   |
| compress_ops_ok | int(11)      | NO   |
| compress_time   | int(11)      | NO   |
| uncompress_ops  | int(11)      | NO   |
| uncompress_time | int(11)      | NO   |
+-----------------+--------------+------+

It is similar to INFORMATION_SCHEMA.INNODB_CMP, but this time the data is grouped by "database_name, table_name, index_name" instead of by "page_size".
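
One practical aside: collection of these per-index statistics is controlled by the innodb_cmp_per_index_enabled server variable, which is OFF by default because the per-index bookkeeping adds some overhead:

-- Enable per-index compression statistics collection.
SET GLOBAL innodb_cmp_per_index_enabled = ON;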

So a query like

SELECT
    database_name,
    table_name,
    index_name,
    compress_ops - compress_ops_ok AS failures
FROM information_schema.innodb_cmp_per_index
ORDER BY failures DESC;

would reveal the tables and indexes with the highest compression failure rates.

From there on, the way to improve performance would be to try increasing the compressed page size, or to change the structure of the table/indexes or the data being stored, and see whether that has a positive impact on performance.
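
For example, a table showing a high failure rate at a small compressed page size could be rebuilt with a larger one; the table name and sizes here are illustrative:

-- Rebuild with a looser compressed page size (e.g. from 4K to 8K pages):
ALTER TABLE mydb.mytable ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;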
