Monday Apr 16, 2012

Optimizing neighbor flush behavior

Note: this article was originally published on http://blogs.innodb.com on April 16, 2012 by Yasufumi Kinoshita.

The performance of flush_list flushing in InnoDB largely determines the throughput of write-heavy workloads, so it is important to optimize the flush behavior. In this post we’ll consider how to optimize the neighbor-flushing behavior.

Factor 1: Characteristics of storage

Depending on the characteristics of your storage’s write IO throughput, you can classify your storage as either “write amount bound” or “write times bound”. The minimum IO unit of an InnoDB datafile is the page size (16KB or less), and InnoDB attempts to combine contiguous pages into a single IO of up to 1 extent (1MB) maximum.

<one HDD>: Almost always “write times bound”, because head-seek time dominates the access time of an HDD, and a write of around 1MB can be handled by a single head seek.

<RAID-HDD>: It depends on the striping size of the RAID. In many cases the striping size is set to 256KB ~ 1MB (much larger than the page size of the datafile), with the intention of mapping 1 IO to 1 HDD (both to keep the sequential access advantage of each HDD and to keep the ability to serve IO requests in parallel across the several HDDs in the RAID). For such typical striping sizes, RAID-HDD is “write times bound”. (With a small striping size around the page size it would be “write amount bound”, but I don’t recommend such a small striping size from the viewpoint of this post, because it simply loses the sequential access advantage.)

<SSD>: It depends on the internal write unit of the SSD. For newer high-end SSDs the unit is 4KB or more, which is no larger than the InnoDB page size, so such a high-end SSD is “write amount bound”. However, the unit size varies greatly between SSDs’ internal implementations. A low-end or older SSD might have a unit size over 1MB (and slow throughput) and might be “write times bound”. You can estimate the write unit size of your SSD with a random write benchmark over several block sizes (4, 8, 16KB, …, 1MB); the largest block size that still behaves as “write times bound” can be taken as the unit size.
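To make that estimation concrete, here is a small illustrative sketch (the function and the measurements are hypothetical, not from any real benchmark tool): given random-write throughput measured at doubling block sizes, it returns the largest block size that still behaves as “write times bound”, i.e. where doubling the block size still roughly doubles throughput.

```python
def estimate_write_unit(throughput_mb_s):
    """Given random-write throughput (MB/s) measured at doubling block
    sizes (KB), return the largest block size that is still "write
    times bound": doubling the block size still roughly doubles the
    throughput, because each write costs one device operation."""
    sizes = sorted(throughput_mb_s)
    unit = sizes[0]
    for prev, cur in zip(sizes, sizes[1:]):
        if throughput_mb_s[cur] / throughput_mb_s[prev] > 1.5:
            unit = cur           # still scaling with block size
        else:
            break                # throughput plateaued: "write amount bound"
    return unit

# Hypothetical measurements: throughput doubles up to 16KB, then flattens,
# suggesting an internal write unit of about 16KB.
measured = {4: 50, 8: 100, 16: 200, 32: 210, 64: 215, 128: 220}
print(estimate_write_unit(measured))  # -> 16
```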

Factor 2: Oldest modified age

The redo log in InnoDB is used in a circular fashion. The reusable redo space is limited by the oldest modification in the oldest modified block; that is, the maximum oldest modified age — current LSN (Log Sequence Number) minus the LSN of the oldest modification — cannot exceed the log capacity of the redo log files. When no reusable redo space is available, further modification operations cannot proceed until the oldest modified age is decreased by flushing the oldest dirty pages.
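In simplified form, the quantities involved look like this (a toy sketch with made-up names and numbers; InnoDB’s internal bookkeeping is more involved):

```python
def oldest_modified_age(current_lsn, oldest_modification_lsn):
    """The "oldest modified age": how much redo has accumulated since
    the oldest still-unflushed modification."""
    return current_lsn - oldest_modification_lsn

def reusable_redo_space(log_capacity, current_lsn, oldest_modification_lsn):
    """Redo space still available before modifications must stall,
    waiting for the oldest dirty pages to be flushed."""
    return log_capacity - oldest_modified_age(current_lsn, oldest_modification_lsn)

# Hypothetical LSNs: the age is 600, leaving 1448 of a 2048-unit log reusable.
print(oldest_modified_age(1000, 400))        # -> 600
print(reusable_redo_space(2048, 1000, 400))  # -> 1448
```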

The flushing throughput of the oldest dirty pages therefore determines the workload throughput. What matters is how to use the limited write IO budget effectively for flushing the “oldest” dirty pages.

Tuning flush_list flushing effectively

The first priority of flushing is to reduce the oldest modified age, assuming there is no shortage of free blocks. So “flushing the oldest blocks only” is the basic strategy.

For “write amount bound” storage (e.g. a high-end SSD), this is already the best strategy. It is equivalent to innodb_flush_neighbors = false.

On the other hand, for “write times bound” storage (e.g. HDD-based), the contiguous dirty neighbors of the oldest dirty pages can be flushed without spending extra write IO budget, thanks to the sequential access advantage. So flushing the contiguous pages as well is really worth doing. But non-contiguous, non-oldest blocks should not be flushed at the same time, because a non-contiguous flush becomes another IO request and will most likely be treated as another raw block write in the storage, wasting the write IO budget.

The traditional flush_neighbors implementation in InnoDB flushes non-contiguous dirty blocks too, which is not the best behavior for either type of storage, “write times bound” or “write amount bound”. In the MySQL April 2012 labs release we have fixed this behavior to flush contiguous pages only, for “write times bound” storage.
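The fixed behavior can be sketched roughly like this (an illustration of the idea, not the actual InnoDB code): starting from the page selected for flushing, extend the write only over dirty pages that are physically contiguous within the same 1MB extent, and stop at the first gap in each direction.

```python
def neighbors_to_flush(target, dirty_pages, extent_pages=64):
    """Pick contiguous dirty neighbors of `target` within its extent
    (64 pages of 16KB = 1MB). Stops at the first non-dirty page in
    each direction, so no extra IO request is generated for gaps."""
    dirty = set(dirty_pages)
    extent_start = target - target % extent_pages
    extent_end = extent_start + extent_pages
    lo = target
    while lo - 1 >= extent_start and lo - 1 in dirty:
        lo -= 1
    hi = target
    while hi + 1 < extent_end and hi + 1 in dirty:
        hi += 1
    return list(range(lo, hi + 1))

# Pages 100-102 are contiguous with the target 101; dirty page 105 is
# skipped because flushing it would cost a separate IO request.
print(neighbors_to_flush(101, {100, 101, 102, 105}))  # -> [100, 101, 102]
```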

Conclusion

In the end, the conclusions are as follows:

  • For HDD or HDD-RAID (stripe size about 256KB ~ 1MB): use the new flush_neighbors (flushing contiguous dirty blocks only)
  • For SSD (internal write unit size <= InnoDB data page size): disable flush_neighbors

Friday Apr 13, 2012

New flushing algorithm in InnoDB

Note: this article was originally published on http://blogs.innodb.com on April 13, 2012 by Inaam Rana.

In the MySQL April 2012 labs release we have reworked the flushing heuristics in InnoDB. In this post I’ll give an overview of what we have changed and the various configuration variables we have introduced to fine-tune the flushing algorithm. If you are interested in how our new flushing algorithm fares in benchmarks, you can find the details in Dimitri’s well-explained blog here.

Flushing means writing dirty pages to disk. I have explained adaptive_flushing and the types of flushing in some detail in my previous notes. Please go through those notes if you want to make sense of what follows.

The page_cleaner thread checks the state of the system every second. It takes into account the number of dirty pages, the amount of reusable redo space, the rate at which redo is generated, and the IO capacity for which the server is configured, and based on these factors decides how many pages to flush.

In the new scheme of things the page_cleaner thread uses a single non-linear formula to calculate how many pages must be flushed to keep sufficient reusable redo space. This is different from the current flushing heuristic, where async_water_mark is taken as the point at which we change the flushing algorithm. Similarly, instead of using innodb_max_dirty_pages_pct as a switch that triggers flushing, we have introduced the concept of a range: flushing to control the dirty pages percentage starts once we cross the low water mark and gets more and more aggressive as we near the high water mark.
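The water-mark range idea can be pictured as a simple ramp (illustrative only; this is not the actual non-linear formula InnoDB uses):

```python
def dirty_pct_flush_pressure(dirty_pct, lwm, hwm):
    """Flushing pressure for the dirty-page ratio: zero below the low
    water mark, ramping up to 1.0 as we approach the high water mark."""
    if dirty_pct <= lwm:
        return 0.0
    return min(1.0, (dirty_pct - lwm) / (hwm - lwm))

# With a low water mark of 10% and a high water mark of 75%:
print(dirty_pct_flush_pressure(8, 10, 75))   # -> 0.0  (below lwm, no flushing)
print(dirty_pct_flush_pressure(75, 10, 75))  # -> 1.0  (full aggressiveness)
```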

There are four new configuration variables. Note that if your system is not experiencing any IO spikes due to checkpoints then you can probably leave all of the following as is. All the variables are global in scope and can be set dynamically.

  • innodb_adaptive_flushing_lwm: Low water mark, measured as a %age of total redo log size, at which adaptive flushing kicks in. If the currently unusable redo space is less than this value, no background flushing will happen. Default value is 10 and permissible values are 0 – 70.
  • innodb_max_dirty_pages_pct_lwm: Low water mark of dirty pages in %age at which preflushing to control the dirty page ratio kicks in. Default value is 0, which has the special meaning that this setting has no effect. Allowable values are 0 – 99.
  • innodb_max_io_capacity: InnoDB generally attempts to work within the limits of innodb_io_capacity. However, if it needs to do aggressive flushing then innodb_max_io_capacity defines the limit to which the write IOPS can be stretched. Default value is 2000, which is ten times the default value of innodb_io_capacity.
  • innodb_flushing_avg_loops: Number of iterations for which we keep the previously calculated snapshot of the flushing state. This variable is roughly a measure of how smooth you want the transition in the flushing activity to be. The higher the value, the smoother the transition in flushing in the face of a rapidly changing workload. A lower value implies that the flushing algorithm is more responsive, but it also means that flushing activity can become spiky when confronted with a quickly changing workload. Default value is 30 and permissible range is 1 – 1000.
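The smoothing controlled by innodb_flushing_avg_loops can be pictured as a running average over the last N one-second samples (a rough sketch; the actual implementation differs):

```python
from collections import deque

class FlushRateSmoother:
    """Average the page flush rate over the last `avg_loops` one-second
    samples. A larger window reacts more slowly to workload changes but
    keeps the flushing rate smooth; a smaller one is more responsive
    but spikier."""
    def __init__(self, avg_loops=30):
        self.samples = deque(maxlen=avg_loops)

    def update(self, pages_per_sec):
        self.samples.append(pages_per_sec)
        return sum(self.samples) / len(self.samples)

# A sudden burst of 300 pages/sec is smoothed over the 3-sample window.
s = FlushRateSmoother(avg_loops=3)
s.update(0)
s.update(0)
print(s.update(300))  # -> 100.0
```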

Finally, there is a wealth of information available to you to monitor the impact of changing these variables. You can get a good inside view of how the flushing activity is working by querying the innodb_metrics table.

mysql> select name, comment from information_schema.innodb_metrics where name like 'Buffer_flush%';
+-------------------------------------+----------------------------------------------------------------+
| name                                | comment                                                        |
+-------------------------------------+----------------------------------------------------------------+
| buffer_flush_batch_scanned          | Total pages scanned as part of flush batch                     |
| buffer_flush_batch_num_scan         | Number of times buffer flush list flush is called              |
| buffer_flush_batch_scanned_per_call | Pages scanned per flush batch scan                             |
| buffer_flush_batch_total_pages      | Total pages flushed as part of flush batch                     |
| buffer_flush_batches                | Number of flush batches                                        |
| buffer_flush_batch_pages            | Pages queued as a flush batch                                  |
| buffer_flush_neighbor_total_pages   | Total neighbors flushed as part of neighbor flush              |
| buffer_flush_neighbor               | Number of times neighbors flushing is invoked                  |
| buffer_flush_neighbor_pages         | Pages queued as a neighbor batch                               |
| buffer_flush_n_to_flush_requested   | Number of pages requested for flushing.                        |
| buffer_flush_avg_page_rate          | Average number of pages at which flushing is happening         |
| buffer_flush_lsn_avg_rate           | Average redo generation rate                                   |
| buffer_flush_pct_for_dirty          | Percent of IO capacity used to avoid max dirty page limit      |
| buffer_flush_pct_for_lsn            | Percent of IO capacity used to avoid reusable redo space limit |
| buffer_flush_adaptive_total_pages   | Total pages flushed as part of adaptive flushing               |
| buffer_flush_adaptive               | Number of adaptive batches                                     |
| buffer_flush_adaptive_pages         | Pages queued as an adaptive batch                              |
| buffer_flush_sync_total_pages       | Total pages flushed as part of sync batches                    |
| buffer_flush_sync                   | Number of sync batches                                         |
| buffer_flush_sync_pages             | Pages queued as a sync batch                                   |
| buffer_flush_background_total_pages | Total pages flushed as part of background batches              |
| buffer_flush_background             | Number of background batches                                   |
| buffer_flush_background_pages       | Pages queued as a background batch                             |
+-------------------------------------+----------------------------------------------------------------+
23 rows in set (0.00 sec)

Wednesday Apr 11, 2012

InnoDB persistent stats got a friendly UI

Note: this article was originally published on http://blogs.innodb.com on April 11, 2012 by Vasil Dimov.

After introducing InnoDB persistent statistics in MySQL 5.6, in this April labs release we have dressed them up in a nice UI and refactored the internals a bit to make the code more elegant and straightforward.

The persistent stats are now controlled globally and can also be overridden at table level, should any table require a different behavior.

Global

The server global flag --innodb-stats-persistent (boolean) now controls whether all InnoDB tables use persistent statistics or not. Keep in mind that if a table is using persistent stats then its statistics will not be updated automatically and you are responsible for running ANALYZE TABLE periodically, whenever you think the table contents have changed too much. Thus the default for --innodb-stats-persistent is currently set to OFF.

Per table

The persistent stats ON/OFF setting can be overridden at a table level with a table option like this:

CREATE TABLE t (a INT) ENGINE=INNODB PERSISTENT_STATS=0|1|default;

or

ALTER TABLE t PERSISTENT_STATS=0|1|default;

where '0' means no persistent stats for this table (regardless of the setting of the global --innodb-stats-persistent), '1' means to always use persistent stats for this table, and 'default' means to use the server global setting, whatever it is.

Another goodie is that now you do not need to restart the server if you have manually updated the stats values in mysql.innodb_table_stats and/or mysql.innodb_index_stats and want your changes to take effect. Just do

FLUSH TABLE t;

to force InnoDB to reread what’s in mysql.innodb_table_stats and mysql.innodb_index_stats.

What to expect next? Maybe a fully controlled automatic persistent stats update so you do not have to worry about having to run ANALYZE TABLE periodically but still your query plans are stable? Wouldn’t that be nice?

Memcached With SASL Support

Note: this article was originally published on http://blogs.innodb.com on April 11, 2012 by Jimmy Yang.

In this April MySQL labs release, we provide a more robust and release-ready InnoDB Memcached Engine with a few enhancements. The most notable addition is SASL support, which gives users the capability to protect their MySQL database from unauthenticated access through memcached clients. In this blog, I will walk you through the steps of getting this option enabled.

Background Info:
SASL stands for “Simple Authentication and Security Layer”, a standard for adding authentication support to connection-based protocols. Memcached added SASL support starting with its 1.4.3 release, and here is a good article that gives some background on why and how SASL is supported in Memcached.

For InnoDB Memcached, the user table to be mapped must be registered in the “container” system table, and memcached clients can only access such registered tables. Even though the DBA can add access restrictions on such a table, he/she has no control over who can access it through the memcached clients. And this is exactly the reason we want to provide a means (in this case SASL) for the DBA to have some control over who can access our InnoDB tables.

In the following sections, we will go through the steps to build, enable, and test a SASL-enabled InnoDB Memcached plugin.

Steps to Build and Enable SASL in InnoDB Memcached Plugin:

By default, SASL-enabled InnoDB Memcached is not built (nor included in the release package), since it relies on some SASL libraries to build SASL into the Memcached Engine. So you will need to download the SASL libraries and then rebuild the InnoDB Memcached plugin from source. The details are described in the following sections:

1) First, you will need to get the SASL development and utility libraries. For example, on Ubuntu, you can get them through:

> sudo apt-get -f install sasl2-bin libsasl2-2 libsasl2-dev libsasl2-modules

2) Then build the InnoDB Memcached Engine plugin (shared libraries) with SASL capability. This is done by adding ENABLE_MEMCACHED_SASL=1 to the cmake options. In addition, Memcached provides simple plaintext password support, which is easier to use for testing, so we support that too; it is enabled by setting ENABLE_MEMCACHED_SASL_PWDB=1.

So overall, we will need to add the following three options to cmake:

> cmake ... -DWITH_INNODB_MEMCACHED=1
  -DENABLE_MEMCACHED_SASL=1 -DENABLE_MEMCACHED_SASL_PWDB=1

3) The third step is to install the InnoDB Memcached plugin as before. Please refer to my earlier blog posts for the procedures.

4) As mentioned in step 2), Memcached provides simple plaintext password support through SASL, which will be used for this demo. There is a good blog from Trond Norbye describing the steps, so you can follow the instructions there too. I will repeat the important steps here.

a) Create a user named “testname” with the password “testpasswd” in a file:

> echo "testname:testpasswd:::::::" >/home/jy/memcached-sasl-db

b) Let memcached know about it by setting environment variable MEMCACHED_SASL_PWDB:

> export MEMCACHED_SASL_PWDB=/home/jy/memcached-sasl-db

c) Also tell memcached that it is a plaintext password:

> echo "mech_list: plain" > /home/jy/work2/msasl/clients/memcached.conf
> export SASL_CONF_PATH=/home/jy/work2/msasl/clients/memcached.conf

5) Then we are ready to restart the server with the “daemon_memcached” option “-S” to enable SASL:

> mysqld .. --daemon_memcached_option="-S"

6) Now we are done with the setup. Let’s test it. To do so, you might need a SASL-enabled client. I used a SASL-enabled libmemcached as described in Trond Norbye’s blog, and tested it accordingly:

> memcp --servers=localhost:11211 --binary  --username=testname --password=testpasswd myfile.txt
> memcat --servers=localhost:11211 --binary --username=testname --password=testpasswd myfile.txt

Without the appropriate user name or password, the above operations will be rejected with the error message “memcache error AUTHENTICATION FAILURE”. Otherwise, the operation completes. You can also play with the plaintext password set in /home/jy/memcached-sasl-db to verify it.

There are other methods to test SASL with memcached, but the one described above is the most straightforward.

Other changes for InnoDB Memcached:

Besides the SASL support, there are a few changes in this release that are worth mentioning:
1) We added a configuration option innodb_api_trx_level, so that users can control the transaction isolation level of queries issued through the InnoDB APIs, or in this case, memcached.

In theory, memcached has no concept of “transactions”, so this is an extra property we added on top of it, so that users have some level of control when also issuing DMLs through the SQL interface. By default, it is set to “read uncommitted”.

2) Another option we added is innodb_api_enable_mdl; “mdl” stands for “metadata locking”. This basically locks the table at the MySQL level, so that the mapped table cannot be dropped or altered by DDL through the SQL interface. Without the lock, the table can be dropped from the MySQL layer, but will be kept in InnoDB storage until memcached or any other user stops using it.

3) A configuration option name change. To enable the binlog, the option name has changed from innodb_direct_access_enable_binlog to innodb_api_enable_binlog.

Summary:

In summary, this release provides a more robust InnoDB Memcached Engine with SASL support. The steps to enable this support are fairly straightforward and almost identical to those for enabling SASL in a standalone Memcached server. So if you are familiar with using SASL for memcached, it takes only a few option-name changes to build and enable it. And if you are not familiar with the operation, the steps above also give you a quick start.

InnoDB performance improvements

Note: this article was originally published on http://blogs.innodb.com on April 11, 2012 by Sunny Bains.

The problem
After making several performance fixes, notable among them the kernel mutex split and the new handling of read-only transactions (in particular non-locking auto-commit read-only transactions), we weren’t seeing any increase in transactions per second (TPS) on our high-end hardware. On one particular host, a 24-core machine with 2 threads per core, the TPS using Sysbench was a tepid 5.6K at 16 threads and more or less plateaued all the way up to 1K user threads. No matter what config settings we used, we would end up with more or less the same result.

We ended up getting together for a meeting in Paris to discuss this issue, and during the brainstorming one of the potential issues that cropped up was the effect of cache coherence and/or false sharing. Using the excellent Linux tool perf, we were able to narrow it down to a global statistics counter in row_sel_search_for_mysql(). Mikael Ronstrom explains this in more detail.

The solution
Create a generic counter class (InnoDB code is now C++) that splits the counter into multiple (configurable) slots placed on separate 64-byte cache lines. Use the thread id of the updating thread to index into a slot to reduce the contention/sharing; this had the desired effect. The TPS went from 5.6K to 15K at 64 user threads and stayed close to stable right up to 1K threads, with very slow degradation. This was using Sysbench OLTP_RO for autocommit non-locking read-only queries (Sysbench option --oltp-skip-trx=off).
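The idea behind the counter class can be sketched in a few lines (a Python illustration of the slot-indexing scheme; the real ib_counter_t is C++ and additionally pads each slot onto its own 64-byte cache line):

```python
import threading

class ShardedCounter:
    """One logical counter split across several slots, indexed by the
    updating thread's id, so concurrent updates rarely contend on the
    same slot. (In the C++ version each slot also sits on its own
    64-byte cache line, which is what prevents false sharing.)"""
    def __init__(self, n_slots=64):
        self.slots = [0] * n_slots

    def add(self, n=1):
        # Different threads hash to different slots most of the time.
        self.slots[threading.get_ident() % len(self.slots)] += n

    def value(self):
        # Reads sum all slots; slightly stale totals are acceptable
        # for statistics counters.
        return sum(self.slots)

c = ShardedCounter()
for _ in range(1000):
    c.add()
print(c.value())  # -> 1000
```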

The code and binary can be downloaded from labs release downloads, the current release is mysql-5.6.6-labs-april-2012-*. See the code in include/os0thread.h. The new class is ib_counter_t.

We have now refactored the code and grouped all the InnoDB statistic counters in srv_counter_t. This will help in further consolidation and improvements. Currently, most of the InnoDB config and statistics variables are defined in srv0srv.cc (with a few exceptions). We need to start paying even more attention to their layout and alignment from now on. There seem to be some false sharing issues that we haven’t completely identified yet.

Results
I think it is better to look at Dimitri’s blog for results that reflect the improvements.

Enjoy!

April 2012 Labs Release – Online DDL Improvements

Note: this article was originally published on http://blogs.innodb.com on April 11, 2012 by John Russell.

This feature is a continuation of the “Fast Index Creation” feature introduced in Fast Index Creation in the InnoDB Storage Engine. Now you can perform other kinds of DDL operations on InnoDB tables online: that is, with minimal delay for operations on that table, and without rebuilding the entire table. This enhancement improves responsiveness and availability in busy production environments, where making a table unavailable for seconds or minutes whenever its column definitions change is not practical.

The DDL operations enhanced by this feature are these variations on the ALTER TABLE statement:

  • Create secondary indexes: CREATE INDEX name ON table (col_list) or ALTER TABLE table ADD INDEX name (col_list)

  • Drop secondary indexes: DROP INDEX name ON table; or ALTER TABLE table DROP INDEX name

    Creating and dropping secondary indexes on InnoDB tables has avoided the table-copying behavior since the days of MySQL 5.1 with the InnoDB Plugin. Now, the table remains available for read and write operations while the index is being created or dropped. The CREATE INDEX or DROP INDEX statement only finishes after all transactions that are modifying the table are completed, so that the initial state of the index reflects the most recent contents of the table.

    Previously, modifying the table while an index was being created or dropped typically resulted in a deadlock that cancelled the insert, update, or delete statement on the table.

  • Changing the auto-increment value for a column: ALTER TABLE table AUTO_INCREMENT=next_value;

    Especially in a distributed system using replication or sharding, you sometimes reset the auto-increment counter for a table to a specific value. The next row inserted into the table uses the specified value for its auto-increment column. You might also use this technique in a data warehousing environment where you periodically empty all the tables and reload them, and you can restart the auto-increment sequence from 1.

  • Drop a foreign key constraint: ALTER TABLE tbl DROP FOREIGN KEY fk_name

    Currently, online DDL only includes the DROP operation for foreign keys, not ADD to create foreign keys.

    If you do not know the names of the foreign key constraints on a particular table, issue the following statement and find the constraint name in the CONSTRAINT clause for each foreign key:

    show create table table\G
    

    Or, query the information_schema.table_constraints table and use the constraint_name and constraint_type columns to identify the foreign key names.

    As a consequence of this enhancement, you can now also drop a foreign key and its associated index in a single statement, which previously required separate statements in a strict order:

    ALTER TABLE  table DROP FOREIGN KEY  constraint, DROP INDEX index;
    
  • Rename a column: ALTER TABLE tbl CHANGE old_col_name new_col_name datatype

    When you keep the same data type and only change the column name, this operation can always be performed online. As part of this enhancement, you can now rename a column that is part of a foreign key constraint, which was not allowed before.

Note

As your database schema evolves with new columns, data types, constraints, indexes, and so on, keep your CREATE TABLE statements up to date with the latest table definitions. Even with the performance improvements of online DDL, it is more efficient to create stable database structures at the beginning, rather than creating part of the schema and then issuing ALTER TABLE statements afterward.

The main exception to this guideline is for secondary indexes on tables with large numbers of rows. It is typically most efficient to create the table with all details specified except the secondary indexes, load the data, then create the secondary indexes.

Whatever sequence of CREATE TABLE, CREATE INDEX, ALTER TABLE, and similar statements went into putting a table together, you can capture the SQL needed to reconstruct the current form of the table by issuing the statement SHOW CREATE TABLE table\G (uppercase \G required for tidy formatting). This output shows clauses such as numeric precision, NOT NULL, and CHARACTER SET that are sometimes added behind the scenes, and that you might otherwise leave out when cloning the table on a new system or setting up foreign key columns with identical types.

Performance and Availability Considerations

Avoiding a table copy during DDL improves several aspects of MySQL operation, such as performance, concurrency, availability, and scalability:

  • By avoiding the disk I/O and CPU cycles to rebuild the table, you minimize the overall load on the database and maintain good performance and high throughput during the DDL operation.

  • Because the DDL operation completes in less time, there is a shorter period when queries and DML operations on that table are blocked, making your application more responsive.

  • Because less data is read into the buffer pool, you avoid purging frequently accessed data from the memory cache, which typically causes a temporary performance dip after a DDL operation.

  • Because there is a shorter period when queries and DML operations queue up waiting for the DDL to finish, there is less locking and waiting for other resources all throughout the MySQL server. Reducing this type of overhead leads to greater scalability, even for operations not involving the table being altered.

Benchmarking

To judge the relative performance of online DDL operations, you can run such operations on a big InnoDB table using current and earlier versions of MySQL. You can also run all the performance tests under the latest MySQL version, simulating the previous DDL behavior for the “before” results, by setting the old_alter_table system variable. Issue the statement set old_alter_table=1 in the session, and measure DDL performance to record the “before” figures. Then set old_alter_table=0 to re-enable the newer, faster behavior, and run the DDL operations again to record the “after” figures.

For a basic idea of whether a DDL operation does its changes in-place or performs a table copy, look at the “rows affected” value displayed after the command finishes. For example, here are lines you might see after doing different types of DDL operations:

  • Changing the default value of a column (super-fast, does not affect the table data at all):

    Query OK, 0 rows affected (0.07 sec)
  • Adding an index (takes time, but 0 rows affected shows that the table is not copied):

    Query OK, 0 rows affected (21.42 sec)
  • Changing the data type of a column (takes substantial time and does require rebuilding all the rows of the table):

    Query OK, 1671168 rows affected (1 min 35.54 sec)

For example, before running a DDL operation on a big table, you might check whether the operation will be fast or slow as follows:

  1. Clone the table structure.

  2. Populate the cloned table with a tiny amount of data.

  3. Run the DDL operation on the cloned table.

  4. Check whether the “rows affected” value is zero or not. A non-zero value means the operation will require rebuilding the entire table, which might require special planning. For example, you might do the DDL operation during a period of scheduled downtime, or on each replication slave server one at a time.

For a deeper understanding of the reduction in MySQL processing, examine the PERFORMANCE_SCHEMA and INFORMATION_SCHEMA tables related to InnoDB before and after DDL operations, to see the number of physical reads, writes, memory allocations, and so on.

Background Information

Historically, the MySQL server and InnoDB have each kept their own metadata about table and index structures. The MySQL server stores this information in .frm files that are not protected by a transactional mechanism, while InnoDB has its own data dictionary as part of the system tablespace. If a DDL operation was interrupted by a crash or other unexpected event partway through, the metadata could be left inconsistent between these two locations, causing problems such as startup errors or inability to access the table that was being altered. Now that InnoDB is the default storage engine, addressing such issues is a high priority. These enhancements to DDL operations reduce the window of opportunity for such issues to occur.

Examples

Here are code examples showing some operations whose performance, concurrency, and scalability are improved by this new feature.

Example 1. Schema Setup Code for Online DDL Experiments

Here is the code that sets up the initial tables used in these demonstrations:

/*
Setup code for the online DDL demonstration:
- Set up some config variables.
- Create 2 tables that are clones of one of the INFORMATION_SCHEMA tables
  that always has some data. The "small" table has a couple of thousand rows.
  For the "big" table, keep doubling the data until it reaches over a million rows.
- Set up a primary key for the sample tables, since we are demonstrating InnoDB aspects.
*/ 

set autocommit = 0;
set foreign_key_checks = 1;
set global innodb_file_per_table = 1;
set old_alter_table=0;
prompt mysql: 

use test;

\! echo "Setting up 'small' table:"
drop table if exists small_table;
create table small_table as select * from information_schema.columns;
alter table small_table add id int unsigned not null primary key auto_increment;
select count(id) from small_table;

\! echo "Setting up 'big' table:"
drop table if exists big_table;
create table big_table as select * from information_schema.columns;
show create table big_table\G

insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
commit;

alter table big_table add id int unsigned not null primary key auto_increment;
select count(id) from big_table;

Running this code gives this output, condensed for brevity and with the most important points bolded:

Setting up 'small' table:

Query OK, 1678 rows affected (0.13 sec)
Query OK, 1678 rows affected (0.07 sec)

+-----------+
| count(id) |
+-----------+
|      1678 |
+-----------+
1 row in set (0.00 sec)

Setting up 'big' table:

Query OK, 1678 rows affected (0.17 sec)

*************************** 1. row ***************************
       Table: big_table
Create Table: CREATE TABLE `big_table` (
  `TABLE_CATALOG` varchar(512) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_SCHEMA` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `ORDINAL_POSITION` bigint(21) unsigned NOT NULL DEFAULT '0',
  `COLUMN_DEFAULT` longtext CHARACTER SET utf8,
  `IS_NULLABLE` varchar(3) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `DATA_TYPE` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `CHARACTER_MAXIMUM_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_OCTET_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_SCALE` bigint(21) unsigned DEFAULT NULL,
  `DATETIME_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_SET_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLLATION_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLUMN_TYPE` longtext CHARACTER SET utf8 NOT NULL,
  `COLUMN_KEY` varchar(3) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `EXTRA` varchar(30) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `PRIVILEGES` varchar(80) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_COMMENT` varchar(1024) CHARACTER SET utf8 NOT NULL DEFAULT ''
) ENGINE=InnoDB DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

Query OK, 1678 rows affected (0.09 sec)
Query OK, 3356 rows affected (0.07 sec)
Query OK, 6712 rows affected (0.17 sec)
Query OK, 13424 rows affected (0.44 sec)
Query OK, 26848 rows affected (0.63 sec)
Query OK, 53696 rows affected (1.72 sec)
Query OK, 107392 rows affected (3.02 sec)
Query OK, 214784 rows affected (6.28 sec)
Query OK, 429568 rows affected (13.25 sec)
Query OK, 859136 rows affected (28.16 sec)
Query OK, 1718272 rows affected (1 min 9.22 sec)

+-----------+
| count(id) |
+-----------+
|   1718272 |
+-----------+
1 row in set (1.75 sec)

Example 2. Speed and Efficiency of CREATE INDEX and DROP INDEX

Here is a sequence of statements demonstrating the relative speed of CREATE INDEX and DROP INDEX statements. For a small table, the elapsed time is less than a second whether we use the fast or slow technique, so we look at the “rows affected” output to verify which operations can avoid the table rebuild. For a large table, the difference in efficiency is obvious because skipping the table rebuild saves substantial time.

\! clear

-- Make sure we're using the new-style fast DDL.
-- Outside of benchmarking and testing, you would
-- never enable the old_alter_table setting.
set old_alter_table=0;

\! echo "=== Create and drop index (small table, new/fast technique) ==="
\! echo
\! echo "Data size (kilobytes) before index created: "
\! du -k data/test/small_table.ibd
create index i_dtyp_small on small_table (data_type);
\! echo "Data size after index created: "
\! du -k data/test/small_table.ibd
drop index i_dtyp_small on small_table;

-- Revert to the older slower DDL for comparison.
set old_alter_table=1;

\! echo "=== Create and drop index (small table, old/slow technique) ==="
\! echo
\! echo "Data size (kilobytes) before index created: "
\! du -k data/test/small_table.ibd
create index i_dtyp_small on small_table (data_type);
\! echo "Data size after index created: "
\! du -k data/test/small_table.ibd
drop index i_dtyp_small on small_table;

-- In the above example, we examined the "rows affected" number,
-- ideally looking for a zero figure. Let's try again with a larger
-- sample size, where we'll see that the actual time taken can
-- vary significantly.

-- Back to the new/fast behavior:
set old_alter_table=0;

\! echo "=== Create and drop index (big table, new/fast technique) ==="
\! echo
\! echo "Data size (kilobytes) before index created: "
\! du -k data/test/big_table.ibd
create index i_dtyp_big on big_table (data_type);
\! echo "Data size after index created: "
\! du -k data/test/big_table.ibd
drop index i_dtyp_big on big_table;

-- Let's see that again, in slow motion:
set old_alter_table=1;

\! echo "=== Create and drop index (big table, old/slow technique) ==="
\! echo
\! echo "Data size (kilobytes) before index created: "
\! du -k data/test/big_table.ibd
create index i_dtyp_big on big_table (data_type);
\! echo "Data size after index created: "
\! du -k data/test/big_table.ibd
drop index i_dtyp_big on big_table;

Running this code gives this output, condensed for brevity:

=== Create and drop index (small table, new/fast technique) ===

Data size (kilobytes) before index created:
384	data/test/small_table.ibd
Query OK, 0 rows affected (0.04 sec)

Data size after index created:
432	data/test/small_table.ibd
Query OK, 0 rows affected (0.02 sec)

=== Create and drop index (small table, old/slow technique) ===

Data size (kilobytes) before index created:
432	data/test/small_table.ibd
Query OK, 1678 rows affected (0.12 sec)

Data size after index created:
448	data/test/small_table.ibd
Query OK, 1678 rows affected (0.10 sec)

=== Create and drop index (big table, new/fast technique) ===

Data size (kilobytes) before index created:
315392	data/test/big_table.ibd
Query OK, 0 rows affected (33.32 sec)

Data size after index created:
335872	data/test/big_table.ibd
Query OK, 0 rows affected (0.02 sec)

=== Create and drop index (big table, old/slow technique) ===

Data size (kilobytes) before index created:
335872	data/test/big_table.ibd
Query OK, 1718272 rows affected (1 min 5.01 sec)

Data size after index created:
348160	data/test/big_table.ibd
Query OK, 1718272 rows affected (46.59 sec)

Example 3. Concurrent DML During CREATE INDEX and DROP INDEX

Here are some snippets of code that I ran in separate mysql sessions connected to the same database, to illustrate DML statements (insert, update, or delete) running at the same time as CREATE INDEX and DROP INDEX.

CREATE INDEX statement (in session 1):

/*
CREATE INDEX statement to run against a table while
insert/update/delete statements are modifying the
column being indexed.
*/

-- We'll run this script in one session, while simultaneously creating and dropping
-- an index on test/big_table.table_name in another session.

use test;
create index i_concurrent on big_table(table_name);

DROP INDEX statement (in session 1):

/*
DROP INDEX statement to run against a table while
insert/update/delete statements are modifying the
column being indexed.
*/

-- We'll run this script in one session, while simultaneously creating and dropping
-- an index on test/big_table.table_name in another session.

use test;
drop index i_concurrent on big_table;

DML statements (in session 2). I ran these DELETE statements while the CREATE INDEX was running.
Because DROP INDEX in this case takes less than a second, I started the DELETE first and then ran the DROP INDEX while the DML was in progress. In each case, the DDL statement waited until the transaction containing the DELETE was finished. (That’s why the timing numbers are higher than in the other examples. I waited for a while before issuing the ROLLBACK statement at the mysql command line.)

/*
Some queries and insert/update/delete statements to run against a table
while an index is being created or dropped. Previously, these operations
would have stalled during the index create/drop period and possibly
timed out or deadlocked.
*/

-- We'll run this script in one session, while simultaneously creating and dropping
-- an index on test/big_table.table_name in another session.

-- In our test instance, that column has about 1.7M rows, with 136 different values.

set autocommit = 0;
use test;

select distinct character_set_name from big_table where table_name = 'FILES';
delete from big_table where table_name = 'FILES';
select distinct character_set_name from big_table where table_name = 'FILES';

-- I'll issue the final rollback interactively, not via script,
-- the better to control the timing.
-- rollback;

Running this code gives this output, condensed for brevity:

mysql: source concurrent_ddl_create.sql
Database changed
Query OK, 0 rows affected (1 min 25.15 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql: source concurrent_ddl_drop.sql
Database changed
Query OK, 0 rows affected (24.98 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql: source concurrent_dml.sql
Query OK, 0 rows affected (0.00 sec)

Database changed
+--------------------+
| character_set_name |
+--------------------+
| NULL               |
| utf8               |
+--------------------+
2 rows in set (0.32 sec)

Query OK, 38912 rows affected (1.84 sec)

Empty set (0.01 sec)

mysql: rollback;
Query OK, 0 rows affected (1.05 sec)

Example 4. Renaming a Column

Here is a demonstration of using ALTER TABLE to rename a column. We use the new, fast DDL mechanism to change the name, then the old, slow DDL mechanism (with old_alter_table=1) to restore the original column name.

Note
  • Because the syntax for renaming a column also involves re-specifying the data type, be very careful to specify exactly the same data type to avoid a costly table rebuild. In this case, we checked the output of show create table table\G and copied any clauses such as CHARACTER SET and NOT NULL from the original column definition.

  • Again, renaming a column for a small table is fast enough that we need to examine the “rows affected” number to verify that the new DDL mechanism is more efficient than the old one. With a big table, the difference in elapsed time makes the improvement obvious.

/*
Run through a sequence of 'rename column' statements.
Because this operation involves only metadata, not table data,
it is fast for big and small tables, with new or old DDL mechanisms.
*/

\! clear

\! echo "Rename column (fast technique, small table):"
set old_alter_table=0;
alter table small_table change `IS_NULLABLE` `NULLABLE`
  varchar(3) character set utf8 not null;
\! echo "Rename back to original name (slow technique):"
set old_alter_table=1;
alter table small_table change `NULLABLE` `IS_NULLABLE`
  varchar(3) character set utf8 not null;

\! echo "Rename column (fast technique, big table):"
set old_alter_table=0;
alter table big_table change `IS_NULLABLE` `NULLABLE`
  varchar(3) character set utf8 not null;
\! echo "Rename back to original name (slow technique):"
set old_alter_table=1;
alter table big_table change `NULLABLE` `IS_NULLABLE`
  varchar(3) character set utf8 not null;
set old_alter_table=0;

Running this code gives this output, condensed for brevity:

Rename column (fast technique, small table):
Query OK, 0 rows affected (0.13 sec)

Rename back to original name (slow technique):
Query OK, 1678 rows affected (0.35 sec)

Rename column (fast technique, big table):
Query OK, 0 rows affected (0.11 sec)

Rename back to original name (slow technique):
Query OK, 1718272 rows affected (1 min 0.00 sec)

Example 5. Dropping Foreign Keys

Here is a demonstration of foreign keys, including the improved speed of dropping a foreign key constraint.

/*
Demonstrate aspects of foreign keys that are or aren't affected by the DDL improvements.
- Create a new table with only a few values to serve as the parent table.
- Set up the 'small' and 'big' tables as child tables using a foreign key.
- Verify that the ON DELETE CASCADE clause makes changes ripple from parent to child tables.
- Drop the foreign key constraints, and optionally associated indexes.
  (This is the operation that is sped up.)
*/

\! clear

-- Make sure foreign keys are being enforced, and allow
-- rollback after doing some DELETEs that affect both
-- parent and child tables.
set foreign_key_checks = 1;
set autocommit = 0;

-- Create a parent table, containing values that we know are already present
-- in the child tables.
drop table if exists schema_names;
create table schema_names (id int unsigned not null primary key auto_increment,
  schema_name varchar(64) character set utf8 not null, index i_schema (schema_name))
  as select distinct table_schema schema_name from small_table;

show create table schema_names\G
show create table small_table\G
show create table big_table\G

-- Creating the foreign key constraint isn't any faster than before.
-- It still involves a table rebuild, as illustrated by the "rows affected" figure.
alter table small_table add constraint small_fk
  foreign key i_table_schema (table_schema) references schema_names(schema_name)
  on delete cascade;
alter table big_table add constraint big_fk
  foreign key i_table_schema (table_schema) references schema_names(schema_name)
  on delete cascade;

show create table small_table\G
show create table big_table\G

select schema_name from schema_names order by schema_name;
select count(table_schema) howmany, table_schema from small_table group by table_schema;
select count(table_schema) howmany, table_schema from big_table group by table_schema;

-- schema_names is the parent table.
-- big_table is the child table.
-- (One row in the parent table can have many "children" in the child table.)
-- Changes to the parent table can ripple through to the child table.
-- For example, removing the value 'test' from schema_names.schema_name will
-- result in the removal of 20K or so rows from big_table.

delete from schema_names where schema_name = 'test';

select schema_name from schema_names order by schema_name;
select count(table_schema) howmany, table_schema from small_table group by table_schema;
select count(table_schema) howmany, table_schema from big_table group by table_schema;

-- Because we've turned off autocommit, we can still get back those deleted rows
-- if the DELETE was issued by mistake.
rollback;

select schema_name from schema_names order by schema_name;
select count(table_schema) howmany, table_schema from small_table group by table_schema;
select count(table_schema) howmany, table_schema from big_table group by table_schema;

-- All of the cross-checking between parent and child tables would be
-- deadly slow if there wasn't the requirement for the corresponding
-- columns to be indexed!

-- But we can get rid of the foreign key using a fast operation
-- that doesn't rebuild the table.
-- If we didn't specify a constraint name when setting up the foreign key, we would
-- have to find the auto-generated name such as 'big_table_ibfk_1' in the
-- output from 'show create table'.

-- For the small table, we'll drop the foreign key and the associated index.
-- Having an index on a small table is less critical.

\! echo "DROP FOREIGN KEY and INDEX from small_table:"
alter table small_table drop foreign key small_fk, drop index small_fk;

-- For the big table, we'll drop the foreign key and leave the associated index.
-- If we are still doing queries that reference the indexed column, the index is
-- very important to avoid a full table scan of the big table.
\! echo "DROP FOREIGN KEY from big_table:"
alter table big_table drop foreign key big_fk;

show create table small_table\G
show create table big_table\G

Running this code gives this output, condensed for brevity:

Query OK, 4 rows affected (0.03 sec)

*************************** 1. row ***************************
       Table: schema_names
Create Table: CREATE TABLE `schema_names` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `schema_name` varchar(64) CHARACTER SET utf8 NOT NULL,
  PRIMARY KEY (`id`),
  KEY `i_schema` (`schema_name`)
) ENGINE=InnoDB AUTO_INCREMENT=8 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

*************************** 1. row ***************************
       Table: small_table
Create Table: CREATE TABLE `small_table` (
  `TABLE_CATALOG` varchar(512) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_SCHEMA` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `ORDINAL_POSITION` bigint(21) unsigned NOT NULL DEFAULT '0',
  `COLUMN_DEFAULT` longtext CHARACTER SET utf8,
  `IS_NULLABLE` varchar(3) CHARACTER SET utf8 NOT NULL,
  `DATA_TYPE` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `CHARACTER_MAXIMUM_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_OCTET_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_SCALE` bigint(21) unsigned DEFAULT NULL,
  `DATETIME_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_SET_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLLATION_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLUMN_TYPE` longtext CHARACTER SET utf8 NOT NULL,
  `COLUMN_KEY` varchar(3) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `EXTRA` varchar(30) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `PRIVILEGES` varchar(80) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_COMMENT` varchar(1024) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1679 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

*************************** 1. row ***************************
       Table: big_table
Create Table: CREATE TABLE `big_table` (
  `TABLE_CATALOG` varchar(512) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_SCHEMA` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `ORDINAL_POSITION` bigint(21) unsigned NOT NULL DEFAULT '0',
  `COLUMN_DEFAULT` longtext CHARACTER SET utf8,
  `IS_NULLABLE` varchar(3) CHARACTER SET utf8 NOT NULL,
  `DATA_TYPE` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `CHARACTER_MAXIMUM_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_OCTET_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_SCALE` bigint(21) unsigned DEFAULT NULL,
  `DATETIME_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_SET_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLLATION_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLUMN_TYPE` longtext CHARACTER SET utf8 NOT NULL,
  `COLUMN_KEY` varchar(3) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `EXTRA` varchar(30) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `PRIVILEGES` varchar(80) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_COMMENT` varchar(1024) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`id`),
  KEY `big_fk` (`TABLE_SCHEMA`)
) ENGINE=InnoDB AUTO_INCREMENT=1718273 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

Query OK, 1678 rows affected (0.10 sec)
Query OK, 1718272 rows affected (1 min 14.54 sec)

*************************** 1. row ***************************
       Table: small_table
Create Table: CREATE TABLE `small_table` (
  `TABLE_CATALOG` varchar(512) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_SCHEMA` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  ...
  `COLUMN_COMMENT` varchar(1024) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`id`),
  KEY `small_fk` (`TABLE_SCHEMA`),
  CONSTRAINT `small_fk` FOREIGN KEY (`TABLE_SCHEMA`)
  REFERENCES `schema_names` (`schema_name`)
  ON DELETE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=1679 DEFAULT CHARSET=latin1
1 row in set (0.12 sec)

*************************** 1. row ***************************
       Table: big_table
Create Table: CREATE TABLE `big_table` (
  `TABLE_CATALOG` varchar(512) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_SCHEMA` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  ...
  `COLUMN_COMMENT` varchar(1024) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`id`),
  KEY `big_fk` (`TABLE_SCHEMA`),
  CONSTRAINT `big_fk` FOREIGN KEY (`TABLE_SCHEMA`)
  REFERENCES `schema_names` (`schema_name`)
  ON DELETE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=1718273 DEFAULT CHARSET=latin1
1 row in set (0.01 sec)

+--------------------+
| schema_name        |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+
4 rows in set (0.00 sec)

+---------+--------------------+
| howmany | table_schema       |
+---------+--------------------+
|     563 | information_schema |
|     286 | mysql              |
|     786 | performance_schema |
|      43 | test               |
+---------+--------------------+
4 rows in set (0.01 sec)

+---------+--------------------+
| howmany | table_schema       |
+---------+--------------------+
|  576512 | information_schema |
|  292864 | mysql              |
|  804864 | performance_schema |
|   44032 | test               |
+---------+--------------------+
4 rows in set (2.10 sec)

Query OK, 1 row affected (1.52 sec)

+--------------------+
| schema_name        |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
+--------------------+
3 rows in set (0.00 sec)

+---------+--------------------+
| howmany | table_schema       |
+---------+--------------------+
|     563 | information_schema |
|     286 | mysql              |
|     786 | performance_schema |
+---------+--------------------+
3 rows in set (0.00 sec)

+---------+--------------------+
| howmany | table_schema       |
+---------+--------------------+
|  576512 | information_schema |
|  292864 | mysql              |
|  804864 | performance_schema |
+---------+--------------------+
3 rows in set (1.74 sec)

Query OK, 0 rows affected (0.60 sec)

+--------------------+
| schema_name        |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+
4 rows in set (0.00 sec)

+---------+--------------------+
| howmany | table_schema       |
+---------+--------------------+
|     563 | information_schema |
|     286 | mysql              |
|     786 | performance_schema |
|      43 | test               |
+---------+--------------------+
4 rows in set (0.01 sec)

+---------+--------------------+
| howmany | table_schema       |
+---------+--------------------+
|  576512 | information_schema |
|  292864 | mysql              |
|  804864 | performance_schema |
|   44032 | test               |
+---------+--------------------+
4 rows in set (1.59 sec)

DROP FOREIGN KEY and INDEX from small_table:
Query OK, 0 rows affected (0.02 sec)

DROP FOREIGN KEY from big_table:
Query OK, 0 rows affected (0.02 sec)

*************************** 1. row ***************************
       Table: small_table
Create Table: CREATE TABLE `small_table` (
  `TABLE_CATALOG` varchar(512) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_SCHEMA` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  ...
  `COLUMN_COMMENT` varchar(1024) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1679 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

*************************** 1. row ***************************
       Table: big_table
Create Table: CREATE TABLE `big_table` (
  `TABLE_CATALOG` varchar(512) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_SCHEMA` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  ...
  `COLUMN_COMMENT` varchar(1024) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`id`),
  KEY `big_fk` (`TABLE_SCHEMA`)
) ENGINE=InnoDB AUTO_INCREMENT=1718273 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

Example 6. Changing Auto-Increment Value

Here is an illustration of increasing the auto-increment lower limit for a table column, demonstrating how this operation now avoids a table rebuild, plus some other fun facts about InnoDB auto-increment columns.

/*
If this script is run after foreign_key.sql, the schema_names table is
already set up. But to allow this script to run multiple times without
running into duplicate ID errors, we set up the schema_names table
all over again.
*/

\! clear

\! echo "=== Adjusting the Auto-Increment Limit for a Table ==="
\! echo

drop table if exists schema_names;
create table schema_names (id int unsigned not null primary key auto_increment,
  schema_name varchar(64) character set utf8 not null,
  index i_schema (schema_name))
  as select distinct table_schema schema_name from small_table;

\! echo "Initial state of schema_names table."
\! echo "AUTO_INCREMENT is included in SHOW CREATE TABLE output."
\! echo "Note how MySQL reserved a block of IDs,"
\! echo "but only needed 4 of them in this transaction,"
\! echo "so the next inserted values would get IDs 8 and 9."
show create table schema_names\G
select * from schema_names order by id;

\! echo "Inserting even a tiny amount of data can produce gaps in the ID sequence."
insert into schema_names (schema_name) values ('eight'), ('nine');

set old_alter_table=0;
\! echo "Bumping auto-increment lower limit to 20 (fast mechanism):"
alter table schema_names auto_increment=20;

\! echo "Inserting 2 rows that should get IDs 20 and 21:"
insert into schema_names (schema_name) values ('foo'), ('bar');
commit;

set old_alter_table=1;
\! echo "Bumping auto-increment lower limit to 30 (slow mechanism):"
alter table schema_names auto_increment=30;

\! echo "Inserting 2 rows that should get IDs 30 and 31:"
insert into schema_names (schema_name) values ('bletch'),('baz');
commit;

select * from schema_names order by id;

set old_alter_table=0;

\! echo "Final state of schema_names table."
\! echo "AUTO_INCREMENT value shows the next inserted row would get ID=32."
show create table schema_names\G

Running this code gives this output, condensed for brevity:

=== Adjusting the Auto-Increment Limit for a Table ===

Query OK, 0 rows affected (0.01 sec)

Query OK, 4 rows affected (0.02 sec)
Records: 4  Duplicates: 0  Warnings: 0

Initial state of schema_names table.
AUTO_INCREMENT is included in SHOW CREATE TABLE output.
Note how MySQL reserved a block of IDs,
but only needed 4 of them in this transaction,
so the next inserted values would get IDs 8 and 9.
*************************** 1. row ***************************
       Table: schema_names
Create Table: CREATE TABLE `schema_names` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `schema_name` varchar(64) CHARACTER SET utf8 NOT NULL,
  PRIMARY KEY (`id`),
  KEY `i_schema` (`schema_name`)
) ENGINE=InnoDB AUTO_INCREMENT=8 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

+----+--------------------+
| id | schema_name        |
+----+--------------------+
|  1 | information_schema |
|  2 | mysql              |
|  3 | performance_schema |
|  4 | test               |
+----+--------------------+
4 rows in set (0.00 sec)

Inserting even a tiny amount of data can produce gaps in the ID sequence.
Query OK, 2 rows affected (0.00 sec)
Records: 2  Duplicates: 0  Warnings: 0

Bumping auto-increment lower limit to 20 (fast mechanism):
Query OK, 0 rows affected (0.01 sec)

Inserting 2 rows that should get IDs 20 and 21:
Query OK, 2 rows affected (0.00 sec)

Bumping auto-increment lower limit to 30 (slow mechanism):
Query OK, 8 rows affected (0.02 sec)

Inserting 2 rows that should get IDs 30 and 31:
Query OK, 2 rows affected (0.00 sec)

Query OK, 0 rows affected (0.01 sec)

+----+--------------------+
| id | schema_name        |
+----+--------------------+
|  1 | information_schema |
|  2 | mysql              |
|  3 | performance_schema |
|  4 | test               |
|  8 | eight              |
|  9 | nine               |
| 20 | foo                |
| 21 | bar                |
| 30 | bletch             |
| 31 | baz                |
+----+--------------------+
10 rows in set (0.00 sec)

Final state of schema_names table.
AUTO_INCREMENT value shows the next inserted row would get ID=32.
*************************** 1. row ***************************
       Table: schema_names
Create Table: CREATE TABLE `schema_names` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `schema_name` varchar(64) CHARACTER SET utf8 NOT NULL,
  PRIMARY KEY (`id`),
  KEY `i_schema` (`schema_name`)
) ENGINE=InnoDB AUTO_INCREMENT=32 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

Tuesday Apr 10, 2012

InnoDB transportable tablespaces

Note: this article was originally published on http://blogs.innodb.com on April 10, 2012 by Sunny Bains.

The Problem

In the past, users could not take full advantage of the FLUSH TABLES WITH READ LOCK statement, because InnoDB simply ignored the flush-to-disk part. If, by sheer luck, the table had no dirty pages in the InnoDB buffer pool that had not yet been synced to disk, it was safe to copy the .ibd file to another location. The restore side had its own limitations and complications: the .ibd file could not simply be copied over to another server, because during import InnoDB did not fix up the metadata required for a trouble-free import. The main problems during import were:

  • If the tablespace ID on the importing instance had changed, for example because the table had been dropped and re-created, the import would fail.
  • The table and index IDs in the tablespace could differ from those on the importing server.
  • InnoDB uses a global row ID for tables without an explicit primary key. Any such row IDs in an imported tablespace had to be lower than the current maximum on the new server.
  • The maximum LSN of the tablespace had to be lower than the server’s current max LSN.
  • The maximum transaction (trx) ID in the tablespace had to be lower than the server’s current maximum trx ID.
  • Purge and change buffer issues.

In short, if an import worked, you were lucky that all of the above constraints happened to be satisfied, or your tables were effectively read-only.

The Solution

Do the flush (export) properly, and do the import properly too. During flushing we disable purge, merge all pending change buffer entries for the tablespace, and then flush all the dirty pages to disk. Note: purge remains disabled until UNLOCK TABLES; is issued. We also write out a metadata file that contains the table schema, the hostname of the exporting server, the page size of the exporting instance, the highest autoinc value in memory, and so on. This lets us check on import whether the table schema matches (currently, foreign key relationships are ignored) and also restore the autoinc value. The metadata file is created in the same directory as the tablespace, with a .cfg suffix: if the table is named customer, the metadata file will be named customer.cfg and the tablespace customer.ibd. Note: FLUSH TABLES and IMPORT only work for tables that are stored outside the system tablespace, in their own tablespace; the InnoDB config parameter innodb_file_per_table must be set when creating the table.

Because InnoDB stores data in big-endian format, tablespaces can be copied between machines with different architectures. Copying between different operating systems also works.

Example

If you want to backup a single table:

CREATE DATABASE ftwrl;

SET GLOBAL innodb_file_per_table=1;

CREATE TABLE ftwrl.t (C INT) ENGINE=InnoDB;

INSERT INTO ftwrl.t VALUES(1),(2),(3);

SELECT * FROM ftwrl.t;

FLUSH TABLES ftwrl.t WITH READ LOCK;

From the command line, copy the t.ibd and t.cfg to your backup directory. The files should be in the ftwrl/ sub-directory under datadir.
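A minimal sketch of that copy step, with placeholder paths and files (in real use, DATADIR is your MySQL data directory and t.ibd/t.cfg were produced by the server while the table is held by FLUSH TABLES ... WITH READ LOCK):

```shell
# Illustrative backup of the locked table's files. The mkdir/touch lines
# only set up stand-ins for this sketch; on a real server the files
# already exist under datadir/ftwrl/.
DATADIR=./datadir     # assumed MySQL data directory
BACKUP=./backup       # assumed backup destination
mkdir -p "$DATADIR/ftwrl" "$BACKUP"
touch "$DATADIR/ftwrl/t.ibd" "$DATADIR/ftwrl/t.cfg"   # placeholders

# The actual backup step: copy both the tablespace and its .cfg together.
cp "$DATADIR/ftwrl/t.ibd" "$DATADIR/ftwrl/t.cfg" "$BACKUP/"
```

Copy the .cfg file before issuing UNLOCK TABLES, since unlocking deletes it.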

UNLOCK TABLES; -- Note: this removes the t.cfg file.

DROP TABLE ftwrl.t;

CREATE TABLE ftwrl.t (C INT) ENGINE=InnoDB;

ALTER TABLE ftwrl.t DISCARD TABLESPACE;

The DISCARD TABLESPACE statement renames the t.ibd file to t.ibt in the same directory; for now, you have to remove this file manually. Once you have removed it, copy the backed-up tablespace t.ibd and the metadata file t.cfg to the ftwrl/ sub-directory. Then issue the following:

ALTER TABLE ftwrl.t IMPORT TABLESPACE;
CHECK TABLE ftwrl.t;
SELECT * FROM ftwrl.t;
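The restore-side file handling described above can be sketched the same way (placeholder paths and files again; the rm removes the t.ibt left behind by DISCARD TABLESPACE):

```shell
# Illustrative restore of the backed-up files into the database directory.
DATADIR=./datadir     # assumed MySQL data directory
BACKUP=./backup       # assumed backup location holding t.ibd and t.cfg
mkdir -p "$DATADIR/ftwrl" "$BACKUP"
touch "$BACKUP/t.ibd" "$BACKUP/t.cfg"       # placeholders for this sketch

rm -f "$DATADIR/ftwrl/t.ibt"                # leftover from DISCARD TABLESPACE
cp "$BACKUP/t.ibd" "$BACKUP/t.cfg" "$DATADIR/ftwrl/"
```

After the files are in place, ALTER TABLE ... IMPORT TABLESPACE picks up both the data and the .cfg metadata.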

You should be good to go. If you want to backup multiple tables together, the syntax for that is:

FLUSH TABLES t1, t2, …, tn WITH READ LOCK;

Copy the files as before, but this time copy all of the tablespaces and metadata files.

UNLOCK TABLES;

Then discard each tablespace and import it, as shown above.

Other Changes

Errors and warnings are pushed to the client so that the user doesn’t need to poke around in the server error log files to check why an operation failed. The intention is to send enough information to the user so that the problem can be clearly identified and understood.

Conclusion

The syntax for “export” is not finalised yet and may change; for now, the above examples using FLUSH TABLES WITH READ LOCK should work. To export tables that are in a foreign key relationship, you must export and import all of the tables together, otherwise you may end up with inconsistencies. The export/import doesn’t yet work on partitioned tables; this limitation will be addressed later. A matching table definition must exist in the importing server instance; currently we don’t automagically create the table using the saved metadata during import. The .cfg format is not written in stone yet; we may add more data to this file, for example for partition support.

Your feedback is important to us, so that we can improve this feature further.

Choose the Location of your InnoDB File-per-Table Tablespace

Note: this article was originally published on http://blogs.innodb.com on April 10, 2012 by Kevin Lewis.

The April 2012 InnoDB labs release introduces a new feature in InnoDB that allows you to choose the location of specific tables.  For example, you can place critical tables onto an SSD drive while leaving the system tablespace on a hard drive.  Conversely, you can store your primary database files on an SSD and put a seldom-used but very large archive or reference table on a larger, cheaper hard drive.

InnoDB now makes use of the following existing syntax in MySQL:

CREATE TABLE  . . .  DATA DIRECTORY = ‘absolute path of data directory’;

CREATE TABLE  . . .  PARTITION . . . DATA DIRECTORY = ‘absolute path of data directory’;

This syntax is used in the MyISAM and Archive engines to make use of symbolic links on those operating systems that support them.  But InnoDB can use this syntax on any OS, since it stores the path in a new system table called SYS_DATAFILES.  There is also a new system table called SYS_TABLESPACES.  Both of these can be viewed in the April labs release of 5.6 through the INFORMATION_SCHEMA. For example:

mysql> SET GLOBAL innodb_file_per_table=on;
Query OK, 0 rows affected (0.00 sec)
mysql> CREATE TABLE t1 (a int KEY, b text) DATA DIRECTORY='/ssd1/my_data';
Query OK, 0 rows affected (0.12 sec)
mysql> SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_TABLESPACES;
+-------+---------+------+-------------+----------------------+-----------+---------------+
| SPACE | NAME    | FLAG | FILE_FORMAT | ROW_FORMAT           | PAGE_SIZE | ZIP_PAGE_SIZE |
+-------+---------+------+-------------+----------------------+-----------+---------------+
|     1 | test/t1 |    0 | Antelope    | Compact or Redundant |     16384 |             0 |
+-------+---------+------+-------------+----------------------+-----------+---------------+
1 row in set (0.00 sec)
mysql> SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_DATAFILES;
+-------+---------------------------+
| SPACE | PATH                      |
+-------+---------------------------+
|     1 | /ssd1/my_data/test/t1.ibd |
+-------+---------------------------+
1 row in set (0.00 sec)

Notice that a directory named for the database is added after the DATA DIRECTORY provided.  This is the same as what is done in the normal data directory.  It allows you to choose the same remote location for all your tablespaces, even if tables in different databases have the same name.

The last 4 columns in innodb_sys_tablespaces are interpretations of the tablespace flags field. Tablespace flags do not distinguish between Compact and Redundant row format, so you see what is displayed above.  If you want to see the row format of the table, innodb_sys_tables now also interprets the table flags:

mysql> SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_TABLES WHERE name like 'test%';
+----------+---------+------+--------+-------+-------------+------------+---------------+
| TABLE_ID | NAME    | FLAG | N_COLS | SPACE | FILE_FORMAT | ROW_FORMAT | ZIP_PAGE_SIZE |
+----------+---------+------+--------+-------+-------------+------------+---------------+
|       24 | test/t1 |   65 |      5 |     1 | Antelope    | Compact    |             0 |
+----------+---------+------+--------+-------+-------------+------------+---------------+
1 row in set (0.00 sec)

In addition to SYS_DATAFILES, the absolute file path to the tablespace is also written to a text file in the normal datafile location of the tablespace.  This is also the location of the FRM file written by the MySQL server.  It has the same name as the FRM file, except it has the extension ‘.isl’, which stands for InnoDB Symbolic Link.  It is there to support recovery, which happens before the system tables are opened.  For the example above, the sole contents of t1.isl are:

/ssd1/my_data/test/t1.ibd

The ISL file also allows a DBA to move one of these remote tablespaces since ALTER TABLE … DATA DIRECTORY =  ‘. . .’ is currently not supported.  In order to move a tablespace to another location, or to port an entire database along with its remote tablespaces, you must edit this ISL file after moving the OS files.  The ISL file is a simple text file that contains the absolute path and filename of the tablespace.  Just change that to the new location and save your changes.  Do not leave a copy of the tablespace in the old location.

If you have a large existing database that uses file-per-table tablespaces (files with the ‘.ibd’ extension), it is possible to move these from the default location next to the FRM file to a remote location such as a new SSD drive.  But you must shut down the MySQL server first. After it is shut down, move the IBD tablespace to the location you would like and create a text file in its place with the same name, but with an ‘.isl’ extension, whose contents are the full file path of the ibd file.  Then restart the server.  It will read the isl file, find the tablespace, and update the internal system tables to record where the file resides.
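The manual relocation described above amounts to just a file move plus a one-line .isl file. A minimal Python sketch of those two steps, using temporary directories in place of a real datadir and SSD mount (all paths here are hypothetical stand-ins):

```python
import os
import shutil
import tempfile

# Hypothetical layout: a datadir with test/t1.ibd, and a new remote location.
root = tempfile.mkdtemp()
datadir = os.path.join(root, "datadir", "test")
remote = os.path.join(root, "ssd1", "my_data", "test")
os.makedirs(datadir)
os.makedirs(remote)

ibd = os.path.join(datadir, "t1.ibd")
open(ibd, "wb").close()  # stand-in for the real tablespace file

# 1) With the server shut down, move the .ibd to the remote location.
moved = os.path.join(remote, "t1.ibd")
shutil.move(ibd, moved)

# 2) Create an .isl file in the old location whose sole content is the
#    absolute path of the moved tablespace.
isl = os.path.join(datadir, "t1.isl")
with open(isl, "w") as f:
    f.write(moved + "\n")
```

On restart, the server reads the .isl file and finds the tablespace at the path it contains.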

Conclusion

This new feature provides a valuable option to configure your database to make the best use of the hardware available.

InnoDB 2012 Spring Labs Release

Note: this article was originally published on http://blogs.innodb.com on April 10, 2012 by Calvin Sun.

The InnoDB team is pleased to announce the 2012 Spring labs release, with several much-anticipated new features and performance enhancements. Please download mysql-5.6-labs-april-2012 from MySQL Labs and give it a try. Do not forget to provide your feedback.

The 2012 Spring labs release on MySQL Labs consists of the following InnoDB new features, which are not in the newly released MySQL 5.6.5 DMR yet:

  • Online DDL: some of the DDLs are now truly online, including ADD INDEX, SET DEFAULT, and DROP FOREIGN KEY.
  • Memcached plugin: with additional features, such as SASL support.
  • Transportable tablespaces: allow users to export data files and import them into another MySQL instance.
  • Persistent statistics ON/OFF switch: the ability to control persistent statistics at the table level.
  • Option for specifying the location of InnoDB tables: allows users to choose the location of specific tables.

This labs release also includes several performance and scalability improvements, especially on modern CPUs:

  • Reduced false sharing
  • Configurable fast mutexes
  • my_hash_sort_simple() loop unrolling
  • Improved adaptive flushing
  • Improved neighbor flushing

With those improvements, InnoDB read-only performance reaches a new high. Please see Mikael’s blog for some of the improvements, and DimitriK’s blog for the benchmark results. The InnoDB team will also continue publishing technical details on this site in the coming days.

We intend to include those new features and improvements in future development milestone releases and GA releases. Thanks for your interest in InnoDB!

Friday Dec 23, 2011

Improving InnoDB memory usage continued

Note: this article was originally published on http://blogs.innodb.com on Dec 23, 2011 by Vasil Dimov.

Continues from Improving InnoDB memory usage.

Here are some numbers from the fixups described in the above article:

The workload consists of 10 partitioned tables, each one containing 1000 partitions. This means 10’000 InnoDB tables. We truncate the tables, then restart mysqld and run:

1. INSERT a single row into each of the 10 tables
2. SELECT * from each table
3. FLUSH TABLES (this causes the tables to be closed and reopened on the next run)
4. wait for 10 seconds

We repeat the above steps 10 times. Here is the total memory consumption by mysqld with a 1GB InnoDB buffer pool during the workload:

In the fixed case (green line) you can clearly see the peaks when the commands 1. – 4. are run and the valley bottoms during the 10 seconds waits.

Notice that no memory leaks were fixed in these improvements. This is all about the interaction between InnoDB and the system allocator. InnoDB always frees memory it has allocated, but the problem was that the system allocator could not reuse the freed memory optimally due to memory fragmentation. We mainly reduced the number and sizes of some of the hottest allocations, mitigating the malloc/free storm.

Notice also that this is a somewhat unusual workload – 10’000 InnoDB tables, a row size of about 64KB, and frequent FLUSH TABLES.

Tuesday Dec 20, 2011

Improving InnoDB memory usage

Note: this article was originally published on http://blogs.innodb.com on Dec 20, 2011 by Vasil Dimov.

Last month we did a few improvements in InnoDB memory usage. We solved a challenging issue about how InnoDB uses memory in certain places of the code.

The symptom of the issue was that under certain workloads the memory used by InnoDB kept growing indefinitely, until the OOM killer kicked in. It looked like a memory leak, but Valgrind wasn’t reporting any leaks, and the issue was not reproducible on FreeBSD – it only happened on Linux (see Bug#57480). The latter fact especially led us to think that there was something in the InnoDB memory usage pattern that revealed a nasty side of Linux’s otherwise good-natured memory manager.

It turned out to be an interesting case of memory fragmentation caused by a storm of malloc/free calls of various sizes. We had to track and analyze each call to malloc during the workload, including the code path that led to it. We collected a huge set of analysis data – some code paths were executed many 10’000s of times! A hurricane of allocations and deallocations! We looked at the hottest ones, hoping that some of them were unnecessary and could be eliminated, avoided, minimized or grouped together. Luckily there were plenty of them!

After extensive testing we made numerous improvements: allocating the smallest chunks of memory from the stack instead of from the heap, grouping allocations together where possible, removing unnecessary allocations altogether, estimating exactly how much memory will be consumed by a given operation and allocating it in advance, and more.

This not only fixed Bug#57480 but improved InnoDB memory usage in general.

Note: the fix is not in the 5.6.4 release.

Continues with some numbers here.

Better scaling of read-only workloads

Note: this article was originally published on http://blogs.innodb.com on Dec 20, 2011 by Sunny Bains.

The problem and its cause

There have been several complaints over the years about InnoDB’s inability to scale beyond 256 connections. One of the main issues behind this scalability bottleneck was the read view creation that is required for MVCC (Multi Version Concurrency Control) to work. When the user starts a transaction this is what InnoDB does under the hood:

  • Create or reuse a transaction instance – usually it is reused from a pool (trx_sys_t::mysql_trx_list)
  • Initialize the transaction start time and assign a rollback segment
  • Append the transaction to an active transaction list ordered on trx_t::id in descending order

The append to the trx_sys_t::trx_list and the corresponding remove during commit are covered by trx_sys_t::mutex. After the transaction is “started”, and if the transaction has an isolation level greater than or equal to REPEATABLE-READ, then before the first record/row is accessed by the transaction, InnoDB creates a view (snapshot) of the running system state. It does this by examining the transactions that are active at the time of the MVCC snapshot, so that their changes can be excluded from the creating transaction’s read view. This read view creation is also covered by the trx_sys_t::mutex. As the number of active transactions in the system increases, this read view creation takes longer and longer. This increases the wait times on the trx_sys_t::mutex (during transaction start and commit), and once threads are forced to wait on a condition variable (in contrast to simply spinning while waiting for the mutex) the system throughput drops dramatically.
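The visibility check a read view performs can be sketched as follows; a simplified Python model (not InnoDB’s actual data structures), just to show why the snapshot must copy the active transaction IDs:

```python
class ReadView:
    """Snapshot of transaction state at view-creation time (simplified)."""

    def __init__(self, active_trx_ids, next_trx_id):
        # IDs of transactions active when the view was created: their
        # changes must be excluded. Copying this set is the work done
        # under the trx_sys mutex, and it grows with the active list.
        self.active = set(active_trx_ids)
        # Any transaction with id >= next_trx_id started after the
        # snapshot, so its changes are invisible too.
        self.low_limit = next_trx_id
        self.up_limit = min(self.active) if self.active else next_trx_id

    def changes_visible(self, trx_id):
        if trx_id < self.up_limit:
            return True        # committed before any still-active trx
        if trx_id >= self.low_limit:
            return False       # started after the snapshot was taken
        return trx_id not in self.active

view = ReadView(active_trx_ids=[5, 7], next_trx_id=9)
assert view.changes_visible(3)       # committed before the snapshot
assert not view.changes_visible(5)   # still active at snapshot time
assert view.changes_visible(6)       # committed, not in the active set
assert not view.changes_visible(9)   # started after the snapshot
```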

The solution

While investigating this problem there were two observations that I made:

  • Read only transactions should not be considered in the MVCC snapshot
  • Auto commit non-locking read-only selects should not be in the trx_sys_t::trx_list at all

For the first to work we need to tag the transactions as READ ONLY when the transaction is started e.g.,

  •       START TRANSACTION READ ONLY;

I will not be discussing this functionality in this blog because the syntax for this doesn’t exist in MySQL (yet). However, the functionality to handle this case exists in InnoDB and is in the 5.6.4 release. Once the above syntax exists, InnoDB can take advantage of it trivially. What I want to talk about is the second case. This special case can be detected by InnoDB using existing state information and handled transparently without any syntax change in user applications, and it is fully functional in the 5.6.4 release.

InnoDB transaction life cycle redesign

We split the trx_sys_t::trx_list (the active transaction list) into two: trx_sys_t::ro_trx_list and trx_sys_t::rw_trx_list. Only transactions that are in the trx_sys_t::rw_trx_list are taken into consideration when creating the MVCC snapshot. For a read-only-heavy workload the benefits are obvious: the smaller size of the RW active transaction list makes the read view creation for MVCC (and purge) very fast. For auto-commit read-only non-locking selects, the additional benefit is that we don’t need to acquire the trx_sys_t::mutex at all, because we don’t put them on the active list. This removes the bottleneck around the trx_sys_t::mutex and improves concurrency and scalability.

Auto-commit read-only non-locking transactions go from state NOT STARTED -> ACTIVE -> NOT STARTED, in contrast to locking read-only (and read-write) transactions which go from state NOT STARTED -> ACTIVE -> COMMIT TO MEMORY -> NOT STARTED. The additional advantage in skipping the COMMIT TO MEMORY state is that we know that they cannot acquire any locks and therefore it is pointless to acquire the lock_sys_t::mutex and attempt lock release. Also, during COMMIT because they are not on any active transaction list we don’t need to acquire the trx_sys_t::mutex to remove them from the list, improving concurrency and performance further.

Changes to transaction state visibility

Currently (5.6.4) InnoDB doesn’t display the auto-commit read-only non-locking selects in “SHOW ENGINE INNODB STATUS“; however, they are visible in the INFORMATION_SCHEMA.innodb_trx table. The innodb_trx table has two new fields that can be queried to determine whether a transaction is tagged as a read-only transaction, and additionally whether it qualifies for the special handling of auto-commit read-only non-locking selects. The new fields are:

  • trx_read_only – 0 or 1 (INT)
  • trx_autocommit_non_locking – 0 or 1 (INT)

Some additional minor tweaks: read-only transactions aren’t assigned a rollback segment, and if they are flagged as auto-commit non-locking selects then we only set the start time once every 32 times; this reduces the overhead of a system call.

Test results

These tests were run by Alexey Stroganov (a.k.a. Ranger) using Sysbench (with –oltp-skip-trx=on); wl6046 refers to the internal worklog number of this performance fix. Note how InnoDB scales easily up to 4K threads on the 24 core host and 1.5K threads on the 8 core host; there is very little (negligible) drop in the TPS as the number of threads is cranked up. In particular, note how the peak TPS on the 24 core host is nearly double :-)

Test POINT_SELECT
  • QUERY - SELECT c FROM sbtest WHERE id=N


Test SIMPLE_RANGES

  • QUERY – SELECT c FROM sbtest WHERE id BETWEEN N AND M

The same result holds for SIMPLE RANGES. At lower thread levels 5.6 has lower TPS than 5.5 on the 8 core host, but that is due to another known and unrelated issue that is currently being actively worked on.

Conclusion

This is a significant step forward in InnoDB read-only performance and will allow InnoDB to scale up to a very high number of concurrent queries and take advantage of a greater number of processors with the improved parallelism. Note: mixed workloads with a read-only-heavy component will also benefit from this improvement – currently, however, only when the read-only transactions are auto-commit non-locking selects. The better news is that with this split of the active transaction list into two, we can now optimize deadlock detection, lock release, and transaction scheduling around lock grants and waits. Stay tuned!

InnoDB Full-Text Search is in MySQL 5.6.4

Note: this article was originally published on http://blogs.innodb.com on Dec 20, 2011 by Jimmy Yang.

InnoDB full-text search (FTS) is finally available in the MySQL 5.6.4 release. The feature has been on trial through MySQL’s summer labs release, so several blogs already cover it. In this blog, we will leave the “how to” part of the feature to those blogs and focus on some important characteristics of this new feature, so you will have a better understanding when trying it out.

The InnoDB Full-text Index as an Inverted Index

When it comes to the basic design, InnoDB takes the traditional approach to implementing a full-text index: a so-called “inverted index”. It is composed of a set of auxiliary “index tables” that store “word”/“Doc ID” pairs, as well as each word’s position in the original text.

Incoming text strings in inserted records are extracted, tokenized, and decomposed into individual words, and these words are then inserted into the auxiliary “index tables” along with their position info and the Doc ID associated with the record. With the position info, we are able to support the proximity search that is lacking in MyISAM FTS.
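A toy model of such an inverted index, sketched in Python (the tokenizer and table layout here are illustrative only, not InnoDB’s):

```python
from collections import defaultdict

def tokenize(text):
    # Trivial tokenizer: lowercase words paired with their word position.
    return list(enumerate(text.lower().split()))

# word -> list of (doc_id, position) pairs: the "inverted index"
index = defaultdict(list)

def insert_document(doc_id, text):
    for pos, word in tokenize(text):
        index[word].append((doc_id, pos))

insert_document(1, "InnoDB full text search")
insert_document(2, "full text index in InnoDB")

# The stored positions are what make proximity search possible: find
# documents where two words occur within `distance` positions.
def near(w1, w2, distance):
    hits = set()
    for d1, p1 in index[w1]:
        for d2, p2 in index[w2]:
            if d1 == d2 and abs(p1 - p2) <= distance:
                hits.add(d1)
    return hits

print(sorted(near("full", "search", 3)))  # -> [1]
```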

Create the Full-text Index in Parallel

The full-text index is created in parallel. By default, the parallel degree is two, which means two threads are used to tokenize, sort, and perform the final insertion into the full-text “index tables”.

To support parallel index creation and future parallel processing, InnoDB partitions the “full-text index” into six auxiliary “index tables”. The words are divided among these tables based on their first character’s charset sort weight. Currently, the partition criteria are hard-coded, targeted at Latin characters. In the future, we will allow users to define their own partition criteria, to better support other character sets.

Batching the insert value with InnoDB Full-text Index’s “index cache”

As just discussed, when inserting into an “inverted index”, each inserted string is tokenized and decomposed into individual words before being inserted into the auxiliary tables. A single insertion could thus result in many small insertions into the inverted index tables. This magnifies the insertion operations by several or even dozens of fold, making a normal insertion disruptive for concurrent access to the table.

Our solution is to build an “internal” FTS “index cache” to temporarily cache these insertions, and batch-flush them to disk once this index cache is full. This avoids frequent updates to the “Index Table” during busy inserts and updates; the index table only needs to be synchronized when the index cache is full. The batching technique also avoids repetitively storing the same word, minimizing the number of entries for each word in the “Index Table”. So instead of flushing each word with a single “ilist” entry (with a single Doc ID), it batches the results from many inserted documents, creating an “ilist” with multiple Doc ID/position pairs, before flushing this info to disk as a single entry. This reduces the redundancy and makes the “Index Table” smaller.

The index cache is index-specific, and a configuration parameter (innodb_ft_cache_size) controls the size of the cache. It stores the same information as the auxiliary tables, using a red-black tree data structure. However, this cache is different from an index tree in that it is only used to cache the words of the inserted documents; words on disk do not need to be brought back into this cache before they can be queried. In other words, the words in the “Index Table” are queried directly, just as we query relational tables, and the results from the “Index Table” are merged (a union) with the query results from the index cache before being sent back to the user.
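The cache-then-flush behavior can be sketched roughly like this; a simplified Python model in which a dict stands in for the red-black tree cache, another dict stands in for the on-disk “Index Table”, and the cache limit is an arbitrary stand-in for innodb_ft_cache_size:

```python
from collections import defaultdict

DISK = defaultdict(dict)   # word -> {doc_id: [positions]} ("Index Table")
CACHE = defaultdict(dict)  # in-memory index cache
CACHE_LIMIT = 4            # stand-in for innodb_ft_cache_size

def cache_size():
    return sum(len(docs) for docs in CACHE.values())

def insert(doc_id, words):
    # Tokenized words go into the cache first, not straight to "disk".
    for pos, w in enumerate(words):
        CACHE[w].setdefault(doc_id, []).append(pos)
    if cache_size() >= CACHE_LIMIT:
        flush_cache()

def flush_cache():
    # Batch merge: each word's entry gains many doc_id/position pairs
    # at once, instead of one tiny write per inserted word.
    for w, docs in CACHE.items():
        for doc_id, positions in docs.items():
            DISK[w].setdefault(doc_id, []).extend(positions)
    CACHE.clear()

def query(word):
    # Union of the on-disk results and the not-yet-flushed cached results.
    result = {d: list(p) for d, p in DISK[word].items()}
    for doc_id, positions in CACHE[word].items():
        result.setdefault(doc_id, []).extend(positions)
    return result

insert(1, ["hello", "world"])
insert(2, ["hello", "again"])   # reaches the limit, triggers a flush
insert(3, ["hello"])            # stays in the cache for now
print(sorted(query("hello")))   # -> [1, 2, 3]
```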

How InnoDB Full-text Index handles Deletes

Similar to inserts, each delete of a record with a full-text index could result in many small “deletes” if we had implemented it as normal deletes. Again, this could be very disruptive.

To avoid such disruptive operations, the InnoDB full-text index uses a new scheme that only logs the Doc IDs of the deleted documents in a separate “DELETED” auxiliary table. The actual “indexed” records still remain in the FTS index. However, the deleted Doc IDs are filtered out of the final results by consulting the “DELETED” table before returning the query result.

The benefit of this design is obvious: the delete operation becomes trivial and fast, with minimum impact on the index itself. The shortcoming is that the index does not shrink along with record deletion. However, the benefits far outweigh the drawbacks.
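A minimal sketch of this delete-filtering scheme (simplified Python, with word positions omitted for brevity):

```python
# Inverted index: word -> set of Doc IDs (positions omitted for brevity)
index = {"database": {1, 2, 3}, "engine": {2, 3}}

# Auxiliary "DELETED" table: Doc IDs logged at DELETE time. The indexed
# entries themselves are left in place.
deleted = set()

def delete_document(doc_id):
    deleted.add(doc_id)   # trivial and fast: no index maintenance at all

def search(word):
    # Filter the deleted Doc IDs out of the result before returning it.
    return sorted(index.get(word, set()) - deleted)

delete_document(2)
print(search("database"))   # -> [1, 3]
print(search("engine"))     # -> [3]
```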

To clean up the deleted records in the index, you can resort to the InnoDB optimize table utility, which rebuilds the full-text index online, as discussed in the next section.

Index Optimization

Running OPTIMIZE TABLE on the full-text index essentially rebuilds the full-text index, leaving the “deleted” docs out of the new index. This operation does two things:

1) Remove the deleted Doc IDs from each word’s Doc ID/position pair list (ilist).

2) Consolidate multiple entries for the same word into one (or fewer) entries where possible by consolidating their Doc ID/position pair lists (ilists), so that each word has as few entries as possible.
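These two steps can be sketched together (a simplified Python model of the rebuild, not InnoDB’s actual on-disk format):

```python
# word -> list of ilists of (doc_id, position) pairs; a word may have
# accumulated several entries from earlier batch flushes.
index = {
    "mysql": [[(1, 0), (2, 3)], [(3, 1)]],   # two entries for one word
    "innodb": [[(2, 0)]],
}
deleted = {2}   # Doc IDs logged in the "DELETED" table

def optimize(index, deleted):
    optimized = {}
    for word, entries in index.items():
        # 1) drop deleted Doc IDs; 2) consolidate all entries into one ilist
        ilist = [(d, p) for entry in entries for (d, p) in entry
                 if d not in deleted]
        if ilist:
            optimized[word] = [sorted(ilist)]
    return optimized

print(optimize(index, deleted))
# -> {'mysql': [[(1, 0), (3, 1)]]}
```

Note that “innodb” disappears entirely: its only posting pointed at a deleted document.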

Currently, we overload the “OPTIMIZE TABLE” command with this optimization operation, and it runs only if you turn on the “innodb_optimize_fulltext_only” system configuration variable:

mysql> set global innodb_optimize_fulltext_only=1;

mysql> optimize table articles;

Since the table could be large, and the optimization (reorganizing and rebuilding the index) may take a tremendous amount of time, we allow the optimization to be done in stages. The system configuration variable “innodb_ft_num_word_optimize” specifies how many words to optimize for each “OPTIMIZE TABLE” command. Its default value is 2000, meaning each run optimizes 2000 words. When the next “OPTIMIZE TABLE” command is issued, the server continues the process from where it left off in the last run.

InnoDB Full-text search Transaction Behavior

As you can see from the last two sections, the main idea behind InnoDB full-text index DML is to batch the operations as much as possible. This means there are some special characteristics when it comes to transaction handling.

In fact, all of the DML operations (insert/delete/update) involving full-text indexed columns are processed at transaction commit time. For example, for an insertion, the tokenization of the inserted strings is performed only at commit time; as a result, a full-text search only sees committed data. This kind of “delayed” processing/synchronization behavior is present in the text handling of all transactional DBMSes; in InnoDB’s case it happens at transaction commit time.

InnoDB FTS Document ID

None of the above design can work without the support of a so-called FTS Doc ID identifier. The mapping from a word back to the original table record goes through this unique ID, so there must be a Doc ID column in any table with a full-text index.

In our case, it is represented by the “FTS_DOC_ID” (uppercase required) column. If this column is not defined, InnoDB automatically adds it to the user table when creating the full-text index. The column itself must be of type “BIGINT UNSIGNED NOT NULL”, with a unique index named FTS_DOC_ID_INDEX. Defining this column during table creation saves considerable time when creating the full-text index after loading data, since InnoDB does not need to copy the table to add the column (and also rebuild all indexes, including the primary index). In that case, the user is responsible for properly managing the Doc ID column so that documents are not inappropriately represented. If create-index performance is not a concern, you can leave Doc ID management to InnoDB by leaving this column out: InnoDB will add “FTS_DOC_ID” as a hidden column, and the “FTS_DOC_ID_INDEX” will also be created automatically.

Stopword Handling

Stopwords are common or trivial words that are omitted from the full-text index. The InnoDB FTS provides two sources of stopwords:

1) InnoDB predefined default stopwords – If no user stopword list is defined, this default stopwords list is used.

You can view this list by querying the table INFORMATION_SCHEMA.INNODB_FT_DEFAULT_STOPWORD:

mysql> select * from INNODB_FT_DEFAULT_STOPWORD;

2) User-defined stopwords – You can define your own stopwords by creating a table with a single VARCHAR column named “value”, and pointing the global variable “innodb_ft_server_stopword_table” to this table. MySQL loads stopwords from this user table, rather than the default stopword list, when creating the FTS index. An example:

# Define a correctly formatted user stopword table
create table user_stopword(value varchar(30)) engine = innodb;

# Point to this stopword table with "db name/table name"
set global innodb_ft_server_stopword_table = 'test/user_stopword';

Ranking

Currently, we use a very simple ranking mechanism (term frequency / inverse document frequency) for document ranking. This means that the more often a word appears in a document, and the less frequently it appears across all documents, the higher the selected document is ranked. We do have plans to add a more sophisticated ranking mechanism and to allow users to supply weights for the ranking calculation too.
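The described ranking can be sketched in a few lines of Python (a generic TF-IDF computation over a toy corpus, not InnoDB’s exact formula):

```python
import math

docs = {
    1: "the quick brown fox",
    2: "the lazy dog",
    3: "quick quick fox",
}

def rank(word, docs):
    # term frequency * inverse document frequency, per document
    containing = [d for d, text in docs.items() if word in text.split()]
    idf = math.log(len(docs) / len(containing))
    scores = {}
    for d in containing:
        tf = docs[d].split().count(word)
        scores[d] = tf * idf
    return sorted(scores.items(), key=lambda kv: -kv[1])

# "quick" appears twice in doc 3 and once in doc 1, so doc 3 ranks first.
print([doc for doc, score in rank("quick", docs)])  # -> [3, 1]
```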

Limitation

As the first release of InnoDB FTS, it provides limited and very basic functionality for full-text searches, and it has some limitations worth mentioning:

  1. It does not yet support stemming.
  2. For ideographic languages such as CJK (Chinese, Japanese and Korean), which do not have word delimiters, InnoDB FTS does not yet support N-GRAM parsing (similar to MyISAM FTS).
  3. Although the use of multiple character sets within a single table is supported, all columns in a FULLTEXT index must use the same character set and collation (similar to MyISAM FTS).
  4. It does not support external parser plugins.
  5. The ranking mechanism is relatively simple.

Summary

InnoDB full-text search gives InnoDB an important capability for using MySQL to handle text documents, and we will develop it further to continue increasing its performance and usability.

InnoDB 5.6.4 supports databases with 4k and 8k page sizes

Note: this article was originally published on http://blogs.innodb.com on Dec 20, 2011 by Kevin Lewis.

In the 5.6.4 release it is now possible to create an InnoDB database with 4k or 8k page sizes in addition to the original 16k page size. Previously, it could be done by recompiling the engine with a different value for UNIV_PAGE_SIZE_SHIFT and UNIV_PAGE_SIZE. With this release, you can set –innodb-page-size=n when starting mysqld, or put innodb_page_size=n in the configuration file in the [mysqld] section where n can be 4k, 8k, 16k, or 4096, 8192, 16384.

The support of smaller page sizes may be useful for certain storage media such as SSDs. Performance results can vary depending on your data schema, record size, and read/write ratio. But this provides you more options to optimize your performance.

When this new setting is used, the page size is set for all tablespaces used by that InnoDB instance. You can query the current value with:

SHOW VARIABLES LIKE 'innodb_page_size';
or
SELECT variable_value FROM information_schema.global_status WHERE LOWER(variable_name) = 'innodb_page_size';

It is a read-only variable while the engine is running, since it must be set before InnoDB starts up and creates a new system tablespace. That happens when InnoDB does not find ibdata1 in the data directory. If you start mysqld with a page size other than the standard 16k, the error log will contain something like this:

111214 15:55:05 InnoDB: innodb-page-size has been changed from the default value 16384 to 4096.

If your system tablespace already exists using one page size and innodb-page-size is set to something else, the engine will not start. There will be a message logged like this:

111214 16:06:51 InnoDB: Error: data file .\ibdata1 uses page size 4096,
111214 16:06:51 InnoDB: but the start-up parameter is innodb-page-size=16384

InnoDB knows the page size used in an existing tablespace created by version 5.6.4 because it stamps that page size in the header page. But this is not readable to older engines, of course. If an older engine opens a database with a page size other than 16k, it will report a corrupted page and quit.

All features in InnoDB work the same with smaller page sizes. But some limits are affected. The maximum record size is proportionately less with smaller pages. Record length limits are calculated within InnoDB based on a variety of factors including row type, column type and length, number of columns, secondary indexes, index prefix lengths, and of course, the page size. The main record is stored in a clustered index and a minimum of two records must fit into each page. So the maximum record length for 8k pages is about half that of 16k pages and the maximum record size with 4k pages is about half that of 8k pages.

In addition to record lengths, the maximum key lengths are proportionately smaller. For 16k pages, MySQL prevents key definitions from containing over 3072 bytes of data. This means that the primary key in the clustered index cannot use more than 3072 bytes of data. Secondary indexes can only contain up to 3072 bytes of data in the key also. But within a secondary index, InnoDB must store the primary key as the record pointer.  So if you have a 3072 byte primary key and a 3072 byte secondary key, each entry in the secondary index contains 6144 bytes of data plus internal record overhead. The amount of internal overhead is dependent upon the column types, but this gets the secondary record close to half the page size which is its natural limit. So this limit of 3072 bytes per key is less than but close to what InnoDB would have to impose on a table by table basis.  Based on that, InnoDB now reports to MySQL that any key defined for 8k page sizes must be less than 1536 bytes.  Likewise, the maximum key length when InnoDB uses 4k page sizes is 768 bytes.
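The proportionality described above can be expressed directly; a small Python sketch of the key-length limits quoted in the text (the helper name is made up for illustration):

```python
# Maximum key length per page size, per the limits quoted above:
# 3072 bytes at 16k pages, halving as the page size halves.
def max_key_len(page_size):
    return page_size * 3072 // 16384

for ps in (4096, 8192, 16384):
    print(ps, max_key_len(ps))   # -> 768, 1536, 3072
```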

If you have a database schema with any large records or keys defined, you may not be able to use smaller page sizes. Even if your records barely fit in the clustered index page, it may not be advisable to use these smaller pages, because the btree will be a lot deeper.  For example, if only 2 records fit on a page and there are 1,000,000 records, leaf pages are 20 levels deep, meaning InnoDB will need to read 20 pages to find a leaf page.  If that were on 4k pages, then using the same table on 16k pages would give 8 records per page, and the leaf pages would only be 7 levels down.
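The depth arithmetic in this example can be checked with a rough model (assuming a uniform fanout equal to the records per page, which is a simplification of a real btree):

```python
import math

def btree_levels(records, records_per_page):
    # Number of pages read root-to-leaf when every page fans out to
    # `records_per_page` entries (a rough model, not InnoDB internals).
    return math.ceil(math.log(records, records_per_page))

# 2 records per 4k page vs 8 records per 16k page, 1,000,000 records:
print(btree_levels(1_000_000, 2))   # -> 20
print(btree_levels(1_000_000, 8))   # -> 7
```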

There is a trick to reducing the size of records on the clustered index page;
1) Use Dynamic or Compressed row format.
2) Convert VARCHAR fields to TEXT fields.  (VARBINARY can be converted to BLOB)

There are 4 ROW FORMATS in InnoDB. The first two, Redundant and Compact, which are considered the Antelope file version, store the first 768 bytes of each long field in the clustered record.  The Barracuda file version consists of the Compressed and Dynamic ROW FORMATS. These have the ability to put all of a VARCHAR, VARBINARY, TEXT or BLOB field onto a separate BLOB page for storage.  So one good way to prepare a table structure to use smaller page sizes is to use Dynamic or Compressed row format.

The decision of how big a record will be in the clustered record is made during the INSERT or UPDATE.  Each record is evaluated for its own length.  If the record is too long to fit in half the page, InnoDB will look for the longest actual field in that record and put as much of that field off-page as possible based on the row type.  Then if it is still too long, it will move the next longest remaining field off-page as well.  Since this is done when the record is added to the page, different records may have different columns stored off-page when there are multiple long fields in the record.
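That selection loop can be sketched as follows (a hypothetical simplification, not InnoDB's actual code; the 20-byte off-page pointer size and the field names are illustrative assumptions):

```python
# Rough sketch of the off-page selection described above: while the record
# is longer than half the page, move the longest remaining field off-page,
# leaving only a small pointer in the clustered record.
OFFPAGE_POINTER = 20  # bytes kept in-record for an off-page field (assumed)

def choose_offpage_fields(field_lengths, page_size):
    in_record = dict(field_lengths)
    offpage = []
    while in_record and sum(in_record.values()) > page_size // 2:
        longest = max(in_record, key=in_record.get)
        if in_record[longest] <= OFFPAGE_POINTER:
            break  # nothing left worth moving off-page
        offpage.append(longest)
        in_record[longest] = OFFPAGE_POINTER
    return offpage, sum(in_record.values())

# Hypothetical record with three fields on a 16k page:
moved, length = choose_offpage_fields({"a": 5000, "b": 3000, "c": 200}, 16384)
print(moved, length)  # the longest field "a" goes off-page first
```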

VARCHAR/VARBINARY fields are treated like TEXT/BLOB fields if they are over 255 bytes long.  If you are using Compressed or Dynamic row format and your record is too long because you have too many VARCHAR fields 255 bytes or less, you can reduce the record length by converting them to TEXT fields. Likewise, VARBINARY fields 255 bytes or less can be converted to BLOB fields to take advantage of this ability to store the whole TEXT or BLOB field on a separate page.

A file extent in InnoDB is 1 MB independent of the page size. So an extent will hold 64 16k pages, 128 8k pages or 256 4k pages.  This means that the read ahead mechanisms will read more pages with smaller page sizes, since they read a whole extent at a time.  The doublewrite buffer, which is based on the size of an extent, will also contain more pages.
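The pages-per-extent figures follow directly from the fixed 1 MB extent size:

```python
EXTENT_SIZE = 1024 * 1024  # extents are 1 MB regardless of page size

for page_size in (16384, 8192, 4096):
    print(page_size, "->", EXTENT_SIZE // page_size, "pages per extent")
```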

If you want to use smaller page sizes with existing data, export the data first with a logical export utility such as mysqldump. Then create the new mysql instance with innodb-page-size=4k or 8k and import the data. Do not use a physical export method such as alter table … discard tablespace.

Summary:

This feature makes it easier to try smaller page sizes in an InnoDB database. And with the 5.6.4 release, those smaller page sizes are fully supported by MySQL. Just export your data, move or delete the system database (ibdata1) and the log files (ib_logfile0 & ib_logfile1), set innodb-page-size to either 4k or 8k, and restart MySQL. A new InnoDB instance will be created with the smaller page size. Then you can import your data and run your tests, all without recompiling InnoDB.

Monday Oct 03, 2011

InnoDB Memcached with Binlog Capability

Note: this article was originally published on http://blogs.innodb.com on Oct 3, 2011 by Jimmy Yang.

In our earlier labs release, we presented a MySQL NoSQL solution through the InnoDB Memcached Daemon Plugin (see Calvin’s and my earlier posts). That release allows the Memcached plugin to interact directly with InnoDB, completely bypassing the MySQL optimizer and query-processing layers. It differs from another popular MySQL NoSQL solution, HandlerSocket, by skipping even the Handler API layer and accessing InnoDB directly through the InnoDB APIs. Thus, in theory, it is simpler and more efficient.

However, there was one major piece of functionality we did not implement in the first release: the replication (binlog) capability. This important feature is now done in the latest labs release (mysql-5.6.4-labs-innodb-memcached), in which all memcached commands are supported for binlogging.

In the following sections, we will quickly walk you through the prototype and show how to use the binlog capability.

Set up the InnoDB Memcached:

Our earlier post (Get started with InnoDB Memcached Daemon plugin) details the steps to set up InnoDB Memcached. We will go over the important steps very briefly here:

  • Copy the plugins to the plugin directory

Find out your plugin directory:

        mysql> select @@plugin_dir;

+----------------------------------------------------+
| @@plugin_dir                                       |
+----------------------------------------------------+
| /home/jy/work2/mysql-5.6-memcached-labs/lib/plugin |
+----------------------------------------------------+

       After you download and unzip mysql-5.6.4-labs-innodb-memcached-linux2.6-i686.tar.gz, you will find two plugin shared libraries in the lib/plugin directory:

1) innodb_engine.so

2) libmemcached.so

Copy them to your plugin directory:

       shell > cp mysql-5.6.4-labs-innodb-memcached-linux2.6-i686/lib/plugin/innodb_engine.so /home/jy/work2/mysql-5.6-memcached-labs/lib/plugin

       shell > cp mysql-5.6.4-labs-innodb-memcached-linux2.6-i686/lib/plugin/libmemcached.so /home/jy/work2/mysql-5.6-memcached-labs/lib/plugin

  • Install the configuration tables:

In the scripts/ directory, find the configuration script innodb_memcached_config.sql and install it:

      mysql < scripts/innodb_memcached_config.sql

  • Install the Daemon Plugin

Now install the daemon plugin:

       mysql> install plugin daemon_memcached soname 'libmemcached.so';

      Query OK, 0 rows affected (0.05 sec)

Enable InnoDB Memcached binlog with “innodb_direct_access_enable_binlog”:

  • A new system configuration option, “innodb_direct_access_enable_binlog”, is added to enable the InnoDB Memcached binlog. It is a server boot-time option, so you must turn it on at server startup. In addition, you must enable the MySQL binlog with the --log-bin option. You can add these options to your server configuration file my.cnf, or pass them on the command line.

mysqld … --log-bin --innodb_direct_access_enable_binlog=1
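Equivalently, the same options can be placed in my.cnf (a sketch; the option spellings follow the command line above):

```ini
[mysqld]
log-bin
innodb_direct_access_enable_binlog = 1
```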

  • Use mysqldump to create a master data snapshot, and sync it to the slave.

       master shell > mysqldump --all-databases --lock-all-tables > dbdump.db

       slave shell > mysql < dbdump.db

  • In Master, call “show master status” to obtain the master binary log coordinates:

       mysql> show master status;

+----------+----------+--------------+------------------+
| File     | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+----------+----------+--------------+------------------+
| 0.000001 |      114 |              |                  |
+----------+----------+--------------+------------------+

1 row in set (0.00 sec)

  • In Slave, use “change master to” to set up the slave with the above coordinates:

         mysql> CHANGE MASTER TO
                MASTER_HOST='localhost',
                MASTER_USER='root',
                MASTER_PASSWORD='',
                MASTER_PORT=13000,
                MASTER_LOG_FILE='0.000001',
                MASTER_LOG_POS=114;

  • Then start the slave:

mysql> start slave;


If the error log prints the following, the slave is ready for replication:

111002 18:47:15 [Note] Slave I/O thread: connected to master 'root@localhost:13000', replication started in log '0.000001' at position 114

Test with the memcached telnet interface:

Now let’s test the server with the above replication setup. We use the memcached telnet interface for demonstration purposes, and we also connect to the master and slave servers through the SQL interface to query the target table and verify the results.

In our configuration setup SQL, an example table “demo_test” is created in the “test” database to take memcached commands by default. We will use this default table for the demonstrations:

  • Use set to insert a record with key “test1”, value “t1” and flags “10”:

       telnet 127.0.0.1 11211

       Trying 127.0.0.1…

       Connected to 127.0.0.1.

       Escape character is '^]'.

       set test1 10 0 2

       t1

       STORED

         In the master server, you can see the row is inserted (c1 maps to the key, c2 maps to the value, c3 is the flags, c4 is the cas, and c5 is the expiration):

         mysql> select * from demo_test;

+-------+------+------+------+------+
| c1    | c2   | c3   | c4   | c5   |
+-------+------+------+------+------+
| test1 | t1   |   10 |    2 |    0 |
+-------+------+------+------+------+

        1 row in set (0.00 sec)

         In the slave server, you will see that the same record has been inserted by replication:

       mysql> select * from demo_test;

+-------+------+------+------+------+
| c1    | c2   | c3   | c4   | c5   |
+-------+------+------+------+------+
| test1 | t1   |   10 |    2 |    0 |
+-------+------+------+------+------+

      1 row in set (0.00 sec)

  • Use the set command to update the key “test1” to a new value “new”:

Connected to 127.0.0.1.

         Escape character is '^]'.

         set test1 10 0 3

         new

         STORED

From the slave server, the update is replicated (notice the cas value is also updated):

         mysql> select * from demo_test;

+-------+------+------+------+------+
| c1    | c2   | c3   | c4   | c5   |
+-------+------+------+------+------+
| test1 | new  |   10 |    3 |    0 |
+-------+------+------+------+------+

  1 row in set (0.00 sec)

  • Delete the record with the delete command:

Connected to 127.0.0.1.

         Escape character is '^]'.

         delete test1

         DELETED

The delete is replicated to the slave, where the record is also deleted:

  mysql> select * from demo_test;

         Empty set (0.00 sec)

  • Truncate the table with the “flush_all” command

First insert two records:

Connected to 127.0.0.1.

         Escape character is '^]'.

         set test2 10 0 5

         again

         STORED

         set test3 10 0 6

again1

         STORED

In the slave server, the two records are replicated:

mysql> select * from demo_test;

+-------+--------+------+------+------+
| c1    | c2     | c3   | c4   | c5   |
+-------+--------+------+------+------+
| test2 | again  |   10 |    5 |    0 |
| test3 | again1 |   10 |    6 |    0 |
+-------+--------+------+------+------+

2 rows in set (0.00 sec)

Call “flush_all” in the telnet interface to truncate the table:

         Connected to 127.0.0.1.

         Escape character is '^]'.

         flush_all

         OK

Then check the slave server; the truncate is replicated:

   mysql> select * from demo_test;

         Empty set (0.00 sec)

In short, all memcached commands are supported for replication, so we will not enumerate each of them here.
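The telnet sessions above all use the standard memcached text protocol, where a storage command line is `set <key> <flags> <exptime> <bytes>` followed by the data block. A small helper that builds such a command string (a sketch; it only formats the wire text and does not contact a server):

```python
# Build a memcached text-protocol "set" command as used in the telnet
# sessions above: "set <key> <flags> <exptime> <bytes>\r\n<data>\r\n".
def memcached_set(key, value, flags=0, exptime=0):
    nbytes = len(value.encode())  # <bytes> counts the data block only
    return f"set {key} {flags} {exptime} {nbytes}\r\n{value}\r\n"

print(repr(memcached_set("test1", "t1", flags=10)))
# on success, the server answers STORED
```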

Notes for the InnoDB Memcached Binlog:

Binlog Format:

  • Most Memcached operations are mapped to DMLs (insert/delete/update), and since no actual “statement” comes through memcached, all memcached commands (except for flush_all) use row-based (RBR) logging. This is independent of any server binlog_format setting.
  • The Memcached “flush_all” command is mapped to “truncate table” command. Since “DDL” type commands can only use statement-based logging, this “flush_all” command is replicated with a “hand-crafted” “truncate table” statement.

Transactions:

  • The transaction concept is not well defined when it comes to Memcached. We use “daemon_memcached_r_batch_size” and “daemon_memcached_w_batch_size” to control the read and write “transaction” batch sizes for performance reasons. However, for replication, each command is replicated right after it completes successfully.
  • In this release, we adjusted the default value of “daemon_memcached_w_batch_size” from 32 to 1, so each Memcached “write” operation is committed immediately (it used to commit every 32 operations). This avoids the confusion of records showing up in the slave server while they cannot be seen on the master without “set transaction isolation level read uncommitted”. The side effect of this reduced batch size is some degradation of performance.

Other minor updates:

  • The bundled memcached has been upgraded to the latest 1.6.0 (beta 1), and the libevent it uses has been updated to version 1.4.12.
  • InnoDB memcached is now supported on the Linux, Solaris and OSX platforms.

Summary:

In this release, we added binlog support to the InnoDB Memcached Daemon Plugin. This is an important feature, as it allows any updates made through the memcached interface to be replicated for backup and high availability. More importantly, it demonstrates the capability of the InnoDB APIs, which not only support direct access to InnoDB storage, but also allow such operations to be logged and replicated.
