Monday Apr 16, 2012

Optimizing neighbor flush behavior

Note: this article was originally published on http://blogs.innodb.com on April 16, 2012 by Yasufumi Kinoshita.

The performance of flush_list flushing in InnoDB determines the baseline performance of any workload that modifies data, so it is important to optimize the flushing behavior. In this post we’ll consider how to optimize the neighbor-flushing behavior.

Factor 1: Characteristics of storage

Depending on the characteristics of your storage’s write-IO throughput, you can classify your storage as either “write amount bound” or “write times bound”. The minimum IO unit of an InnoDB datafile is the page size (16KB or less), and InnoDB attempts to combine contiguous pages into a single IO of up to 1 extent (1MB).

<one HDD>: Almost always “write times bound”, because head-seek time dominates the access time of an HDD, and roughly 1MB can be written with a single head-seek.

<RAID-HDD>: It depends on the striping size of the RAID. In many cases the striping size is set to 256KB ~ 1MB (much larger than the datafile page size), with the intention that one IO maps to one HDD (both to keep the sequential-access advantage of HDD and to keep the ability to serve IO requests in parallel across the several HDDs in the RAID). For such a typical striping size, RAID-HDD is “write times bound”. (With a small striping size around the page size it would be “write amount bound”, but I don’t recommend such a small striping size from the viewpoint of this post, because it simply loses the sequential-access advantage.)

<SSD>: It depends on the internal write unit of the SSD. For newer high-end SSDs the unit is 4KB or more, which is not larger than the InnoDB page size, so such high-end SSDs are “write amount bound”. However, the unit size differs widely between SSD implementations. Low-end or older SSDs might have a unit size over 1MB (and the throughput might be slow) and might be “write times bound”. You can estimate the write unit size of your SSD with a random-write benchmark using several block sizes (4, 8, 16KB, …, 1MB); the largest block size that still behaves as “write times bound” is the expected unit size.

Factor 2: Oldest modified age

The redo log in InnoDB is used in a circular fashion, so the reusable redo space is limited by the oldest modification in the oldest modified block. In other words, the maximum oldest modified age, which equals the current LSN (Log Sequence Number) minus the LSN of the oldest modification, cannot exceed the log capacity of the redo log files. When no reusable redo space is available, further modification operations cannot proceed until the oldest modified age is decreased by flushing the oldest dirty pages.

The flushing throughput of the oldest dirty pages therefore decides the workload throughput. What matters is how effectively the limited write-IO budget is used for flushing the “oldest” dirty pages.
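
You can get a rough view of the current oldest modified age on a running server from the LOG section of SHOW ENGINE INNODB STATUS: the difference between “Log sequence number” and “Last checkpoint at” approximates it. A minimal sketch using the mysql command-line client (pager/nopager are client-side commands that simply filter the output):

-- Show only the LOG section lines of interest.
pager grep -E "Log sequence number|Last checkpoint at"
show engine innodb status\G
nopager
-- "Log sequence number" minus "Last checkpoint at" approximates the oldest
-- modified age; it must stay below the capacity of the redo log files.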

Tuning flush_list flushing effectively

The first priority of flushing is to reduce the oldest modified age, assuming there is no shortage of free blocks. So “flushing the oldest blocks only” is the basic strategy.

For “write amount bound” storage (e.g. high-end SSD), this is already the best strategy. It is equivalent to "innodb_flush_neighbors = false".

On the other hand, for “write times bound” storage (e.g. HDD based), the contiguous dirty neighbors of the oldest dirty pages can be flushed without consuming additional write-IO budget, thanks to the sequential-access advantage, so flushing those contiguous pages as well is really worthwhile. But non-contiguous, non-oldest blocks should not be flushed at the same time, because a non-contiguous flush becomes a separate IO request and is very likely to be treated as a separate raw block write by the storage (wasting the write-IO budget).

The traditional flush_neighbors implementation in InnoDB also flushes non-contiguous dirty blocks, which is not the best behavior for either type of storage, “write times bound” or “write amount bound”. In the April 2012 MySQL labs release we have fixed this behavior to flush contiguous pages only, for “write times bound” storage.

Conclusion

In the end, the conclusions are as follows (see the example after the list):

  • For HDD or HDD-RAID (stripe size about 256KB ~ 1MB): use the new flush_neighbors (flushing contiguous dirty blocks only)
  • For SSD (internal write unit size <= InnoDB data page size): disable flush_neighbors
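
For example, assuming your build exposes this choice through the dynamic global variable innodb_flush_neighbors referenced above (the exact values accepted may differ between builds), the recommendations could be applied like this:

-- HDD or HDD-RAID: keep neighbor flushing, now limited to contiguous pages.
set global innodb_flush_neighbors = 1;

-- High-end SSD ("write amount bound"): flush only the oldest pages themselves.
set global innodb_flush_neighbors = 0;

-- Check what your build actually reports.
show global variables like 'innodb_flush_neighbors';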

Friday Apr 13, 2012

New flushing algorithm in InnoDB

Note: this article was originally published on http://blogs.innodb.com on April 13, 2012 by Inaam Rana.

In MySQL labs release April 2012 we have reworked the flushing heuristics in InnoDB. In this post I’ll give an overview of what we have changed and the various configuration variables we have introduced to fine tune the flushing algorithm. If you are interested in finding out how our new flushing algorithm fares in benchmarks you can get these details in Dimitri’s well-explained blog here.

Flushing means writing dirty pages to disk. I have explained adaptive_flushing and the types of flushing in some detail in my previous notes. Please go through those notes if you want to make sense of what follows.

The page_cleaner thread checks the state of the system every second. It takes into account the number of dirty pages, the amount of reusable redo space, the rate at which redo is generated, and the IO capacity for which the server is configured, and based on these factors decides how many pages need to be flushed.

In the new scheme of things the page_cleaner thread uses a single non-linear formula to calculate how many pages must be flushed to keep sufficient reusable redo space. This differs from the current flushing heuristic, where async_water_mark is the point at which the flushing algorithm changes. Similarly, instead of using innodb_max_dirty_pages_pct as a switch that triggers flushing, we have introduced the concept of a range: flushing to control the dirty-page percentage starts once we cross the low water mark and gets more and more aggressive as we near the high water mark.

There are four new configuration variables. Note that if your system is not experiencing any IO spikes due to checkpoints then you can probably leave all of the following as is. All the variables are global in scope and can be set dynamically, as shown in the example after this list.

  • innodb_adaptive_flushing_lwm: Low water mark, as a percentage of total redo log size, at which adaptive flushing kicks in. If the redo space that is currently not reusable amounts to less than this value, no background flushing will happen. Default value is 10 and permissible values are 0 – 70.
  • innodb_max_dirty_pages_pct_lwm: Low water mark of the dirty-page percentage at which preflushing to control the dirty-page ratio kicks in. The default value is 0, which has the special meaning that this variable has no effect. Allowable values are 0 – 99.
  • innodb_max_io_capacity: InnoDB generally attempts to work within the limits of innodb_io_capacity. However, if it needs to do aggressive flushing, then innodb_max_io_capacity defines the limit to which the write IOPS can be stretched. Default value is 2000, which is ten times the default value of innodb_io_capacity.
  • innodb_flushing_avg_loops: Number of iterations for which we keep the previously calculated snapshot of the flushing state. This variable is roughly a measure of how smooth you want the transition in flushing activity to be. The higher the value, the smoother the transition in flushing in the face of a rapidly changing workload. A lower value implies that the flushing algorithm is more responsive, but it also means that flushing activity can become spiky when confronted with a quickly changing workload. Default value is 30 and the permissible range is 1 – 1000.
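
For instance, since all four variables are global and dynamic, they can be adjusted at runtime; the values below are purely illustrative and stay within the documented ranges:

-- Start adaptive flushing a little later than the default 10% of redo capacity.
set global innodb_adaptive_flushing_lwm = 15;

-- Begin pre-flushing once dirty pages exceed 30% of the buffer pool.
set global innodb_max_dirty_pages_pct_lwm = 30;

-- Allow aggressive flushing to stretch up to 4000 write IOPS.
set global innodb_max_io_capacity = 4000;

-- React faster (with less smoothing) to workload changes.
set global innodb_flushing_avg_loops = 10;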

Finally, there is a wealth of information available to monitor the impact of changing these variables. You can get a good inside view of how the flushing activity is working by querying the innodb_metrics table.

mysql> select name, comment from information_schema.innodb_metrics where name like 'Buffer_flush%';
+-------------------------------------+----------------------------------------------------------------+
| name                                | comment                                                        |
+-------------------------------------+----------------------------------------------------------------+
| buffer_flush_batch_scanned          | Total pages scanned as part of flush batch                     |
| buffer_flush_batch_num_scan         | Number of times buffer flush list flush is called              |
| buffer_flush_batch_scanned_per_call | Pages scanned per flush batch scan                             |
| buffer_flush_batch_total_pages      | Total pages flushed as part of flush batch                     |
| buffer_flush_batches                | Number of flush batches                                        |
| buffer_flush_batch_pages            | Pages queued as a flush batch                                  |
| buffer_flush_neighbor_total_pages   | Total neighbors flushed as part of neighbor flush              |
| buffer_flush_neighbor               | Number of times neighbors flushing is invoked                  |
| buffer_flush_neighbor_pages         | Pages queued as a neighbor batch                               |
| buffer_flush_n_to_flush_requested   | Number of pages requested for flushing.                        |
| buffer_flush_avg_page_rate          | Average number of pages at which flushing is happening         |
| buffer_flush_lsn_avg_rate           | Average redo generation rate                                   |
| buffer_flush_pct_for_dirty          | Percent of IO capacity used to avoid max dirty page limit      |
| buffer_flush_pct_for_lsn            | Percent of IO capacity used to avoid reusable redo space limit |
| buffer_flush_adaptive_total_pages   | Total pages flushed as part of adaptive flushing               |
| buffer_flush_adaptive               | Number of adaptive batches                                     |
| buffer_flush_adaptive_pages         | Pages queued as an adaptive batch                              |
| buffer_flush_sync_total_pages       | Total pages flushed as part of sync batches                    |
| buffer_flush_sync                   | Number of sync batches                                         |
| buffer_flush_sync_pages             | Pages queued as a sync batch                                   |
| buffer_flush_background_total_pages | Total pages flushed as part of background batches              |
| buffer_flush_background             | Number of background batches                                   |
| buffer_flush_background_pages       | Pages queued as a background batch                             |
+-------------------------------------+----------------------------------------------------------------+
23 rows in set (0.00 sec)

Wednesday Apr 11, 2012

InnoDB persistent stats got a friendly UI

Note: this article was originally published on http://blogs.innodb.com on April 11, 2012 by Vasil Dimov.

After introducing InnoDB persistent statistics in MySQL 5.6, in this April Labs release we have dressed it up in a nice UI and refactored the internals a bit to make the code more elegant and straightforward.

The persistent stats are now controlled globally and can also be overridden at table level, should any table require a different behavior.

Global

The server global flag --innodb-stats-persistent (boolean) now controls whether all InnoDB tables use persistent statistics or not. Keep in mind that if a table is using persistent stats then its statistics will not be updated automatically, and you are responsible for running ANALYZE TABLE periodically, whenever you think the table contents have changed too much. Thus the default for --innodb-stats-persistent is currently set to OFF.
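
For example, to switch persistent statistics on for all InnoDB tables and then refresh the statistics of one table by hand (this sketch assumes the flag can be toggled at runtime with SET GLOBAL; if not, set innodb-stats-persistent in the server configuration and restart):

-- Turn persistent statistics on server-wide.
SET GLOBAL innodb_stats_persistent = ON;

-- Statistics are no longer refreshed automatically, so recompute them
-- whenever the table contents have changed significantly.
ANALYZE TABLE t;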

Per table

The persistent stats ON/OFF setting can be overridden at a table level with a table option like this:

CREATE TABLE t (a INT) ENGINE=INNODB PERSISTENT_STATS=0|1|default;

or

ALTER TABLE t PERSISTENT_STATS=0|1|default;

where '0' means no persistent stats for this table (regardless of the setting of the global --innodb-stats-persistent), '1' means to always use persistent stats for this table and 'default' means to use the server global setting, whatever it is.

Another goodie is that now you do not need to restart the server if you have manually updated the stats values in mysql.innodb_table_stats and/or mysql.innodb_index_stats and want your changes to take effect. Just do

FLUSH TABLE t;

to force InnoDB to reread what’s in mysql.innodb_table_stats and mysql.innodb_index_stats.
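
As a sketch of that workflow (the column names used here, such as n_rows, are how the stats tables look in later builds and may differ in yours; check SHOW CREATE TABLE mysql.innodb_table_stats first):

-- Hypothetical manual tweak: pretend table test.t holds a million rows.
UPDATE mysql.innodb_table_stats
   SET n_rows = 1000000
 WHERE database_name = 'test' AND table_name = 't';

-- Make InnoDB reread the persistent stats without a server restart.
FLUSH TABLE t;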

What to expect next? Maybe a fully controlled automatic persistent stats update so you do not have to worry about having to run ANALYZE TABLE periodically but still your query plans are stable? Wouldn’t that be nice?

Memcached With SASL Support

Note: this article was originally published on http://blogs.innodb.com on April 11, 2012 by Jimmy Yang.

In this April MySQL Lab release, we provide a more robust and release-ready InnoDB Memcached Engine with a few enhancements. The most notable addition is SASL support, which gives users the ability to protect their MySQL database from unauthenticated access through memcached clients. In this blog, I will walk you through the steps to get this option enabled.

Background Info:
SASL stands for “Simple Authentication and Security Layer”, a standard for adding authentication support to connection-based protocols. Memcached added SASL support starting with its 1.4.3 release. And here is a good article that gives you some background on why and how SASL is supported in Memcached.

For InnoDB Memcached, the “memcached mapped” user table must be registered in the “containers” system table, and memcached clients can only access such registered tables. Even though the DBA can add access restrictions on such a table, he/she has no control over who can access it through the memcached clients. And this is exactly why we want to provide a means (in this case SASL) for the DBA to have some control over who can access these InnoDB tables.
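
For reference, “registering” a table means adding a row to the containers system table of the plugin; a sketch along the lines of the usual demo mapping (the exact column list may differ in your build, so treat this as illustrative):

-- Map the InnoDB table test.demo_test so memcached clients can reach it.
-- c1..c5 are the customary key/value/flags/cas/expire demo columns.
INSERT INTO innodb_memcache.containers
  (name, db_schema, db_table, key_columns, value_columns,
   flags, cas_column, expire_time_column, unique_idx_name_on_key)
VALUES ('aaa', 'test', 'demo_test', 'c1', 'c2', 'c3', 'c4', 'c5', 'PRIMARY');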

In the following section, we will walk through the steps to build, enable and test an SASL-enabled InnoDB Memcached plugin.

Steps to Build and Enable SASL in InnoDB Memcached Plugin:

By default, SASL-enabled InnoDB Memcached is not built (nor included in the release package), since it relies on some SASL libraries to build SASL support into the Memcached engine. So you will need to download the source, install the SASL libraries, and rebuild the InnoDB Memcached plugin. The details are described in the following sections:

1) First, you need to get the SASL development and utility libraries. For example, on Ubuntu, you can get these libraries through:

> sudo apt-get -f install libsasl2-2 sasl2-bin libsasl2-2 libsasl2-dev libsasl2-modules

2) Then build the InnoDB Memcached Engine plugin (shared libraries) with SASL capability. This is done by adding ENABLE_MEMCACHED_SASL=1 to the cmake options. In addition, Memcached provides simple plaintext password support, which is easier to use for testing, so we support that too; it is enabled by setting the option ENABLE_MEMCACHED_SASL_PWDB=1.

So overall, we will need to add the following three options to cmake:

> cmake ... -DWITH_INNODB_MEMCACHED=1
  -DENABLE_MEMCACHED_SASL=1 -DENABLE_MEMCACHED_SASL_PWDB=1

3) The third step is to install the InnoDB Memcached plugin as before. Please refer to my earlier blog posts for the procedure.

4) As mentioned in step 2), Memcached provides simple plaintext password support through SASL, which will be used for this demo. There is a good blog post from Trond Norbye that describes the steps, so you can follow the instructions there too. I will repeat the important steps here.

a) Create a user named “testname” with the password “testpasswd” in a file:

> echo "testname:testpasswd:::::::" >/home/jy/memcached-sasl-db

b) Let memcached know about it by setting environment variable MEMCACHED_SASL_PWDB:

> export MEMCACHED_SASL_PWDB=/home/jy/memcached-sasl-db

c) Also tell memcached that it is a plaintext password:

> echo "mech_list: plain" > /home/jy/work2/msasl/clients/memcached.conf
> export SASL_CONF_PATH=/home/jy/work2/msasl/clients/memcached.conf

5) Then we are ready to restart the server, adding the “daemon_memcached” option “-S” to enable SASL:

> mysqld .. --daemon_memcached_option="-S"

6) Now the setup is done, so let’s test it. To do so, you need an SASL-enabled client. I used an SASL-enabled libmemcached as described in Trond Norbye’s blog, and tested it accordingly:

> memcp --servers=localhost:11211 --binary  --username=testname --password=testpasswd myfile.txt
> memcat --servers=localhost:11211 --binary --username=testname --password=testpasswd myfile.txt

Without the appropriate user name or password, the above operations will be rejected with the error message “memcache error AUTHENTICATION FAILURE”. Otherwise, the operations complete successfully. You can also play with the plaintext password stored in /home/jy/memcached-sasl-db to verify it.

There are other ways to test SASL with memcached, but the one described above is the most straightforward.

Other changes for InnoDB Memcached:

Besides the SASL support, there are a few changes in this release that are worth mentioning:
1) We added a configuration option innodb_api_trx_level, so that users can control the transaction isolation level of the queries that come through the InnoDB APIs, or in this case, through memcached.

In theory, memcached has no concept of “transactions”, so this is an extra property that we added on top of it, so that users have some level of control over how these operations interact with DMLs issued through the SQL interface. By default, it is set to “read uncommitted”.

2) Another option we added is innodb_api_enable_mdl; “mdl” stands for “metadata locking”. This basically “locks” the table at the MySQL level, so that the mapped table cannot be dropped or altered by DDL through the SQL interface. Without the lock, the table can be dropped at the MySQL layer, but it will be kept in InnoDB storage until memcached or any other user stops using it.

3) A configuration option was renamed: to enable the binlog, the variable name has changed from innodb_direct_access_enable_binlog to innodb_api_enable_binlog. An example combining these options at server startup is shown below.
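
Putting these together, a hypothetical server invocation that enables SASL plus the new API options might look like the following (whether each option suits your workload is for you to decide; the defaults may well be fine):

> mysqld ... --daemon_memcached_option="-S" --innodb_api_trx_level=1
    --innodb_api_enable_mdl=1 --innodb_api_enable_binlog=1

You can then confirm the effective settings with SHOW GLOBAL VARIABLES LIKE 'innodb_api%';.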

Summary:

In summary, this release provides a more robust InnoDB Memcached Engine with SASL support. The steps to enable this support are fairly straightforward and almost identical to those you would follow to enable SASL for a standalone Memcached server. So if you are familiar with using SASL for memcached, it just takes a few name changes to build and enable it. And if you are not familiar with the procedure, the steps above also give you a quick start.

April 2012 Labs Release – Online DDL Improvements

Note: this article was originally published on http://blogs.innodb.com on April 11, 2012 by John Russell.

This feature is a continuation of the “Fast Index Creation” feature introduced in Fast Index Creation in the InnoDB Storage Engine. Now you can perform other kinds of DDL operations on InnoDB tables online: that is, with minimal delay for operations on that table, and without rebuilding the entire table. This enhancement improves responsiveness and availability in busy production environments, where making a table unavailable for seconds or minutes whenever its column definitions change is not practical.

The DDL operations enhanced by this feature are these variations on the ALTER TABLE statement:

  • Create secondary indexes: CREATE INDEX name ON table (col_list) or ALTER TABLE table ADD INDEX name (col_list)

    Drop secondary indexes: DROP INDEX name ON table; or ALTER TABLE table DROP INDEX name

    Creating and dropping secondary indexes on InnoDB tables has avoided the table-copying behavior since the days of MySQL 5.1 with the InnoDB Plugin. Now, the table remains available for read and write operations while the index is being created or dropped. The CREATE INDEX or DROP INDEX statement only finishes after all transactions that are modifying the table are completed, so that the initial state of the index reflects the most recent contents of the table.

    Previously, modifying the table while an index was being created or dropped typically resulted in a deadlock that cancelled the insert, update, or delete statement on the table.

  • Changing the auto-increment value for a column: ALTER TABLE table AUTO_INCREMENT=next_value;

    Especially in a distributed system using replication or sharding, you sometimes reset the auto-increment counter for a table to a specific value. The next row inserted into the table uses the specified value for its auto-increment column. You might also use this technique in a data warehousing environment where you periodically empty all the tables and reload them, and want to restart the auto-increment sequence from 1.

  • Drop a foreign key constraint: ALTER TABLE tbl DROP FOREIGN KEY fk_name

    Currently, online DDL only includes the DROP operation for foreign keys, not ADD to create foreign keys.

    If you do not know the names of the foreign key constraints on a particular table, issue the following statement and find the constraint name in the CONSTRAINT clause for each foreign key:

    show create table table\G
    

    Or, query the information_schema.table_constraints table and use the constraint_name and constraint_type columns to identify the foreign key names.

    As a consequence of this enhancement, you can now also drop a foreign key and its associated index in a single statement, which previously required separate statements in a strict order:

    ALTER TABLE  table DROP FOREIGN KEY  constraint, DROP INDEX index;
    
  • Rename a column: ALTER TABLE tbl CHANGE old_col_name new_col_name datatype

    When you keep the same data type and only change the column name, this operation can always be performed online. As part of this enhancement, you can now rename a column that is part of a foreign key constraint, which was not allowed before.

Note

As your database schema evolves with new columns, data types, constraints, indexes, and so on, keep your CREATE TABLE statements up to date with the latest table definitions. Even with the performance improvements of online DDL, it is more efficient to create stable database structures at the beginning, rather than creating part of the schema and then issuing ALTER TABLE statements afterward.

The main exception to this guideline is for secondary indexes on tables with large numbers of rows. It is typically most efficient to create the table with all details specified except the secondary indexes, load the data, then create the secondary indexes.
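
For example, a typical load sequence for a large table might look like this sketch (the table, column, and file names are hypothetical):

-- Create the table with everything except the secondary indexes.
create table measurements (
  id bigint unsigned not null primary key,
  station varchar(32) not null,
  reading double not null
) engine=innodb;

-- Bulk-load the data first...
load data infile '/tmp/measurements.csv' into table measurements
  fields terminated by ',';

-- ...then build the secondary index in a single pass over the loaded data.
create index i_station on measurements (station);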

Whatever sequence of CREATE TABLE, CREATE INDEX, ALTER TABLE, and similar statements went into putting a table together, you can capture the SQL needed to reconstruct the current form of the table by issuing the statement SHOW CREATE TABLE table\G (uppercase \G required for tidy formatting). This output shows clauses such as numeric precision, NOT NULL, and CHARACTER SET that are sometimes added behind the scenes, and you might otherwise leave out when cloning the table on a new system or setting up foreign key columns with identical type.

Performance and Availability Considerations

Avoiding a table copy during DDL improves several aspects of MySQL operation, such as performance, concurrency, availability, and scalability:

  • By avoiding the disk I/O and CPU cycles to rebuild the table, you minimize the overall load on the database and maintain good performance and high throughput during the DDL operation.

  • Because the DDL operation completes in less time, there is a shorter period when queries and DML operations on that table are blocked, making your application more responsive.

  • Because less data is read into the buffer pool, you avoid purging frequently accessed data from the memory cache, which typically causes a temporary performance dip after a DDL operation.

  • Because there is a shorter period when queries and DML operations queue up waiting for the DDL to finish, there is less locking and waiting for other resources all throughout the MySQL server. Reducing this type of overhead leads to greater scalability, even for operations not involving the table being altered.

Benchmarking

To judge the relative performance of online DDL operations, you can run such operations on a big InnoDB table using current and earlier versions of MySQL. You can also run all the performance tests under the latest MySQL version, simulating the previous DDL behavior for the “before” results, by setting the old_alter_table system variable. Issue the statement set old_alter_table=1 in the session, and measure DDL performance to record the “before” figures. Then set old_alter_table=0 to re-enable the newer, faster behavior, and run the DDL operations again to record the “after” figures.

For a basic idea of whether a DDL operation does its changes in-place or performs a table copy, look at the “rows affected” value displayed after the command finishes. For example, here are lines you might see after doing different types of DDL operations:

  • Changing the default value of a column (super-fast, does not affect the table data at all):

    Query OK, 0 rows affected (0.07 sec)
  • Adding an index (takes time, but 0 rows affected shows that the table is not copied):

    Query OK, 0 rows affected (21.42 sec)
  • Changing the data type of a column (takes substantial time and does require rebuilding all the rows of the table):

    Query OK, 1671168 rows affected (1 min 35.54 sec)

For example, before running a DDL operation on a big table, you might check whether the operation will be fast or slow as follows (see the sketch after this list):

  1. Clone the table structure.

  2. Populate the cloned table with a tiny amount of data.

  3. Run the DDL operation on the cloned table.

  4. Check whether the “rows affected” value is zero or not. A non-zero value means the operation will require rebuilding the entire table, which might require special planning. For example, you might do the DDL operation during a period of scheduled downtime, or on each replication slave server one at a time.
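
A minimal sketch of that check (the table and column names are hypothetical; orig_table stands for the table you plan to alter):

-- 1. Clone only the structure of the table.
create table ddl_probe like orig_table;

-- 2. Populate the clone with a tiny amount of data.
insert into ddl_probe select * from orig_table limit 100;

-- 3. Run the intended DDL operation against the clone and note the
--    "rows affected" value it reports: 0 means an in-place operation,
--    non-zero means a full table rebuild.
alter table ddl_probe add index i_probe (some_column);

drop table ddl_probe;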

For a deeper understanding of the reduction in MySQL processing, examine the PERFORMANCE_SCHEMA and INFORMATION_SCHEMA tables related to InnoDB before and after DDL operations, to see the number of physical reads, writes, memory allocations, and so on.
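
For instance, a before-and-after snapshot of a few InnoDB counters can be taken from the same innodb_metrics table shown earlier in this post; note that many counters only start counting once enabled via innodb_monitor_enable, and the exact counter names vary by release:

-- Run once before and once after the DDL operation, then compare the counts.
select name, count from information_schema.innodb_metrics
  where name like 'buffer_pool%' or name like 'os_data%';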

Background Information

Historically, the MySQL server and InnoDB have each kept their own metadata about table and index structures. The MySQL server stores this information in .frm files that are not protected by a transactional mechanism, while InnoDB has its own data dictionary as part of the system tablespace. If a DDL operation was interrupted by a crash or other unexpected event partway through, the metadata could be left inconsistent between these two locations, causing problems such as startup errors or inability to access the table that was being altered. Now that InnoDB is the default storage engine, addressing such issues is a high priority. These enhancements to DDL operations reduce the window of opportunity for such issues to occur.

Examples

Here are code examples showing some operations whose performance, concurrency, and scalability are improved by this new feature.

Example 1. Schema Setup Code for Online DDL Experiments

Here is the code that sets up the initial tables used in these demonstrations:

/*
Setup code for the online DDL demonstration:
- Set up some config variables.
- Create 2 tables that are clones of one of the INFORMATION_SCHEMA tables
  that always has some data. The "small" table has a couple of thousand rows.
  For the "big" table, keep doubling the data until it reaches over a million rows.
- Set up a primary key for the sample tables, since we are demonstrating InnoDB aspects.
*/ 

set autocommit = 0;
set foreign_key_checks = 1;
set global innodb_file_per_table = 1;
set old_alter_table=0;
prompt mysql: 

use test;

\! echo "Setting up 'small' table:"
drop table if exists small_table;
create table small_table as select * from information_schema.columns;
alter table small_table add id int unsigned not null primary key auto_increment;
select count(id) from small_table;

\! echo "Setting up 'big' table:"
drop table if exists big_table;
create table big_table as select * from information_schema.columns;
show create table big_table\G

insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
insert into big_table select * from big_table;
commit;

alter table big_table add id int unsigned not null primary key auto_increment;
select count(id) from big_table;

Running this code gives this output, condensed for brevity and with the most important points bolded:

Setting up 'small' table:

Query OK, 1678 rows affected (0.13 sec)
Query OK, 1678 rows affected (0.07 sec)

+-----------+
| count(id) |
+-----------+
|      1678 |
+-----------+
1 row in set (0.00 sec)

Setting up 'big' table:

Query OK, 1678 rows affected (0.17 sec)

*************************** 1. row ***************************
       Table: big_table
Create Table: CREATE TABLE `big_table` (
  `TABLE_CATALOG` varchar(512) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_SCHEMA` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `ORDINAL_POSITION` bigint(21) unsigned NOT NULL DEFAULT '0',
  `COLUMN_DEFAULT` longtext CHARACTER SET utf8,
  `IS_NULLABLE` varchar(3) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `DATA_TYPE` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `CHARACTER_MAXIMUM_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_OCTET_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_SCALE` bigint(21) unsigned DEFAULT NULL,
  `DATETIME_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_SET_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLLATION_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLUMN_TYPE` longtext CHARACTER SET utf8 NOT NULL,
  `COLUMN_KEY` varchar(3) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `EXTRA` varchar(30) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `PRIVILEGES` varchar(80) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_COMMENT` varchar(1024) CHARACTER SET utf8 NOT NULL DEFAULT ''
) ENGINE=InnoDB DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

Query OK, 1678 rows affected (0.09 sec)
Query OK, 3356 rows affected (0.07 sec)
Query OK, 6712 rows affected (0.17 sec)
Query OK, 13424 rows affected (0.44 sec)
Query OK, 26848 rows affected (0.63 sec)
Query OK, 53696 rows affected (1.72 sec)
Query OK, 107392 rows affected (3.02 sec)
Query OK, 214784 rows affected (6.28 sec)
Query OK, 429568 rows affected (13.25 sec)
Query OK, 859136 rows affected (28.16 sec)
Query OK, 1718272 rows affected (1 min 9.22 sec)

+-----------+
| count(id) |
+-----------+
|   1718272 |
+-----------+
1 row in set (1.75 sec)

Example 2. Speed and Efficiency of CREATE INDEX and DROP INDEX

Here is a sequence of statements demonstrating the relative speed of CREATE INDEX and DROP INDEX statements. For a small table, the elapsed time is less than a second whether we use the fast or slow technique, so we look at the “rows affected” output to verify which operations can avoid the table rebuild. For a large table, the difference in efficiency is obvious because skipping the table rebuild saves substantial time.

\! clear

-- Make sure we're using the new-style fast DDL.
-- Outside of benchmarking and testing, you would
-- never enable the old_alter_table setting.
set old_alter_table=0;

\! echo "=== Create and drop index (small table, new/fast technique) ==="
\! echo
\! echo "Data size (kilobytes) before index created: "
\! du -k data/test/small_table.ibd
create index i_dtyp_small on small_table (data_type);
\! echo "Data size after index created: "
\! du -k data/test/small_table.ibd
drop index i_dtyp_small on small_table;

-- Revert to the older slower DDL for comparison.
set old_alter_table=1;

\! echo "=== Create and drop index (small table, old/slow technique) ==="
\! echo
\! echo "Data size (kilobytes) before index created: "
\! du -k data/test/small_table.ibd
create index i_dtyp_small on small_table (data_type);
\! echo "Data size after index created: "
\! du -k data/test/small_table.ibd
drop index i_dtyp_small on small_table;

-- In the above example, we examined the "rows affected" number,
-- ideally looking for a zero figure. Let's try again with a larger
-- sample size, where we'll see that the actual time taken can
-- vary significantly.

-- Back to the new/fast behavior:
set old_alter_table=0;

\! echo "=== Create and drop index (big table, new/fast technique) ==="
\! echo
\! echo "Data size (kilobytes) before index created: "
\! du -k data/test/big_table.ibd
create index i_dtyp_big on big_table (data_type);
\! echo "Data size after index created: "
\! du -k data/test/big_table.ibd
drop index i_dtyp_big on big_table;

-- Let's see that again, in slow motion:
set old_alter_table=1;

\! echo "=== Create and drop index (big table, old/slow technique) ==="
\! echo
\! echo "Data size (kilobytes) before index created: "
\! du -k data/test/big_table.ibd
create index i_dtyp_big on big_table (data_type);
\! echo "Data size after index created: "
\! du -k data/test/big_table.ibd
drop index i_dtyp_big on big_table;

Running this code gives this output, condensed for brevity and with the most important points bolded:

=== Create and drop index (small table, new/fast technique) ===

Data size (kilobytes) before index created:
384	data/test/small_table.ibd
Query OK, 0 rows affected (0.04 sec)

Data size after index created:
432	data/test/small_table.ibd
Query OK, 0 rows affected (0.02 sec)

=== Create and drop index (small table, old/slow technique) ===

Data size (kilobytes) before index created:
432	data/test/small_table.ibd
Query OK, 1678 rows affected (0.12 sec)

Data size after index created:
448	data/test/small_table.ibd
Query OK, 1678 rows affected (0.10 sec)

=== Create and drop index (big table, new/fast technique) ===

Data size (kilobytes) before index created:
315392	data/test/big_table.ibd
Query OK, 0 rows affected (33.32 sec)

Data size after index created:
335872	data/test/big_table.ibd
Query OK, 0 rows affected (0.02 sec)

=== Create and drop index (big table, old/slow technique) ===

Data size (kilobytes) before index created:
335872	data/test/big_table.ibd
Query OK, 1718272 rows affected (1 min 5.01 sec)

Data size after index created:
348160	data/test/big_table.ibd
Query OK, 1718272 rows affected (46.59 sec)

Example 3. Concurrent DML During CREATE INDEX and DROP INDEX

Here are some snippets of code that I ran in separate mysql sessions connected to the same database, to illustrate DML statements (insert, update, or delete) running at the same time as CREATE INDEX and DROP INDEX.

CREATE INDEX statement (in session 1):

/*
CREATE INDEX statement to run against a table while
insert/update/delete statements are modifying the
column being indexed.
*/

-- We'll run this script in one session, while simultaneously creating and dropping
-- an index on test/big_table.table_name in another session.

use test;
create index i_concurrent on big_table(table_name);

DROP INDEX statement (in session 1):

/*
DROP INDEX statement to run against a table while
insert/update/delete statements are modifying the
column being indexed.
*/

-- We'll run this script in one session, while simultaneously creating and dropping
-- an index on test/big_table.table_name in another session.

use test;
drop index i_concurrent on big_table;

DML statements (in session 2). I ran these DELETE statements while the CREATE INDEX was running.
Because DROP INDEX in this case takes less than a second, I started the DELETE first and then ran the DROP INDEX while the DML was in progress. In each case, the DDL statement waited until the transaction containing the DELETE was finished. (That’s why the timing numbers are higher than in the other examples. I waited for a while before issuing the ROLLBACK statement at the mysql command line.)

/*
Some queries and insert/update/delete statements to run against a table
while an index is being created or dropped. Previously, these operations
would have stalled during the index create/drop period and possibly
timed out or deadlocked.
*/

-- We'll run this script in one session, while simultaneously creating and dropping
-- an index on test/big_table.table_name in another session.

-- In our test instance, that column has about 1.7M rows, with 136 different values.

set autocommit = 0;
use test;

select distinct character_set_name from big_table where table_name = 'FILES';
delete from big_table where table_name = 'FILES';
select distinct character_set_name from big_table where table_name = 'FILES';

-- I'll issue the final rollback interactively, not via script,
-- the better to control the timing.
-- rollback;

Running this code gives this output, condensed for brevity and with the most important points bolded:

mysql: source concurrent_ddl_create.sql
Database changed
Query OK, 0 rows affected (1 min 25.15 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql: source concurrent_ddl_drop.sql
Database changed
Query OK, 0 rows affected (24.98 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql: source concurrent_dml.sql
Query OK, 0 rows affected (0.00 sec)

Database changed
+--------------------+
| character_set_name |
+--------------------+
| NULL               |
| utf8               |
+--------------------+
2 rows in set (0.32 sec)

Query OK, 38912 rows affected (1.84 sec)

Empty set (0.01 sec)

mysql: rollback;
Query OK, 0 rows affected (1.05 sec)

Example 4. Renaming a Column

Here is a demonstration of using ALTER TABLE to rename a column. We use the new, fast DDL mechanism to change the name, then the old, slow DDL mechanism (with old_alter_table=1) to restore the original column name.

Note
  • Because the syntax for renaming a column also involves re-specifying the data type, be very careful to specify exactly the same data type to avoid a costly table rebuild. In this case, we checked the output of show create table table\G and copied any clauses such as CHARACTER SET and NOT NULL from the original column definition.

  • Again, renaming a column for a small table is fast enough that we need to examine the “rows affected” number to verify that the new DDL mechanism is more efficient than the old one. With a big table, the difference in elapsed time makes the improvement obvious.

/*
Run through a sequence of 'rename column' statements.
Because this operation involves only metadata, not table data,
it is fast for big and small tables, with new or old DDL mechanisms.
*/

\! clear

\! echo "Rename column (fast technique, small table):"
set old_alter_table=0;
alter table small_table change `IS_NULLABLE` `NULLABLE`
  varchar(3) character set utf8 not null;
\! echo "Rename back to original name (slow technique):"
set old_alter_table=1;
alter table small_table change `NULLABLE` `IS_NULLABLE`
  varchar(3) character set utf8 not null;

\! echo "Rename column (fast technique, big table):"
set old_alter_table=0;
alter table big_table change `IS_NULLABLE` `NULLABLE`
  varchar(3) character set utf8 not null;
\! echo "Rename back to original name (slow technique):"
set old_alter_table=1;
alter table big_table change `NULLABLE` `IS_NULLABLE`
  varchar(3) character set utf8 not null;
set old_alter_table=0;

Running this code gives this output, condensed for brevity and with the most important points bolded:

Rename column (fast technique, small table):
Query OK, 0 rows affected (0.13 sec)

Rename back to original name (slow technique):
Query OK, 1678 rows affected (0.35 sec)

Rename column (fast technique, big table):
Query OK, 0 rows affected (0.11 sec)

Rename back to original name (slow technique):
Query OK, 1718272 rows affected (1 min 0.00 sec)

Example 5. Dropping Foreign Keys

Here is a demonstration of foreign keys, including improvement to the speed of dropping a foreign key constraint.

/*
Demonstrate aspects of foreign keys that are or aren't affected by the DDL improvements.
- Create a new table with only a few values to serve as the parent table.
- Set up the 'small' and 'big' tables as child tables using a foreign key.
- Verify that the ON DELETE CASCADE clause makes changes ripple from parent to child tables.
- Drop the foreign key constraints, and optionally associated indexes.
  (This is the operation that is sped up.)
*/

\! clear

-- Make sure foreign keys are being enforced, and allow
-- rollback after doing some DELETEs that affect both
-- parent and child tables.
set foreign_key_checks = 1;
set autocommit = 0;

-- Create a parent table, containing values that we know are already present
-- in the child tables.
drop table if exists schema_names;
create table schema_names (id int unsigned not null primary key auto_increment,
  schema_name varchar(64) character set utf8 not null, index i_schema (schema_name))
  as select distinct table_schema schema_name from small_table;

show create table schema_names\G
show create table small_table\G
show create table big_table\G

-- Creating the foreign key constraint isn't any faster than before.
-- It still involves a table rebuild, as illustrated by the "rows affected" figure.
alter table small_table add constraint small_fk
  foreign key i_table_schema (table_schema) references schema_names(schema_name)
  on delete cascade;
alter table big_table add constraint big_fk
  foreign key i_table_schema (table_schema) references schema_names(schema_name)
  on delete cascade;

show create table small_table\G
show create table big_table\G

select schema_name from schema_names order by schema_name;
select count(table_schema) howmany, table_schema from small_table group by table_schema;
select count(table_schema) howmany, table_schema from big_table group by table_schema;

-- schema_names is the parent table.
-- big_table is the child table.
-- (One row in the parent table can have many "children" in the child table.)
-- Changes to the parent table can ripple through to the child table.
-- For example, removing the value 'test' from schema_names.schema_name will
-- result in the removal of 20K or so rows from big_table.

delete from schema_names where schema_name = 'test';

select schema_name from schema_names order by schema_name;
select count(table_schema) howmany, table_schema from small_table group by table_schema;
select count(table_schema) howmany, table_schema from big_table group by table_schema;

-- Because we've turned off autocommit, we can still get back those deleted rows
-- if the DELETE was issued by mistake.
rollback;

select schema_name from schema_names order by schema_name;
select count(table_schema) howmany, table_schema from small_table group by table_schema;
select count(table_schema) howmany, table_schema from big_table group by table_schema;

-- All of the cross-checking between parent and child tables would be
-- deadly slow if there wasn't the requirement for the corresponding
-- columns to be indexed!

-- But we can get rid of the foreign key using a fast operation
-- that doesn't rebuild the table.
-- If we didn't specify a constraint name when setting up the foreign key, we would
-- have to find the auto-generated name such as 'big_table_ibfk_1' in the
-- output from 'show create table'.

-- For the small table, we'll drop the foreign key and the associated index.
-- Having an index on a small table is less critical.

\! echo "DROP FOREIGN KEY and INDEX from small_table:"
alter table small_table drop foreign key small_fk, drop index small_fk;

-- For the big table, we'll drop the foreign key and leave the associated index.
-- If we are still doing queries that reference the indexed column, the index is
-- very important to avoid a full table scan of the big table.
\! echo "DROP FOREIGN KEY from big_table:"
alter table big_table drop foreign key big_fk;

show create table small_table\G
show create table big_table\G

Running this code gives this output, condensed for brevity and with the most important points bolded:

Query OK, 4 rows affected (0.03 sec)

*************************** 1. row ***************************
       Table: schema_names
Create Table: CREATE TABLE `schema_names` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `schema_name` varchar(64) CHARACTER SET utf8 NOT NULL,
  PRIMARY KEY (`id`),
  KEY `i_schema` (`schema_name`)
) ENGINE=InnoDB AUTO_INCREMENT=8 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

*************************** 1. row ***************************
       Table: small_table
Create Table: CREATE TABLE `small_table` (
  `TABLE_CATALOG` varchar(512) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_SCHEMA` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `ORDINAL_POSITION` bigint(21) unsigned NOT NULL DEFAULT '0',
  `COLUMN_DEFAULT` longtext CHARACTER SET utf8,
  `IS_NULLABLE` varchar(3) CHARACTER SET utf8 NOT NULL,
  `DATA_TYPE` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `CHARACTER_MAXIMUM_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_OCTET_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_SCALE` bigint(21) unsigned DEFAULT NULL,
  `DATETIME_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_SET_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLLATION_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLUMN_TYPE` longtext CHARACTER SET utf8 NOT NULL,
  `COLUMN_KEY` varchar(3) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `EXTRA` varchar(30) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `PRIVILEGES` varchar(80) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_COMMENT` varchar(1024) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1679 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

*************************** 1. row ***************************
       Table: big_table
Create Table: CREATE TABLE `big_table` (
  `TABLE_CATALOG` varchar(512) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_SCHEMA` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `ORDINAL_POSITION` bigint(21) unsigned NOT NULL DEFAULT '0',
  `COLUMN_DEFAULT` longtext CHARACTER SET utf8,
  `IS_NULLABLE` varchar(3) CHARACTER SET utf8 NOT NULL,
  `DATA_TYPE` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `CHARACTER_MAXIMUM_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_OCTET_LENGTH` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_SCALE` bigint(21) unsigned DEFAULT NULL,
  `DATETIME_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_SET_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLLATION_NAME` varchar(32) CHARACTER SET utf8 DEFAULT NULL,
  `COLUMN_TYPE` longtext CHARACTER SET utf8 NOT NULL,
  `COLUMN_KEY` varchar(3) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `EXTRA` varchar(30) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `PRIVILEGES` varchar(80) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `COLUMN_COMMENT` varchar(1024) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`id`),
  KEY `big_fk` (`TABLE_SCHEMA`)
) ENGINE=InnoDB AUTO_INCREMENT=1718273 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

Query OK, 1678 rows affected (0.10 sec)
Query OK, 1718272 rows affected (1 min 14.54 sec)

*************************** 1. row ***************************
       Table: small_table
Create Table: CREATE TABLE `small_table` (
  `TABLE_CATALOG` varchar(512) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_SCHEMA` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  ...
  `COLUMN_COMMENT` varchar(1024) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`id`),
  KEY `small_fk` (`TABLE_SCHEMA`),
  CONSTRAINT `small_fk` FOREIGN KEY (`TABLE_SCHEMA`)
  REFERENCES `schema_names` (`schema_name`)
  ON DELETE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=1679 DEFAULT CHARSET=latin1
1 row in set (0.12 sec)

*************************** 1. row ***************************
       Table: big_table
Create Table: CREATE TABLE `big_table` (
  `TABLE_CATALOG` varchar(512) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_SCHEMA` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  ...
  `COLUMN_COMMENT` varchar(1024) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`id`),
  KEY `big_fk` (`TABLE_SCHEMA`),
  CONSTRAINT `big_fk` FOREIGN KEY (`TABLE_SCHEMA`)
  REFERENCES `schema_names` (`schema_name`)
  ON DELETE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=1718273 DEFAULT CHARSET=latin1
1 row in set (0.01 sec)

+--------------------+
| schema_name        |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+
4 rows in set (0.00 sec)

+---------+--------------------+
| howmany | table_schema       |
+---------+--------------------+
|     563 | information_schema |
|     286 | mysql              |
|     786 | performance_schema |
|      43 | test               |
+---------+--------------------+
4 rows in set (0.01 sec)

+---------+--------------------+
| howmany | table_schema       |
+---------+--------------------+
|  576512 | information_schema |
|  292864 | mysql              |
|  804864 | performance_schema |
|   44032 | test               |
+---------+--------------------+
4 rows in set (2.10 sec)

Query OK, 1 row affected (1.52 sec)

+--------------------+
| schema_name        |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
+--------------------+
3 rows in set (0.00 sec)

+---------+--------------------+
| howmany | table_schema       |
+---------+--------------------+
|     563 | information_schema |
|     286 | mysql              |
|     786 | performance_schema |
+---------+--------------------+
3 rows in set (0.00 sec)

+---------+--------------------+
| howmany | table_schema       |
+---------+--------------------+
|  576512 | information_schema |
|  292864 | mysql              |
|  804864 | performance_schema |
+---------+--------------------+
3 rows in set (1.74 sec)

Query OK, 0 rows affected (0.60 sec)

+--------------------+
| schema_name        |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+
4 rows in set (0.00 sec)

+---------+--------------------+
| howmany | table_schema       |
+---------+--------------------+
|     563 | information_schema |
|     286 | mysql              |
|     786 | performance_schema |
|      43 | test               |
+---------+--------------------+
4 rows in set (0.01 sec)

+---------+--------------------+
| howmany | table_schema       |
+---------+--------------------+
|  576512 | information_schema |
|  292864 | mysql              |
|  804864 | performance_schema |
|   44032 | test               |
+---------+--------------------+
4 rows in set (1.59 sec)

DROP FOREIGN KEY and INDEX from small_table:
Query OK, 0 rows affected (0.02 sec)

DROP FOREIGN KEY from big_table:
Query OK, 0 rows affected (0.02 sec)

*************************** 1. row ***************************
       Table: small_table
Create Table: CREATE TABLE `small_table` (
  `TABLE_CATALOG` varchar(512) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_SCHEMA` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  ...
  `COLUMN_COMMENT` varchar(1024) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=1679 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

*************************** 1. row ***************************
       Table: big_table
Create Table: CREATE TABLE `big_table` (
  `TABLE_CATALOG` varchar(512) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_SCHEMA` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `TABLE_NAME` varchar(64) CHARACTER SET utf8 NOT NULL DEFAULT '',
  ...
  `COLUMN_COMMENT` varchar(1024) CHARACTER SET utf8 NOT NULL DEFAULT '',
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`id`),
  KEY `big_fk` (`TABLE_SCHEMA`)
) ENGINE=InnoDB AUTO_INCREMENT=1718273 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

Example 6. Changing Auto-Increment Value

Here is an illustration of increasing the auto-increment lower limit for a table column, demonstrating how this operation now avoids a table rebuild, plus some other fun facts about InnoDB auto-increment columns.

/*
If this script is run after foreign_key.sql, the schema_names table is
already set up. But to allow this script to run multiple times without
running into duplicate ID errors, we set up the schema_names table
all over again.
*/

\! clear

\! echo "=== Adjusting the Auto-Increment Limit for a Table ==="
\! echo

drop table if exists schema_names;
create table schema_names (id int unsigned not null primary key auto_increment,
  schema_name varchar(64) character set utf8 not null,
  index i_schema (schema_name))
  as select distinct table_schema schema_name from small_table;

\! echo "Initial state of schema_names table."
\! echo "AUTO_INCREMENT is included in SHOW CREATE TABLE output."
\! echo "Note how MySQL reserved a block of IDs,"
\! echo "but only needed 4 of them in this transaction,"
\! echo "so the next inserted values would get IDs 8 and 9."
show create table schema_names\G
select * from schema_names order by id;

\! echo "Inserting even a tiny amount of data can produce gaps in the ID sequence."
insert into schema_names (schema_name) values ('eight'), ('nine');

set old_alter_table=0;
\! echo "Bumping auto-increment lower limit to 20 (fast mechanism):"
alter table schema_names auto_increment=20;

\! echo "Inserting 2 rows that should get IDs 20 and 21:"
insert into schema_names (schema_name) values ('foo'), ('bar');
commit;

set old_alter_table=1;
\! echo "Bumping auto-increment lower limit to 30 (slow mechanism):"
alter table schema_names auto_increment=30;

\! echo "Inserting 2 rows that should get IDs 30 and 31:"
insert into schema_names (schema_name) values ('bletch'),('baz');
commit;

select * from schema_names order by id;

set old_alter_table=0;

\! echo "Final state of schema_names table."
\! echo "AUTO_INCREMENT value shows the next inserted row would get ID=32."
show create table schema_names\G

Running this code gives this output, condensed for brevity and with the most important points bolded:

=== Adjusting the Auto-Increment Limit for a Table ===

Query OK, 0 rows affected (0.01 sec)

Query OK, 4 rows affected (0.02 sec)
Records: 4  Duplicates: 0  Warnings: 0

Initial state of schema_names table.
AUTO_INCREMENT is included in SHOW CREATE TABLE output.
Note how MySQL reserved a block of IDs,
but only needed 4 of them in this transaction,
so the next inserted values would get IDs 8 and 9.
*************************** 1. row ***************************
       Table: schema_names
Create Table: CREATE TABLE `schema_names` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `schema_name` varchar(64) CHARACTER SET utf8 NOT NULL,
  PRIMARY KEY (`id`),
  KEY `i_schema` (`schema_name`)
) ENGINE=InnoDB AUTO_INCREMENT=8 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

+----+--------------------+
| id | schema_name        |
+----+--------------------+
|  1 | information_schema |
|  2 | mysql              |
|  3 | performance_schema |
|  4 | test               |
+----+--------------------+
4 rows in set (0.00 sec)

Inserting even a tiny amount of data can produce gaps in the ID sequence.
Query OK, 2 rows affected (0.00 sec)
Records: 2  Duplicates: 0  Warnings: 0

Bumping auto-increment lower limit to 20 (fast mechanism):
Query OK, 0 rows affected (0.01 sec)

Inserting 2 rows that should get IDs 20 and 21:
Query OK, 2 rows affected (0.00 sec)

Bumping auto-increment lower limit to 30 (slow mechanism):
Query OK, 8 rows affected (0.02 sec)

Inserting 2 rows that should get IDs 30 and 31:
Query OK, 2 rows affected (0.00 sec)

Query OK, 0 rows affected (0.01 sec)

+----+--------------------+
| id | schema_name        |
+----+--------------------+
|  1 | information_schema |
|  2 | mysql              |
|  3 | performance_schema |
|  4 | test               |
|  8 | eight              |
|  9 | nine               |
| 20 | foo                |
| 21 | bar                |
| 30 | bletch             |
| 31 | baz                |
+----+--------------------+
10 rows in set (0.00 sec)

Final state of schema_names table.
AUTO_INCREMENT value shows the next inserted row would get ID=32.
*************************** 1. row ***************************
       Table: schema_names
Create Table: CREATE TABLE `schema_names` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `schema_name` varchar(64) CHARACTER SET utf8 NOT NULL,
  PRIMARY KEY (`id`),
  KEY `i_schema` (`schema_name`)
) ENGINE=InnoDB AUTO_INCREMENT=32 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

Tuesday Apr 10, 2012

InnoDB transportable tablespaces

Note: this article was originally published on http://blogs.innodb.com on April 10, 2012 by Sunny Bains.

The Problem

In the past, users could not take full advantage of the FLUSH TABLES WITH READ LOCK statement: InnoDB simply ignored the flush-to-disk part. Copying the .ibd file to another location was only safe if, by sheer luck, the table had no dirty pages in the InnoDB buffer pool that had not yet been synced to disk. Restore had its own limitations and complications. The .ibd file could not simply be copied over to another server, because during import InnoDB did not fix up the metadata required for a trouble-free import. The main problems during import were:

  • If the tablespace ID on the importing instance did not match the one in the file, for example because the table had been dropped and re-created, the import would fail.
  • If the table and index IDs in the tablespace were different from those on the importing server.
  • InnoDB uses a global row ID for tables without an explicit primary key. Any such row IDs in an imported tablespace had to be lower than the current maximum on the new server.
  • The maximum LSN of the tablespace had to be lower than the server’s current max LSN.
  • The maximum transaction (trx) ID of the tablespace had to be lower than the server current max trx id.
  • Purge and change buffer issues.

In short, if it worked, either you were lucky that the above constraints happened to be satisfied, or your tables were effectively read-only.

The Solution

Do the flush (export) properly, and do the import properly too. During flushing we disable purge, merge all pending change buffer entries into the tablespace, and then flush all the dirty pages to disk. Note: purge remains disabled until UNLOCK TABLES; is issued. We also write out a metadata file that contains the table schema, the hostname of the exporting server, the page size of the exporting instance, the highest autoinc value in memory, etc., so that during import we can check whether the table schema matches (currently, foreign key relationships are ignored) and restore the autoinc value. This metadata file is created in the same directory as the tablespace, with a .cfg suffix. If the table is named customer, then the metadata file will be named customer.cfg and the tablespace will be named customer.ibd. Note: FLUSH TABLES and IMPORT only work for tables that are stored outside the system tablespace, in their own tablespace; the InnoDB config parameter innodb_file_per_table must be set when creating the table.

Because InnoDB stores data on disk in big-endian format, tablespaces can be copied between servers with different architectures. Copying between different operating systems also works.

Example

If you want to backup a single table:

CREATE DATABASE ftwrl;

SET GLOBAL innodb_file_per_table=1;

CREATE TABLE ftwrl.t (C INT) ENGINE=InnoDB;

INSERT INTO ftwrl.t VALUES(1),(2),(3);

SELECT * FROM ftwrl.t;

FLUSH TABLES ftwrl.t WITH READ LOCK;

From the command line, copy the t.ibd and t.cfg to your backup directory. The files should be in the ftwrl/ sub-directory under datadir.
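For example, if the mysql client happens to be running on the same host as the server, the copy can be done without leaving the client by using its \! shell escape, in the same style as the script earlier in this post. This is only a minimal sketch: the datadir /var/lib/mysql and the backup directory /backups are hypothetical paths, so substitute your own.

\! cp /var/lib/mysql/ftwrl/t.ibd /var/lib/mysql/ftwrl/t.cfg /backups/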

UNLOCK TABLES; — Note: This will remove the t.cfg file.

DROP TABLE ftwrl.t;

CREATE TABLE ftwrl.t (C INT) ENGINE=InnoDB;

ALTER TABLE ftwrl.t DISCARD TABLESPACE;

DISCARD TABLESPACE renames the t.ibd file to t.ibt in the same directory; for now, you have to remove this file manually. Once you’ve removed it, copy the backed-up tablespace t.ibd and the metadata file t.cfg to the ftwrl/ sub-directory. Then issue the following:

ALTER TABLE ftwrl.t IMPORT TABLESPACE;
CHECK TABLE ftwrl.t;
SELECT * FROM ftwrl.t;

You should be good to go. If you want to back up multiple tables together, the syntax for that is:

FLUSH TABLES t1, t2, …, tn WITH READ LOCK;

Copy the files as before, but this time copy all of the tablespaces and metadata files.

UNLOCK TABLES;

Then, for each table, first discard its tablespace as above and then import it as above.
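As a minimal sketch of the full round trip, assuming two hypothetical tables t1 and t2 in the ftwrl database (the same single-table steps shown above, repeated per table):

USE ftwrl;
FLUSH TABLES t1, t2 WITH READ LOCK;
-- copy t1.ibd, t1.cfg, t2.ibd and t2.cfg from the ftwrl/ sub-directory to the backup location
UNLOCK TABLES;

-- later, on the importing server, after re-creating matching empty tables
-- and discarding their tablespaces (removing the renamed files as described above):
ALTER TABLE t1 DISCARD TABLESPACE;
ALTER TABLE t2 DISCARD TABLESPACE;
-- copy the backed-up .ibd and .cfg files into the ftwrl/ sub-directory, then:
ALTER TABLE t1 IMPORT TABLESPACE;
ALTER TABLE t2 IMPORT TABLESPACE;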

Other changes

Errors and warnings will be pushed to the client so that the user doesn’t need to poke around in the server error log files to check why an operation failed. The intention is to send enough information to the user so that the problem can be clearly identified and understood.
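For instance, if an import runs into trouble, the details can be retrieved directly in the client session; a minimal, hypothetical example (the actual message text depends on what went wrong):

ALTER TABLE ftwrl.t IMPORT TABLESPACE;
SHOW WARNINGS;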

Conclusion

The syntax for “export” is not finalised yet and may change; for now, the examples above using FLUSH TABLES WITH READ LOCK should work. Current limitations:

  • To export tables that are in a foreign key relationship, you must export and import all of the related tables together, otherwise you may end up with inconsistencies.
  • Export/import does not yet work on partitioned tables; this limitation will be addressed later.
  • A matching table definition must already exist on the importing server instance; currently we don’t automagically create the table from the saved metadata during import.
  • The .cfg format is not written in stone yet; we may add more data to this file, for example for partition support.

Your feedback is important to us, so that we can improve this feature further.

Choose the Location of your InnoDB File-per-Table Tablespace

Note: this article was originally published on http://blogs.innodb.com on April 10, 2012 by Kevin Lewis.

The April 2012 InnoDB labs release introduces a new feature in InnoDB that allows you to choose the location of specific tables.  For example, you can place critical tables on an SSD drive while leaving the system tablespace on a hard drive.  Conversely, you can keep your primary database files on an SSD and put a seldom-used but very large archive or reference table on a larger, cheaper hard drive.

InnoDB now makes use of the following existing syntax in MySQL:

CREATE TABLE  . . .  DATA DIRECTORY = ‘absolute path of data directory’;

CREATE TABLE  . . .  PARTITION . . . DATA DIRECTORY = ‘absolute path of data directory’;

This syntax is used by the MyISAM and Archive engines to make use of symbolic links on those operating systems that support them.  But InnoDB can use this syntax on any OS, since it stores the path in a new system table called SYS_DATAFILES.  There is also a new system table called SYS_TABLESPACES.  Both of these can be viewed in the April Labs release of 5.6 through the information schema. For example:

mysql> SET GLOBAL innodb_file_per_table=on;
Query OK, 0 rows affected (0.00 sec)

mysql> CREATE TABLE t1 (a int KEY, b text) DATA DIRECTORY='/ssd1/my_data';
Query OK, 0 rows affected (0.12 sec)

mysql> SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_TABLESPACES;
+-------+---------+------+-------------+----------------------+-----------+---------------+
| SPACE | NAME    | FLAG | FILE_FORMAT | ROW_FORMAT           | PAGE_SIZE | ZIP_PAGE_SIZE |
+-------+---------+------+-------------+----------------------+-----------+---------------+
|     1 | test/t1 |    0 | Antelope    | Compact or Redundant |     16384 |             0 |
+-------+---------+------+-------------+----------------------+-----------+---------------+
1 row in set (0.00 sec)

mysql> SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_DATAFILES;
+-------+---------------------------+
| SPACE | PATH                      |
+-------+---------------------------+
|     1 | /ssd1/my_data/test/t1.ibd |
+-------+---------------------------+
1 row in set (0.00 sec)

Notice that a directory named for the database is added after the DATA DIRECTORY you provide.  This matches the layout used in the normal data directory.  It allows you to choose the same remote location for all of your tablespaces, even if tables in different databases have the same name.
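A minimal sketch of that layout, assuming two hypothetical databases db1 and db2 that each contain a table named t1 and share the same DATA DIRECTORY:

SET GLOBAL innodb_file_per_table=on;
CREATE TABLE db1.t1 (a int KEY, b text) ENGINE=InnoDB DATA DIRECTORY='/ssd1/my_data';
CREATE TABLE db2.t1 (a int KEY, b text) ENGINE=InnoDB DATA DIRECTORY='/ssd1/my_data';
-- The datafiles land in per-database sub-directories and do not collide:
--   /ssd1/my_data/db1/t1.ibd
--   /ssd1/my_data/db2/t1.ibd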

The last 4 columns in innodb_sys_tablespaces are interpretations of the tablespace flags field. Tablespace flags do not distinguish between the Compact and Redundant row formats, which is why you see “Compact or Redundant” displayed above.  If you want to see the row format of the table, innodb_sys_tables now also interprets the table flags:

mysql> SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_TABLES WHERE name like 'test%';
+----------+---------+------+--------+-------+-------------+------------+---------------+
| TABLE_ID | NAME    | FLAG | N_COLS | SPACE | FILE_FORMAT | ROW_FORMAT | ZIP_PAGE_SIZE |
+----------+---------+------+--------+-------+-------------+------------+---------------+
|       24 | test/t1 |   65 |      5 |     1 | Antelope    | Compact    |             0 |
+----------+---------+------+--------+-------+-------------+------------+---------------+
1 row in set (0.00 sec)

In addition to the SYS_DATAFILES entry, the absolute file path to the tablespace is also written to a text file in the default datafile location for the table, which is also where the FRM file is written by the MySQL server.  It has the same name as the FRM file, but with the extension ‘.isl’, which stands for InnoDB Symbolic Link.  It exists to support recovery, which happens before the system tables are opened.  For the example above, the sole contents of t1.isl are:

/ssd1/my_data/test/t1.ibd

The ISL file also allows a DBA to move one of these remote tablespaces since ALTER TABLE … DATA DIRECTORY =  ‘. . .’ is currently not supported.  In order to move a tablespace to another location, or to port an entire database along with its remote tablespaces, you must edit this ISL file after moving the OS files.  The ISL file is a simple text file that contains the absolute path and filename of the tablespace.  Just change that to the new location and save your changes.  Do not leave a copy of the tablespace in the old location.

If you have a large existing database that uses file-per-table tablespaces (files with ‘.ibd’ extensions), it is possible to move these from the default location next to the FRM file to a remote location such as a new SSD drive.  But you must shut down the MySQL server first.  After it is shut down, move the .ibd tablespace to the location you would like, and create a text file in its old location with the same name but an ‘.isl’ extension.  Make the contents of that text file the full file path of the .ibd file.  Then restart the server.  It will read the .isl file, find the tablespace, and update the internal system tables to record where the file now resides.
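After the restart, one way to confirm that InnoDB picked up the new location is to query the same information schema table shown earlier; a minimal sketch, where the path pattern is hypothetical:

SELECT * FROM INFORMATION_SCHEMA.INNODB_SYS_DATAFILES WHERE PATH LIKE '/ssd1/%';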

Conclusion

This new feature provides a valuable option to configure your database to make the best use of the hardware available.
