Friday Jun 07, 2013

MySQL Connect Tutorials

MySQL Connect 2013 will be bigger and even better. In addition to the keynotes, conference sessions, Birds-of-a-Feather sessions, and hands-on labs, you can add an extra day of tutorials on Monday, September 23.

Whether you are just getting started with MySQL or are an experienced user, the following tutorials will allow you to acquire in-depth knowledge directly from Oracle's MySQL engineers:

  • Getting Started with MySQL: Learn the Essentials
  • Enhancing Productivity with MySQL 5.6 New Features
  • How to Analyze and Tune SQL Queries for Better Performance
  • MySQL 5.6 Replication Tips and Tricks
  • MySQL 5.6 Performance Tuning and Best Practices
  • Getting Started with MySQL Cluster

Learn more about MySQL Connect tutorials and sign up now to secure your seat - space is limited.

Early Bird Discount: Register for MySQL Connect by July 19, 2013 and you’ll save US$500 off the onsite price. Plan to Attend Oracle OpenWorld or JavaOne? You can add the MySQL Connect event to your Oracle OpenWorld or JavaOne registration for only US$100.

Don’t miss MySQL Connect, and MySQL Connect Tutorials! Register Now!

Thursday Apr 25, 2013

Driving MySQL Innovation

On Tuesday, Oracle's VP of MySQL Engineering Tomas Ulin delivered a keynote entitled "Driving MySQL Innovation for Next Generation Applications" at the Percona Live Conference.

If you haven't seen it yet, we highly encourage you to watch it.

Tomas covers:

  • Oracle's Investment in MySQL
  • MySQL 5.6
  • Trends and Product Directions

He makes it very clear that Oracle:

  • Invests in MySQL like Never Before
  • Drives MySQL Innovation
  • Makes MySQL Better for Next Generation Web, Cloud and Big Data Applications

Enjoy the keynote!

Thursday Apr 18, 2013

Adzuna Relies on MySQL to Support Explosive Growth

Adzuna is a fast-growing search engine for classified ads, specialized in jobs, properties and cars. Headquartered in the UK and launched in 2011, Adzuna searches thousands of sites and tens of millions of ads to make it very easy to find the perfect job, home or car locally. It furthermore provides a wealth of statistics such as salary trend graphs and comparisons, geographic job maps, house prices and more. Additionally, Adzuna is integrated with Facebook and LinkedIn and shows open vacancies within one's own network. The search engine powers a number of government applications and is integrated into the UK Prime Minister's economic dashboard.


  • When Adzuna's founders were selecting the database powering the search engine's architecture, they were planning for scalability and reliability. Not only did they expect fast growth but also unpredictable growth. The number of users could indeed jump from ten thousand to one million in a single day, and any downtime or scalability issue was simply not an option, as it could turn new users away from the site forever and undermine its reputation from the start.
  • As a Web startup, low Total Cost of Ownership was essential, and the team also desired to implement a database solution they would be able to customize and tailor to their specific needs.


  • The Adzuna team selected MySQL as the database powering their search engine. They had very positive previous experience with the world’s most popular open source database, and particularly appreciated its performance and scalability, reliability and ease of use, including the quality of its technical documentation. They were confident it would scale according to their requirements, and were ready to bet their business on the database.
  • Key MySQL strong points included its overall suitability for highly demanding web-based applications, its Geographic Information System (GIS) support and integration with the Perl programming language used by the company.
  • Adzuna's search infrastructure uses MySQL to normalize, on a minute-by-minute basis, the data from tens of millions of ads from thousands of websites across four continents, resulting in a database size exceeding 100 GB. The Adzuna engine constantly crawls the Web and receives XML feeds from its partners, and a key to its success is its ability to turn a massive amount of unstructured data into structured data that can be stored, understood and searched by its users. The team decided to use the Apache Solr open source search platform as the front end, leveraging its NoSQL features. Data is subsequently transferred into MySQL. The following diagram describes the Adzuna architecture:
  • In order to help ensure scalability and reliability while focusing its resources on developing its business, the Adzuna team decided to rely on MySQL in the cloud, working with cloud services provider Media Temple.
  • The startup has experienced explosive growth since its launch, currently serving 2 million unique visitors per month, and it recently expanded into Germany, Australia, Canada, South Africa and Brazil. MySQL totally fulfills the expectations of the Adzuna team, which is confident the database will scale to support their ambition to expand into 30 more countries and become the leading search engine for jobs, homes and cars worldwide.
  • As Taleo, Oracle's talent management cloud service, is increasingly used by Adzuna's partners, the company is currently looking at optimizing data transfer from Taleo to MySQL.

“My advice to young startups is to use MySQL, especially if you have high growth expectations. You’ll need to plan for unpredictability and to have a very robust backbone to support it, and that’s exactly what MySQL provides.” Andrew Hunter, Co-Founder, Adzuna

Wednesday Apr 10, 2013

MySQL Replication: Self-Healing Recovery with GTIDs and MySQL Utilities

MySQL 5.6 includes a host of enhancements to replication, enabling DevOps teams to reliably scale-out their MySQL infrastructure across commodity hardware, on-premise or in the cloud.

One of the most significant enhancements is the introduction of Global Transaction Identifiers (GTIDs), whose primary development motivation was:

- enabling seamless failover or switchover from a replication master to a slave

- promoting that slave to the new master

- all without manual intervention and with minimal service disruption.

You can download the new MySQL Replication High Availability Guide to learn more.  The following sections provide an overview of how GTIDs and new MySQL utilities work together to enable self-healing replication clusters.  
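As background, enabling GTIDs in MySQL 5.6 comes down to a few server options set on every server in the topology. A minimal my.cnf sketch (the server-id and log file name are illustrative placeholders):

```ini
[mysqld]
server-id                = 1
# Binary logging is a prerequisite for replication and for GTIDs
log-bin                  = mysql-bin
# Assign a global transaction identifier to every transaction
gtid-mode                = ON
# Reject statements that cannot be logged safely under GTIDs
enforce-gtid-consistency = true
# Slaves record replicated transactions (and their GTIDs) in their own binlog
log-slave-updates        = true
```

In 5.6 these options must be set together across the topology, and each slave uses CHANGE MASTER TO ... MASTER_AUTO_POSITION = 1 to request missing transactions by GTID rather than by binlog file name and offset.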


To understand the implementation and capabilities of GTIDs, we will use an example illustrated below:

- Server “A” crashes

- We need to failover to one of the slaves, promoting it to master

- The remaining server becomes a slave of that new master

Figure 1: MySQL Master Failover and Slave Promotion

As MySQL replication is asynchronous by default, servers B and C may not have both replicated and executed the same number of transactions, i.e. one may be ahead of the other. Consider the following scenarios:

Scenario #1

- Server B is ahead of C and B is chosen as the new master;

- [Then] Server C needs to start replicating from the first transaction in Server B that it has yet to receive;

Scenario #2

- Server C has executed transactions that have so far not been received by Server B, but the administrator designates Server B as the new master (for example, it is configured with higher performing hardware than Server C).

- Server B therefore needs to execute later transactions from Server C, before assuming the role of the master, otherwise lost transactions and conflicts can ensue.

GTIDs apply a unique identifier to each transaction, so it becomes easy to track whether it has been executed on each slave. When the master commits a transaction, a GTID is generated and written to the binary log prior to the transaction. The GTID and the transaction are then replicated to the slave.

If the slave is configured to write changes to its own binary log, the slave ensures that the GTID and transaction are preserved and written after the transaction has been committed.

The set of GTIDs executed on each slave is exposed to the user in a new, read-only, global server variable, gtid_executed. The variable can be used in conjunction with GTID_SUBTRACT() to determine if a slave is up to date with a master, and if not, which transactions are missing.
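To make the set arithmetic concrete, here is a small Python sketch (an illustration, not server code) of what GTID_SUBTRACT computes: given the GTIDs the master has executed and those a slave has executed, it returns the transactions the slave is missing. Interval handling is simplified to one range per server UUID.

```python
def parse_gtid_set(gtid_set):
    """Parse 'uuid:first-last' entries into {uuid: set of transaction numbers}."""
    executed = {}
    for entry in gtid_set.split(","):
        uuid, _, interval = entry.strip().partition(":")
        first, _, last = interval.partition("-")
        last = last or first  # a single transaction has no '-last' part
        executed[uuid] = set(range(int(first), int(last) + 1))
    return executed

def gtid_subtract(set_a, set_b):
    """Return the GTIDs present in set_a but absent from set_b."""
    a, b = parse_gtid_set(set_a), parse_gtid_set(set_b)
    missing = {}
    for uuid, txns in a.items():
        gap = txns - b.get(uuid, set())
        if gap:
            missing[uuid] = sorted(gap)
    return missing

# A slave that has executed 1-7 is missing transactions 8-10 from the master.
master_executed = "4e659069-3cd8-11e2-9003-0018510c1a8b:1-10"
slave_executed  = "4e659069-3cd8-11e2-9003-0018510c1a8b:1-7"
print(gtid_subtract(master_executed, slave_executed))
# prints {'4e659069-3cd8-11e2-9003-0018510c1a8b': [8, 9, 10]}
```

On a real server the same question is answered with SELECT GTID_SUBTRACT(master_set, slave_set), passing the gtid_executed values of the two servers.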

A new replication protocol was created to make this process automatic. When a slave connects to the master, the new protocol ensures that the slave sends the range of GTIDs it has executed and committed, and requests any missing transactions. The master then sends all other transactions, i.e. those that the slave has not yet executed. This is illustrated in the following example (note that the GTID numbering and binlog format are simplified for clarity):

Figure 2: Automatically Synchronizing a New Master with its Slaves

In this example, Server B has executed all transactions before Server A crashed. Using the MySQL replication protocol, Server C will send “id1” to Server B, and then B will send “id2” and “id3” to Server C, before then replicating new transactions as they are committed. If the roles were reversed and Server C were ahead of Server B, this same process would ensure that Server B receives any transactions it has so far not executed, before being promoted to the role of the new master.

We therefore have a foundation for reliable slave promotion, ensuring that any transactions executed on a slave are not lost in the event of an outage to the master.

MySQL Utilities for GTIDs

Implemented as Python-based command-line scripts, the Replication utilities are available as a standalone download as well as packaged with MySQL Workbench. The source code is available from the MySQL Utilities Launchpad page.

The utilities supporting failover and recovery are components of a broader suite of MySQL utilities that simplify the maintenance and administration of MySQL servers, including the provisioning and verification of replication, comparing and cloning databases, diagnostics, etc. The utilities are available under the GPLv2 license, and are extendable using a supplied library. They are designed to work with Python 2.6 and above.

Replication Utility: mysqlfailover

While providing continuous monitoring of the replication topology, mysqlfailover enables automatic or manual failover to a slave in the event of an outage to the master. Its default behavior is to promote the most viable slave, as defined by the following slave election criteria.

• The slave is running and reachable;

• GTIDs are enabled;

• The slave's replication filters do not conflict;

• The replication user exists;

• Binary logging is enabled.

Once a viable slave is elected (called the candidate), the process to retrieve all transactions active in the replication cluster is initiated. This is done by connecting the candidate slave to all of the remaining slaves thereby gathering and executing all transactions in the cluster on the candidate slave. This ensures no replicated transactions are lost, even if the candidate is not the most current slave when failover is initiated.

The election process is configurable. The administrator can use a list to nominate a specific set of candidate slaves to become the new master (e.g. because they have better performing hardware).

At pre-configured intervals, the utility will check to see if the server is alive via a ping operation, followed by a check of the connector to detect if the server is still reachable. If the master is found to be offline or unreachable, the utility will execute one of the following actions based on the value of the failover mode option, which enables the user to define failover policies:

• The auto mode tells the utility to failover to the list of specified candidates first and if none are viable, search the list of remaining slaves for a candidate.

• The elect mode limits election to the candidate slave list and if none are viable, automatic failover is aborted.

• The fail mode tells the utility to not perform failover and instead stop execution, awaiting further manual recovery actions from the DevOps team.
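The mode logic above can be sketched in a few lines of Python (an illustration of the policy, not the utility's actual implementation; the slave attributes mirror the election criteria listed earlier):

```python
def is_viable(slave):
    """A slave is electable when it meets the election criteria: reachable,
    GTIDs enabled, no conflicting replication filters, replication user
    present, binary logging enabled."""
    return all([slave["reachable"], slave["gtid_enabled"],
                not slave["filter_conflict"], slave["repl_user"],
                slave["binlog_enabled"]])

def elect_new_master(mode, candidates, other_slaves):
    """Pick a new master per the failover mode: 'auto', 'elect' or 'fail'."""
    if mode == "fail":
        return None                       # never fail over automatically
    viable = [s for s in candidates if is_viable(s)]
    if viable:
        return viable[0]
    if mode == "auto":                    # fall back to the remaining slaves
        viable = [s for s in other_slaves if is_viable(s)]
        return viable[0] if viable else None
    return None                           # 'elect': abort automatic failover

slave_b = {"name": "B", "reachable": True, "gtid_enabled": True,
           "filter_conflict": False, "repl_user": True, "binlog_enabled": True}
slave_c = dict(slave_b, name="C", reachable=False)

# C is the nominated candidate but is unreachable; 'auto' falls back to B.
print(elect_new_master("auto", [slave_c], [slave_b])["name"])   # prints B
print(elect_new_master("elect", [slave_c], [slave_b]))          # prints None
```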

Via a series of Extension Points users can also bind in their own scripts to trigger failover actions beyond the database (such as Virtual IP failover). Review the mysqlfailover documentation for more detail on configuration and options of this utility.

Replication Utility: mysqlrpladmin

If a user needs to take a master offline for scheduled maintenance, mysqlrpladmin can perform a switchover to a specific slave (called the new master). When performing a switchover, the original master is locked and all slaves are allowed to catch up. Once the slaves have read all events from the original master, the original master is shutdown and control is switched to the new master.

There are many Operations teams that prefer to take failover decisions themselves, and so mysqlrpladmin provides a mechanism for manual failover after an outage to the master has been identified, either by using the health reporting provided by the utilities, or by alerting provided by a tool such as the MySQL Enterprise Monitor.

It is also possible to easily re-introduce recovered masters back to the replication topology by using the demote-master command and initiating a switchover to the recovered master.

Review the mysqlrpladmin documentation for more detail on configuration and options of the utility.

Resources to Get Started

In addition to the documentation, two additional resources will help you learn more:

- MySQL replication utilities tutorial video

- MySQL replication guide: building a self healing replication topology


The combination of options to control failover or switchover and the ability to inform applications of the failover event are powerful features that enable self-healing for critical applications.

Thursday Apr 04, 2013

MySQL 5.6 Replication: All That Is New, On-Demand

The new MySQL 5.6 GA release delivers a host of new capabilities to support developers releasing new services faster, with more agility, performance and security.

One of the areas with the most far-reaching set of enhancements is MySQL replication used by the largest web, mobile and social properties to horizontally scale highly-available MySQL databases across distributed clusters of low cost, commodity servers.

A new on-demand MySQL 5.6 replication webinar takes you on a guided tour through all of those enhancements, including:

- 5x higher master and slave performance with Binary Log Group Commit, Multi-Threaded Slaves and Optimized Row-based replication

- Self healing replication with Global Transaction Identifiers (GTIDs), utilities and crash-safe masters and slaves

- Replication event checksums for rock-solid data integrity across a replication cluster

- DevOps features with Time-Delayed Replication, Remote binary log backup and new CLI utilities

We had some great questions during the live session - here are a few:

Question: When multi-threaded slave is configured, how do you handle multi-database DML? 

Answer: When that happens the slave switches to single-threaded mode for that transaction. After it is applied, the slave switches back to multi-threaded mode.

Question: We have seen in the past that replication would sometimes break when complex queries were fired or ALTER statements took place. Neither row-based nor statement-based replication guaranteed 100% fail-safe replication. Does MySQL 5.6 address this issue?

Answer: Replication can fail for several specific reasons. There is no general algorithm enabling "100% fail safe" replication. But we have made sure that all statements that are unsafe are either converted to row-based replication or issue warnings.

Question: We are looking for an HA cluster solution where most of the websites are WordPress based. Do you have any recommendations on what kind of replication solution we should choose? The tables are using InnoDB.

Answer: Using MySQL 5.6 with the master/slave replication we are discussing today, configured with GTIDs and using the mysqlfailover utility, will provide you High Availability for your WordPress cluster and InnoDB tables. There is nothing you need to add - it is all part of the MySQL you download today.

Question: I have a question about data loss and translating my-bin.000132 to my-bin.000101 when a master fails.

Answer: That's exactly the problem GTIDs address. Without GTIDs it's hard to know the correct position on my-bin.000101 that corresponds to my-bin.000132.

With GTIDs we don't need to know positions, only what the slave has already applied; knowing that, it's simple to send to the new master any missing transactions that are on other slaves. Delivery to a slave of the data already in my-bin.000132 can be ensured by using semi-synchronous replication.

Question: How do the new utilities work with a Pacemaker/Corosync setup?

Answer: The utilities integrate with Pacemaker/Corosync via extension points. The MySQL Pacemaker agent can call mysqlrpladmin to perform a switchover.

Question: Where can I learn more about supported HA solutions for MySQL?

Answer: From the Guide to MySQL HA solutions. I hope you find this useful.

Question: Are there any plans to move replication from host-based to either 'database' or 'table' based?

Answer: Current MySQL replication supports replication of specific databases or tables from a host using replication filters. This means we support replication of a host (all data that can be replicated), replication of specific databases per host, and of specific tables per host. (You can also ignore databases and tables using filters.)
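As a sketch, the filters mentioned in the answer are plain slave-side server options; the database and table names here are hypothetical:

```ini
[mysqld]
# Replicate only the 'sales' database from the master
replicate-do-db             = sales
# ...or skip a database entirely
replicate-ignore-db         = scratch
# ...or skip individual tables, with wildcards
replicate-wild-ignore-table = sales.tmp_%
```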

For these and other questions, along with the full replay of the webinar, register here

Tuesday Mar 26, 2013

Upcoming MySQL Events in Europe

Oracle's MySQL team in Europe is very busy running or participating in a number of events over the coming couple of months.

Join us to learn about the latest developments and get all your questions answered!

More events will be scheduled worldwide and listed on our events page.

MySQL User Group meeting, Stockholm, Sweden
April 10

MySQL Tech Tour: Delivering Flexible, Robust and Elastic Solutions for the Web and the Modern Enterprise - Reading, UK
April 11

MySQL Tech Tour: From the Web to the Cloud - München, Germany
April 17

Norwegian User Group Conference 2013 - Oslo, Norway
April 17

MySQL Tech Tour: From the Web to the Cloud - Hannover, Germany
April 18

MySQL Tech Tour: From the Web to the Cloud - Baden, Switzerland
April 24

MySQL Tech Tour: From the Web to the Cloud – Rome, Italy
May 7

MySQL Tech Tour for Software & Hardware Vendors: Improving Your Products with New MySQL Features and Faster Performance - London, UK
May 15

We hope to see many of you there!

Thursday Mar 21, 2013

ip.access Relies on MySQL to Support Mobile Traffic Growth

ip.access specialises in delivering full end-to-end small cell solutions for Tier 1 and Tier 2 telecom operators around the world. Headquartered in Cambridge, UK, ip.access has been deploying small cell solutions for over 10 years, initially in 2G but now with 3G and 4G products.  They have products integrated in over 100 customer networks around the world, with the capability to accelerate the introduction of small cell solutions into networks irrespective of use case, technology or deployment model.

Small Cells are low-power wireless Access Points that are deployed via standard internet connections and operate within the Mobile Operators' licensed spectrum. Network coverage and signal quality are major influences on subscriber satisfaction and retention, so operators leverage small cells to cost-effectively support the growth in mobile data traffic and cover areas of poor network coverage.

In addition to the small cell access points, the ip.access complete solutions include gateways that bridge between the small cell network and the existing mobile core networks, and a Network Orchestration System for configuration, management, monitoring and reporting across the network.


  • ip.access solutions are deployed within mission critical telecom environments. The database they would integrate needed to provide five nines (99.999%) availability in order to ensure continuous coverage and access to the mobile operators’ networks.
  • The Network Orchestration System (NOS) required a database consolidating the network inventory, status and monitoring data into one centralized, scalable and secure system.  It had to be capable of serving a network of over a million small cell units. Moreover, the database was to provide the high performance read/write access needed today as well as in the future, with transactional demands rising as the next generation self-organizing networks drive ever more sophisticated event processing within the NOS.
  • A cost-effective database solution was critical in order to allow operators to test the ip.access offerings during a trial period, or to start with relatively small deployments. Yet, the database had to be highly scalable in order to support the larger deployments of Tier 1 mobile operators as they take early stage pilots to full national roll-out.
  • The system was designed around the same concepts as the so-called “brain” within mobile networks, the HLR (Home Location Register) which stores details of every subscriber and their location. Like traditional telecoms Intelligent Network (IN) platform architectures, there is a master database running on centralized servers, accessed by application services running on front-end servers. Consequently, the database had to be plug & play across multiple back-end servers to scale the solution in an N+1 environment.
  • Database integrity was paramount, including strong referential integrity checking, as losing data or its full relationship was not an option.
  • Other considerations included but were not limited to:
  • Non-service affecting backup of the database and High Availability mechanisms
  • Geographical redundancy
  • Low to zero administration overhead
  • Access to maintenance releases, bug fixes, patches and updates
  • Availability of database performance monitoring and tuning tools
  • 24 X 7 Support
  • Globally deployable and locally supportable through local partners
  • Access to database tuning experts and architecture consultancy services to optimise performance in the core product and in individual projects.


  • The ip.access team evaluated several databases, and also considered the option to build its own database solution. They ultimately selected MySQL, which delivered the required performance & scalability, high availability and zero administration capabilities, while being a cost-effective solution fulfilling their other requirements. Additionally, the MySQL database and MySQL Cluster are proven solutions in telecom environments that demand five nines availability, powering the mission critical HLR systems of numerous Network Equipment Providers.
  • MySQL databases are integrated within the small cell access controller, providing real-time information about the access points connected to the network, as well as within the NOS, ensuring the configuration, monitoring and management of the system and delivering real-time performance information, alarms and various Key Performance Indicators (KPIs).
  • For installations counting up to 250K small cells, MySQL Enterprise Edition is deployed on active-standby servers. High Availability and geographical redundancy are achieved through DRBD (Distributed Replicated Block Device) and three-way replication.
  • In networks comprising over 250K small cells, or requiring the utmost read/write access performance for the new breed of location presence services, performance KPI threshold calculation and correlation, etc., MySQL Cluster Carrier Grade Edition is typically implemented.
  • The MySQL Enterprise Monitor and Query Analyzer management tools included in the MySQL Enterprise and MySQL Cluster Carrier Grade Editions enable simple and rapid performance monitoring and tuning when needed.
  • Relying on Oracle’s MySQL database solutions, ip.access delivers complete end-to-end systems to mobile operators, beyond access points, enabling them to “small cell enable” their networks. Moreover, the company’s broad portfolio allows Mobile Service Providers to start pilot projects and run tests cost-effectively, and then to very rapidly scale as needed.
  • Choosing MySQL paid off for ip.access. It enabled the company to focus on what they do best, investing their resources in the development of a differentiated solution. It accelerated time to market and allowed ip.access to offer flexible solutions to its customers, without imposing major pricing constraints on entry-level deployments. Finally, the features and MySQL innovation driven by Oracle assure ip.access that the MySQL database and MySQL Cluster will continue to represent a foundation of choice for their next generation offerings.
  • Oracle is a strategic partner to ip.access, which furthermore uses AcmePacket products for its security gateways, and offers Oracle Sun hardware products to operators wishing the hardware to be supplied as part of the end-to-end small cell solutions. Oracle RAC can also be used within the ip.access solutions should an operator require it.

“Oracle’s MySQL solutions allow us to focus on building differentiated solutions on top of a solid foundation that we don’t need to worry about. The ability to define a cost effective standard solution that can scale up to meet any operator's need was critical to us.  MySQL made the build vs buy decision an easy one.” Gavin Ray, VP Product & Marketing, ip.access

Thursday Mar 14, 2013

MySQL User Group Meeting

MySQL User Group meeting, Stockholm, Sweden - April 10, 2013
After some time, the Swedish MySQL User Group is organizing and announcing the upcoming MySQL User Group meeting in Stockholm! If you are in the area on April 10, 2013, please make sure you do not miss this great opportunity to listen to, and ask questions of, the main guest Mats Kindahl (Senior Principal Software Developer, MySQL), who will be talking about replication news in MySQL 5.6.

Beyond this main topic and Q&A on MySQL Replication, you will also have an opportunity to suggest topics for the upcoming SMUG meetings, as well as enjoy the food & mingle.

Date: April 10, 2013
Start: at 4pm
Where (address): Emineo; Lilla Västerbron 20; 11219 Stockholm; Sweden
URL for registration & updates: LinkedIn; Facebook
Proposed Agenda:

  • Replication news in MySQL 5.6 by Mats Kindahl, Senior Principal Software Developer MySQL
  • Q&A MySQL Replication
  • Topics for upcoming SMUG meetings
  • Food & mingle

Come join us!

The MySQL Connect 2013 Call for Papers is Open!

Following the success of its first MySQL Connect edition, Oracle will hold MySQL Connect 2013 on September 21-23 in San Francisco.

The Call for Papers is now open and will run until April 12. We highly encourage MySQL users, customers, partners and community members to submit session proposals now.

MySQL Connect represents a unique opportunity to learn about the latest features of the best MySQL product releases ever, discuss product roadmaps, and connect directly with the engineers driving MySQL innovation.

The Conference is chock-full of technical sessions, hands-on labs, Birds of a Feather (BOF) sessions and tutorials delivered by MySQL community members, users, customers, partners and Oracle’s MySQL engineers.

The event will include five tracks: Performance and Scalability, High Availability & Replication, Cloud & Big Data, Database Administration & DevOps, Architecture and Application Development.

This is your chance to share real-world experiences, best practices and insights you’ve gained with the MySQL Community at large. You’ll be able to address not only long time MySQL users and highly experienced professionals but also newer and eager MySQL customers traditionally attending Oracle OpenWorld. Don’t miss this opportunity and submit your sessions today.

Interested in exhibiting and sponsoring? Email for details.

Friday Mar 08, 2013

MySQL Web Reference Architectures - Your Guide to Innovating on the Web

MySQL is deployed in 9 of the top 10 most trafficked sites on the web, including Facebook, Twitter, eBay and YouTube, as well as in some of the fastest growing services such as Tumblr and Pinterest.

Working with these companies has given MySQL developers, consultants and support engineers unique insight into how to design database-driven web architectures – whether deployed on-premise or in the cloud.

The MySQL Web Reference Architectures are a set of documented and repeatable best practices for building infrastructure that delivers the highest levels of scalability, agility and availability with the lowest levels of cost, risk and complexity.

Four components common to most web and mobile properties are sized, with optimum deployment architectures for each:

- User authentication and session management

- Content management

- Analytics and big data integration

The sizing is defined by database size and load, as shown below.

For each reference architecture, strategies for scaling the service and ensuring high availability are discussed, along with approaches to secure, audit and backup user data, and tools to monitor and manage the environment.

The Reference Architectures cover the core underlying technologies supporting today’s most successful web services including:

- MySQL Database

- MySQL Cluster

- MySQL Replication

- Caching with Memcached and Redis

- Big Data with Hadoop


- Geographic Redundancy

- Hardware Recommendations

- Operational Best Practices

An example of the "Large" reference architecture is shown below.

To learn more:

- Download the MySQL Web Reference Architectures Guide

- View the MySQL Web Reference Architectures slides

The Reference Architectures are designed as a starting point which we hope will enable you to build the next web and mobile phenomenon!

Tuesday Feb 26, 2013

MySQL User Group Meeting & MySQL Tech Tour in Moscow - March 21-22!

A MySQL Tech Tour in Moscow is planned for March 21, 2013, followed by the local MySQL User Group meeting to talk about MySQL 5.6! Please find the details below.

MySQL Tech Tour:

Date: March 21, 2013

MySQL User Group meeting:

Date: March 22, 2013
Where (address): Mail.Ru Airport; Leningradsky prospect 47 building 2
Planned time: 19:00
Dmitry Lenev & Victoria Reznitchenko will speak about MySQL 5.6.

Registration needed!

Thursday Feb 07, 2013

MySQL 5.6 Replication Performance

With data volumes and user populations growing, it's no wonder that database performance is a hot topic in developer and DBA circles.

It's also no surprise that continued performance improvements were one of the top design goals of the new MySQL 5.6 release, which was declared GA on February 5th (note: GA means “Generally Available”, not “Gypsy Approved” @mysqlborat)

And the performance gains haven’t disappointed:

- Dimitri Kravtchuk’s Sysbench tests showed MySQL delivering up to 4x higher performance than the previous 5.5 release.

- Mikael Ronstrom’s testing showed up to 4x better scalability as thread counts rose to 48 and 60 threads (not so uncommon in commodity systems today)

Of course, YMMV (Your Mileage May Vary) in real-world workloads, but you can be reasonably certain you will see performance gains by upgrading to MySQL 5.6.  It is up to you whether you invest these gains to serve more applications and users from your MySQL databases, or you consolidate to fewer instances to reduce operational costs.

How about if you are using MySQL replication in order to scale out your database across commodity nodes, and as a foundation for High Availability (HA)? Performance here has also been improved through a combination of:

- Binary Log Group Commit

- Multi-Threaded Slaves

- Optimized Row-Based Replication

As well as giving you the benefits above, higher replication performance directly translates to:

- Reduced risk of losing data in the event of a failure on the master

- Improved read consistency from slaves

- Resource-efficient binlogs traversing the replication cluster

We will look at each of these in more detail.

Binlog Group Commit

Rather than applying writes one at a time, Binary Log Group Commit batches writes to the Binlog, significantly reducing overhead to the master. This is demonstrated by the benchmark results below.

Using the SysBench RW tests, enabling the binlog and using the default sync_binlog=0 reduces the throughput of the master by around 10%. With sync_binlog=1 (where the MySQL server synchronizes its binary log to disk after every write to the binlog, thereby giving maximum data safety), throughput is reduced by a further 5%.
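To experiment with this trade-off yourself, the durability level is controlled by the sync_binlog system variable; a minimal sketch using the two values from the benchmark above:

```sql
-- Maximum safety: sync the binary log to disk at every commit.
-- With 5.6 Binary Log Group Commit, many commits can share one sync.
SET GLOBAL sync_binlog = 1;

-- Benchmark default: leave binlog flushing to the operating system.
SET GLOBAL sync_binlog = 0;
```

The same settings can be made permanent in my.cnf as sync_binlog=1 or sync_binlog=0.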

To understand the difference this makes, the tests were repeated comparing MySQL 5.6 to MySQL 5.5.

Even with sync_binlog=1, MySQL 5.6 was still up to 4.5x faster than 5.5 with sync_binlog=0

Gone are the days when configuring replication resulted in a 50% or more hit to performance of your master.

The result of Binary Log Group Commit is that MySQL replication is much better able to keep pace with the demands of write-intensive workloads, imposing much less overhead on the master, even when the binlog is flushed to disk after each commit!

Details of the configurations used for the benchmark are in the Appendix at the end of this post.

You can learn more about the implementation of Binlog Group Commit from Mats Kindahl’s blog.

Multi-Threaded Slave

Looking beyond the replication master, it is also necessary to bring performance enhancements to the replication slaves.

Using Multi-Threaded Slaves, processing is split between worker threads based on schema, allowing updates to be applied in parallel, rather than sequentially. This delivers benefits to those workloads that isolate application data using databases - e.g. multi-tenant systems deployed in cloud environments.

Benchmarks demonstrate that Multi-Threaded Slaves increase performance by 5x.  

This performance enhancement translates to improved read consistency for clients accessing the replication cluster. Slaves are better able to keep up with the master, and so users are much less likely to need to throttle the sustained throughput of writes, just so that the slaves don't indefinitely fall further and further behind (previously some users had to reduce the capacity of their systems in order to reduce slave lag).
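On a 5.6 slave, parallel apply is switched on via the slave_parallel_workers variable; a minimal sketch (the worker count shown is illustrative):

```sql
STOP SLAVE;
-- Allow updates to different schemas to be applied in parallel
SET GLOBAL slave_parallel_workers = 4;  -- default 0 = single-threaded
START SLAVE;
```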

You can get all of the details on this benchmark and the configurations used in this earlier blog posting

Optimized Row-Based Replication

The final piece in improving replication performance is to introduce efficiencies to the binary log itself.

By only replicating those elements of the row image that have changed following INSERT, UPDATE and DELETE operations, replication throughput for both the master and slave(s) can be increased while binary log disk space, network resource and server memory footprint are all reduced.

This is especially useful when replicating across datacenters or cloud availability zones.
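In 5.6 this behavior is selected through the binlog_row_image variable; a hedged configuration sketch:

```sql
-- Log only changed columns (plus enough columns to identify the row);
-- the default remains 'full' (log the entire before/after row image).
SET GLOBAL binlog_row_image = 'minimal';
```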

Another performance enhancement is the way Row-Based Replication events are handled on the slave for tables without primary keys. Previously, updates were applied via multiple table scans. In MySQL 5.6, no matter the size of the event, only one table scan is performed, significantly reducing the apply time.

You can learn more about Optimized Row Based Replication from the MySQL documentation.  

Crash-Safe Slaves and Binlog

Also known as “Transactional Replication”, this is more of an availability feature than a performance feature, but there is a nice performance angle to it.

The key concept behind crash safe replication is that replication positions are stored in tables rather than files and can therefore be updated transactionally, together with the data. This enables the slave or master to automatically roll back replication to the last committed event before a crash, and resume replication without administrator intervention. Not only does this reduce operational overhead, it also eliminates the risk of data loss or corruption.

From a performance perspective, it also means there is one less disk sync, since we leverage the storage engine's fsync and don't need an additional fsync for the file, giving users a performance gain.
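A minimal my.cnf sketch for enabling the table-based repositories on a 5.6 slave:

```ini
[mysqld]
# Store replication positions in tables, updated together with the data
master-info-repository    = TABLE
relay-log-info-repository = TABLE
# Recover the relay log automatically after a crash
relay-log-recovery = 1
```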

Wrapping Up

So with a faster MySQL Server, InnoDB storage engine and replication, developers and DBAs can get ahead of performance demands. Bear in mind there is more to MySQL 5.6 replication than performance – for example Global Transaction Identifiers (GTIDs), replication event checksums, time-delayed replication and more.

To learn more about all of the new replication features in MySQL 5.6, download the Replication Introduction guide.

To get up and running with MySQL replication, download the new Replication Tutorial Guide.

Appendix – Binary Log Group Commit Benchmarks

System Under Test:

5 x 12 thread Intel Xeon E7540 CPUs @2.00 GHz

512 GB memory

Oracle Linux 6.1

Configuration file:

--innodb_purge_threads=1
--innodb_file_format=barracuda
--innodb-buffer-pool-size=8192M
--innodb-support-xa=FALSE
--innodb-thread-concurrency=0
--innodb-flush-log-at-trx-commit=2
--innodb-log-file-size=8000M
--innodb-log-buffer-size=256M
--innodb-io-capacity=2000
--innodb-io-capacity-max=4000
--innodb-flush-neighbors=0
--skip-innodb-adaptive-hash-index
--innodb-read-io-threads=8
--innodb-write-io-threads=8
--innodb_change_buffering=all
--innodb-spin-wait-delay=48
--innodb-stats-on-metadata=off
--innodb-buffer-pool-instances=12
--innodb-monitor-enable='%'
--max-tmp-tables=100
--performance-schema
--performance-schema-instrument='%=on'
--query-cache-size=0
--query-cache-type=0
--max-connections=4000
--max-prepared-stmt-count=1048576
--sort-buffer-size=32768
--table-open-cache=4000
--table-definition-cache=4000
--table-open-cache-instances=16
--tmp-table-size=100M
--max-heap-table-size=1000M
--key-buffer-size=50M
--join-buffer-size=1000000

Tuesday Feb 05, 2013

The Best MySQL Release Ever - MySQL 5.6 is now GA

MySQL 5.6 is now generally available. Read the press release.

Chock-full of new enhancements and features around performance, scalability and availability, MySQL 5.6 is the best MySQL release ever. Read Rob Young's blog article on the key enhancements in MySQL 5.6.

This is open source goodness all around.

Congratulations to the MySQL Engineering team on delivering a stellar product release yet again for the MySQL community and users!

Thursday Jan 31, 2013

Tecnotree Empowers Communications Service Providers with Embedded MySQL

Tecnotree is a global provider of telecom IT solutions for the management of products, customers and revenue. Tecnotree helps communications service providers to transform their business towards a marketplace of digital services. Tecnotree empowers service providers to monetise on service bundles, provide personalized user experiences and augment value throughout the customer lifecycle. With over 1100 telecom experts, Tecnotree serves more than 100 service providers in over 70 countries. Tecnotree is listed on the main list of NASDAQ OMX Helsinki.


  • Integrate a database well suited to power the mission critical Tecnotree solutions deployed by telecom operators in networks counting over 50 million subscribers.
  • Standardize on one database for the core Tecnotree products, while meeting the company’s technical and economic requirements.
  • Select a database delivering the configuration flexibility needed to provide the best performance and reliability in different settings.


  • Evaluation and selection of MySQL as embedded database based on its high performance, reliability, flexibility of deployment and cost-effectiveness.
  • Tecnotree leverages the built-in MySQL replication to confer up to 5 nines (99.999%) availability to its solutions; ensuring subscribers will continuously be able to access the operators’ network and billing infrastructure.
  • Implemented at the heart of front-end signaling systems transmitting information to other applications, MySQL commonly handles 20,000 queries and 2,000 transactions per second, for example allowing the real-time billing of subscribers.
  • MySQL databases powering the voice-mail and conferencing applications deployed by operators store up to a terabyte of data.
  • Tecnotree has been relying on MySQL for several years, and plans to embed it in additional products in the future, highly valuing the “hassle free” nature of the database.
  • The company also uses the Oracle database within its customer care and billing solutions.

“With MySQL we get a high performance solution that simultaneously delivers the flexibility to serve our different database needs.” Timo Ahomäki, CTO, Tecnotree

Thursday Jan 24, 2013

MySQL 5.6: What's New in Performance, Scalability, Availability

With the MySQL 5.6 production-ready GA set for release in the coming days, it’s good to re-cap the key features that make 5.6 the best release of the database ever.  At a glance, MySQL 5.6 is simply a better MySQL with improvements that enhance every functional area of the database kernel, including:
  • Improved Security for worry-free application deployments
  • Better Performance and Scalability
    • Improved InnoDB storage engine for better transactional throughput
    • Improved Optimizer for better query execution times and diagnostics
  • Better Application Availability with Online DDL/Schema changes
  • Better Developer Agility with NoSQL Access with Memcached API to InnoDB
  • Improved Replication for high performance, self-healing distributed deployments
  • Improved Performance Schema for better instrumentation
  • And other Important Enhancements

Improved Security for worry-free deployments
Security is near and dear to every DBA and Sys Admin's heart.  With this in mind, MySQL 5.6 introduces a major overhaul to how passwords are internally handled and encrypted.  The new options and features include:

New alternative to passwords in master.info – MySQL 5.6 extends the replication START SLAVE command to enable DBAs to specify the master user and password as part of the replication slave options, and to authenticate the account used to connect to the master through an external authentication plugin (user defined or those provided under MySQL Enterprise Edition).  With these options the user and password no longer need to be exposed in plain text in the master.info file.
New encryption for passwords in general query log, slow query log, and binary log – Passwords in statements written to these logs are no longer recorded in plain text.
New password hashing with appropriate strength – Default password hashing for internal MySQL server authentication via PASSWORD() is now done using the SHA-256 password hashing algorithm using a random salt value.
New options for passwords on the command line – MySQL 5.6 introduces a new “scrambled” option/config file (.mylogin.cnf) that can be used to securely store user passwords that are used for command line operations.
New change password at next login – DBAs and developers can now control when account passwords must be changed via a new password_expired flag in the mysql.user table.
New policy-based Password validations – Passwords can now be validated for appropriate strength, length, mixed case, special chars, and other user defined policies based on LOW, MEDIUM and STRONG designation settings. 
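As a hedged sketch of two of these features in use (the account name is illustrative, and plugin file names vary by platform):

```sql
-- Policy-based password validation via the validate_password plugin
INSTALL PLUGIN validate_password SONAME 'validate_password.so';
SET GLOBAL validate_password_policy = 'MEDIUM';

-- Require a password change at the account's next login
ALTER USER 'webapp'@'localhost' PASSWORD EXPIRE;
```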

Learn about these and all of MySQL 5.6 Security improvements and features, along with all technical documentation, in the MySQL docs

Better Performance and Scalability: Improved InnoDB Storage Engine

From an operational standpoint MySQL 5.6 provides better sustained linear performance and scale on systems supporting multi-processors and high CPU thread concurrency.  Key to this are improvements to Oracle’s InnoDB storage engine efficiency and concurrency that remove legacy thread contention and mutex locking within the InnoDB kernel.  These improvements enable MySQL to fully exploit the advanced multi-threaded processing power of today’s x86-based commodity-off-the-shelf hardware.

Internal benchmarks for SysBench Read/Write and Read Only workloads show a marked improvement in sustained scale over the most current version of MySQL 5.5.  The following shows that MySQL 5.6 provides “up and to the right” linear read/write transactions per second (“TPS”) scale on systems that support upwards of 48 concurrent CPU threads. 

Read only TPS workload sustained scale is also improved as demonstrated here:

Better Transactional Throughput

MySQL 5.6 improves InnoDB for better performance and scalability on highly concurrent, transactional and read intensive workloads.  In these cases performance gains are best measured by how an application performs and scales as concurrent user workloads grow.  In support of these use cases, InnoDB has a new re-factored architecture that minimizes mutex contentions and bottlenecks and provides a more consistent access path to underlying data.  Improvements include:

  • Kernel mutex split to remove a single point of contention
  • New thread for flushing operations
  • New multi-threaded purge
  • New adaptive hashing algorithm
  • Less buffer pool contention
  • Better, more consistent query execution via persistent optimizer statistics that are collected at more regular, predictable intervals

The net result of these improvements is reflected in the SysBench read/write benchmarks shown here: 

For Linux, MySQL 5.6 shows up to a 150% improvement in transactional TPS throughput over MySQL 5.5, while similar tests run on Windows 2008 reveal a 47% performance gain. 

Better Read Only Workload Throughput
New optimizations have been made to InnoDB for read only transactions that greatly improve the performance of high concurrency web-based lookups and report-generating applications.  These optimizations bypass transactional overhead and are enabled by default when autocommit = 1, or can be explicitly controlled by the developer using the new START TRANSACTION READ ONLY syntax:

SET autocommit = 0;
START TRANSACTION READ ONLY;
COMMIT;

The results of these optimizations are shown here:

For Linux, MySQL 5.6 shows up to a 230% improvement in read only TPS throughput over MySQL 5.5, while similar tests run on Windows 2008 show a 65% performance gain.

For context, all benchmarks shown above were run on the following platform configuration:

  • Oracle Linux 6
  • Intel(R) Xeon(R) E7540 x86_64
  • MySQL leveraging:
    • 48 of 96 available CPU threads
    • 2 GHz, 512GB RAM

The SysBench benchmark tool is freely available for application use-case specific benchmarks and can be downloaded here.

You can also get in depth MySQL 5.6 performance and feature specific benchmarks by following related blogs by Mikael Ronstrom and Dimitri Kravtchuk.  Both share the test cases and configurations they use to arrive at the conclusions drawn above.

Better Performance with Solid State Drives (SSD)
Spinning disks are among the most common bottlenecks on any system, simply because they have mechanical parts that physically limit the ability to scale as concurrency grows.  As a result, many MySQL applications are being deployed on SSD enabled systems which provide the memory-based speed and reliability required to support the highest levels of concurrency on today’s web-based systems.  With this in mind, MySQL 5.6 includes several key enhancements designed specifically for use with SSD, including:

  • Support for smaller 4k and 8k page sizes to better fit the standard storage algorithm of SSD.
  • Portable .ibd (InnoDB data) files that allow “hot” InnoDB tables to be easily moved from the default data directory to SSD or network storage devices.
  • Separate tablespaces for the InnoDB undo log that optionally move the undo log out of the system tablespace into one or more separate tablespaces.  The read-intensive I/O patterns for the undo log make these new tablespaces good candidates to move to SSD storage, while keeping the system tablespace on hard drive storage.

Learn about all supporting SSD optimizations here.

Better Query Execution Times and Diagnostics: Improved Optimizer
The MySQL 5.6 Optimizer has been re-factored for better efficiency and performance and provides an improved feature set for better query execution times and diagnostics.  The key 5.6 optimizer improvements include:

Subquery Optimizations – Using semi-JOINs and materialization, the MySQL Optimizer delivers greatly improved subquery performance, simplifying how developers construct queries.  Specifically, the optimizer is now more efficient in handling subqueries in the FROM clause; materialization of subqueries in the FROM clause is now postponed until their contents are needed during execution, greatly improving performance.  Additionally, the optimizer may add an index to derived tables during execution to speed up row retrieval.  Tests run using the DBT-3 benchmark Query #13, shown below, demonstrate an order of magnitude improvement in execution times (from days to seconds) over previous versions.

select c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice, sum(l_quantity)
from customer, orders, lineitem
where o_orderkey in (
                select l_orderkey
                from lineitem
                group by l_orderkey
                having sum(l_quantity) > 313)
  and c_custkey = o_custkey
  and o_orderkey = l_orderkey
group by c_name, c_custkey, o_orderkey, o_orderdate, o_totalprice
order by o_totalprice desc, o_orderdate
LIMIT 100;

File Sort Optimizations with Small Limit – For queries with ORDER BY and small LIMIT values, the optimizer now produces an ordered result set using a single table scan.  These queries are common in web applications that display only a few rows from a large result set such as:

SELECT col1, ... FROM t1 ... ORDER BY name LIMIT 10;

Internal benchmarks have shown up to a 4x improvement in query execution times which helps improve overall user experience and response times. 

Index Condition Pushdown (ICP) – By default, the optimizer now pushes indexed WHERE conditions down to the storage engine for evaluation, so rows are filtered during the index scan and only matching rows are returned to the MySQL server.  Consider a table and query such as:

CREATE TABLE person (
      personid INTEGER PRIMARY KEY,
      firstname CHAR(20),
      lastname CHAR(20),
      postalcode INTEGER,
      age INTEGER,
      address CHAR(50),
      KEY k1 (postalcode,age)
   ) ENGINE=InnoDB;

SELECT lastname, firstname FROM person
   WHERE postalcode BETWEEN 5000 AND 5500 AND age BETWEEN 21 AND 22;

Internal benchmarks on this type of table and query have shown up to 15x improved execution times with the ICP default behavior. 

Batched Key Access (BKA) and Multi-Range Read (MRR) – The optimizer now provides the storage engine with all primary keys in batches and enables the storage engine to access, order and return the data more efficiently greatly improving query execution times. 

Together, BKA and MRR show up to 280x improvement in query execution times for DBT-3 Query 13 and other disk-bound query benchmarks.

Better Optimizer Diagnostics – The MySQL 5.6 optimizer also provides better diagnostics and debugging with:

  • EXPLAIN for INSERT, UPDATE, and DELETE operations,
  • EXPLAIN plan output in JSON format with more precise optimizer metrics and better readability
  • Optimizer Traces for tracking the optimizer decision-making process.

Learn about all of MySQL 5.6 Optimizer improvements and features, in the MySQL docs.

For a deep technical dive into the implementation, how to enable/disable where applicable, related benchmarks and the use case specific performance improvements you can expect with each of these new features check out the MySQL Optimizer Engineering team blog.

Better Application Availability: Online DDL/Schema Changes
Today's web-based applications are designed to rapidly evolve and adapt to meet business and revenue-generation requirements. As a result, development SLAs are now most often measured in minutes vs days or weeks. So when an application must quickly support new product lines or new products within existing product lines, the backend database schema must adapt in kind, most commonly while the application remains available for normal business operations.  MySQL 5.6 supports this level of online schema flexibility and agility by providing the following new ALTER TABLE DDL syntax additions:

  • Change AUTO_INCREMENT value for a column
  • Change ROW FORMAT, KEY_BLOCK_SIZE for a table
  • Add, drop, reorder COLUMN

DBAs and Developers can add indexes and perform standard InnoDB table alterations while the database remains available for application updates. This is especially beneficial for rapidly evolving applications where developers need schema flexibility to accommodate changing business requirements.
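For example, an index can be added in place while DML continues; a short sketch (the table and index names are illustrative):

```sql
-- 5.6 online DDL: build the secondary index without blocking writes
ALTER TABLE orders
  ADD INDEX idx_status (status),
  ALGORITHM=INPLACE, LOCK=NONE;
```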

Learn about all of MySQL 5.6 InnoDB online DDL improvements and features, in the MySQL docs.

Better Developer Agility: NoSQL Access to InnoDB
MySQL 5.6 provides simple, key-value interaction with InnoDB data via the familiar Memcached API.  Implemented via a new Memcached daemon plug-in to mysqld, the new Memcached protocol is mapped directly to the native InnoDB API and enables developers to use existing Memcached clients to bypass the expense of query parsing and go directly to InnoDB data for lookups and transactionally compliant updates.  The API makes it possible to re-use standard Memcached libraries and clients, while extending Memcached functionality by integrating a persistent, crash-safe, transactional database back-end.  The implementation is shown here:

So does this option provide a performance benefit over SQL?  Internal performance benchmarks using a customized Java application and test harness show some very promising results with a 9X improvement in overall throughput for SET/INSERT operations:

Not only do developers and DBAs get more performance and flexibility, they also reduce complexity as it is possible to compress previously separate caching and database layers into a single data management tier, as well as eliminate the overhead of maintaining cache consistency.

You can follow the InnoDB team blog for the methodology, implementation and internal test cases that generated the above results.

Learn more about the details and how to get started with the new Memcached API to InnoDB in the MySQL docs.
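As a rough sketch of the setup (paths and file names vary by platform and package):

```sql
-- One-time creation of the InnoDB/memcached mapping schema
-- (the script ships in the MySQL distribution's share directory)
SOURCE /usr/share/mysql/innodb_memcached_config.sql;

-- Load the memcached daemon plug-in into mysqld
INSTALL PLUGIN daemon_memcached SONAME 'libmemcached.so';
```

Existing memcached clients can then connect to the server (port 11211 by default) and issue get/set operations backed by InnoDB tables.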

Better Developer Agility: Extended InnoDB Use Cases
New MySQL 5.6 optimizations and features extend InnoDB into more use cases so developers can simplify applications by standardizing on a single storage engine.

New Full Text Search (FTS) – Provided as a better alternative to MyISAM FTS, InnoDB now enables developers to build FULLTEXT indexes on InnoDB tables to represent text-based content and speed up application searches for words and phrases.  InnoDB full-text search supports Natural language/Boolean modes, proximity search and relevance ranking.  A simple use case example looks like: 

CREATE TABLE quotes
( id int unsigned auto_increment primary key
, author varchar(64)
, quote varchar(4000)
, source varchar(64)
, fulltext(quote)
) engine=innodb;

SELECT author AS "Apple" FROM quotes
    WHERE MATCH(quote) AGAINST ('apple' IN NATURAL LANGUAGE MODE);

New Transportable Tablespaces – InnoDB .ibd files created in file-per-table mode are now transportable between physical storage devices and database servers; when creating a table developers can now designate a storage location for the .ibd file outside of the MySQL data directory.  This enables “hot” or busy tables to be easily moved to an external network storage device (SSD, HDD) that does not compete with application or database overhead.  This new feature also enables quick, seamless application scale by allowing users to easily export/import InnoDB tables between running MySQL servers, as shown here:

Example Export:
CREATE TABLE t(c1 INT) engine=InnoDB;
FLUSH TABLES t FOR EXPORT; -- quiesce the table and create the meta data (.cfg) file
-- copy the .ibd and .cfg files to the target server, then release the lock:
UNLOCK TABLES;

Corresponding Import:
-- The user must stop all updates on the tables, prior to the IMPORT
CREATE TABLE t(c1 INT) engine=InnoDB; -- if it doesn't already exist
ALTER TABLE t DISCARD TABLESPACE; -- discard the empty tablespace
-- place the copied .ibd and .cfg files in the schema directory, then:
ALTER TABLE t IMPORT TABLESPACE;

The InnoDB improvements noted here are by no means exhaustive.  The complete accounting of all MySQL 5.6 features, along with all technical documentation, is available in the MySQL docs.

For a deep technical dive into the implementation, how to enable/disable where applicable and the use case specific improvements you can expect with each of these new features follow the MySQL InnoDB Engineering team blog.

Improved Replication and High Availability
Replication is the most widely used MySQL feature for scale-out and High Availability (HA), and MySQL 5.6 includes new features designed to enable developers building next generation web, cloud, social and mobile applications and services with self-healing replication topologies and high-performance masters and slaves.  The key features include:

New Global Transaction Identifiers (GTIDs) – GTIDs enable replication transactional integrity to be tracked through a replication master/slave topology, providing a foundation for self-healing recovery, and enabling DBAs and developers to easily identify the most up-to-date slave in the event of a master failure.  Built directly into the Binlog stream, GTIDs eliminate the need for complex third-party add-ons to provide this same level of tracking intelligence.
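A hedged sketch of pointing a slave at a GTID-enabled master (the host name is illustrative; gtid-mode=ON, enforce-gtid-consistency, log-bin and log-slave-updates are assumed in my.cnf on both servers):

```sql
-- With GTIDs, no binlog file/position bookkeeping is needed
CHANGE MASTER TO
  MASTER_HOST = 'master1',
  MASTER_AUTO_POSITION = 1;
START SLAVE;
```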

New MySQL Replication utilities – A new set of Python Utilities are designed to leverage the new replication GTIDs to provide replication administration and monitoring with automatic fail-over in the event of a failed master, or switchover in the event of maintenance to the master. This eliminates the need for additional third party High-Availability solutions, protecting web and cloud-based services against both planned and unplanned downtime without operator intervention.

New Multi-threaded Slaves - Splits processing between worker threads based on schema, allowing updates to be applied in parallel, rather than sequentially. This delivers benefits to those workloads that isolate application data using databases - e.g. multi-tenant systems. 

SysBench benchmarks using a graduated number of worker threads across 10 schemas show up to 5x in performance gain with multi-threading enabled. 

New Binary Log Group Commit (BGC) – In MySQL 5.6 replication masters now group writes to the Binlog rather than committing them one at a time, significantly improving performance on the master side of the topology.  BGC also enables finer grained locking which reduces lock waits, again, adding to the performance gain, shown here:

MySQL 5.6 shows up to a 180% performance gain over 5.5 in master server throughput with replication enabled (Binlog=1). BGC largely eliminates the trade-off users had to make between performance overhead to the master and the scale-out, HA benefits offered by MySQL replication.

New Optimized Row-based Replication – MySQL 5.6 provides a new option variable binlog-row-image=minimal that enables applications to replicate only data elements of the row image that have changed following DML operations.  This improves replication throughput for both the master and slave(s) and minimizes binary log disk space, network resource and server memory footprint.
New Crash-Safe Slaves – MySQL 5.6 stores Binlog positional data within tables so slaves can automatically roll back replication to the last committed event before a failure, and resume replication without administrator intervention. Not only does this reduce operational overhead, it also eliminates the risk of data loss caused by a slave attempting to recover from a corrupted data file.  Further, if a crash to the master causes corruption of the binary log, the server will automatically recover it to a position where it can be read correctly.
New Replication Checksums – MySQL 5.6 ensures the integrity of data being replicated to a slave by detecting data corruption and returning an error before corrupt events are applied to the slave, preventing the slave itself from becoming corrupt.
New Time-delayed Replication – MySQL 5.6 provides protection against operational errors made on the master from propagating to attached slaves by allowing developers to add defined delays in the replication stream.  With configurable master to slave time delays, in the event of failure or mishap, slaves can be promoted to the new master in order to restore the database to its previous state. It also becomes possible to inspect the state of a database before an error or outage without the need to reload a back up.
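The delay is configured per slave; a minimal sketch (a one-hour delay shown):

```sql
STOP SLAVE;
-- Apply events no sooner than 3600 seconds after the master commits them
CHANGE MASTER TO MASTER_DELAY = 3600;
START SLAVE;
```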

Learn about these and all of MySQL 5.6 Replication and High Availability improvements and features, along with all technical documentation, in the MySQL docs
For a rundown of the details, use cases and related benchmarks of all of these features check out Mat Keep’s Developer Zone article.

Improved Performance Schema
The MySQL Performance Schema was introduced in MySQL 5.5 and is designed to provide point in time metrics for key performance indicators.  MySQL 5.6 improves the Performance Schema in answer to the most common DBA and developer problems.  New instrumentation includes:

Statements/Stages -  What are my most resource intensive queries? Where do they spend time?
Table/Index I/O, Table Locks - Which application tables/indexes cause the most load or contention?
Users/Hosts/Accounts - Which application users, hosts, accounts are consuming the most resources?
Network I/O - What is the network load like? How long do sessions idle?
Summaries - Aggregated statistics grouped by statement, thread, user, host, account or object.
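For instance, the statement digest summaries answer the “most resource intensive queries” question directly; a sketch against the 5.6 performance_schema tables:

```sql
-- Top five normalized statements by total execution time
SELECT digest_text, count_star, sum_timer_wait
FROM performance_schema.events_statements_summary_by_digest
ORDER BY sum_timer_wait DESC
LIMIT 5;
```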

The MySQL 5.6 Performance Schema is now enabled by default in the my.cnf file with optimized and auto-tune settings that minimize overhead (< 5%, but mileage will vary), so using the Performance Schema on a production server to monitor the most common application use cases is less of an issue.  In addition, new atomic levels of instrumentation enable the capture of granular levels of resource consumption by users, hosts, accounts, applications, etc. for billing and chargeback purposes in cloud computing environments.

MySQL Engineering has several champions behind the 5.6 Performance Schema, and many have published excellent blogs that you can reference for technical and practical details.  To get started see blogs by Mark Leith and Marc Alff.

The MySQL docs are also an excellent resource for all that is available and that can be done with the 5.6 Performance Schema. 

Other Important Enhancements
New default configuration optimizations – MySQL 5.6 introduces changes to the server defaults that provide better out-of-the-box performance on today’s system architectures.  These new defaults are designed to minimize the upfront time spent on changing the most commonly updated variables and configuration options.   Many configuration options are now auto sized based on environment, and can also be set and controlled at server start up.


  • TIME/TIMESTAMP/DATETIME – Now allow microsecond level precision for more precise time/date comparisons and data selection.
  • TIMESTAMP/DATETIME – Improves on 5.5 by allowing developers to assign the current timestamp, an auto-update value, or both, as the default value for TIMESTAMP and DATETIME columns.
  • TIMESTAMP - Columns are now nullable by default.  TIMESTAMP columns no longer get DEFAULT NOW() or ON UPDATE NOW() attributes automatically without them being explicitly specified, and non-NULLable TIMESTAMP columns without an explicit default value are treated as having no default value.
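A brief sketch of the new behavior (the table and column names are illustrative):

```sql
CREATE TABLE event_log (
  id      INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
  msg     VARCHAR(255),
  -- microsecond precision with automatic initialization...
  created DATETIME(6)  DEFAULT CURRENT_TIMESTAMP(6),
  -- ...and automatic update on row modification
  updated TIMESTAMP(6) DEFAULT CURRENT_TIMESTAMP(6)
                       ON UPDATE CURRENT_TIMESTAMP(6)
) ENGINE=InnoDB;
```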

Better Condition Handling – GET DIAGNOSTICS
MySQL 5.6 enables developers to easily check for error conditions and code for exceptions by introducing the new MySQL Diagnostics Area and corresponding GET DIAGNOSTICS interface command. The Diagnostics Area can be populated via multiple options and provides two kinds of information:

  • Statement - which provides affected row count and number of conditions that occurred
  • Condition - which provides error codes and messages for all conditions that were returned by a previous operation

The addressable items for each are:

  • Statement - NUMBER and ROW_COUNT
  • Condition - CLASS_ORIGIN, SUBCLASS_ORIGIN, RETURNED_SQLSTATE, MESSAGE_TEXT, MYSQL_ERRNO, and the CONSTRAINT_CATALOG, CONSTRAINT_SCHEMA, CONSTRAINT_NAME, CATALOG_NAME, SCHEMA_NAME, TABLE_NAME, COLUMN_NAME and CURSOR_NAME items

The new GET DIAGNOSTICS command provides a standard interface into the Diagnostics Area and can be used via the CLI or from within application code to easily retrieve and handle the results of the most recent statement execution:

mysql> DROP TABLE test.no_such_table;
ERROR 1051 (42S02): Unknown table 'test.no_such_table'
mysql> GET DIAGNOSTICS CONDITION 1
    ->   @p1 = RETURNED_SQLSTATE, @p2 = MESSAGE_TEXT;
mysql> SELECT @p1, @p2;
+-------+------------------------------------+
| @p1   | @p2                                |
+-------+------------------------------------+
| 42S02 | Unknown table 'test.no_such_table' |
+-------+------------------------------------+

Options for leveraging the MySQL Diagnostics Area, along with the full GET DIAGNOSTICS syntax, are detailed in the MySQL documentation.
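GET DIAGNOSTICS is equally useful inside stored programs. A sketch (procedure and table names are hypothetical) of a condition handler that captures the error message rather than letting the exception propagate:

```sql
DELIMITER //
CREATE PROCEDURE insert_order(IN qty INT)
BEGIN
  DECLARE msg TEXT;
  DECLARE EXIT HANDLER FOR SQLEXCEPTION
  BEGIN
    -- Condition 1 holds the error that triggered the handler
    GET DIAGNOSTICS CONDITION 1 msg = MESSAGE_TEXT;
    SELECT CONCAT('insert_order failed: ', msg) AS error;
  END;
  INSERT INTO orders (quantity) VALUES (qty);
END//
DELIMITER ;
```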

Improved IPv6 Support

  • MySQL 5.6 adds INET6_ATON() and INET6_NTOA() to convert string-based IPv6 (and IPv4) addresses to and from binary data for minimal space consumption.
  • MySQL 5.6 changes the default value for the bind-address option from “0.0.0.0” to “*”, so the MySQL server accepts connections on all IPv4 and IPv6 addresses.  You can learn more in the MySQL documentation.
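A quick sketch of the new IPv6 conversion functions (the table name is hypothetical; a VARBINARY(16) column comfortably holds either address family):

```sql
CREATE TABLE hosts (ip VARBINARY(16));
INSERT INTO hosts VALUES (INET6_ATON('192.168.0.1')),
                         (INET6_ATON('2001:db8::1'));
SELECT INET6_NTOA(ip) FROM hosts;
```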

Improved Partitioning

  • Improved performance for tables with a large number of partitions – MySQL 5.6 now performs and scales well on highly partitioned tables, specifically for INSERT operations that span upwards of hundreds of partitions.
  • Import/export tables to/from partitioned tables - MySQL 5.6 enables users to exchange a table partition or sub-partition with a table using the ALTER TABLE ... EXCHANGE PARTITION statement; existing rows in a partition or subpartition can be moved to a non-partitioned table, and conversely, any existing rows in a non-partitioned table can be moved to an existing table partition or sub-partition. 
  • Explicit partition selection - MySQL 5.6 supports explicit selection of partitions and subpartitions that are checked for rows matching a given WHERE condition. Unlike automatic partition pruning, the partitions to be checked are specified by the issuer of the statement, and this is supported for both queries and a number of DML statements (SELECT, DELETE, INSERT, REPLACE, UPDATE, LOAD DATA, LOAD XML). 
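The last two bullets can be sketched as follows (table and partition names are hypothetical; EXCHANGE PARTITION requires the standalone table to have a structure identical to the partitioned one):

```sql
-- Move the rows of partition p2012 into a standalone archive table
ALTER TABLE orders EXCHANGE PARTITION p2012 WITH TABLE orders_2012_archive;

-- Explicit partition selection: only p2013 is scanned
SELECT * FROM orders PARTITION (p2013) WHERE customer_id = 42;
```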

Improved GIS: Precise spatial operations - MySQL 5.6 provides geometric operations via precise object shapes that conform to the OpenGIS standard for testing the relationship between two geometric values. 
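For example, a precise containment test using the new ST_ functions (the geometry values are illustrative):

```sql
SET @poly = ST_GeomFromText('POLYGON((0 0, 0 10, 10 10, 10 0, 0 0))');
SET @pt   = ST_GeomFromText('POINT(5 5)');
SELECT ST_Contains(@poly, @pt);  -- 1: the point lies inside the polygon
```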


MySQL 5.5 has been called the best release of MySQL ever.  MySQL 5.6 builds on this by providing across-the-board improvements in performance, scalability, transactional throughput, availability and performance-related instrumentation, all designed to keep pace with the requirements of the most demanding web, cloud and embedded use cases. The MySQL 5.6 Release Candidate is now available for download for early adopter and development purposes.

Next Steps

As always, thanks for reading, and thanks for your continued support of MySQL!

