Monday Oct 13, 2014

MySQL Cluster Automatic Sharding for High Availability and Scalability

MySQL Cluster automatically shards data across data nodes on separate hosts which means that your database performance scales naturally as you add more nodes. Data nodes belong to node groups that replicate data so that if one data node fails, another data node carries on working with a copy of the failed node's data. This ensures high availability and scalability.
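The principle can be sketched in a few lines. This is an illustrative model only, not the actual NDB partitioning code: a row's primary key hashes to a node group, and every data node in that group stores a copy of the row.

```python
# Illustrative model (not actual NDB internals) of hash-based sharding
# with replicas: a row's key hashes to a node group, and every data node
# in that group stores a copy, so one node can fail without data loss.
import zlib

NODE_GROUPS = [["nodeA", "nodeB"], ["nodeC", "nodeD"]]  # 2 groups x 2 replicas

def nodes_for_row(primary_key: str) -> list[str]:
    group = zlib.crc32(primary_key.encode()) % len(NODE_GROUPS)
    return NODE_GROUPS[group]  # all nodes in the group hold the row

# If nodeA fails, nodeB still serves every row owned by node group 0.
```

Adding more node groups adds more partitions, which is why throughput scales as data nodes are added.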

To learn more about this topic and many other key topics for understanding and using MySQL Cluster, take the MySQL Cluster training course.

You can take this 4-day instructor-led course as a:

  • Live-Virtual Event: Attend a live event from your own desk, no travel required. Events will be added to the schedule to suit different time zones. Register a request to influence the schedule.
  • In-Class Event: Travel to an education center to take this class. Below is a selection of the events already on the schedule.



The schedule below lists the location, start date and delivery language for each event:

  • Sao Paulo, Brazil: 3 November 2014 (Brazilian Portuguese)
  • London, England: 16 February 2015
  • Munich, Germany: 15 December 2014
  • Petaling Jaya, Malaysia: 20 October 2014

To register for an event, request an additional event or learn more about the authentic MySQL curriculum, go to

Monday Aug 25, 2014

Learn All About MySQL Cluster

Just released - the all new MySQL Cluster training course.

This MySQL Cluster training teaches you how to install and configure a real-time database cluster at the core of your application. Expert instructors will teach you how to design and maintain your clusters for high availability and scalability by using MySQL Cluster's open-source and enterprise components.

This 4-day training course is a must for those who want to learn about MySQL Cluster: you will not only cover the concepts and features but also gain extensive hands-on experience. You can follow this training course from your own desk via live-virtual training or by traveling to an education center.

Be the first to influence the schedule for this newly released course by registering your interest on the Oracle University portal.

For more information about the authentic MySQL curriculum, go to

Monday Jul 07, 2014

MySQL Cluster High Availability Through Data Replicas

MySQL Cluster enables high availability by storing data replicas on multiple hosts. MySQL Cluster maintains connections between data nodes by using high-speed interconnects over TCP/IP - standard or direct connections - or SCI (Scalable Coherent Interface) sockets.

To learn more about MySQL Cluster, take the MySQL Cluster training course.

This course is currently scheduled for the following locations:



The schedule below lists the location, start date and delivery language for each event:

  • Sao Paulo, Brazil: 28 July 2014 (Brazilian Portuguese)
  • Berlin, Germany: 11 August 2014
  • Hamburg, Germany: 29 September 2014
  • Munich, Germany: 15 December 2014
  • Rome, Italy: 8 September 2014
  • Seoul, Korea: 28 July 2014
  • Petaling Jaya, Malaysia: 16 July 2014
  • Warsaw, Poland: 28 July 2014
  • Edison, NJ, United States: 18 August 2014
  • San Francisco, CA, United States: 13 August 2014

To register for an event, request an additional event, or learn more about the authentic MySQL Curriculum, go to

Friday Apr 26, 2013

MySQL Web Reference Architectures: On Demand

Last week we ran a live webinar presenting the new MySQL Web Reference Architectures, a set of documented and repeatable best practices for building highly available, scalable and secure database infrastructure to power new generations of web and mobile services.

The webinar replay is now available on-demand so you can listen in from the comfort of your own desk…or commute.

As a taster - we discuss sizing and design patterns - you can see a sample below:

We received some great questions during the Q&A session which I felt would be useful to you, so I've included them below:

Q. You are talking a lot about MySQL 5.6. We are on MySQL 5.1 - what are the reasons for us to upgrade?

A. There is a great summary in the MySQL 5.6 developer / DBA guide. You will see the advantages of 5.6 for web and cloud applications in performance, scaling, agility, security, instrumentation and reliability.

Q. When using replication, does the master send only the log file to the slave?

A. The binary log contains all of the database writes on the master. The master then sends the data from this log file to each of the slaves so that they can apply the same writes. It doesn't send the log file itself but sends the changes as they're written to the file (or if a slave disconnects and then reconnects the master will send over all changes that it missed). You can find out more in the introduction to MySQL replication paper 
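The mechanism described in this answer can be modeled with a toy sketch (this is a simplified illustration, not MySQL's actual binlog format or protocol):

```python
# Toy model of binlog shipping: the master does not ship the log file
# itself; it streams each change event, and a reconnecting slave tells
# the master its last-applied position and receives everything it missed.

class Master:
    def __init__(self):
        self.binlog = []                # ordered list of write events

    def commit(self, event):
        self.binlog.append(event)

    def events_since(self, position):
        return self.binlog[position:]   # what a slave still needs

class Slave:
    def __init__(self):
        self.position = 0
        self.applied = []

    def sync(self, master):
        for event in master.events_since(self.position):
            self.applied.append(event)  # apply the same write locally
            self.position += 1

master = Master()
master.commit("INSERT 1")
master.commit("INSERT 2")
slave = Slave()
slave.sync(master)          # slave catches up after (re)connecting
master.commit("INSERT 3")
slave.sync(master)          # only the missed event is sent
```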

Q. Are there any open-source utilities to provide Virtual IP failover that can be integrated with the mysqlfailover utility? Or should we use Corosync and Pacemaker with mysqlrpladmin?

A. Certainly you could use Corosync and Pacemaker - in which case you're right that mysqlrpladmin might be the best option, as then you only have one brain deciding when to fail everything over. Alternatively, you can bind your own scripts into mysqlfailover and use those to trigger a VIP failover.

Q. What should we do to address any potential replication lag?

A. Fast cores on the slave would help. If you're able to split your tables between different schemas (databases in MySQL terms), then you can exploit multi-threaded slaves (parallel slave apply) in MySQL 5.6. Also take a look at optimized row-based replication in MySQL 5.6.
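The schema-based parallelism mentioned here can be sketched as follows. This is a hypothetical illustration of the idea, not the server's implementation: writes to different schemas go to different worker threads, while writes to one schema stay ordered on a single worker.

```python
# Hypothetical sketch of schema-based parallel apply (multi-threaded
# slave in MySQL 5.6): transactions for different schemas can be applied
# in parallel, but one schema's writes all land on the same worker.
import zlib
from collections import defaultdict

def assign_to_workers(transactions, num_workers):
    """transactions: (schema, statement) pairs in binlog order."""
    queues = defaultdict(list)
    for schema, statement in transactions:
        worker = zlib.crc32(schema.encode()) % num_workers  # stable mapping
        queues[worker].append(statement)
    return queues

binlog = [("shop", "INSERT a"), ("blog", "INSERT b"), ("shop", "INSERT c")]
queues = assign_to_workers(binlog, num_workers=4)
```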

Q. Does MySQL 5.6 offer any replication related improvements over 5.5?

A. You bet. Performance, reliability, flexibility. See the introduction paper referenced in the answer above 

Q. Any resources you know of on implementing the Medium-size MySQL web reference architecture with Ruby on Rails apps in the application layer?

A. General best practices outlined in the MySQL web reference architectures guide will apply. There are more specific MySQL / Ruby on Rails best practices here. You can also use community-developed MySQL Ruby connectors.

Q. How can we integrate MySQL with Hadoop?

A. You have several options which can be used independently or together. Apache Sqoop provides batch transfers between MySQL and Hadoop, and is also fully bi-directional, so you can replicate the results of Hadoop Map Reduce jobs back to MySQL tables. There is also the new MySQL Applier for Hadoop which enables the streaming of events in real-time from MySQL to Hadoop.

You can learn more about all of the options from the MySQL and Hadoop guide to big data.

Q. MySQL Cluster looks interesting, but when we looked at it in the past, it didn't support Foreign Keys, so not sure how you can suggest it as an option for eCommerce apps?

A. There are many eCommerce providers already running MySQL Cluster - Shopatron is a great example. The latest development milestone release of MySQL Cluster 7.3 adds native support for Foreign Keys as well - so it is worth taking another look!

Q. Is it possible to customise auto-sharding when using MySQL Cluster?

A. Yes, you can choose which column within the primary key is to be used as the 'sharding key' - that way you can for example make sure that all data for a specific user will be in the same data node; this can help ensure linear scalability. Details in the MySQL Cluster Performance Optimization guide 
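The effect of choosing a sharding key can be sketched as follows. Assuming a composite primary key of (user_id, order_id), hashing only user_id keeps all of a user's rows on one data node; the shard count and function here are illustrative, not MySQL Cluster's internal hash.

```python
# Sketch: with a composite primary key (user_id, order_id), hashing only
# the chosen sharding key (user_id) keeps all of a user's rows together.
import zlib

NUM_SHARDS = 4

def shard_for(user_id, order_id=None):
    # order_id is deliberately ignored: only user_id picks the shard
    return zlib.crc32(str(user_id).encode()) % NUM_SHARDS

# Every order of user 7 lands on the same shard, so queries scoped to
# one user touch a single data node.
```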

Q. Any advice on UserID generation for sharding?

A. One design pattern is to just use an auto-increment column and then hash on that. Another option is to use a UUID - you can then shard by ranges or by hashing
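Both patterns from this answer can be sketched quickly. The shard counts and function names below are invented for illustration:

```python
# Two hypothetical ID-generation patterns for sharding, as described above.
import itertools
import uuid
import zlib

NUM_SHARDS = 8

# Pattern 1: an auto-increment counter, then hash it to spread users evenly.
counter = itertools.count(1)

def next_user_id():
    return next(counter)

def shard_by_hash(user_id):
    return zlib.crc32(str(user_id).encode()) % NUM_SHARDS

# Pattern 2: a UUID, shardable by hashing (or by ranges of its value).
def shard_by_uuid(user_uuid: uuid.UUID):
    return user_uuid.int % NUM_SHARDS
```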

Q. Can we choose one: clustering or sharding? Which one is better for HA?

A. It depends on a number of factors such as what level of availability you need, how your application is architected, etc. A good resource to help you weigh up the pros and cons of the different approaches is the MySQL guide to high availability

Q. My company is using 5.5 with one master and slave for our web application. We are currently looking into a new system infrastructure using VM technology. What would you recommend?

A. There are many options available to you on multiple hypervisors. We have created a VM Template for MySQL running on Oracle VM and Oracle Linux. This is a pre-installed, pre-configured and certified software stack which you download as a single file and then provision to an Oracle VM Server farm. You can learn more about it from the Oracle VM Template for MySQL Enterprise Edition guide

Q. I'm interested in the MySQL monitoring tools and in the audit plug-in - how do I get these?

A. These are available as part of MySQL Enterprise Edition which also packages support, backup, security plug-ins, etc in an annual subscription.

Q. How much would it cost to run MySQL Enterprise Edition on that "perfect server"?

A. Pricing is available in the on-line store. Note that the list price is the same regardless of number of cores unless you exceed 4 CPU *sockets*

Q. I am sorry for this very novice question, but could I have some help regarding references that would allow me to digest the "base" of these technologies presented here?

A. Assuming it is case studies you are after, we have a lot of them published at the MySQL customers page. In the "Industry" drop down box, take a look at Web eCommerce, games, SaaS/Hosting and social networks

Wednesday Apr 10, 2013

MySQL Replication: Self-Healing Recovery with GTIDs and MySQL Utilities

MySQL 5.6 includes a host of enhancements to replication, enabling DevOps teams to reliably scale-out their MySQL infrastructure across commodity hardware, on-premise or in the cloud.

One of the most significant enhancements is the introduction of Global Transaction Identifiers (GTIDs), whose primary development motivation was:

- enabling seamless failover or switchover from a replication master to a slave

- promoting that slave to the new master

- doing both without manual intervention and with minimal service disruption

You can download the new MySQL Replication High Availability Guide to learn more.  The following sections provide an overview of how GTIDs and new MySQL utilities work together to enable self-healing replication clusters.  


To understand the implementation and capabilities of GTIDs, we will use an example illustrated below:

- Server “A” crashes

- We need to failover to one of the slaves, promoting it to master

- The remaining server becomes a slave of that new master

Figure 1: MySQL Master Failover and Slave Promotion

As MySQL replication is asynchronous by default, servers B and C may not have both replicated and executed the same number of transactions, i.e. one may be ahead of the other. Consider the following scenarios:

Scenario #1

- Server B is ahead of C and B is chosen as the new master;

- [Then] Server C needs to start to replicate from the first transaction in server B that it is yet to receive;

Scenario #2

- Server C has executed transactions that have so far not been received by Server B, but the administrator designates Server B as the new master (for example, it is configured with higher performing hardware than Server C).

- Server B therefore needs to execute later transactions from Server C, before assuming the role of the master, otherwise lost transactions and conflicts can ensue.

GTIDs apply a unique identifier to each transaction, so it becomes easy to track when it is executed on each slave. When the master commits a transaction, a GTID is generated and written to the binary log prior to the transaction. The GTID and the transaction are then replicated to the slave.

If the slave is configured to write changes to its own binary log, the slave ensures that the GTID and transaction are preserved and written after the transaction has been committed.

The set of GTIDs executed on each slave is exposed to the user in a new, read-only, global server variable, gtid_executed. The variable can be used in conjunction with GTID_SUBTRACT() to determine if a slave is up to date with a master, and if not, which transactions are missing.

A new replication protocol was created to make this process automatic. When a slave connects to the master, the new protocol ensures it sends the range of GTIDs that the slave has executed and committed and requests any missing transactions. The master then sends all other transactions, i.e. those that the slave has not yet executed. This is illustrated in the following example (note that the GTID numbering and binlog format is simplified for clarity):

Figure 2: Automatically Synchronizing a New Master with its Slaves

In this example, Server B has executed all transactions before Server A crashed. Using the MySQL replication protocol, Server C will send “id1” to Server B, and then B will send “id2” and “id3” to Server C, before then replicating new transactions as they are committed. If the roles were reversed and Server C is ahead of Server B, this same process also ensures that Server B receives any transactions it has so far not executed, before being promoted to the role of the new master.
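The exchange in this example boils down to set subtraction over GTIDs. A minimal sketch, using simplified GTIDs as in the figure rather than MySQL's real source_id:transaction_id format:

```python
# Sketch of the GTID handshake above: the slave reports the GTIDs it has
# executed, and the new master sends only the transactions it is missing.

def missing_transactions(master_binlog, slave_executed):
    """master_binlog: ordered (gtid, transaction) pairs;
    slave_executed: set of GTIDs the slave has already applied."""
    return [(gtid, txn) for gtid, txn in master_binlog
            if gtid not in slave_executed]

server_b = [("id1", "trx1"), ("id2", "trx2"), ("id3", "trx3")]
server_c_executed = {"id1"}

# Server C connects to the new master B and receives id2 and id3 only.
to_send = missing_transactions(server_b, server_c_executed)
```

This mirrors what GTID_SUBTRACT() exposes on a real server: the set of transactions present on one server but not the other.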

We therefore have a foundation for reliable slave promotion, ensuring that any transactions executed on a slave are not lost in the event of an outage to the master.

MySQL Utilities for GTIDs

Implemented as Python-based command-line scripts, the Replication utilities are available as a standalone download as well as packaged with MySQL Workbench. The source code is available from the MySQL Utilities Launchpad page.

The utilities supporting failover and recovery are components of a broader suite of MySQL utilities that simplify the maintenance and administration of MySQL servers, including the provisioning and verification of replication, comparing and cloning databases, diagnostics, etc. The utilities are available under the GPLv2 license, and are extendable using a supplied library. They are designed to work with Python 2.6 and above.

Replication Utility: mysqlfailover

While providing continuous monitoring of the replication topology, mysqlfailover enables automatic or manual failover to a slave in the event of an outage to the master. Its default behavior is to promote the most viable slave, as defined by the following slave election criteria.

• The slave is running and reachable;

• GTIDs are enabled;

• The slave's replication filters do not conflict;

• The replication user exists;

• Binary logging is enabled.
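The criteria above amount to a filter over the slave list. A hypothetical sketch (the field names here are invented, not mysqlfailover's data model):

```python
# Hypothetical sketch of the slave election criteria listed above:
# filter the slave list down to viable candidates, then pick one.

def viable(slave):
    return (slave["reachable"]
            and slave["gtid_enabled"]
            and not slave["filter_conflict"]
            and slave["repl_user_exists"]
            and slave["binlog_enabled"])

def elect(slaves):
    candidates = [s for s in slaves if viable(s)]
    return candidates[0]["name"] if candidates else None

slaves = [
    {"name": "slave1", "reachable": False, "gtid_enabled": True,
     "filter_conflict": False, "repl_user_exists": True, "binlog_enabled": True},
    {"name": "slave2", "reachable": True, "gtid_enabled": True,
     "filter_conflict": False, "repl_user_exists": True, "binlog_enabled": True},
]
candidate = elect(slaves)   # slave1 is unreachable, so slave2 is elected
```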

Once a viable slave is elected (called the candidate), the process to retrieve all transactions active in the replication cluster is initiated. This is done by connecting the candidate slave to all of the remaining slaves thereby gathering and executing all transactions in the cluster on the candidate slave. This ensures no replicated transactions are lost, even if the candidate is not the most current slave when failover is initiated.

The election process is configurable. The administrator can use a list to nominate a specific set of candidate slaves to become the new master (e.g. because they have better performing hardware).

At pre-configured intervals, the utility will check to see if the server is alive via a ping operation, followed by a check of the connector to detect if the server is still reachable. If the master is found to be offline or unreachable, the utility will execute one of the following actions based on the value of the failover mode option, which enables the user to define failover policies:

• The auto mode tells the utility to failover to the list of specified candidates first and if none are viable, search the list of remaining slaves for a candidate.

• The elect mode limits election to the candidate slave list and if none are viable, automatic failover is aborted.

• The fail mode tells the utility to not perform failover and instead stop execution, awaiting further manual recovery actions from the DevOps team.
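The three policies can be sketched as a single decision function. The function shape is illustrative only, not mysqlfailover's actual API:

```python
# Sketch of the three failover policies (auto / elect / fail) above.

def choose_new_master(mode, candidates, other_slaves, viable):
    viable_candidates = [s for s in candidates if viable(s)]
    if viable_candidates:
        return viable_candidates[0]          # a nominated candidate wins
    if mode == "auto":
        # fall back to searching the remaining slaves
        viable_rest = [s for s in other_slaves if viable(s)]
        return viable_rest[0] if viable_rest else None
    # "elect": abort automatic failover; "fail": stop and await the DBA
    return None

always_viable = lambda s: True
```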

Via a series of Extension Points users can also bind in their own scripts to trigger failover actions beyond the database (such as Virtual IP failover). Review the mysqlfailover documentation for more detail on configuration and options of this utility.

Replication Utility: mysqlrpladmin

If a user needs to take a master offline for scheduled maintenance, mysqlrpladmin can perform a switchover to a specific slave (called the new master). When performing a switchover, the original master is locked and all slaves are allowed to catch up. Once the slaves have read all events from the original master, the original master is shut down and control is switched to the new master.

There are many Operations teams that prefer to take failover decisions themselves, and so mysqlrpladmin provides a mechanism for manual failover after an outage to the master has been identified, either by using the health reporting provided by the utilities, or by alerting provided by a tool such as the MySQL Enterprise Monitor.

It is also possible to easily re-introduce recovered masters back to the replication topology by using the demote-master command and initiating a switchover to the recovered master.

Review the mysqlrpladmin documentation for more detail on configuration and options of the utility.

Resources to Get Started

In addition to the documentation, two additional resources are useful to enable you to learn more:

- MySQL replication utilities tutorial video

- MySQL replication guide: building a self healing replication topology


The combination of options to control failover or switchover and the ability to inform applications of the failover event are powerful features that enable self-healing for critical applications.

Tuesday Feb 05, 2013

MySQL 5.6 Replication: New Resources for Database Scaling and HA

MySQL 5.6 reached GA (General Availability) today and is packed with a wealth of new features and capabilities.  Exciting stuff!

MySQL 5.6 also introduces the largest set of enhancements to replication ever delivered in a single release, including:
- 5x higher performance to improve consistency across a cluster and reduce the risks of data loss in the event of a master failing
- Self-healing clusters with automatic failover and recovery from outages or planned maintenance
- Assured data integrity with checksums implemented across the replication workflow
- DevOps automation

Of course, getting started with all of these enhancements can be a challenge - whether you are new to MySQL replication or an experienced user. So two new Guides are available to help take advantage of everything replication offers in MySQL 5.6.

The introduction takes you through new 5.6 features, including Global Transaction Identifiers (GTIDs), BinLog Group Commit, Multi-Threaded Slaves, Crash-safe replication, Checksums, etc.
To provide context to these features, the Introduction Guide takes Developers and DBAs through the concepts of replication, including: 

- Use-cases such as scaling out on commodity hardware, high availability, geo-redundancy for DR and data placement, and supporting complex analytics alongside high velocity OLTP operations in the same cluster

- The replication workflow and internals, discussing the roles of different threads and log files 

- Controlling data consistency using different synchronisation models

- Pros and cons of Statement-Based and Row-Based Replication 

- Monitoring and managing replication using tools such as the replication dashboard within MySQL Enterprise Monitor 

Having learned the concepts of replication and what's new in MySQL 5.6, the Tutorial provides step-by-step guides to configuring, provisioning and managing replication, covering:

- Configuring replication with master and slave .cnf files, creating users, introducing replication to an existing application, initialisation and checking proper operation, using GTIDs and other new features

- Migration to semi-synchronous replication for improved data consistency and reduced risk of data loss

- Administration and configuration by checking slave status, suspending replication and viewing binary logs

- Failover and recovery, including pre-requisites, detecting an outage, slave promotion, redirecting the application and recovering a failed master

These new resources provide everything you need to get started in building your next generation web, cloud, social or mobile application with the world's most popular open source database.

You will find more information in the Docs.  Also, look out for a live webinar in March where we will demonstrate all that is new in MySQL 5.6 replication.

Tuesday Jan 08, 2013

Deep Dive into GTIDs and MySQL 5.6 - What, Why and How

Global Transaction Identifiers (GTIDs) are one of the key replication enhancements in MySQL 5.6. GTIDs make it simple to track and compare replication across a master - slave topology. This enables:

- Much simpler recovery from failures of the master

- Greater flexibility in the provisioning and on-going management of multi-tier or ring (circular) replication topologies

A new on-demand MySQL 5.6 GTID webinar delivered by the replication engineering team is now available, providing deep insight into the design and implementation of GTIDs, and how they enable users to simplify MySQL scaling and HA. The webinar covers:

- Concepts: What is a GTID? How does the server generate GTIDs? What is the life cycle of GTIDs? How are GTIDs used to connect to a master?

- Handling conflicts

- How to skip transactions using GTIDs

- What happens when binary logs are purged

- How to provision a new slave or restore from a backup

- MySQL utilities for automated failover and controlled switchover

To whet your appetite, an extract of the Q&A from the webinar is as follows. These, and many other questions were answered during the session:

Q. Which versions of MySQL support GTIDs?

A. MySQL 5.6.5 and above

Q. Is GTID ON by default in 5.6?

A. It is OFF by default

Q. What does the GTID contain?

A. It is made up of a unique ID for the server followed by an ever-increasing counter that's specific to that server

Q: Do GTIDs introduce any increased space requirements?

A: Yes, since GTIDs are stored in the binary log, the binary logs will be larger. However, we expect the overhead to be relatively small. GTIDs are written to the binary log in two places:

(1) A small header is stored at the beginning of the binary log. This contains the variable @@gtid_purged, i.e., a list of previously logged GTIDs. Since the list is range-compressed, this is expected to be small: a small fixed-size header plus 40 bytes times the number of master servers in your topology.

(2) A fixed size header is added before each transaction in the binary log. This is 44 bytes, so will typically be small compared to the transaction itself.
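Putting those two figures together gives a quick back-of-envelope estimate. The function below is just arithmetic over the numbers quoted in this answer (it ignores the small fixed-size header):

```python
# Back-of-envelope check of the GTID overhead figures quoted above:
# ~40 bytes per master in the @@gtid_purged list, plus a 44-byte
# Gtid_log_event before each transaction (fixed header ignored).

def gtid_binlog_overhead(num_masters, num_transactions,
                         per_master_bytes=40, per_trx_bytes=44):
    return num_masters * per_master_bytes + num_transactions * per_trx_bytes

# One master, one million transactions: ~44 MB of GTID overhead,
# small relative to the transactions themselves.
overhead = gtid_binlog_overhead(1, 1_000_000)
```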

Q. I understand GTID's are associated with Transactions. How do they map to the events within each transaction, or do GTID's map as an event itself in a binlog file?

A. Yes, GTIDs are associated with transactions. In the binary log, the GTID is realized as an event written prior to the events that constitute the transaction. The event is called a Gtid_log_event.

Q. What if a transaction spans a filtered-out table and a non-filtered-out table? How does it get recorded on the slave?

A. If the filters are on the master, then a partly logged transaction will be replicated with its GTID.

If filtering on the slave side, a partial image will be processed on the slave and the original GTID is logged (to the slave's binlog) with the processed transaction.

Q. Prior to GTID, to build a new slave, we use mysqldump --master-data=1 to get the slave starting sync point in the dump. With GTID enabled, does it set the gtid_executed / purged in the dump instead?

A. Yes, mysqldump will detect that the server uses GTIDs and output a SET GTID_PURGED statement. (And there is an option to turn off that, e.g., in case you want to execute the output on an old server).

Q. How do GTIDs enable failover and recovery?

A. GTIDs are used in combination with the MySQL utilities. The mysqlfailover and mysqlrpladmin utilities provide administration of GTID-enabled slaves, enabling monitoring with automatic failover and on-demand switchover, coupled with slave promotion. GTIDs make it straightforward to reliably fail over from the master to the most current slave automatically in the event of a failure. DBAs no longer need to manually analyze the status of each of their slaves to identify the most current one when seeking a target to promote to the new master.

Resources to Get Started

In addition to the webinar, here are some other key resources that will give you the detail you need to take advantage of GTIDs in your most important MySQL workloads:

- Engineering blog: Global Transaction Identifiers – why, what, and how

- Engineering blog: Advanced use of GTIDs

- Documentation: Setting up replication with GTIDs 

- Video Tutorial: MySQL replication utilities for auto-failover and switchover 

- Engineering Blog: Controlling read consistency with GTIDs 

If you have any comments, questions or feature requests, don't hesitate to leave a comment on this blog

Monday Oct 08, 2012

New Options for MySQL High Availability

Data is the currency of today’s web, mobile, social, enterprise and cloud applications. Ensuring data is always available is a top priority for any organization – minutes of downtime will result in significant loss of revenue and reputation.

There is not a “one size fits all” approach to delivering High Availability (HA). Unique application attributes, business requirements, operational capabilities and legacy infrastructure can all influence HA technology selection. And then technology is only one element in delivering HA – “People and Processes” are just as critical as the technology itself.

For this reason, MySQL Enterprise Edition is available supporting a range of HA solutions, fully certified and supported by Oracle. MySQL Enterprise HA is not some expensive add-on, but is included within the core Enterprise Edition offering, along with the management tools, consulting and 24x7 support needed to deliver true HA.

At the recent MySQL Connect conference, we announced new HA options for MySQL users running on both Linux and Solaris:

- DRBD for MySQL

- Oracle Solaris Clustering for MySQL

DRBD (Distributed Replicated Block Device) is an open source Linux kernel module which leverages synchronous replication to deliver high availability for database applications across local storage. DRBD synchronizes database changes by mirroring data from an active node to a standby node and supports automatic failover and recovery. Linux, DRBD, Corosync and Pacemaker provide an integrated stack of mature and proven open source technologies.

DRBD Stack: Providing Synchronous Replication for the MySQL Database with InnoDB

Download the DRBD for MySQL whitepaper to learn more, including step-by-step instructions to install, configure and provision DRBD with MySQL

Oracle Solaris Cluster provides high availability and load balancing to mission-critical applications and services in physical or virtualized environments. With Oracle Solaris Cluster, organizations have a scalable and flexible solution that is suited equally to small clusters in local datacenters or larger multi-site, multi-cluster deployments that are part of enterprise disaster recovery implementations. The Oracle Solaris Cluster MySQL agent integrates seamlessly with MySQL offering a selection of configuration options in the various Oracle Solaris Cluster topologies.

Putting it All Together

When you add MySQL Replication and MySQL Cluster into the HA mix, along with 3rd party solutions, users have extensive choice (and decisions to make) to deliver HA services built on MySQL

To make the decision process simpler, we have also published a new MySQL HA Solutions Guide.

Exploring beyond just the technology, the guide presents a methodology to select the best HA solution for your new web, cloud and mobile services, while also discussing the importance of people and process in ensuring service continuity.

This subject was recently presented at Oracle Open World, and the slides are available here.

Whatever your uptime requirements, you can be sure MySQL has an HA solution for your needs

Please don't hesitate to let us know of your HA requirements in the comments section of this blog. You can also contact MySQL consulting to learn more about their HA Jumpstart offering which will help you scope out your scaling and HA requirements.

Friday Sep 21, 2012

MySQL Connect 8 Days Away - Replication Sessions

Following on from my post about MySQL Cluster sessions at the forthcoming Connect conference, it's now the turn of MySQL Replication - another technology at the heart of scaling and high availability for MySQL.

Unless you've only just returned from a 6-month alien abduction, you will know that MySQL 5.6 includes the largest set of replication enhancements ever packaged into a single new release:

- Global Transaction IDs + HA utilities for self-healing clusters (yes, both automatic failover and manual switchover are available!)

- Crash-safe slaves and binlog

- Binlog Group Commit and Multi-Threaded Slaves for high performance

- Replication Event Checksums and Time-Delayed replication

- and many more

There are a number of sessions dedicated to learning more about these important new enhancements, delivered by the same engineers who developed them. Here is a summary:

Saturday 29th, 13.00

Replication Tips and Tricks, Mats Kindahl

In this session, the developers of MySQL Replication present a bag of useful tips and tricks related to the MySQL 5.5 GA and MySQL 5.6 development milestone releases, including multisource replication, using logs for auditing, handling filtering, examining the binary log, using relay slaves, splitting the replication stream, and handling failover.

Saturday 29th, 17.30

Enabling the New Generation of Web and Cloud Services with MySQL 5.6 Replication, Lars Thalmann

This session showcases the new replication features, including

• High performance (group commit, multithreaded slave)

• High availability (crash-safe slaves, failover utilities)

• Flexibility and usability (global transaction identifiers, annotated row-based replication [RBR])

• Data integrity (event checksums)

Saturday 29th, 19.00

MySQL Replication Birds of a Feather

In this session, the MySQL Replication engineers discuss all the goodies, including global transaction identifiers (GTIDs) with autofailover; multithreaded, crash-safe slaves; checksums; and more. The team discusses the design behind these enhancements and how to get started with them. You will get the opportunity to present your feedback on how these can be further enhanced and can share any additional replication requirements you have to further scale your critical MySQL-based workloads.

Sunday 30th, 10.15

Hands-On Lab, MySQL Replication, Luis Soares and Sven Sandberg

But how do you get started, how does it work, and what are the best practices and tools? During this hands-on lab, you will learn how to get started with replication, how it works, architecture, replication prerequisites, setting up a simple topology, and advanced replication configurations. The session also covers some of the new features in the MySQL 5.6 development milestone releases.

Sunday 30th, 13.15

Hands-On Lab, MySQL Utilities, Chuck Bell

Would you like to learn how to more effectively manage a host of MySQL servers and manage high-availability features such as replication? This hands-on lab addresses these areas and more. Participants will get familiar with all of the MySQL utilities, using each of them with a variety of options to configure and manage MySQL servers.

Sunday 30th, 14.45

Eliminating Downtime with MySQL Replication, Luis Soares

The presentation takes a deep dive into new replication features such as global transaction identifiers and crash-safe slaves. It also showcases a range of Python utilities that, combined with the Release 5.6 feature set, results in a self-healing data infrastructure. By the end of the session, attendees will be familiar with the new high-availability features in the whole MySQL 5.6 release and how to make use of them to protect and grow their business.

Sunday 30th, 17.45

Scaling for the Web and the Cloud with MySQL Replication, Luis Soares

In a Replication topology, high performance directly translates into improving read consistency from slaves and reducing the risk of data loss if a master fails. MySQL 5.6 introduces several new replication features to enhance performance. In this session, you will learn about these new features, how they work, and how you can leverage them in your applications. In addition, you will learn about some other best practices that can be used to improve performance.

So how can you make sure you don't miss out? The good news is that registration is still open ;-)

And just to whet your appetite, listen to the On-Demand webinar that presents an overview of MySQL 5.6 Replication.  

Wednesday May 23, 2012

MySQL 5.6 Replication: FAQ

On Wednesday May 16th, we ran a webinar to provide an overview of all of the new replication features and enhancements that are previewed in the MySQL 5.6 Development Release – including Global Transaction IDs, auto-failover and self-healing, multi-threaded, crash-safe slaves and more.

Collectively, these new capabilities enable MySQL users to scale for next generation web and cloud applications.

Attendees posted a number of great questions to the MySQL developers, serving to provide additional insights into how these new features are implemented. So I thought it would be useful to post those below, for the benefit of those unable to attend the live webinar (note, you can listen to the On-Demand replay which is available now).

Before getting to the Q&A, there are a couple of other resources that may be useful to those wanting to learn more about Replication in MySQL 5.6:

On-Demand webinar

Slides used during the webinar

For more detail on any of the features discussed below, be sure to check out the Developer Zone article: Replication developments in MySQL 5.6

So here is the Q&A from the event:

Multi-Threaded Slaves

The results from recent benchmarking of the Multi-Threaded Slave enhancement were discussed, prompting the following questions:

Q. Going from 0 - 10 threads, did you notice any increase in IOwait on CPU?

A. In this case, yes. The actual amount depends on a number of factors: your hardware, the query/transaction distribution across databases, and general server and InnoDB-specific configuration parameters.

Q. Will the multiple threads work on different transactions on the same database, or each thread works on a separate database?

A. Each worker thread works on separate databases (schemas).

Q. If I set the slave-parallel-workers to less than the number of databases, can I configure which databases use which worker threads?

A. There is no configuration option to assign a certain Worker thread to a database. Configuring slave-parallel-workers to less than the number of databases is a good setup: a Worker can handle multiple databases.
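As a minimal sketch, enabling the multi-threaded slave comes down to one server variable on the slave (`slave_parallel_workers` is the MySQL 5.6 variable name; the value shown is illustrative):

```ini
# my.cnf on the slave (MySQL 5.6) -- illustrative value
[mysqld]
# Number of worker threads applying replicated transactions in parallel.
# 0 (the default) disables MTS and falls back to the single-threaded
# SQL thread; each worker handles one or more databases (schemas).
slave_parallel_workers = 4
```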

Q. If I create 4 threads and I have 100 databases, can I configure which databases use which threads?

A. There won't be a fixed mapping of exactly 25 databases per worker, but the distribution will be very close to that.

Q. Thank you. I ask as we have about 8 databases that have a lot of transactions and about 30 that are used less frequently. It would be nice to have the ability to create 9 workers: 1 for each of the heavy databases and 1 for all the others.

A. The current design enables a relatively equal distribution of transactions across your databases, but it could be that 9 workers will fit anyway.

Q. Does the multi-threaded slave still work if there are foreign keys across databases?

A. MTS preserves slave consistency *within* a database, but not necessarily *between* databases. With MTS enabled and replication ongoing, updates to one database can be executed before updates to another causing them to be temporarily out of sync with each other.

Q. How is auto-increment managed with the multi-thread slave?

A. MTS makes sure Worker threads do not execute concurrently on the same table. Auto-increment is guaranteed to be handled in the same way as the single-threaded "standard" slave handles auto-increment.

Q. Can you use semi-synchronous replication on one of the slaves when using MTS?

A. Semi-sync is friendly to MTS. MTS is about parallel execution - so they cooperate well.

Optimized Row Based Replication

Q. If you only store the PK column in the Before Image for updates, does this mean you don't care about the slave's data potentially being out-of-sync? Will we be able to control how much data is stored in the binary logs?

A. The rule is that we ship a *PK equivalent* so that the slave is always able to find a row. This means:
  1. if master table has PK, then we ship the PK only
  2. if master table does not have PK, then we ship the entire row
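In MySQL 5.6 this behavior is controlled by the `binlog_row_image` server variable on the master; a hedged sketch (values illustrative, defaults per the 5.6 manual):

```ini
# my.cnf on the master (MySQL 5.6) -- illustrative
[mysqld]
log-bin       = mysql-bin
binlog_format = ROW
# 'minimal' logs only the PK (or PK equivalent) in the before image
# and only the changed columns in the after image; 'full' (the
# default) logs all columns in both images.
binlog_row_image = minimal
```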

Global Transaction IDs and HA Utilities

Q. Would the failover utility need to sit on a 3rd party host to allow arbitration?

A. The utility would typically run on a client, not on the hosts it is monitoring

Q. Can you explain the upgrade process to move to MySQL 5.6? I am assuming that the slave(s) are upgraded first and that replication would be backwards compatible. And after the slave(s) are upgraded, the master would be next. But then how do you turn on the GTID?

A. Right: the slave(s) are upgraded first. After upgrading Slaves they would be started with --gtid-mode=on (technically a couple of other options are needed as well). And then the same process would be followed for the upgraded Master.
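As a hedged sketch, the "couple of other options" typically amounts to something like the following (option names per MySQL 5.6; verify against the exact release you are upgrading to, as the set changed between development releases):

```ini
# my.cnf -- illustrative GTID settings for MySQL 5.6
[mysqld]
gtid-mode = ON
# GTIDs require the binary log, and slaves must also log replicated
# transactions so that their GTID history is complete:
log-bin          = mysql-bin
log-slave-updates = ON
# Reject statements that cannot be logged consistently with GTIDs:
enforce-gtid-consistency = ON
```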

Q. For failover functionality, if I had a setup like this: Server1 replicates to Server2, Server2 replicates to Server3, Server4, and Server5. If Server2 were to fail, can I have it configured so that Server1 can become the new master for Server3/4/5?

A. Yes - you can either configure the failover to the most recent slave based on GTID, or list specific candidates. What will happen is that Slave 1 will temporarily become a slave of 3, 4 and 5 to ensure it replicates any more recent transactions those slaves may have, and then it will become the master
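With the MySQL Utilities, that kind of GTID-based failover can be automated with `mysqlfailover`; a hedged sketch, where host names, ports and credentials are hypothetical:

```shell
# Monitor Server2 and, on failure, promote the best candidate by GTID.
# --candidates lists preferred new masters; all hosts are hypothetical.
mysqlfailover \
  --master=root:pass@server2:3306 \
  --slaves=root:pass@server3:3306,root:pass@server4:3306,root:pass@server5:3306 \
  --candidates=root:pass@server1:3306 \
  --failover-mode=auto
```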

Replication Event Checksums

Q. What is the overhead of replication checksums?

A. It is minimal - we plan to publish benchmarks over the summer to better characterize any overhead

Q. Are you exposing the checksum to the plugin api so we can add our own checksum types?

A. We have prepared some of the interfaces; for example, checksum methods are identified by a specific code byte (values 1-127). But it's not really a plugin at this point.

Q. Do checksums verify both replicated data and the data on slave?

A. Yes - checksums are implemented across the entire path - so you can check for issues in replication itself, or in the hardware or network

Q. I think it is better to turn on checksums at the relay log to avoid overhead on the master, but if we do that and the checksum fails (i.e. does not match the master's data), what happens – will the slave throw an error?

A. I agree, it's better to relax the Master, which verifies the checksum only optionally when the Dump Thread reads from the binlog, prior to the replication event being sent out to the Slave. The slave mandatorily checks the checksummed events when they are received across the network, and optionally when they are read from the Relay-log. In either case, on a mismatch an error is thrown.

Q. Are checksums optional? In some cases we don't care for huge data loads

A. Yes, checksums are optional.
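The relevant MySQL 5.6 variables can be sketched as follows (a hedged example; names per the 5.6 release, values illustrative):

```ini
# my.cnf -- illustrative replication checksum settings (MySQL 5.6)
[mysqld]
# Master: write a CRC32 checksum into each binlog event (NONE disables).
binlog_checksum = CRC32
# Master (optional): re-verify events when the dump thread reads the binlog.
master_verify_checksum = ON
# Slave (optional): verify events as the SQL thread reads the relay log.
slave_sql_verify_checksum = ON
```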

Time Delayed Replication

Q. Is Time delayed replication applied at the database level only and not for the entire slave?

A. It is applied for the entire slave, as execution is global.
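Time-delayed replication is configured per slave with CHANGE MASTER TO; a minimal sketch (the one-hour delay is illustrative):

```sql
-- On the slave: apply events no sooner than one hour after they
-- executed on the master (MySQL 5.6 syntax).
STOP SLAVE SQL_THREAD;
CHANGE MASTER TO MASTER_DELAY = 3600;
START SLAVE SQL_THREAD;
```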

Informational Log Events

Q. When we configure information log event, does it show meaningful query log for any binlog format? (Row based especially)

A. When using row-based replication, you get the original query in human-readable form – obviously you don't want to read through all the rows modified in a table, which can be huge.
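In MySQL 5.6 this is the informational "rows query" log event, enabled on the master; a hedged sketch:

```ini
# my.cnf on the master (MySQL 5.6) -- illustrative
[mysqld]
binlog_format = ROW
# Also write the original SQL statement into the binary log as an
# informational event alongside the row events:
binlog_rows_query_log_events = ON
```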

Q. Will the binlog include user id?

A. User id is replicated in some cases for Query-log-event - namely user id of the invoker when a stored routine is called.

Remote Binlog Backup

Q. What level of access does the remote binlog backup need?

A. The REPLICATION SLAVE privilege.
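As a hedged sketch (account name, host and binlog file name are hypothetical), the backup account and the MySQL 5.6 `mysqlbinlog` remote-backup invocation might look like:

```shell
# On the master, grant the minimum privilege needed to read the binlogs
# (user/host are hypothetical):
#   GRANT REPLICATION SLAVE ON *.* TO 'backup'@'backuphost';

# On the backup host: stream the master's binary logs to local files,
# in their original binary format, continuously:
mysqlbinlog --read-from-remote-server --host=master1 \
  --user=backup --password \
  --raw --stop-never mysql-bin.000001
```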


As you can see, our Engineering team was kept busy with questions over the course of the webinar. Be sure to check out the MySQL 5.6 Replication webinar replay and if you have further questions, please don’t hesitate to use the comments below!

Thursday May 17, 2012

Meet the MySQL Experts Podcast: MySQL Replication Global Transaction Identifiers & HA Utilities

In the latest episode of our “Meet The MySQL Experts” podcast, Luis Soares, Engineering Manager of MySQL Replication discusses the new Global Transaction Identifiers (GTIDs) that are part of the latest MySQL 5.6 Development Release. We are also joined by Chuck Bell who discusses how the new MySQL HA utilities use GTIDs to create a self-healing replication topology.

In the podcast, we cover how GTIDs and the HA utilities are implemented, how they are configured and considerations for their use.

You can also learn more from Luis’ blog on GTIDs in MySQL 5.6 and Chuck’s blog on the HA utilities.

Of course, GTIDs are just one of the major new features of MySQL replication. For a complete overview, take a look at our DevZone article: MySQL 5.6 Replication - Enabling the Next Generation of Web & Cloud Services.

You can try out MySQL 5.6 and GTIDs by downloading the Development Release (select Development Release tab)

Enjoy the GTID podcast and let us know what topics you would like covered in future podcasts!  Also check out the library of Meet the MySQL Experts podcasts

Friday Oct 07, 2011

MySQL Cluster 7.2 (DMR2): NoSQL, Key/Value, Memcached

70x Higher Performance, Cross Data Center Scalability and New NoSQL Interface

It's been an exciting week for all involved with MySQL Cluster, with the announcement of the second Development Milestone Release (7.2.1) at Oracle Open World. Highlights include:

- Enabling next generation web services: 70x higher complex query performance, native memcached API and integration with the latest MySQL 5.5 server

- Enhancing cross data center scalability: new multi-site clustering and enhanced active/active replication

- Simplified provisioning: consolidated user privileges.

You can download the DMR for evaluation now from the MySQL downloads page (select the Development Milestone Release tab).

You can also read up on the detail of each of these features in the new article posted at the MySQL Developer Zone. In this blog, I’ll summarize the main parts of the announcement.

70x Higher Performance with Adaptive Query Localization (AQL)

Previewed as part of the first MySQL Cluster DMR, AQL is enabled by a new Index Statistics function that allows the SQL optimizer to build a better execution plan for each query.

As a result, JOIN operations are pushed down to the data nodes where the query executes in parallel on local copies of the data. A merged result set is then sent back to the MySQL Server, significantly enhancing performance by reducing network trips.

Take a look at how this is used by a web-based content management system to increase performance by 70x.

Adaptive Query Localization enables MySQL Cluster to better serve those use-cases that have the need to run real-time analytics across live data sets, along with high throughput OLTP operations. Examples include recommendations engines and clickstream analysis in web applications, pre-pay billing promotions in mobile telecoms networks or fraud detection in payment systems.

New NoSQL Interface and Schema-less Storage with the memcached API

The memcached interface released as an Early Access project with the first MySQL Cluster DMR is now integrated directly into the MySQL Cluster 7.2.1 trunk, enabling simpler evaluation.

The popularity of Key/Value stores has increased dramatically. With MySQL Cluster and the new memcached API, you have all the benefits of an ACID RDBMS combined with the performance capabilities of a Key/Value store.

By default, every Key/Value pair is written to the same table, with each pair stored in a single row – thus allowing schema-less data storage. Alternatively, the developer can define a key-prefix so that each value is linked to a pre-defined column in a specific table.

Of course if the application needs to access the same data through SQL then developers can map key prefixes to existing table columns, enabling Memcached access to schema-structured data already stored in MySQL Cluster.
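From the client's perspective, access remains the standard memcached protocol; a hedged sketch, where the port, key names and the `user:` prefix mapping are all hypothetical:

```shell
# Standard memcached protocol against the Cluster memcached server.
# With a key-prefix mapping configured, a key such as 'user:42' could
# be routed to a column of a pre-defined Cluster table.
printf 'set user:42 0 0 5\r\nalice\r\nget user:42\r\nquit\r\n' | nc localhost 11211
```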

You can read more about the design goals and implementation of the memcached API for MySQL Cluster here.

Integration with MySQL 5.5

MySQL Cluster 7.2.1 is integrated with MySQL Server 5.5, providing binary compatibility to existing MySQL Server deployments. Users can now fully exploit the latest capabilities of both the InnoDB and MySQL Cluster storage engines within a single application.

Users simply install the new MySQL Cluster binary, which includes the MySQL 5.5 release, restart the server, and immediately have access to both InnoDB and MySQL Cluster!

Enhancing Cross Data Center Scalability: Simplified Active / Active Replication

MySQL Cluster has long offered Geographic Replication, distributing clusters to remote data centers to reduce the effects of geographic latency by pushing data closer to the user, as well as providing a capability for disaster recovery.

Geographic replication has always been designed around an Active / Active technology, so if applications are attempting to update the same row on different clusters at the same time, the conflict can be detected and resolved. With the release of MySQL Cluster 7.2.1, implementing Active / Active replication has become a whole lot simpler. Developers no longer need to implement and manage timestamp columns within their applications. Also rollbacks can be made to whole transactions rather than just individual operations.

You can learn more here.

Enhancing Cross Data Center Scalability: Multi-Site Clustering

MySQL Cluster 7.2.1 DMR provides a new option for cross data center scalability – multi-site clustering. For the first time splitting data nodes across data centers is a supported deployment option.

Improvements to MySQL Cluster’s heartbeating mechanism with a new “ConnectivityCheckPeriod” parameter enables greater resilience to temporary latency spikes on a WAN, thereby maintaining operation of the cluster.

With this deployment model, users can synchronously replicate updates between data centers without needing conflict detection and resolution, and automatically failover between those sites in the event of a node failure.

Users need to characterize their network bandwidth and latencies, and observe best practices in configuring both their network environment and Cluster. More guidance is available here.

User Privilege Consolidation

User privilege tables are now consolidated into the data nodes and centrally accessible by all MySQL servers accessing the cluster.

Previously the privilege tables were local to each MySQL server, meaning users and their associated privileges had to be managed separately on each server. By consolidating privilege data, users need only be defined once and managed centrally, saving Systems Administrators significant effort and reducing cost of operations.


The MySQL Cluster 7.2.1 DMR enables new classes of use-cases to benefit from web-scale performance with carrier-grade availability.  We also have a great webinar coming up on Wednesday October 19th  where the engineering and product management team will discuss the enhancements in more detail, and how you can use them today. You can sign up here.

You can download the DMR for evaluation now from the MySQL downloads page (select the Development Milestone Release tab).

You can learn more about the MySQL Cluster architecture from our Guide to scaling web databases

Let us know what you think of these enhancements directly in comments of this or the associated blogs. We look forward to working with the community to perfect these new features.

Monday Oct 03, 2011

Synchronously Replicating Databases Across Data Centers – Are you Insane?


Well actually….no. The second Development Milestone Release of MySQL Cluster 7.2 introduces support for what we call “Multi-Site Clustering”. In this post, I’ll provide an overview of this new capability, and considerations you need to make when considering it as a deployment option to scale geographically dispersed database services.

You can read more about MySQL Cluster 7.2.1 in the article posted on the MySQL Developer Zone

MySQL Cluster has long offered Geographic Replication, distributing clusters to remote data centers to reduce the effects of geographic latency by pushing data closer to the user, as well as providing a capability for disaster recovery.

Multi-Site Clustering provides a new option for cross data center scalability. For the first time splitting data nodes across data centers is a supported deployment option. With this deployment model, users can synchronously replicate updates between data centers without needing to modify their application or schema for conflict handling, and automatically failover between those sites in the event of a node failure.

MySQL Cluster offers high availability by maintaining a configurable number of data replicas.  All replicas are synchronously maintained by a built-in 2 phase commit protocol.  Data node and communication failures are detected and handled automatically.  On recovery, data nodes automatically rejoin the cluster, synchronize with running nodes, and resume service.

All replicas of a given row are stored in a set of data nodes known as a nodegroup.  To provide service, a cluster must have at least one data node from each nodegroup available at all times.  When the cluster detects that the last node in a nodegroup has failed, the remaining cluster nodes will be gracefully shutdown, to ensure the consistency of the stored databases on recovery.

Improvements to the heartbeating mechanism used by MySQL Cluster enables greater resilience to temporary latency spikes on a WAN, thereby maintaining operation of the cluster. A new “ConnectivityCheck” mechanism is introduced, which must be explicitly configured. This extra mechanism adds messaging overheads and failure handling latency, and so is not switched on by default.

When configuring Multi-Site clustering, the following factors must be considered:

Bandwidth
Low bandwidth between data nodes can slow data node recovery. In normal operation, the available bandwidth can limit the maximum system throughput. If link saturation causes latency on individual links to increase, then node failures, and potentially cluster failure, could occur.

Latency and performance
Synchronously committing transactions over a wide area increases the latency of operation execution and commit, therefore individual operations are slowed. To maintain the same overall throughput, higher client concurrency is required.  With the same client concurrency level, throughput will decrease relative to a lower latency configuration.

Latency and stability
Synchronous operation implies that clients wait to hear of the success or failure of each operation before continuing. Loss of communication to a node, and high latency communication to a node are indistinguishable in some cases.  To ensure availability, the Cluster monitors inter-node communication.  If a node experiences high communication latency, then it may be killed by another node, to prevent its high latency causing service loss.

Where inter-node latencies fluctuate and are in the same range as the node-latency-monitoring trigger levels, node failures can result. Node failures are expensive to recover from and endanger Cluster availability.

To avoid node failures, either the latency should be reduced, or the trigger levels should be raised.  Raising trigger levels can result in a longer time-to-detection of communication problems.

WAN latencies
Latency on an IP WAN may be a function of physical distance, routing hops, protocol layering, link failover times and rerouting times. The maximum expected latency on a link should be characterized as input to the cluster configuration.

Survivability of node failures
MySQL Cluster uses a fail fast mechanism to minimize time-to-recovery. Nodes that are suspected of being unreachable or dead are quickly excluded from the Cluster. This mechanism is simple and fast, but sometimes takes steps that result in unnecessary cluster failure. For this reason, latency trigger levels should be configured a safe margin above the maximum latency variation on inter-data node links.

Users can configure various MySQL Cluster parameters, including heartbeats, ConnectivityCheck, GCP timeouts and transaction deadlock timeouts. You can read more about these parameters in the documentation.

Recommendations for Multi-Site Clustering
- Ensure minimal, stable latency;
- Provision the network with sufficient bandwidth for the expected peak load – test with node recovery and system recovery;
- Configure the heartbeat period to ensure a safe margin above latency fluctuations;
- Configure the ConnectivityCheckPeriod to avoid unnecessary node failures;
- Configure other timeouts accordingly, including the GCP timeout, transaction deadlock timeout, and transaction inactivity timeout.
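The recommendations above can be sketched as a config.ini fragment. This is a hedged example: all values are illustrative, `ConnectivityCheckPeriod` is used as named in this post, and the exact parameter names should be verified against the MySQL Cluster 7.2 documentation:

```ini
# config.ini -- illustrative values only, tune for your WAN
[ndbd default]
# Heartbeat period between data nodes; raise it a safe margin above
# the measured latency fluctuation on inter-site links.
HeartbeatIntervalDbDb = 5000
# Extra connectivity check before declaring a node dead.
ConnectivityCheckPeriod = 1500
# Global checkpoint (GCP) timeout and transaction timeouts, relaxed
# to tolerate WAN latency spikes:
TimeBetweenEpochsTimeout = 32000
TransactionDeadlockDetectionTimeout = 5000
TransactionInactiveTimeout = 5000
```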

The following is a recommendation of latency and bandwidth requirements for applications with high throughput and fast failure detection requirements:
- latency between remote data nodes must not exceed 20 milliseconds;
- bandwidth of the network link must be more than 1 Gigabit per Second.

For applications that do not require this type of stringent operating environment, latency and bandwidth can be relaxed, subject to the testing recommended above.

As the recommendations demonstrate, there are a number of factors that need to be considered before deploying multi-site clustering. For geo-redundancy, Oracle recommends Geographic Replication, but multi-site clustering does present an alternative deployment, subject to the considerations and constraints discussed above.

You can learn more about scaling web databases with MySQL Cluster from our new Guide.  We look forward to hearing your experiences with the new MySQL Cluster 7.2.1 DMR!

Thursday Sep 29, 2011

MySQL HA Solutions: New Guide Available

Databases are the center of today's web, enterprise and embedded applications, storing and protecting an organization's most valuable assets and supporting business-critical applications. Just minutes of downtime can result in significant lost revenue and dissatisfied customers. Ensuring database high availability is therefore a top priority for any organization.

The new MySQL Guide to High Availability solutions is designed to navigate users through the HA maze, discussing:

- The causes, effects and impacts of downtime;

- Methodologies to select the right HA solution;

- Different approaches to delivering highly available MySQL services;

- Operational best practices to meet Service Level Agreements (SLAs).

As discussed in the new Guide, selecting the high availability solution that is appropriate for your application depends upon 3 core principles:

- The level of availability required to meet business objectives, within budgetary constraints;

- The profile of application being deployed (i.e. concurrent users, requests per second, etc.);

- Operational standards within each data center.

Recognizing that each application or service has different operational and availability requirements, the guide discusses the range of certified and supported High Availability (HA) solutions – from internal departmental applications all the way through to geographically redundant, multi-data center systems delivering 99.999% availability (i.e. less than 5 ½ minutes of downtime per year) supporting transactional web services, communications networks, cloud and hosting environments, etc.

By combining the right technology with the right skills and processes, users can achieve business continuity, while developers and DBAs can sleep tight at night! Download the guide to learn more.

Tuesday Sep 06, 2011

Oracle Enhances MySQL Manageability on Windows

Windows is a major development and deployment platform for MySQL. A few months ago, we held our first MySQL Online Forum, dedicated to MySQL on Windows. We outlined then why MySQL was a great fit for the Windows environment, and what were the upcoming milestones to make MySQL even better on the Microsoft platform.

We’re now pleased to announce that two important milestones have been completed:

1. The New MySQL Installer for Windows is GA

The MySQL Installer for Windows radically simplifies the installation process for all MySQL users on the Windows platform. It only takes a few minutes from downloading the MySQL Installer to having a ready-to-use MySQL system on your machine. It delivers:

  • An intuitive graphical interface that guides users through the installation process
  • The ability to install all MySQL products that are required at once
  • An integrated update system for all MySQL products installed

Mike Zinner published an article presenting the then-beta installer last April, and you can now download it here.

2. Certification for Windows Failover Clustering

This certification expands the range of High-Availability solutions available for MySQL on Windows, which previously included replication and MySQL Cluster. Users can now power business critical applications with MySQL Enterprise Edition using native Windows clustering services.

Andrew Morgan provided more detailed information in his blog earlier today about the solution and how to get going.

We will be holding a webinar on September 15th at 9.00 am PT to present the two new Windows solutions in more detail. Register now!

