Wednesday Feb 25, 2015

Datatype handling/validation for MySQL Connectors using RQG

System QA took the approach of using the Random Query Generator (RQG) to generate different SQL statements for the multitude of data types available and supported in MySQL.

This approach makes it easy to test the validity of the currently supported data types and of any changes to them.

The following steps explain what was accomplished and how:

  1. Generate the table structure and generate insert queries for different data types through RQG and redirect to a file.

    Use the data-generation script that is part of RQG to create the table structure (for this instance we used DATE-related data types):

    $ perl --dsn=dbi:mysql:host= --spec=ex_dt.zz

  2. Generate SQL queries using gensql and redirect the output to a file

    Using the following grammar, we generated INSERT queries:


    query:
        insert ;

    insert:
        INSERT INTO _table(_field) VALUES (data);

    data:
        date | time | datetime | year | timestamp ;

    date:
        _date | 1000-01-01 | 999-01-01 | 999-00-00 | 9999-12-31 | 10000-13-32 ;

    time:
        _time | -838:59:59 | -839:59:59 | -838:61:61 | 838:59:59 | 838:62:63 ;

    datetime:
        _datetime | 999-00-00 00:00:-10 | 9999-12-31 23:59:59 | 10000-12-31 23:59:59 ;

    timestamp:
        _timestamp | 1970-01-01 00:00:01 | 1970-00-00 00:00:01 | 2038-01-19 03:14:07 | 2038-02-19 03:14:07 ;

    $ perl --grammar=ex_dt.yy --queries=10 --dsn=dbi:mysql:host= > generatedsqls.txt

    Note: One can tune --queries and the corresponding grammar to increase or decrease the generated data set

  3. Execute the queries through Connector Application and collect the results

    Created a sample connector application to:
    • Open a connection (with default settings) to the MySQL server
    • Read the SQL statements generated in Step 2 (generatedsqls.txt)
    • Execute each INSERT query against the MySQL server
    • Retrieve the inserted data using a SELECT operation
    • Capture the output/response of the SELECT operation
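
    Such a harness can be sketched in a few lines with MySQL Connector/Python. This is an illustrative assumption, not the team's actual code: the file, table, and column names and the connection settings are all placeholders.

    ```python
    # Hypothetical harness for Steps 3-5: execute RQG-generated INSERTs through
    # a connector, read the values back, and diff the captured outputs.

    def read_statements(path):
        """Read the SQL statements generated in Step 2, one per line."""
        with open(path) as f:
            return [line.strip() for line in f if line.strip().endswith(";")]

    def diff_outputs(client_rows, connector_rows):
        """Step 5: report values returned differently by client and connector."""
        return [(a, b) for a, b in zip(client_rows, connector_rows) if a != b]

    def run_through_connector(statements, table="t1", column="col_date"):
        """Step 3: run each INSERT and capture the value a SELECT reads back.

        Requires a running MySQL server and mysql-connector-python.
        """
        import mysql.connector  # deferred so the helpers above work standalone
        conn = mysql.connector.connect(host="localhost", user="root",
                                       database="test")
        cur = conn.cursor()
        captured = []
        for stmt in statements:
            cur.execute(stmt)
            cur.execute("SELECT {} FROM {} ORDER BY 1 DESC LIMIT 1"
                        .format(column, table))
            captured.append(str(cur.fetchone()[0]))
        conn.commit()
        conn.close()
        return captured
    ```

    diff_outputs would then be fed the lines captured from the mysql client run (Step 4) and from the connector run (Step 3).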

  4. Perform same operation(s) as in Step 3 with MySQL Client

  5. Compare the output captured from Step 3 and Step 4

    Any difference in the results points directly to a difference in how the connectors handle the data type/data.
    So far, System QA has tried this with MySQL Connector/Python and MySQL Connector/Java and identified some differences.

    For instance, with the DATE type the following difference was noticed:
    Data inserted: '999-00-00'
    Data retrieved from the MySQL client: '0999-00-00'
    Data retrieved from MySQL Connector/Java: '0998-11-30'
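
The Connector/Java value is consistent with lenient calendar normalization: month 00 rolls back to December of the previous year, and day 00 rolls back to the last day of the previous month. The sketch below models that hypothesis; it is our reading of the behavior, not Connector/J's actual code.

```python
from datetime import date, timedelta

def lenient_normalize(year, month, day):
    """Roll zero month/day values backward the way a lenient calendar does
    (hypothetical model of how '999-00-00' becomes 0998-11-30)."""
    # month 0 becomes December of the previous year
    while month < 1:
        month += 12
        year -= 1
    # day 0 becomes the day before the 1st, i.e. the last day of the prior month
    return date(year, month, 1) + timedelta(days=day - 1)

print(lenient_normalize(999, 0, 0))  # 0998-11-30
```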


We can test this with the different connectors (MySQL Connector/ODBC, MySQL Connector/C, MySQL Connector/C++,
MySQL Connector/Java, MySQL Connector/Python).

The same approach can easily be extended to other data types or operations using the
features in RQG whenever data types change in MySQL Server.






Amrutha Rao and Madhusudhan Joshi
System QA Team

Sunday Feb 01, 2015

MySQL Fabric Crash Testing

MySQL SystemQA: MySQL Fabric Crash Testing

MySQL Fabric is an open-source solution released by the MySQL Engineering team at Oracle.

It is an extensible and easy to use system for managing a MySQL deployment for Sharding

and High-availability.

MySQL Fabric handles multiple servers across multiple machines and platforms. Because it is a fully
distributed system, its usability, recoverability, and stability are all the more important.
So we designed a setup in a Java framework to test various crash scenarios.

What is a Crash in MySQL Fabric?

A crash here mainly means a destroy, kill, stop, disconnect, etc.

We define crash points at various places:

  • Kill a server using SIGKILL

  • Stop a server the normal way

  • Reboot a server with background activities

  • Crash a machine/VM

  • Reboot a machine/VM

  • Network cable unplug on a machine

  • Network drop between machines.
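
The first two crash points can be injected programmatically. The team's actual framework is in Java; the following is an illustrative Python sketch that uses a stand-in process rather than a real mysqld:

```python
import os
import signal
import subprocess

def hard_kill(pid):
    """Crash point: kill a server process abruptly with SIGKILL (no cleanup)."""
    os.kill(pid, signal.SIGKILL)

def normal_stop(proc, timeout=10):
    """Crash point: stop a server the normal way (SIGTERM, wait for exit)."""
    proc.terminate()
    proc.wait(timeout=timeout)
    return proc.returncode

# Example with a stand-in process instead of a real mysqld:
victim = subprocess.Popen(["sleep", "60"])
hard_kill(victim.pid)
victim.wait()
```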

MySQL Fabric is tested using these real-world scenarios to ensure the stability
and fault tolerance of the product. We also need to ensure minimal data loss
on recovery.

The setup for this testing consists of 2 physical machines and 10 Oracle VMs,
on which a MySQL Fabric deployment of 150 servers, 35 groups, and 30 shards is created.
This setup was done in both Linux and Windows environments.

What has been tested using this setup?

With this large setup, the various crash points are defined in different test cases and
run in the Java framework. Here are a few examples of the test scenarios.

Case 1:

One of the VMs is rebooted. Verify that the group topology is running fine and sharding
is still active.


Case 2:

Kill all servers of different groups and roll back to the initial setup. Verify the stability
of the Fabric topology.

Case 3:

MySQL Fabric instance is stopped and restarted during multiple activities.

Case 4:

Multiple VMs are stopped and restarted. Verify client interaction with fabric.

We have around 50 similar scenarios run on MySQL Fabric releases.


Crashes and recovery of the Fabric system are real-world events that can

happen at any time. So we have verified various scenarios to see whether our product can

handle such incidents.

Thursday Jul 31, 2014

Testing MySQL repository packages by Ramana Yeruva

This is a new blog post by Ramana Yeruva on how we test the MySQL repositories. Please check it out.

[Read More]

Tuesday Apr 29, 2014

MySQL Fabric Server Farm Testing Overview

MySQL SystemQA: MySQL Fabric server farm Setup and Testing overview

MySQL Fabric is an open-source solution released by the MySQL Engineering team at Oracle. 
It is an extensible and easy to use system for managing a MySQL deployment for Sharding  
and High-availability.

High availability is generally used to describe the ability of a system to continue operating 
and provide resources to the application when a failure occurs in one or more of the 
following categories in a fault domain: hardware, software, or application.

With the high-availability feature, MySQL Fabric manages several servers in different groups
using a replication topology. At any point of failure, it makes other servers available to continue regular operations.

To test similar functionalities, we have designed a setup in JET framework which will handle multiple 
servers in multiple machines.

What we did?

We started with a Fabric setup class in the JET framework, which starts a few servers and creates groups
and shards. Initially we ran this on a single machine to verify that the setup was fine.

Later we moved to 2-3 physical machines to create up to 100 servers, 20 groups, and 15 shards.
Here we hit a failure: not all of the servers could be started on the individual machines. This
was due to the limitations of the physical machines, i.e., we cannot start more than 15-20
servers on a machine with 8 GB of RAM.

We then created a server farm using 2 physical machines and 18 Oracle VMs. There we were able to
create a MySQL Fabric setup with 230 servers, 40 groups, and 35 shards. This setup was done
in a Linux environment. We then used one Windows 7 machine and multiple MySQL versions to run
the setup.

What we have tested using this setup?

We have used the setup to run:
  • Feature testing (Fabric CLI features)
  • Failover testing (failing a server/VM and ensuring recovery)
  • Concurrent testing (running multiple activities using threads)
  • Scalability testing

Future activities:

  • Crash testing, which includes crashing the Fabric state store, multiple VMs, and multiple servers.
  • Randomized testing, which includes running randomly selected activities in multiple tests.
  • Blog posts detailing the crash and randomized testing.

Monday Apr 28, 2014

MySQL Fabric Setup using ndb Cluster

MySQL SystemQA: MySQL Fabric Setup using ndb Cluster

MySQL Fabric is an open-source solution released by the MySQL Engineering team at Oracle. It is
an extensible and easy to use system for managing a MySQL deployment for Sharding and High-availability.

To provide resilience to failures, MySQL Fabric manages servers in groups to deliver
high availability. MySQL Fabric also supports sharding, which is used to scale out large
databases. A sharding setup handles increasing demand for read loads and also handles
increasing write loads. The database/tables are sharded across different servers, where each shard
contains a fragment of the data.
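
As a toy illustration of the idea: each row's sharding key determines which fragment it lives on. MySQL Fabric actually supports RANGE and HASH sharding with its own mapping tables; the function below is only a simplified sketch of hash-based key-to-shard routing.

```python
import zlib

def shard_for(shard_key, num_shards):
    """Map a sharding key to one of num_shards fragments with a stable hash
    (CRC32 is used here only because it is deterministic across runs)."""
    return zlib.crc32(str(shard_key).encode("utf-8")) % num_shards

# Every row with the same key always lands on the same shard:
print(shard_for(1001, 4) == shard_for(1001, 4))  # True
```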

A Fabric node is a Fabric process plus a state store (which is a MySQL server). As MySQL Fabric
manages such valuable information about server farms and database scaling, using a single
machine to host the MySQL Fabric node is not a good solution. Thus we need a solution
that makes the MySQL Fabric node fail-safe.

We need to have a full-fledged fault-tolerant solution at both levels, i.e. process and state store. 
Here we will only discuss redundancy at the state store level and a full-fledged solution is 
something that will be delivered in the future.

We can use two methods to make MySQL Fabric resilient with no single point of failure: either
a MySQL Replication setup with multiple Fabric nodes in a topology, or setting up the Fabric
nodes using MySQL Cluster (NDB Cluster). As part of the SystemQA team at Oracle MySQL, we
validated both possibilities to make MySQL Fabric failure-safe.


What we did?

- Prepared an NDB Cluster setup across multiple machines.
- Set up and started MySQL Fabric on a cluster node.
- Killed/crashed the Fabric node/server and verified that the data was safe on the other cluster nodes.
- Started a Fabric node on another cluster node and ran the application.

NDB Cluster:

With its distributed, shared-nothing architecture, MySQL Cluster has been carefully designed
to deliver 99.999% availability, ensuring resilience to failures and the ability to perform
scheduled maintenance without downtime.

It provides

  • Synchronous Replication

  • Automatic Failover

  • Shared Nothing Architecture, No Single Point of Failure

  • Geographical Replication


NDB Cluster Setup in Multi Machines:

1) On the management machine, create the folders:
-bash-4.1$ ls /usr/local/mysql/
data/          mgmt_data/     mysql-cluster/

2) On the management machine, create the config file (mgmt_config.ini):
-bash-4.1$ cat /usr/local/mysql/mgmt_data/mgmt_config.ini

[ndbd default]
NoOfReplicas = 2

[mysqld default]
[ndb_mgmd default]
[tcp default]

[ndb_mgmd]
NodeId = 1
HostName = mngmt_machine_ip
#LogDestination = FILE:filename=/usr/local/mysql/log/ndb_1_cluster.log,maxsize=1000000,

[ndbd]
NodeId = 2
HostName = mngmt_machine_ip
DataDir = /..../data

[ndbd]
NodeId = 4
HostName = othernode_machine_ip
DataDir = /..../data

[mysqld]
NodeId = 3

[mysqld]
NodeId = 5

3) Create /etc/my.cnf (with a different port and socket) on both machines; among other settings it should contain:

default-storage-engine = ndb

4) Start the management node process on the management machine:

/..../mysql-cluster-binary/bin/ndb_mgmd --config-file=/usr/local/mysql/mgmt_data/mgmt_config.ini --initial
/..../mysql-cluster-binary/bin/ndbd --initial

5) On each of the data (NDB) nodes, run this command to start ndbd, only for the first time:

/..../mysql-cluster-binary/bin/ndbd --initial -c mngmt_ip:port
On all nodes, run ps -ef | grep ndbd and check that the nodes are running.

#Note that it is very important to use the --initial parameter only when starting ndbd for the first time, or when restarting after a 
backup/restore operation or a configuration change. This is because the --initial option causes the node to delete any files 
created by earlier ndbd instances that are needed for recovery, including the recovery log files. 

6) Start MySQL servers on the different nodes.

7) Use the management machine to check whether the cluster is working properly:


ndb_mgm> show

Make sure that all nodes are connected.

Fabric Setup:

/usr/bin/mysqlfabric --param=storage.address=localhost:13000 --param=storage.user=root --param=protocol.xmlrpc.password=<password> manage setup

/usr/bin/mysqlfabric --param=storage.address=localhost:13000 --param=storage.user=root --param=protocol.xmlrpc.password=<password> manage start


We also verified manual recovery of the Fabric state store after a Fabric cluster node failure.

Future activities:

Crash and restart of Cluster node (fabric node) to ensure recovery of fabric state-store.


Saturday Apr 05, 2014

MySQL Fabric Testing

MySQL Fabric Testing :

MySQL Fabric is a simple and easy to use management system to handle High Availability and Sharding. The MySQL SysQA team is responsible for testing MySQL Fabric at a system level. We do:

1) Feature/Functional/Regression Testing   
2) Complex/Integration/concurrent Testing
3) Scalability testing
4) Performance/Atomic Testing
5) Fabric Recovery/Security Testing

Below are a few examples of test scenarios the team has performed to achieve a stable, robust application.

MySQL Fabric Testing Examples:

1) Multiple activities performed at the same time, e.g.:

  • Shard split

  • promote a slave

  • insert records in a shard

  • demote another group

  • remove master from the group

  • add old master to the group

2) Promote and demote 100 times
3) Scalability testing with 20 million and 50 million records, running activities to ensure Fabric does not crash
4) 1000 connections at the same time to check for crashes
5) Comparing the performance of Connector/Python and Connector/J with Fabric support against plain Connector/Python and Connector/J
6) Fabric server setup using NDB Cluster
7) DDLs on a sharded table
8) 100-500 servers on multiple machines with 50 groups and 45 shards (varies based on topology) to ensure there is no crash
9) 10+ activities in parallel to check for hangs in activities
10) 20+ activities one after another
11) Fabric test integration with mysqlutilities (failover, mysqlreplicate, etc.) (future scope)
12) Backup of a sharded table (using multi-source replication) (future scope)
13) 10 tables sharded using a single mapping, with DDLs and DMLs in each shard
14) Fabric tests with DDLs and DMLs in the backend
15) Minor up/down of servers for each master or slave (future scope)
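
Scenario 4 (many simultaneous connections) can be sketched with a thread pool. This is only an illustration: the `connect` argument is a placeholder for a real connector call (e.g. a Connector/Python connect), so the helper itself is connector-agnostic.

```python
from concurrent.futures import ThreadPoolExecutor

def open_many(connect, n=1000, workers=50):
    """Attempt n concurrent connections and report how many succeeded.

    connect: a zero-argument callable standing in for the real connect call.
    """
    def attempt(_):
        try:
            connect()
            return True
        except Exception:
            return False
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(attempt, range(n)))
```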

How We did it?

- Created/updated an automated, multi-threaded program to run the Fabric activities and transactions in parallel.

- Created Hudson jobs to run the tests regularly.

Future activities:

- Crash testing and randomized testing for the MySQL Fabric server farm.

Monday Jun 03, 2013

Testing MEB impact over MySQL Server features

MySQL SystemQA: Testing MEB impact over MySQL Server features:

The MySQL SysQA team is responsible for testing MySQL Enterprise Backup (MEB) at a system level. Along with the system testing, the SysQA team worked on MEB's impact on MySQL Server features, to ensure the new version of MySQL Server is compatible with the latest release of MEB.

What we do.

- Analyse each server feature and find out whether it impacts MEB.

- Run an automated test that integrates the server functionality with MEB. The test runs in parallel with the server feature.

- Run the feature at each stage of the MEB run and verify that it works fine.

- Run the feature at different boundary conditions and verify the backup.

How we did it.

- Created a multi-threaded program to run MEB and the server feature in parallel.

- Ran different transactions along with the server feature and ran the backup. Restored the backup and started a MySQL server with the restored data.

- Verified the data against the original data directory.
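
The multi-threaded harness can be sketched as follows. This is a simplified stand-in: in the real setup one task would invoke mysqlbackup and the other would run the server-feature transactions; here the tasks are arbitrary callables so the skeleton stays self-contained.

```python
import threading

def run_in_parallel(tasks):
    """Run each named task (a callable) in its own thread and collect results.

    tasks: list of (name, callable) pairs, e.g.
        [("backup", run_meb_backup), ("online_alter", run_alter_transactions)]
    where the callables are placeholders for the real work.
    """
    results = {}

    def worker(name, fn):
        results[name] = fn()

    threads = [threading.Thread(target=worker, args=(name, fn))
               for name, fn in tasks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```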


MySQL Server Feature: Online alter

- Created different online alter scenarios (with boundary conditions) and prepared an SQL file.

- Included it in the program.

- Modified the program's parameters to trigger the transactions at a certain point of the MEB run (run the transactions when the backup command starts copying the affected table).

- Checked whether the backup was successful.

- Restored the new data directory and started a server.

- Checked whether the transactions were reflected in the table.

Future activities:

High-load integration with the automation script to ensure MEB works with new server features.


This blog discusses the testing aspects of MySQL products: the testing we do for the different MySQL products, how we qualify the releases, some of the approaches we use, and the test tools we use.

