Wednesday Oct 28, 2015

Multiple MySQL Router setup for an application

The MySQL Router is a new building block for high availability solutions. MySQL Router simplifies development of high availability applications by intelligently routing queries to MySQL Servers for increased performance, robustness, and uptime. MySQL Router also helps with load balancing of read and write transactions to a database, with support for multiple router connections from multiple clients.

There are various use cases for multiple router connections:
- Writes to a database can be routed through one router while reads go through multiple router connections, for better performance and stability
- If clients and servers are in a specific region, we can use multiple routers in that region
- Different warehouses can be handled by different routers, depending on the load

These are a few example use cases. You can define your own use case and check whether it suits your environment. You can also use this approach in an existing application.

Multiple Router setup:
Below is a simple example of how to create a multiple-router setup for a set of warehouses.
Define the MySQL Router configuration files using the template below. Note that each router's binding port must be different.

Config1:
[DEFAULT]
logging_folder =
plugin_folder = /usr/local/lib/mysqlrouter
config_folder = /etc/mysql
runtime_folder = /var/run

[logger]
level = INFO

[routing:read_from_servers1_warehouse1]
bind_port = 7002
# define the host:port for the warehouse-1 slaves here
destinations = sl1:port1,sl2:port2,sl3:port3
mode = read-only

Config2:
[DEFAULT]
logging_folder =
plugin_folder = /usr/local/lib/mysqlrouter
config_folder = /etc/mysql
runtime_folder = /var/run

[logger]
level = INFO

[routing:read_from_servers2_warehouse2]
bind_port = 7003
# define the host:port for the warehouse-2 slaves here
destinations = sl4:port4,sl5:port5,sl6:port6
mode = read-only

Config3:
[DEFAULT]
logging_folder =
plugin_folder = /usr/local/lib/mysqlrouter
config_folder = /etc/mysql
runtime_folder = /var/run

[logger]
level = INFO

[routing:read_from_servers3_warehouse3]
bind_port = 7004
# define the host:port for the warehouse-3 slaves here
destinations = sl7:port7,sl8:port8,sl9:port9
mode = read-only

Please refer to http://dev.mysql.com/doc/mysql-router/en/mysql-router-installation.html to install MySQL Router.

You can define the log file path in each ini file. Run the routers in the background:

nohup mysqlrouter -c config1.ini &
nohup mysqlrouter -c config2.ini &
nohup mysqlrouter -c config3.ini &

This can be done on different machines, in different regions, etc., depending on the use case. The corresponding router ports can then be used in application programs to connect to the servers, so you can connect any number of clients to a router port and get the data with high availability support.
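As a sketch of how an application could pick the right router port, here is a small mapping from warehouse to bind port. The ports come from Config1-Config3 above; the helper itself and the connection arguments (host, user) are hypothetical:

```python
# bind_port values from Config1-Config3 above.
WAREHOUSE_PORTS = {
    "warehouse1": 7002,
    "warehouse2": 7003,
    "warehouse3": 7004,
}

def connect_args(warehouse, host="127.0.0.1", user="app", password=""):
    """Build connection arguments pointing at the router that serves
    the given warehouse; pass the result to mysql.connector.connect()."""
    return {"host": host,
            "port": WAREHOUSE_PORTS[warehouse],
            "user": user,
            "password": password}
```

The application never needs to know the individual slave addresses; adding or removing slaves only changes the router configuration.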

Summary:
A multiple-router setup is useful in simple to complex applications for better performance, load balancing, and stability.






MySQL Fabric Automation setup using python script

MySQL Fabric is an extensible framework for managing farms of MySQL Servers. Two primary features have been implemented - High Availability (HA) and scaling out using data sharding. These features can be used in isolation or in combination.

MySQL Fabric setup is a process in which you need to run different commands in sequence and keep the corresponding logs for future reference. This blog describes an easy way to do the initial setup of a Fabric system using a config file. You define a properties file as per your requirements and run a Python script similar to the one below, which performs the setup.
Note: This Fabric setup script is for release 1.5.6 or earlier, and it has been tested in our test environment.

Preconditions:
- Fabric manage setup and manage start have already been run.
- The database is created, or the shard servers already have the database present.

Properties file: Create a properties file and save it as ConfigFile.properties

[DatabaseSection]
database.dbname=oltp
database.user=root
database.password=
[groups]
groups.names=My_group1,My_group2,My_group3,My_group4,My_group5,My_group6
groups.no=6
[servers]
servers.name=localhost:13001,localhost:13002,localhost:13003,localhost:13004,localhost:13005,localhost:13006,localhost:13007,localhost:13008,localhost:13009,localhost:13010,localhost:13011,localhost:13012
servers.eachgroup=2
[shards]
shards.table=oltp.oltp
shards.row=emp_id
shards.range=10000
shards.shardingtype=RANGE
#shards.shardingtype=HASH,RANGE_DATE,RANGE_STRING
shards.globalgroup=My_group5
shards.groups=My_group1,My_group2,My_group3,My_group4

Python Script:

#!/usr/bin/python
import ConfigParser
import commands
import os
import sys

config = ConfigParser.RawConfigParser()
config.read('ConfigFile.properties')
print config.items('DatabaseSection')
print config.get('DatabaseSection', 'database.dbname')
group_ids = config.get('groups', 'groups.names').split(",")
server_ids = config.get('servers', 'servers.name').split(",")
print server_ids

##### Log the server and group details #####
# A single file handle is used so the group list does not overwrite the servers.
with open('serveroutput.properties', 'w') as output:
    for server in server_ids:
        output.write(server + "\n")
        print server
    for group in group_ids:
        output.write(group + "\n")
        print group

######## Verify that Fabric is running ########
if sys.platform.startswith('linux'):
    processes = commands.getoutput('ps -A')
    if 'mysqlfabric' in processes:
        print "ITS ALIVE!!!"
    else:
        assert False, "Fabric is not running"
elif sys.platform.startswith('win'):
    processes = commands.getoutput('tasklist')
    if 'mysqlfabric' in processes:
        print "ITS ALIVE!!!"
    else:
        assert False, "Fabric is not running"

######## Add groups ########
serversingroup = int(config.get('servers', 'servers.eachgroup'))
print serversingroup
print config.get('groups', 'groups.no')
for group_id in group_ids:
    # Uncomment to actually run the command (os.system returns 0 on success):
    # assert os.system("mysqlfabric create group " + group_id) == 0, "fabric command failed"
    print ("mysqlfabric create group " + group_id)

######## Add servers to the groups ########
server_id = 0
for group_id in group_ids:
    for _ in range(serversingroup):
        # assert os.system("mysqlfabric add " + group_id + " " + server_ids[server_id]) == 0, "fabric command failed"
        print ("mysqlfabric add " + group_id + " " + server_ids[server_id])
        server_id = server_id + 1

######## Sharding setup ########
print ("mysqlfabric sharding create_definition " + config.get('shards', 'shards.shardingtype')
       + " " + config.get('shards', 'shards.globalgroup'))
print ("mysqlfabric sharding add_table 1 " + config.get('shards', 'shards.table')
       + " " + config.get('shards', 'shards.row'))
shard_grps = config.get('shards', 'shards.groups').split(",")
shard_list = []
if config.get('shards', 'shards.shardingtype') == "RANGE":
    shard_range = int(config.get('shards', 'shards.range'))
    lower_bound = 1
    for grp in shard_grps:
        shard_list.append(grp + "/" + str(lower_bound))
        lower_bound = lower_bound + shard_range
str_list = "[{}]".format(",".join(map(repr, shard_list)))
print "mysqlfabric sharding add_shard 1 " + str_list + " --state=ENABLED"
# assert os.system("mysqlfabric sharding prune_shard " + config.get('shards', 'shards.table')) == 0, "fabric command failed"

Run the script as python Fabric_pythonSetup.py

Note: Please try it in your test environment and modify it if required.

Summary:
This code is just a demo of Fabric setup. You can refer to it to create your own script for a more complex setup and log the outputs to a log file.

Monday Oct 26, 2015

Concurrent connections to a MySQL Router from Connector/Python

An application typically needs concurrent connections to MySQL servers, and always requires a highly available system for a lossless data environment.

The MySQL Router is a new building block for high availability solutions. MySQL Router simplifies development of high availability applications by intelligently routing queries to MySQL Servers for increased performance, robustness, and uptime. MySQL Router helps support HA and concurrent connections to servers without any change in application-level code, whether the servers are in a simple topology or in a Fabric-distributed setup.

In a practical application environment, data is fetched from the read-only servers while data is written to the read-write servers, so Router is designed to support this kind of server setup. It is also required to connect to the next available server when there is a failover, and to provide proper logs to the DBA about the failover.

Round-robin mode: read-only routing uses a simple round-robin method to go through the list of MySQL Servers. If one server is not available, the next server is tried. When no server is available, the routing is aborted. MySQL Servers that could not be contacted are quarantined; they are checked regularly to see whether they are available again, and when they are, they are put back into the list of available servers.

Read-write mode: routing tries the first MySQL Server. When it cannot connect, it tries the next in the list; if the list runs out, the routing is aborted. The first server contacted successfully is saved in memory as the first to try for subsequent incoming connections. Note, however, that this state is not persisted: on the next restart, Router tries the list in the order it was configured in the configuration file.
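The round-robin behaviour described above can be sketched roughly as follows. This is a simplified model for illustration, not Router's actual implementation:

```python
class RoundRobinDestinations:
    """Simplified model of read-only round-robin routing with quarantine."""

    def __init__(self, servers):
        self.servers = list(servers)   # e.g. ["sl1:3306", "sl2:3306"]
        self.quarantined = set()
        self.index = 0

    def next_server(self):
        """Return the next available server, skipping quarantined ones."""
        for _ in range(len(self.servers)):
            server = self.servers[self.index]
            self.index = (self.index + 1) % len(self.servers)
            if server not in self.quarantined:
                return server
        return None  # no server available: the routing is aborted

    def mark_unreachable(self, server):
        self.quarantined.add(server)

    def mark_recovered(self, server):
        # A background check puts recovered servers back in rotation.
        self.quarantined.discard(server)
```

A quarantined server simply stops being handed out; once the regular health check marks it recovered, it rejoins the rotation in its original position.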

Concurrent connection setup using MySQL Router:
Please refer to http://dev.mysql.com/doc/mysql-router/en/mysql-router-installation.html to install MySQL Router.

Define the server setup for your application. For example:
-> You can have a one-master, multiple-slave topology (m1 -> sl1,sl2,sl3)
-> You can have a multiple-master setup where the application connects to each server for different data (server1 -> sl1,sl2,sl3; server2 -> sl4,sl5,sl6; server3 -> sl7,sl8,sl9)

Define a configuration file for router as below.

[DEFAULT]
logging_folder =
plugin_folder = /usr/local/lib/mysqlrouter
config_folder = /etc/mysql
runtime_folder = /var/run

[logger]
level = INFO

[routing:basic_failover1]
# To be more transparent, use MySQL Server port 3306
bind_port = 7011
mode = read-write
# define the host:port for the masters here
destinations = m1:3306

[routing:basic_failover2]
bind_port = 7012
mode = read-write
destinations = m2:3306

[routing:basic_failover3]
bind_port = 7013
mode = read-write
destinations = m3:3306

Masters are configured to write data from the application.

[routing:read_from_slaves]
bind_port = 7002
# define the host:port for the slaves here
destinations = sl1:port1,sl2:port2,sl3:port3
mode = read-only

[routing:read_from_slaves2]
bind_port = 7003
destinations = sl4:port4,sl5:port5,sl6:port6
mode = read-only

[routing:read_from_slaves3]
bind_port = 7004
destinations = sl7:port7,sl8:port8,sl9:port9
mode = read-only

We can use port 7002 to read from the first topology, and ports 7003 and 7004 to read from the second and third topologies respectively.

Run Router as mysqlrouter -c /path/to/your/mysqlrouter.ini

How can we configure it in Connector/Python?

It is very simple to configure a Connector/Python job for a corresponding routing connection: if you are passing a host and port to Connector/Python, pass the corresponding binding port of the router.

Example:

    connection = mysql.connector.Connect(host=options.hostname, user="root",
                                         passwd="", port=options.portvalue)  # here port value is 7002, 7003, or 7004
    cursor = connection.cursor()
    cursor.execute("use test")
    cursor.execute("select * from t1")
    rows = cursor.fetchall()
    for row in rows:
        print row[0]

It connects to an available slave from the destinations, using a round-robin process to pick the slave server: the first client connection goes to the first server, the second to the second, and so on. Once the list is exhausted, it starts again from the first one. At a failover, if the first server is not available, it moves on to the second, third, and so on.

It is up to the application developer to add the corresponding router binding information to get a connection to the required server. In the case of masters, Router always connects to one server to write the data; if there is a failover, the next master server is used for the connection. The application developer can run concurrent connections using threads to read data from slaves through the same router port, which increases the performance of data extraction.
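Such threaded reads can be sketched as below. This is a hypothetical helper: `connect` stands in for any callable returning a DB-API connection through the router port (for example a `functools.partial` around `mysql.connector.connect` with the router's bind port):

```python
import threading

def read_worker(connect, query, results, idx):
    # Each thread opens its own connection through the router port;
    # the router spreads these connections across the slaves.
    conn = connect()
    cursor = conn.cursor()
    cursor.execute(query)
    results[idx] = cursor.fetchall()
    conn.close()

def concurrent_reads(connect, query, n_threads=4):
    """Run the same read query over n_threads parallel connections."""
    results = [None] * n_threads
    threads = [threading.Thread(target=read_worker,
                                args=(connect, query, results, i))
               for i in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Because each thread connects through the same router port, the round-robin routing distributes the read load across the slaves without the application knowing their addresses.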

Note: As part of testing we have verified 1000 concurrent connections using MySQL Router.

Summary:

MySQL Router is easy to configure and easy to use, and it helps provide high availability and load balancing for an application.


MySQL Fabric integration with Router

The MySQL Router is a new building block for high availability solutions. MySQL Router simplifies development of high availability applications by intelligently routing queries to MySQL Servers for increased performance, robustness, and uptime.

MySQL Router also connects seamlessly with MySQL Fabric, permitting Fabric to store and manage the high availability groups for routing, and making it simpler to manage groups of MySQL Servers for redundancy and continued operation. You do not need specific Fabric-aware connectors to do so; you can use the generic MySQL Connectors for this purpose.

Please refer to http://dev.mysql.com/doc/mysql-router/en/mysql-router-installation.html to install MySQL Router.
Create a fabric setup using https://dev.mysql.com/doc/mysql-utilities/1.4/en/fabric-setup.html

How to integrate Router with MySQL Fabric:
For example, you can have one MySQL Fabric state store with 6 groups, each group having 3 servers (1 master and 2 slaves), and 3 shards.

Define a configuration file for Router.


[DEFAULT]
logging_folder =
plugin_folder =
config_folder =
runtime_folder =

[logger]
level = INFO

[fabric_cache]
address = localhost:32275
user = admin

[routing:b]
bind_address = 127.0.0.1:7002
destinations = fabric+cache:///group/my_group1
mode = read-write

This will connect only to the master of the group. If there is a failover, the new master is selected and connected to.

[routing:d]
bind_address = 127.0.0.1:7003
destinations = fabric+cache:///group/my_group1?allow_primary_reads=no
mode = read-only

This will connect to the slaves of the group in a round-robin manner. So in the application you only need to connect to the routing port, choosing it according to the read-only or read-write requirement.
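Since the application only chooses a routing port, a simple statement-based splitter is enough to direct traffic. The addresses below come from the bind_address values in the configuration above; the splitter function itself is a hypothetical sketch:

```python
# bind_address values from the router configuration above.
READ_WRITE_ADDR = ("127.0.0.1", 7002)  # routing:b -> group master
READ_ONLY_ADDR = ("127.0.0.1", 7003)   # routing:d -> group slaves

def route_for(statement):
    """Pick the router endpoint based on the statement type:
    SELECTs can go to the read-only route, everything else to the master."""
    first_word = statement.lstrip().split(None, 1)[0].upper()
    return READ_ONLY_ADDR if first_word == "SELECT" else READ_WRITE_ADDR
```

A real application might instead decide per transaction rather than per statement, but the point is the same: the read/write decision reduces to picking a router port.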

Note: In round-robin mode, read-only routing uses a simple round-robin method to go through the list of MySQL Servers. If one server is not available, the next server is tried. When no server is available, the routing is aborted. MySQL Servers that could not be contacted are quarantined; they are checked regularly to see whether they are available again, and when they are, they are put back into the list. The order is maintained when quarantined servers are put back into the list.

Summary:

MySQL Router can be used in any kind of industry, as it is easy to configure and easy to use. Very little change is required at the application level, and you gain the high availability and load balancing features.

Monday Aug 17, 2015

MySQL Fabric Setup Checklist

MySQL Fabric: Setup Checklists

MySQL Fabric is an open-source solution released by the MySQL Engineering team at Oracle. It is an extensible and easy to use system for managing a MySQL deployment for sharding and high availability.

MySQL Fabric handles multiple machines and multiple servers on different platforms. Because it is a complete distributed system, its usability, recovery, and stability matter all the more. The setup of MySQL Fabric involves server, database, group, and shard details, so we need to verify the Fabric setup using the checklists below.

Pre-Installation Checklist:

1. Install the latest Python version. Download MySQL Fabric.
2. Define the group names, MySQL servers, shards, database, user details, shard type, lower bound, etc. for the Fabric setup.
3. Update fabric.cfg.in with the corresponding details. Define the disable_authentication type in fabric.cfg.in.
4. Install Fabric and verify the version using mysqlfabric --version.
5. Verify the information in fabric.cfg.
6. Start multiple MySQL servers and create the Fabric setup as per the documentation.
7. Refer to the list of Fabric commands (mysqlfabric help commands).


Post-Installation Checklist:

Assumption: The Fabric setup has been done by referring to the documentation, and groups and shards have been created.

Note: The verifications below should first be done on a sample data set; the production data can then be used with this setup.


1. Check that the Fabric database is created in the Fabric server and the corresponding tables are created.
2. Check that the Fabric database is not accessible by any other user, and that proper grant options are in place.
3. Check that mysqlfabric manage start is running in the background.
4. Check that access for other users is restricted.
5. Run mysqlfabric manage ping to ensure Fabric is running.
6. Run mysqlfabric group lookup_groups and match the output with your design.
7. Run mysqlfabric group lookup_servers <group_id> and verify that servers are added properly to each group as per the design document.
8. Run mysqlfabric group health to check the primary and secondary servers and the read-only and read-write options.
9. Check that mysqlfabric group activate <group_id> ran for each group.
10. Run group promote and group demote and check the group health. Check the replication status.
11. Deactivate and activate a group again and check that everything is fine.
12. Remove a group and add it again.
13. Remove a server and add it again to the group.
14. Add a new group and servers to the existing Fabric setup.
15. After prune_shard, check that the expected range of data is present in the group masters.
16. Run mysqlfabric sharding lookup_table db.table and check that the table details are correct.
17. Check dump shard_tables and verify.
18. Check dump sharding_information and verify as per the design.
19. Check mysqlfabric dump shard_maps and verify as per the design.
20. Check dump shard_index and verify.
21. Check dump servers and verify the servers' links to shards.
22. Check sharding lookup_servers and verify.
23. Stop the Fabric instance using mysqlfabric manage stop. Start it again using mysqlfabric manage start. Check that everything is fine.
24. Kill the Fabric process using kill -9 and start it again. Check that everything is fine.
25. Create and insert data into a few test tables in the global group and check that they are replicated in all shard groups.
26. Insert out-of-range data into different group masters and check that the proper error message appears.
27. Add a new group and servers to the existing Fabric setup.
28. Split a shard.
29. Move a shard to another group.
30. Kill a server in a group and check that a new master is identified.



Conclusion:

As Fabric deals with a large number of servers and a lot of data, it is necessary to make a robust and proper MySQL Fabric setup in a production environment.



Wednesday Feb 25, 2015

Datatype handling/validation for MySQL Connectors using RQG

System QA took the approach of using the Random Query Generator (RQG) to generate different SQLs for the multitude of data types available and supported in MySQL.

This approach helps in easily testing the validity of the currently supported data types and any changes to them.

The following steps explain what was done and how it was accomplished.

  1. Generate the table structure and insert queries for different data types through RQG, and redirect them to a file.

    Use gendata.pl, which is part of RQG, to create the table structure (for this instance we used DATE-related data types):


    $ perl gendata.pl --dsn=dbi:mysql:host=127.0.0.1:port=3306:user=root:database=rqg --spec=ex_dt.zz

  2. Generate SQL queries using gensql.pl and redirect the output to a file.

    For the following grammar, we generated insert queries:

    query:

    insert ;

    insert:

    INSERT INTO _table(_field) VALUES (data);

    data:

    date | time | datetime | year | timestamp ;

    date:

    _date |1000-01-01 | 999-01-01| 999-00-00 |9999-12-31|10000-13-32 ;

    time:

    _time|-838:59:59 | -839:59:59 |-838:61:61 |838:59:59| 838:62:63 ;

    year:

    _year|1901|1900|2155|2156|2000;

    datetime:

    _datetime|999-00-00 00:00:-10| 9999-12-31 23:59:59|10000-12-31 23:59:59 ;

    timestamp:

    _timestamp|1970 _01_01 00:00:01 |1970-00-00 00:00:01 |2038-01-19 03:14:07 |2038-02-19 03:14:07 ;

    $ perl gensql.pl --grammar=ex_dt.yy --queries=10 --dsn=dbi:mysql:host=127.0.0.1:port=3306:user=root:database=rqg > generatedsqls.txt


    Note: One can tune --queries and the corresponding grammar to increase or decrease the generated data set.


  3. Execute the queries through a Connector application and collect the results.

    We created a sample Connector application to:
    • Open a connection (with default settings) to the MySQL Server
    • Read the SQLs generated in Step 2 (generatedsqls.txt)
    • Execute each insert query against the MySQL Server
    • Retrieve the inserted data using a SELECT operation
    • Capture the output/response of the SELECT operation
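The file-reading and result-comparison parts of such an application can be sketched as below. These are hypothetical helpers (the real application also opens a connector session and executes each statement, which is omitted here):

```python
def read_statements(path):
    """Read one SQL statement per line from the RQG-generated file,
    skipping blank lines (e.g. generatedsqls.txt from Step 2)."""
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

def diff_results(connector_rows, client_rows):
    """Step 5: pair up rows retrieved via the connector and via the
    MySQL client, and report any that were handled differently."""
    return [(c, m) for c, m in zip(connector_rows, client_rows) if c != m]
```

Any pair reported by `diff_results` points to a difference in how the connector and the client handle that datatype/value, such as the '999-00-00' DATE example shown later.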

  4. Perform same operation(s) as in Step 3 with MySQL Client

  5. Compare the output captured in Step 3 and Step 4.

    If there is any difference in the results, the analysis clearly points to a difference in how the connectors handle the datatype/data.
    So far, System QA has tried this with MySQL Connector/Python and MySQL Connector/Java and identified some differences.

    For instance, with the DATE type the following difference is noticed:
    Data inserted: '999-00-00'
    Data retrieved from MySQL Client: '0999-00-00'
    Data retrieved from MySQL C/Java: '0998-11-30'



Enhancements/Variations:

We can try this with different connectors (MySQL Connector/ODBC, MySQL Connector/C, MySQL Connector/C++, MySQL Connector/Java, MySQL Connector/Python).

The same approach can easily be expanded to other data types or operations using the features in RQG, if there are any changes to data types in MySQL Server.


References:

1 http://dev.mysql.com/doc/refman/5.6/en/data-types.html

2 http://dev.mysql.com/doc/connectors/en/index.html

3 https://github.com/RQG/RQG-Documentation/wiki/RandomQueryGeneratorQuickStart


Authors:

Amrutha Rao and Madhusudhan Joshi
System QA Team

Sunday Feb 01, 2015

MySQL Fabric Crash Testing

MySQL SystemQA: MySQL Fabric Crash Testing

MySQL Fabric is an open-source solution released by the MySQL Engineering team at Oracle. It is an extensible and easy to use system for managing a MySQL deployment for sharding and high availability.

MySQL Fabric handles multiple machines and multiple servers on different platforms. Because it is a complete distributed system, its usability, recovery, and stability matter all the more. So we designed a setup in a Java framework to test various crash features.


What is a Crash in MySQL Fabric?

A crash here mainly means a destroy, kill, stop, or disconnect of some component.

We define crash points at various places:

  • Kill a server using SIGKILL

  • Stop a server in the normal way

  • Reboot a server with background activities

  • Crash a machine/VM

  • Reboot a machine/VM

  • Network cable unplug on a machine

  • Network drop between machines.


MySQL Fabric is tested using these real-world scenarios to ensure the stability and fault tolerance of the product. We also need to ensure minimal loss of data on recovery.

The setup for this testing consists of 2 physical machines and 10 Oracle VMs, with a MySQL Fabric setup of 150 servers, 35 groups, and 30 shards. This setup was done in Linux and Windows environments.


What has been tested using this setup?

With this large setup, the various crash points are defined in different test cases and run in the Java framework. Here are a few example test scenarios:


Case 1:

One of the VMs is rebooted. Verify that the group topology is running fine and sharding is still active.

Case 2:

Kill all servers of different groups and roll back to the initial setup. Verify the stability of the Fabric topology.

Case 3:

The MySQL Fabric instance is stopped and restarted during multiple activities.

Case 4:

Multiple VMs are stopped and restarted. Verify client interaction with Fabric.

We run around 50 similar scenarios on MySQL Fabric releases.

Conclusion:

The crashing and recovery of a Fabric system is a real-world scenario that can happen at any time. So we verified various scenarios to see whether our product can handle such incidents.


Thursday Jul 31, 2014

Testing MySQL repository packages by Ramana Yeruva

This is a new blog post by Ramana Yeruva on how we test the MySQL repositories. Please check it out:

  http://mysqlrelease.com/2014/07/testing-mysql-repository-packages-how-we-make-sure-they-work-for-you/


Tuesday Apr 29, 2014

MySQL Fabric Server Farm Testing Overview

MySQL SystemQA: MySQL Fabric server farm Setup and Testing overview


MySQL Fabric is an open-source solution released by the MySQL Engineering team at Oracle. 
It is an extensible and easy to use system for managing a MySQL deployment for Sharding  
and High-availability.

High availability is generally used to describe the ability of a system to continue operating 
and provide resources to the application when a failure occurs in one or more of the 
following categories in a fault domain: hardware, software, or application.

With the high availability feature, MySQL Fabric manages several servers in different groups using a replication topology. At any point of failure, it makes other servers available to continue the regular activities.



To test these functionalities, we designed a setup in the JET framework which handles multiple servers on multiple machines.

What we did:

We started with a Fabric setup class in the JET framework, which starts a few servers and creates groups and shards. Initially we ran this on a single machine to verify that the setup was fine.

Later we moved to 2-3 physical machines to create up to 100 servers, 20 groups, and 15 shards. We encountered a failure: the servers could not all be started on the individual machines. This was a limitation of the physical machines, i.e. we cannot start more than 15-20 servers on an 8 GB RAM machine.

We then created a server farm using 2 physical machines and 18 Oracle VMs. There we were able to create a MySQL Fabric setup with 230 servers, 40 groups, and 35 shards. This setup was done in a Linux environment. We then used one Windows 7 machine and multiple MySQL versions to run the setup.

What have we tested using this setup?

We have used the setup to run:
  • Feature testing (Fabric CLI features)
  • Failover testing (failing a server/VM and ensuring recovery)
  • Concurrent testing (running multiple activities using threads)
  • Scalability testing

Future activities:

  • Crash testing, including crashing the Fabric state store, multiple VMs, and multiple servers
  • Randomized testing, running randomly selected activities in multiple tests
  • Blog posts about the details of crash and randomized testing

Monday Apr 28, 2014

MySQL Fabric Setup using NDB Cluster

MySQL SystemQA: MySQL Fabric Setup using NDB Cluster


MySQL Fabric is an open-source solution released by the MySQL Engineering team at Oracle. It is 
an extensible and easy to use system for managing a MySQL deployment for Sharding  and
High-availability.

To provide resilience to failures, MySQL Fabric manages servers in groups and deploys high availability. MySQL Fabric also supports sharding, which is used to scale out large databases. A sharding setup handles increasing read loads as well as increasing write loads: the database/tables are sharded across different servers, where each shard contains a fragment of the data.




A Fabric node is the Fabric process plus the state store (which is a MySQL Server). As MySQL Fabric manages such valuable information for server farms and database scaling, using a single machine to host the MySQL Fabric node is not a good solution. Thus we need a solution that makes the MySQL Fabric node fail-safe.

We need a full-fledged fault-tolerant solution at both levels, i.e. the process and the state store. Here we will only discuss redundancy at the state store level; a full-fledged solution is something that will be delivered in the future.

We can use two methods to make MySQL Fabric resilient with no single point of failure: EITHER a MySQL Replication setup with multiple Fabric nodes in a topology, OR setting up Fabric nodes using MySQL Cluster (NDB Cluster). As part of the SystemQA team at Oracle MySQL, we validated both possibilities to make MySQL Fabric failure-safe.

What we did:

- Prepared an NDB Cluster setup on multiple machines.
- Set up and started MySQL Fabric on a cluster node.
- Killed/crashed the Fabric node/server and verified that the data is safe on the other cluster nodes.
- Started the Fabric node on another cluster node and ran the application.

NDB Cluster:

With its distributed, shared-nothing architecture, MySQL Cluster has been carefully designed to deliver 99.999% availability, ensuring resilience to failures and the ability to perform scheduled maintenance without downtime.

It provides

  • Synchronous Replication

  • Automatic Failover

  • Shared Nothing Architecture, No Single Point of Failure

  • Geographical Replication

Reference: https://www.mysql.com/products/cluster/availability.html


NDB Cluster Setup in Multi Machines:

1) On the management machine, create the folders:
-bash-4.1$ ls /usr/local/mysql/
data/          mgmt_data/     mysql-cluster/

2) On the management machine, create the ini file:
-bash-4.1$ ls /usr/local/mysql/mgmt_data/
mgmt_config.ini

--------------------------------- 
[ndbd default] 
NoOfReplicas = 2 

[mysqld default] 
[ndb_mgmd default] 
[tcp default] 

[ndb_mgmd] 
NodeId = 1 
HostName = mngmt_machine_ip 
#LogDestination = FILE:filename=/usr/local/mysql/log/ndb_1_cluster.log,maxsize=1000000,maxfiles=6
 

[ndbd] 
NodeId = 2 
HostName = mngmt_machine_ip 
DataDir = /..../data 

[ndbd] 
NodeId = 4 
HostName = other_node_machine_ip
DataDir = /..../data 

[mysqld] 
NodeId = 3 
[mysqld] 
NodeId = 5 

--------------------------------------- 
3) Create /etc/my.cnf (with a different port and socket) on both machines:

[MYSQLD] 
ndbcluster 
ndb-connectstring=mngmt_machine_ip 


[MYSQL_CLUSTER] 
ndb-connectstring=mngmt_machine_ip 

[client] 
port=13000 
socket=/tmp/mysql13000.sock 

[mysqld] 
port=13000 
socket=/tmp/mysql13000.sock 
key_buffer_size=16M 
max_allowed_packet=8M 
default-storage-engine = ndb 

[mysqldump] 
quick 

4) Start the MGM node process on the management machine:

/..../mysql-cluster-binary/bin/ndb_mgmd --config-file=/usr/local/mysql/mgmt_data/mgmt_config.ini --initial
/..../mysql-cluster-binary/bin/ndbd --initial

5) On each of the DATA (NDB) nodes, run this command to start ndbd only for the first time: 

/..../mysql-cluster-binary/bin/ndbd --initial -c mngmt_ip:port
 
On all nodes, run ps -ef | grep ndbd to verify that the ndbd processes are running.

Note: it is very important to use the --initial parameter only when starting ndbd for the first time, or when restarting after a backup/restore operation or a configuration change. This is because the --initial option causes the node to delete any files created by earlier ndbd instances that are needed for recovery, including the recovery log files.

6) Start the MySQL servers on the different nodes.

7) Use the management machine to check whether the cluster is working properly:

ndb_mgm 

ndb_mgm> show 

Make sure that all nodes are connected.
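For reference, a healthy cluster matching the example configuration above reports all nodes as connected; the output below is illustrative only (node IDs follow the sample mgmt_config.ini, and the addresses are placeholders):

```shell
# On the management machine, run the show command non-interactively.
# The commented output is an illustrative sketch for the two-data-node example.
ndb_mgm -e show
# Cluster Configuration
# ---------------------
# [ndbd(NDB)]     2 node(s)
# id=2    @<mngmt_machine_ip>   (Nodegroup: 0, *)
# id=4    @<other_node_ip>      (Nodegroup: 0)
#
# [ndb_mgmd(MGM)] 1 node(s)
# id=1    @<mngmt_machine_ip>
#
# [mysqld(API)]   2 node(s)
# id=3    @<mngmt_machine_ip>
# id=5    @<other_node_ip>
```

If any data node shows as "not connected", recheck its connect string and restart it.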

Fabric Setup:

/usr/bin/mysqlfabric --param=storage.address=localhost:13000 --param=storage.user=root --param=protocol.xmlrpc.password=<password> manage setup

/usr/bin/mysqlfabric --param=storage.address=localhost:13000 --param=storage.user=root --param=protocol.xmlrpc.password=<password> manage start
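Once the Fabric server is up, a high-availability group can be defined over the cluster's MySQL servers. A minimal sketch follows; the group name "my_group" and the host:port pairs are illustrative placeholders, not values from the setup above:

```shell
# Create an HA group and register the MySQL servers running on the cluster nodes.
# "my_group" and the host:port pairs are illustrative placeholders.
mysqlfabric group create my_group
mysqlfabric group add my_group node1:13000
mysqlfabric group add my_group node2:13000

# Promote one server to primary; Fabric selects a candidate automatically.
mysqlfabric group promote my_group

# Verify which servers are in the group and which one is the primary.
mysqlfabric group lookup_servers my_group
```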

Limitations:

Manual recovery of the Fabric state store is required if a Fabric cluster node fails.

Future activities:

Crash and restart the cluster node (Fabric node) to ensure recovery of the Fabric state store.

 

Saturday Apr 05, 2014

MySQL Fabric Testing

MySQL Fabric is a simple and easy-to-use management system for handling high availability and sharding. The MySQL SysQA team is responsible for testing MySQL Fabric at a system level. We do:

1) Feature/Functional/Regression Testing   
2) Complex/Integration/concurrent Testing
3) Scalability testing
4) Performance/Atomic Testing
5) Fabric Recovery/Security Testing

Below are a few example test scenarios that the team performed to ensure a stable, robust application.

MySQL Fabric Testing Examples:

1) Multiple activities performed at the same time, e.g.:

  • Shard split

  • Promote a slave

  • Insert records into a shard

  • Demote another group

  • Remove the master from the group

  • Add the old master back to the group

2) Promote and demote 100 times.
3) Scalability testing with 20 million and 50 million records, running activities to ensure Fabric does not crash.
4) 1000 connections at the same time to check for crashes.
5) Comparing the performance of Connector/Python and Connector/J with Fabric support against plain Connector/Python and Connector/J.
6) Fabric server setup using NDB Cluster.
7) DDLs on a sharded table.
8) 100-500 servers on multiple machines with 50 groups and 45 shards (will change based on topology) to ensure there is no crash.
9) 10+ activities in parallel to check for hangs in activities.
10) 20+ activities one after another.
11) Fabric tests integrated with MySQL Utilities (failover, mysqlreplicate, etc.) (future scope).
12) Backup of a sharded table (using multi-source replication) (future scope).
13) 10 tables sharded using a single mapping, with DDLs and DMLs in each shard.
14) Fabric tests with DDLs and DMLs in the backend.
15) Minor up/down of servers for each master or slave (future scope).
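Scenario 2 above, for example, can be driven by a small shell loop; the group name "my_group" is an illustrative placeholder:

```shell
# Repeatedly promote and demote a Fabric HA group to stress failover handling.
# "my_group" is an illustrative group name; adjust to your setup.
for i in $(seq 1 100); do
    mysqlfabric group promote my_group || { echo "promote failed at iteration $i"; exit 1; }
    mysqlfabric group demote my_group  || { echo "demote failed at iteration $i";  exit 1; }
done
echo "completed 100 promote/demote cycles"
```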

How we did it:

- Created/updated an automated, multi-threaded program to run the Fabric activities and transactions in parallel.

- Created Hudson jobs to run the tests regularly.

Future activities:

- Crash testing and randomized testing for the MySQL Fabric server farm.

Monday Jun 03, 2013

Testing MEB impact on MySQL Server features

MySQL SystemQA: Testing MEB impact on MySQL Server features:

     The MySQL SysQA team is responsible for testing MySQL Enterprise Backup (MEB) at a system level. Along with the system testing, the SysQA team worked on MEB's impact on MySQL Server features to ensure the new version of MySQL Server is compatible with the latest release of MEB.


What we do:

- Analyse each server feature and find out whether it impacts MEB.

- Run an automated test that integrates the server functionality with MEB. The test runs in parallel with the server feature.

- Run the feature at each stage of the MEB run and verify that it works fine.

- Run the feature at different boundary conditions and verify the backup.

How we did it:

- Created a multi-threaded program to run MEB and the server feature in parallel.

- Ran different transactions along with the server feature, ran a backup, restored the backup, and started a MySQL server with the restored data.

- Verified the data against the original data directory.


Example:

MySQL Server feature: online ALTER

- Created different online ALTER scenarios (with boundary conditions) and prepared a SQL file.

- Included the SQL file in the program.

- Modified the program's parameters to trigger the transactions at a certain point of the MEB run (run the transactions when the backup command starts copying the impacted table).

- Checked that the backup was successful.

- Restored to a new data directory and started a server.

- Checked that the transactions were reflected in the table.
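The flow above can be sketched as a simple shell driver; the port, paths, credentials, and SQL file name are illustrative placeholders, not the team's actual harness:

```shell
# Run an online ALTER workload in parallel with a MySQL Enterprise Backup run,
# then prepare and restore the backup. Port, paths, and SQL file are placeholders.
mysql -uroot --port=13000 test < online_alter_scenarios.sql &
SQL_PID=$!

# Take the backup while the ALTER workload is in flight.
mysqlbackup --user=root --port=13000 \
            --backup-dir=/backups/meb_run backup

wait $SQL_PID   # ensure the ALTER workload has finished

# Apply the redo log so the backup is consistent, then restore it
# into a fresh data directory defined in the restore config file.
mysqlbackup --backup-dir=/backups/meb_run apply-log
mysqlbackup --defaults-file=/etc/my_restore.cnf \
            --backup-dir=/backups/meb_run copy-back
```

A server started on the restored data directory can then be checked for the transactions applied during the backup.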


Future activities:

High-load integration with the automation script to ensure MEB works with new server features.


About

This blog will discuss the testing aspects of the MySQL products, testing we do for different mysql products, how we qualify the releases, some of the approaches we use, test tools we use.
