MySQL SystemQA: MySQL Fabric Setup Using NDB Cluster


MySQL Fabric is an open-source solution released by the MySQL Engineering team at Oracle. It is 
an extensible and easy-to-use system for managing a MySQL deployment for sharding and 
high availability.

To provide resilience to failures, MySQL Fabric manages servers in groups and deploys 
high availability. MySQL Fabric also supports sharding, which is used to scale out large 
databases. A sharded setup handles increasing read loads as well as increasing write 
loads. The database/tables are sharded across different servers, with each shard 
containing a fragment of the data.
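For illustration, driving a sharded setup through the Fabric command line might look like the sketch below. The group names, table, and column are hypothetical placeholders (not from this setup), and the exact arguments should be checked against the mysqlfabric version in use.

```shell
# Hypothetical names throughout (my_global_group, my_shard_group, employees.emp).
# Create a RANGE shard mapping backed by a global group, map a table and its
# sharding key into it, then enable a shard group for keys starting at 1.
mysqlfabric sharding create_definition RANGE my_global_group
mysqlfabric sharding add_table 1 employees.emp emp_no
mysqlfabric sharding add_shard 1 "my_shard_group/1" --state=ENABLED
```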




A Fabric node is the Fabric process plus a state store (which is a MySQL server). Since MySQL 
Fabric manages such valuable information about server farms and database scaling, using a single 
machine to host the MySQL Fabric node is not a good solution. Thus we need to come up 
with a solution that makes the MySQL Fabric node fail-safe.

We need a full-fledged fault-tolerant solution at both levels, i.e. the process and the state store. 
Here we will only discuss redundancy at the state store level; a full-fledged solution is 
something that will be delivered in the future.

There are two ways to make MySQL Fabric resilient with no single point of failure: either 
a MySQL Replication setup with multiple Fabric nodes in a topology, or setting up Fabric 
nodes using MySQL Cluster (NDB Cluster). As part of the SystemQA team at Oracle MySQL, 
we validated both possibilities to make MySQL Fabric failure-safe.

What we did:

- Prepared an NDB Cluster setup across multiple machines.
- Set up and started MySQL Fabric on a cluster node.
- Killed/crashed the Fabric node/server and verified that the data remained safe on the other cluster nodes.
- Started the Fabric node on another cluster node and ran the application.
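The steps above can be sketched as a shell session. The host names and flags are placeholders for the actual setup described in the rest of this post, not literal commands from our test runs.

```shell
# Sketch of the failover validation flow (nodeA/nodeB are placeholder hosts).
# Start Fabric on cluster node A; its state store lives in the ndb cluster.
ssh nodeA 'mysqlfabric manage start --daemonize'
# Simulate a failure of the Fabric node.
ssh nodeA 'pkill -9 -f mysqlfabric'
# The state store survives in the remaining data nodes, so Fabric can be
# restarted on another SQL node of the same cluster.
ssh nodeB 'mysqlfabric manage start --daemonize'
```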

NDB Cluster:

With its distributed, shared-nothing architecture, MySQL Cluster has been carefully designed
to deliver 99.999% availability, ensuring resilience to failures and the ability to perform
scheduled maintenance without downtime.

It provides

  • Synchronous Replication

  • Automatic Failover

  • Shared Nothing Architecture, No Single Point of Failure

  • Geographical Replication

Reference: https://www.mysql.com/products/cluster/availability.html


NDB Cluster Setup on Multiple Machines:

1) On the management machine, create the folders:
-bash-4.1$ ls /usr/local/mysql/ 
data/          mgmt_data/     mysql-cluster/ 
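The layout from step 1 can be created with mkdir -p. The sketch below uses a /tmp prefix so it can be tried anywhere without root; on the management machine the actual prefix is /usr/local/mysql as shown above.

```shell
# Create the directory layout from step 1. BASEDIR is /usr/local/mysql on the
# real management machine; a /tmp prefix is used here so the sketch runs as-is.
BASEDIR=/tmp/mysql-fabric-demo
mkdir -p "$BASEDIR/data" "$BASEDIR/mgmt_data" "$BASEDIR/mysql-cluster"
ls "$BASEDIR"
```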

2) On the management machine, create the .ini file:
-bash-4.1$ ls /usr/local/mysql/mgmt_data/ 
mgmt_config.ini 

--------------------------------- 
[ndbd default] 
NoOfReplicas = 2 

[mysqld default] 
[ndb_mgmd default] 
[tcp default] 

[ndb_mgmd] 
NodeId = 1 
HostName = mngmt_machine_ip 
#LogDestination = FILE:filename=/usr/local/mysql/log/ndb_1_cluster.log,maxsize=1000000,maxfiles=6 

[ndbd] 
NodeId = 2 
HostName = mngmt_machine_ip 
DataDir = /..../data 

[ndbd] 
NodeId = 4 
HostName = othernodemachine_ip 
DataDir = /..../data 

[mysqld] 
NodeId = 3 
[mysqld] 
NodeId = 5 

--------------------------------------- 
3) On both machines, create /etc/my.cnf (with a different port and socket on each): 

[MYSQLD] 
ndbcluster 
ndb-connectstring=mngmt_machine_ip 


[MYSQL_CLUSTER] 
ndb-connectstring=mngmt_machine_ip 

[client] 
port=13000 
socket=/tmp/mysql13000.sock 

[mysqld] 
port=13000 
socket=/tmp/mysql13000.sock 
key_buffer_size=16M 
max_allowed_packet=8M 
default-storage-engine = ndb 

[mysqldump] 
quick 

4) Start the management (MGM) node process on the management machine: 

/..../mysql-cluster-binary/bin/ndb_mgmd --config-file=/usr/local/mysql/mgmt_data/mgmt_config.ini --initial 
/..../mysql-cluster-binary/bin/ndbd --initial 

5) On each of the data (NDB) nodes, run this command to start ndbd, only for the first time: 

/..../mysql-cluster-binary/bin/ndbd --initial -c mngmt_ip:port

On all nodes, run ps -ef | grep ndbd and check that the ndbd processes are running. 

#Note that it is very important to use the --initial parameter only when starting ndbd for the first time, or when restarting after a 
backup/restore operation or a configuration change. This is because the --initial option causes the node to delete any files 
created by earlier ndbd instances that are needed for recovery, including the recovery log files. 

6) Start the MySQL servers on the different nodes.
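One way to start each SQL node is mysqld_safe with the per-machine config file from step 3. The binary path below keeps the article's placeholder, so this sketch only composes and prints the command rather than executing it.

```shell
# Compose the startup command for an SQL node (the /..../ path is the
# article's placeholder for the cluster binary directory).
BINDIR="/..../mysql-cluster-binary/bin"
CMD="$BINDIR/mysqld_safe --defaults-file=/etc/my.cnf &"
echo "$CMD"
```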

7) From the management machine, check whether the cluster is working properly: 

ndb_mgm 

ndb_mgm> show 

Make sure that all nodes are connected. 

Fabric Setup:

/usr/bin/mysqlfabric --param=storage.address=localhost:13000 --param=storage.user=root --param=protocol.xmlrpc.password=<password> manage setup 

/usr/bin/mysqlfabric --param=storage.address=localhost:13000 --param=storage.user=root --param=protocol.xmlrpc.password=<password> manage start
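Once the node is up, its liveness can be checked and a first high-availability group created. This assumes the same --param connection settings as above (or a fabric.cfg with them); the group name is a placeholder.

```shell
# Check that the Fabric node answers, then create a placeholder-named HA group
# and list the groups known to the state store.
mysqlfabric manage ping
mysqlfabric group create my_group
mysqlfabric group lookup_groups
```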

Limitations:

Manual recovery of the Fabric state store is required if a Fabric cluster node fails.

Future activities:

Crash and restart of a cluster node (Fabric node) to verify recovery of the Fabric state store.

 
