Setting up MySQL Cluster using Solaris Zones

Zones are one of the key features of the Solaris 10 Operating System. They provide multiple virtualized, secure runtime environments for applications without the need for dedicated system resources. Some of the more interesting Zones features, such as cloning and migration, can be used to good effect as a rollback strategy when rolling out upgrades to your applications. This article covers the steps involved in getting MySQL Cluster running in a Solaris Zones environment as if each component were running on an independent physical server. This can be useful for replicating an environment in house without needing multiple physical systems. Since everything runs on the same physical system, it is not meant for production deployment and may not be a supported configuration from MySQL. The intent is to provide an environment that simulates a MySQL Cluster without the need for multiple physical systems.

Before we look at the steps involved (install, configure, test) in detail, see the diagram below to understand the deployment topology. The main components of MySQL Cluster (the MGM Node, SQL Node, and Data Nodes) are each deployed in a dedicated zone. A typical production deployment would run these components on dedicated physical systems.

Introduction to the terms used:
Zone or Non-global Zone: A zone is a virtualized operating system environment created within a single instance of the Solaris Operating System.
Global Zone: The global zone is both the default zone for the system and the zone used for system-wide administrative control.
MGM Node: Manages (configure/start/stop, etc.) the other nodes within the MySQL Cluster.
SQL Node: Allows access to cluster data.
Data Nodes: Store cluster data.

Major steps involved in the MySQL Cluster setup in Solaris Zones:
  1. Create Solaris Zones
  2. Installing MySQL Cluster Software
  3. Configuring & Testing MySQL Cluster
1. Create Solaris Zones
An example of creating a zone from the command line is given below. When dealing with zones, we mainly use two Solaris commands: "zonecfg" to configure/modify/delete zone configurations, and "zoneadm" to install/uninstall/boot/halt zones. If you would like to speed things up and NOT type these commands for all four zones, you may skip this section and use the script provided in the "Creating multiple zones using the script" section.

1.1 Create Zone Using command line
1.2 Creating multiple zones using the script

1.1 Create Zone Using command line:

Step 1.1.1: Configure the Zone
Step 1.1.2: Install the Configured Zone
Step 1.1.3: Complete the post-install tasks
Step 1.1.4: Accessing the Zone

1.1.1 Configuring the Zone:
# zonecfg -z mysql
mysql: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:mysql> create
zonecfg:mysql> set zonepath=/zones/mysql    <== Replace this with a directory where you have space (around 200 MB for the zone, plus additional space for the applications you plan to install)
zonecfg:mysql> set autoboot=true    <== The zone will boot automatically when the system reboots
zonecfg:mysql> add net
zonecfg:mysql:net> set physical=bge0    <== Replace this with the network interface name on your system (use "ifconfig -a" to verify the interface name)
zonecfg:mysql:net> set address=10.12.161.101/24    <== Replace this with an IP address on the same subnet as the global zone (/24 is the subnet mask)
zonecfg:mysql:net> end
zonecfg:mysql> verify
zonecfg:mysql> commit
zonecfg:mysql> ^D

Verify the zone configuration status:
#zoneadm list -icv    <== You should see the newly configured zone entry plus the global zone

Step 1.1.2: Install the Configured Zone

# zoneadm -z mysql install
This step can take 15-30 minutes, depending on the speed of your hard disks and processor.

Step 1.1.3: Complete the post-install tasks 

From one terminal window, connect to the console of the newly created zone using the command below:
#zlogin -C mysql    (uppercase C)


Open a second terminal window and initiate a boot command as given below:

# zoneadm -z mysql boot 
Switch to the console terminal window and answer the first-time system configuration questions. Once the zone configuration is complete, the zone goes through a reboot; observe this from the console window.

Step 1.1.4: Accessing the Zone

The "zlogin" command is available to access a zone from the GLOBAL zone without supplying a user name or password.
#zlogin zonename    (e.g. zlogin mysql)
This command logs in to the zone named "mysql" without the root password; you need to be the "root" user in the GLOBAL zone to run it. Other access methods such as telnet, ssh, and rlogin work as well. Run the zonename command to verify whether you are in the global zone or a non-global zone:
#zonename
global                         <==== You are currently logged into the global zone (base OS)
#zonename
mysql                          <==== You are currently logged into the zone named "mysql"
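
zlogin can also run a single command inside a zone without opening an interactive session, which is handy for quick checks from the global zone. For example (illustrative; any command available inside the zone will do):

global# zlogin mysql zonename
mysql                          <==== The command runs inside the zone and its output is returned to the global zone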

1.2 Creating multiple zones using the script
The Solaris Containers and Resource Manager Guide has a script to create multiple zones; it can be found here. Save the script into a file called "zone-create.sh" and run it to create the 4 zones needed for MySQL Cluster.
usage: ./zone-create.sh <#-of-zones> <zonename-prefix> <basedir>
-bash-3.00# ./zone-create.sh 4 mysql /export/home            (Make sure there is no trailing / at the end of the basedir argument)

configuring mysql1
Truncated output
.........................
........................
booting mysql4

The above step will create & boot the four zones.
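
For reference, the guide's script is essentially a loop over "zonecfg" and "zoneadm". The sketch below is illustrative only and assumes the same arguments as above; the actual script in the guide also generates a sysidcfg file and sets the encrypted root password, which this sketch omits (hence the password-reset step later in this section):

#!/bin/sh
# zone-create.sh (sketch): create, install, and boot <#-of-zones> zones
# named <zonename-prefix>1..N under <basedir>.
NUM=$1; PREFIX=$2; BASEDIR=$3
i=1
while [ $i -le $NUM ]; do
    zone=${PREFIX}${i}
    echo "configuring $zone"
    zonecfg -z $zone "create; set zonepath=$BASEDIR/$zone; set autoboot=true; commit"
    echo "installing $zone"
    zoneadm -z $zone install
    echo "booting $zone"
    zoneadm -z $zone boot
    i=`expr $i + 1`
done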
Verify that all zones are configured and in the running state by using the command below:
-bash-3.00# zoneadm list -iv
  ID NAME      STATUS     PATH                     BRAND     IP
   0 global    running    /                        native    shared
   2 mysql2    running    /export/home/mysql2      native    shared
   4 mysql3    running    /export/home/mysql3      native    shared
   5 mysql4    running    /export/home/mysql4      native    shared
   6 mysql1    running    /export/home/mysql1      native    shared

Modify the zone configuration for each zone to use a network interface and IP address. By default, the zone creation script does not assign any network interface or IP address to the zones.
Follow the steps below for ALL the zones:


# zonecfg -z mysql1
zonecfg:mysql1> add net
zonecfg:mysql1:net> set physical=bge0
zonecfg:mysql1:net> set address=10.12.161.101/24
zonecfg:mysql1:net> end
zonecfg:mysql1> commit
zonecfg:mysql1> exit

# zoneadm -z mysql1 reboot
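
If you prefer not to repeat this for each zone by hand, a small loop run from the global zone can apply the same settings to all four zones. This is just a sketch; adjust "bge0" and the addresses to match your environment (they follow the .101-.104 scheme used in this article):

# Add a network interface and IP address to each zone, then reboot it
i=1
for zone in mysql1 mysql2 mysql3 mysql4; do
    zonecfg -z $zone "add net; set physical=bge0; set address=10.12.161.10${i}/24; end; commit"
    zoneadm -z $zone reboot
    i=`expr $i + 1`
done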

The next step is to reset the root password for the zones that we created using the script. The script sets the "root" user password, which is passed in encrypted form at the time of zone creation.
Remember the "zlogin" command described earlier in this section? Log in using "zlogin" and reset the root password on all the zones; see the steps below. You may also want to change the default shell for the root user to "/usr/bin/bash".

Resetting the root user password for the zones:
# zlogin mysql1
[Connected to zone 'mysql1' pts/4]
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
# hostname
mysql1
# zonename
mysql1
# passwd root
New Password:
Re-enter new Password:
passwd: password successfully changed for root
# ^D
[Connection to zone 'mysql1' pts/4 closed]
# zonename
global

Change Root User Shell:
# usermod -s /usr/bin/bash root
UX: usermod: root is currently logged in, some changes may not take effect until next login.
# ^D
Reboot the zone using the following command:
#zoneadm -z mysql1 reboot

Repeat the above steps for the remaining 3 zones. Note that you must use a unique IP address for each zone.

We now have the four zones (see below) required for MySQL Cluster, ready to use:

MySQL Cluster Component   Zone Name   IP Address
MGT Node                  mysql1      10.12.161.101
SQL Node                  mysql2      10.12.161.102
Data Node 1               mysql3      10.12.161.103
Data Node 2               mysql4      10.12.161.104
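
Optionally, you may add matching entries to /etc/hosts in the global zone and in each non-global zone so that the nodes can be referred to by name. This article uses raw IP addresses throughout, so the entries below are purely for convenience:

10.12.161.101   mysql1        # MGT Node
10.12.161.102   mysql2        # SQL Node
10.12.161.103   mysql3        # Data Node 1
10.12.161.104   mysql4        # Data Node 2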

2. Installing MySQL Cluster Software

2.1 Download & Install the MySQL Cluster Software for Solaris x64
2.2 Setup "my.cnf" configuration for MySQL Server
2.3 Verify access to the MySQL Server
2.4 Edit ".profile" for root user

2.1 Download & Install the MySQL Cluster Software for Solaris x64 (Download Link)
Transfer the downloaded binary to all the zones. You can transfer the file using ftp to each individual zone, then follow the steps below.

Important Note: We created "sparse root zones", in which the /usr file system is shared read-only with the non-global zones. Because of the way the MySQL binaries are packaged, and per the install instructions, you need to create a link "/usr/local/mysql" pointing to the directory where the MySQL binaries are located. Since /usr is read-only from a non-global zone in the sparse root configuration, we use the workaround given below. You can achieve the same result by adjusting the "mysql_install_db" script.

Ensure that we are in the global zone:
bash-3.00# zonename
global
bash-3.00# mkdir /mysql-cluster-gpl-6.2.15-solaris10-x86_64
bash-3.00# ln -s /mysql-cluster-gpl-6.2.15-solaris10-x86_64 /usr/local/mysql

Now we can proceed with the installation in the zone. Verify that you are logged in to the required zone:
bash-3.00# zonename
mysql1
bash-3.00# pwd
/

bash-3.00# ls my*                              <===== MySQL Binaries transferred under "/" directory of zone
mysql-cluster-gpl-6.2.15-solaris10-x86_64.tar.gz
bash-3.00# gzip -d mysql-cluster-gpl-6.2.15-solaris10-x86_64.tar.gz
bash-3.00# tar -xf mysql-cluster-gpl-6.2.15-solaris10-x86_64.tar
bash-3.00# ls -ltd /usr/local/my*
lrwxrwxrwx 1 root root 42 Jul 1 19:09 /usr/local/mysql -> /mysql-cluster-gpl-6.2.15-solaris10-x86_64

bash-3.00# groupadd mysql
bash-3.00# useradd -g mysql mysql
bash-3.00# chown -R mysql:mysql /mysql-cluster-gpl-6.2.15-solaris10-x86_64
bash-3.00# cd /usr/local/mysql
bash-3.00# pwd
/usr/local/mysql

bash-3.00# ./scripts/mysql_install_db --user=mysql

Installing MySQL system tables...
OK
Filling help tables...
OK
Truncated output
...............
You can test the MySQL daemon with mysql-test-run.pl
cd ./mysql-test ; perl mysql-test-run.pl

2.2 Setup "my.cnf" configuration for MySQL Server
Copy the sample "my.cnf" shipped with MySQL as the configuration file for our environment:
root@mysql1:[/] # cd /usr/local/mysql/support-files/
root@mysql1:[/usr/local/mysql/support-files] # cp my-small.cnf /etc/my.cnf

Start the MySQL server to verify the installation:
bash-3.00# ./bin/mysqld_safe --user=mysql &
[1] 26530
080701 19:24:28 mysqld_safe Logging to '/usr/local/mysql/data/mysql3.err'.
080701 19:24:28 mysqld_safe Starting mysqld daemon with databases from /usr/local/mysql/data

2.3 Verify  access to the MySQL Server
bash-3.00# ./bin/mysql
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.1.23-ndb-6.2.15-cluster-gpl MySQL Cluster Server (GPL)
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
mysql> exit
Bye

bash-3.00# pwd
/usr/local/mysql
bash-3.00# ./bin/mysqladmin shutdown

Edit ".profile" for root user
Include PATH to MySQL Binaries and Default Shell Prompt Change
PATH=$PATH:/usr/local/mysql/bin;export PATH      <=====PATH to MySQL Binaries added
PS1="\\u@\\h:[\\w] # ";export PS1                                <=====default PROMPT changes.

Repeat these steps on the other three zones before proceeding further. At the end of these steps, we will have MySQL installed in all four zones, namely "mysql1", "mysql2", "mysql3" & "mysql4".


3. Configuring & Testing MySQL Cluster
3.1 Configuring the management node
3.2 Configuring the Data and SQL Nodes
3.3 Starting and Stopping the Cluster
3.4 Testing Cluster Operation

3.1 Configuring the management node
Follow the steps below to configure the management node:

root@mysql1:[/] # mkdir /var/lib/mysql-cluster
root@mysql1:[/] # cp /usr/local/mysql/support-files/config.small.ini /var/lib/mysql-cluster/config.ini
root@mysql1:[/] # cd /var/lib/mysql-cluster/
root@mysql1:[/var/lib/mysql-cluster] # vi config.ini          <== Edit the file and make the changes shown below

[NDBD DEFAULT]
NoOfReplicas: 2
DataDir: /usr/local/mysql/data
FileSystemPath: /usr/local/mysql/data
# Data Memory, Index Memory, and String Memory
DataMemory: 600M
IndexMemory: 100M
BackupMemory: 64M
[MGM DEFAULT]
PortNumber: 1186
DataDir: /usr/local/mysql/data
[NDB_MGMD]
Id: 1
HostName: 10.12.161.101                                                         # <========== Replaced  localhost entry with IP address of MGT Node Zone IP
ArbitrationRank: 1
[mysqld]
Id: 2
HostName: 10.12.161.102                                                        # <========== Replaced  localhost entry with IP address of SQL Node Zone IP
[NDBD]
Id: 3
HostName: 10.12.161.103                                                       # <========== Replaced  localhost entry with IP address of Data Node-1 Zone IP
[NDBD]
Id: 4
HostName: 10.12.161.104                                                      # <========== Replaced  localhost entry with IP address of Data Node-2  Zone IP

Before we start the MGT Node processes, let's finish the configuration on the SQL Node & Data Nodes.

3.2 Configuring the Data and SQL Nodes


Follow the step below for the SQL Node & Data Nodes.
Edit /etc/my.cnf and add the following entries:
# Cluster Configuration/Information Regarding MGT Server
[mysqld]
ndbcluster
ndb-connectstring=10.12.161.101
[mysql_cluster]
ndb-connectstring=10.12.161.101
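
If you want to make this change in all three zones in one pass, you can edit each zone's /etc/my.cnf directly from the global zone through the zone's root path. A sketch, assuming the zonepaths under /export/home used earlier in this article:

# Append the cluster settings to /etc/my.cnf of the SQL node and both data nodes
for zone in mysql2 mysql3 mysql4; do
    cat >> /export/home/${zone}/root/etc/my.cnf <<'EOF'
# Cluster Configuration/Information Regarding MGT Server
[mysqld]
ndbcluster
ndb-connectstring=10.12.161.101
[mysql_cluster]
ndb-connectstring=10.12.161.101
EOF
done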

3.3 Starting & Stopping the Cluster
On the management node, run the commands below.
If you are not already logged in to the management zone, log in using the command below from the global zone:
global# zlogin mysql1
[Connected to zone 'mysql1' pts/4]
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
root@mysql1:[/] #cd /var/lib/mysql-cluster
root@mysql1:[/var/lib/mysql-cluster] # ndb_mgmd -f /var/lib/mysql-cluster/config.ini
root@mysql1:[/] #

On the SQL Node, run the command below.
If you are not already logged in to the SQL Node zone, log in using the command below from the global zone:
global# zlogin mysql2
[Connected to zone 'mysql2' pts/4]
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
root@mysql2:[/] # mysqld_safe --user=mysql &

On the Data Nodes, run the commands below.
If you are not already logged in to the Data Node 1 zone, log in using the command below from the global zone:
global# zlogin mysql3
[Connected to zone 'mysql3' pts/4]
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
root@mysql3:[/] # ndbd
root@mysql3:[/] #

global# zlogin mysql4
[Connected to zone 'mysql4' pts/4]
Sun Microsystems Inc. SunOS 5.10 Generic January 2005
root@mysql4:[/] # ndbd
root@mysql4:[/] #
Verify the cluster status by running the following commands from Management node:

root@mysql1:[/] # ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=3 @10.12.161.103 (mysql-5.1.23 ndb-6.2.15, Nodegroup: 0, Master)
id=4 @10.12.161.104 (mysql-5.1.23 ndb-6.2.15, Nodegroup: 0)
[ndb_mgmd(MGM)] 1 node(s)
id=1 @10.12.161.101 (mysql-5.1.23 ndb-6.2.15)

[mysqld(API)] 6 node(s)
id=2 @10.12.161.102 (mysql-5.1.23 ndb-6.2.15)
id=5 (not connected, accepting connect from 10.12.161.101)
id=6 (not connected, accepting connect from 10.12.161.101)
id=7 (not connected, accepting connect from any host)
id=8 (not connected, accepting connect from any host)
id=9 (not connected, accepting connect from any host)

ndb_mgm>
The SQL Node & both Data Nodes are reported in the configuration and connected. We can now move on to testing the cluster configuration.
Stopping the cluster: To shut down the entire cluster, use "ndb_mgm -e shutdown" from the management node. (Remember, you would rarely shut down the entire cluster in real deployment scenarios.)
root@mysql1:[/] # ndb_mgm -e shutdown
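
To bring the cluster back up after a shutdown, start the processes in the same order used above: the management node first, then both data nodes, then the SQL node. Using the zone names from this article, that means:

# On the management node (mysql1):
root@mysql1:[/] # ndb_mgmd -f /var/lib/mysql-cluster/config.ini
# On each data node (mysql3 and mysql4):
root@mysql3:[/] # ndbd
# On the SQL node (mysql2):
root@mysql2:[/] # mysqld_safe --user=mysql &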

3.4 Testing Cluster Operation
Let's connect to the cluster database. You need to access the cluster database via the "SQL Node". Log in to the SQL Node zone, connect to the database, and create a sample table.
Later, we will force a failure of one of the data nodes and see whether it affects database availability.
root@mysql2:[/] # mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.1.23-ndb-6.2.15-cluster-gpl MySQL Cluster Server (GPL)
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql>use test;
Database changed

mysql> CREATE TABLE SetupDetails
    -> (ZoneName varchar(15),
    -> Mcomponent varchar(30),
    -> IP int(15)) ENGINE=NDBCLUSTER;
Query OK, 0 rows affected (0.78 sec)
mysql> desc SetupDetails;
+------------+-------------+------+-----+---------+-------+
| Field      | Type        | Null | Key | Default | Extra |
+------------+-------------+------+-----+---------+-------+
| ZoneName   | varchar(15) | YES  |     | NULL    |       |
| Mcomponent | varchar(30) | YES  |     | NULL    |       |
| IP         | int(15)     | YES  |     | NULL    |       |
+------------+-------------+------+-----+---------+-------+
3 rows in set (0.01 sec)
mysql>
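
Optionally, to make the availability test more meaningful, you may also want to put a few rows into the table so that you can check the data itself (not just the table definition) after a node failure. For example (the values simply mirror the zone table above; since the IP column is an int, only the last octet is stored here):

mysql> INSERT INTO SetupDetails VALUES
    -> ('mysql1', 'MGT Node', 101),
    -> ('mysql2', 'SQL Node', 102),
    -> ('mysql3', 'Data Node 1', 103),
    -> ('mysql4', 'Data Node 2', 104);
mysql> SELECT * FROM SetupDetails;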

Inducing the failure of one of the data nodes
On the MGT Node, verify the cluster status before the failure:
ndb_mgm> show
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=3    @10.12.161.103  (mysql-5.1.23 ndb-6.2.15, Nodegroup: 0, Master)
id=4    @10.12.161.104  (mysql-5.1.23 ndb-6.2.15, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @10.12.161.101  (mysql-5.1.23 ndb-6.2.15)

[mysqld(API)]   6 node(s)
id=2    @10.12.161.102  (mysql-5.1.23 ndb-6.2.15)
id=5 (not connected, accepting connect from 10.12.161.101)
id=6 (not connected, accepting connect from 10.12.161.101)
id=7 (not connected, accepting connect from any host)
id=8 (not connected, accepting connect from any host)
id=9 (not connected, accepting connect from any host)

On the SQL Node, initiate a session:
root@mysql2:[/] # mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 6
Server version: 5.1.23-ndb-6.2.15-cluster-gpl MySQL Cluster Server (GPL)
Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
mysql> use test;
Database changed
mysql> desc SetupDetails;
+------------+-------------+------+-----+---------+-------+
| Field      | Type        | Null | Key | Default | Extra |
+------------+-------------+------+-----+---------+-------+
| ZoneName   | varchar(15) | YES  |     | NULL    |       |
| Mcomponent | varchar(30) | YES  |     | NULL    |       |
| IP         | int(15)     | YES  |     | NULL    |       |
+------------+-------------+------+-----+---------+-------+
3 rows in set (0.01 sec)
mysql>

Reboot one of the Data Nodes:
global# zoneadm -z mysql4 reboot
Verify the status of the cluster from the MGT Node. Note that 10.12.161.104 is in the "not connected" state:
ndb_mgm> show
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=3    @10.12.161.103  (mysql-5.1.23 ndb-6.2.15, Nodegroup: 0, Master)
id=4 (not connected, accepting connect from 10.12.161.104)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @10.12.161.101  (mysql-5.1.23 ndb-6.2.15)

[mysqld(API)]   6 node(s)
id=2    @10.12.161.102  (mysql-5.1.23 ndb-6.2.15)
id=5 (not connected, accepting connect from 10.12.161.101)
id=6 (not connected, accepting connect from 10.12.161.101)
id=7 (not connected, accepting connect from any host)
id=8 (not connected, accepting connect from any host)
id=9 (not connected, accepting connect from any host)

Did it affect our data availability? Check from the SQL Node. You will notice it does not affect the availability of the data:
mysql> desc SetupDetails;
+------------+-------------+------+-----+---------+-------+
| Field      | Type        | Null | Key | Default | Extra |
+------------+-------------+------+-----+---------+-------+
| ZoneName   | varchar(15) | YES  |     | NULL    |       |
| Mcomponent | varchar(30) | YES  |     | NULL    |       |
| IP         | int(15)     | YES  |     | NULL    |       |
+------------+-------------+------+-----+---------+-------+
3 rows in set (0.00 sec)
mysql>

What if we reboot the other remaining data node as well? Try it and observe the effect from the SQL Node.

Now you are ready to start using MySQL Cluster with zones and can simulate failures on the cluster nodes. In the previous section we saw a scenario of a failed data node while we were accessing the database from the SQL Node. Since we are running MySQL Cluster in Solaris Zones, we now have the ability to test various failure scenarios as if the nodes were on different physical systems.