Monday Aug 20, 2007

Installation and Configuration of the Sun Cluster Data Service for MySQL

A MySQL data service running on a clustered-server model provides orderly startup, shutdown, fault monitoring, and high availability (failover) mechanisms compared to the single-server model. It can be downloaded for free, and Sun has also released Open HA Cluster, derived from the Sun Cluster 3.2 agents, which includes the HA MySQL data service. Open HA Cluster also provides the build tools necessary to develop new features and to build and use the code. In addition, the Sun Cluster 3.2 data service for MySQL has these advantages:

  • It is easy to install and configure with straightforward GUI and command line interfaces.

  • Expanded support for Solaris Containers (Solaris zones)

  • Expanded support for SMF

  • ZFS is fully supported as a local highly available file system

The following is the OS and HW setup for the two-node cluster:

  • Solaris 10 11/06 installed on the two nodes

  • Each node has two network interfaces used as point-to-point private interconnects, and one network interface connected to the public network.

  • Two storage devices (SE6120) connected to the two nodes

Figure 1: MySQL Replication Clustered Model

In the MySQL clustered-server configuration above, the logical hostname is set as the failover IP address within the same subnet. When the MySQL resource comes online, the failover IP is plumbed on the node where the MySQL resource is running. If a failover happens, the IP address moves along with the MySQL resource to the failover node. In this example, the configuration is:



IP address        Logical Hostname

For the private network interfaces used as the cluster interconnect, the Sun Cluster installation will configure the network and assign the private network addresses. Note: do NOT configure the private network interfaces before installing Sun Cluster; otherwise, the Sun Cluster installation will fail with an error message such as:

Adapter “ce0” is already in use as a public network adapter

After the hardware and network are set up as above, the general task flow of the MySQL data service installation and configuration is:

Figure 2: MySQL Data Service Installation & Configuration Flow Chart

  • Step 1: Plan Installation

  1. Sun Cluster requires you to set aside a special file system named /globaldevices on one of the local disks of each of the two nodes, for use in managing global devices. This file system is later mounted as a cluster file system as /global/.devices/node@nodeid.


  2. (Optional) Set up the cluster environment


  3. On both nodes, update the /etc/inet/ipnodes file with all public hostnames for the cluster:

Sunfire-x64-241-02# vi /etc/inet/ipnodes
       localhost
       Sunfire-x64-241-02 loghost
       Sunfire-x64-241-03
Sunfire-x64-241-03# vi /etc/inet/ipnodes
       localhost
       Sunfire-x64-241-03 loghost
       Sunfire-x64-241-02
  4. On both nodes, add the following entries in the /etc/system file:

set ce:ce_taskq_disable=1
exclude:lofs

The first entry supports ce adapters for the private interconnect, and the second entry disables the loopback file system (LOFS).

5. The local-mac-address? variable must be set to true for Ethernet adapters. On Solaris x86, use the command: # eeprom local-mac-address?=true. On Solaris SPARC, set the local-mac-address? variable to true from the OBP ok prompt.
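On SPARC, the change can be made at the ok prompt like this (a standard OBP sequence; the reset makes the setting take effect):

    ok setenv local-mac-address? true
    ok reset-all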
  • Step 2: Sun Cluster and MySQL Data Service Installation

Solaris Cluster 3.2 (Sun Cluster, Sun Cluster Geographic Edition, and the Sun Cluster agents) is downloadable at:

On both nodes, run the installer command to bring up the installer GUI. The installation of Sun Cluster and Sun Cluster HA for MySQL is straightforward if you follow the instructions on the screen. Choose the Sun Cluster 3.2 core software and Sun Cluster HA for MySQL to install, and choose "Configure later" before the installation.

  • Step 3: Configure Sun Cluster Software

Perform this procedure from one node of the cluster to configure Sun Cluster software on all nodes of the cluster:

1. # /usr/cluster/bin/scinstall

2. From the main menu, choose option 1, "Create a new cluster or add a cluster node".

3. From the New Cluster and Cluster Node menu, choose option 1, "Create a new cluster".

4. From the Typical or Custom Mode menu, choose option 1 for Typical mode.

5. From the Cluster Nodes menu, type the node names: "Sunfire-x64-241-02" and "Sunfire-x64-241-03".

6. Provide the first and second private adapter names: ce0, bge1.

7. Type "no" for "disable automatic quorum device selection".

8. Type "yes" for "create the new cluster".

9. Type "no" for "interrupt cluster creation for sccheck errors".

At this point, the scinstall utility configures all cluster nodes and reboots the cluster. The cluster is established when all nodes have successfully booted into the cluster. The Sun Cluster installation output is logged in the /var/cluster/logs/install/scinstall.log.N file.

10. Verify cluster setup:

Sunfire-x64-241-03# clquorum list
Sunfire-x64-241-03# clnode status

=== Cluster Nodes ===

--- Node Status ---

Node Name                                       Status
---------                                       ------
Sunfire-x64-241-03                                     Online
Sunfire-x64-241-02                                     Online

When the scinstall utility finishes, the cluster is ready for you to configure the components that support highly available MySQL, including device groups and file systems.

  • Step 4: Configure Volume Manager and File System

In the shared disk set configuration used in this example, two hosts are physically connected to the same set of disks. Each node can control the shared disk set, but only one host controls it at a time; when that node fails, the other node takes exclusive access to the disks.

  1. From one node, Sunfire-x64-241-02, create one disk set for the MySQL data service, making Sunfire-x64-241-02 the primary node:

    Sunfire-x64-241-02# metaset -s mysqlset -a -h Sunfire-x64-241-02 Sunfire-x64-241-03

  2. Verify that the configuration of the disk sets is correct and visible to both nodes.

  Sunfire-x64-241-02# metaset

  Set name = mysqlset, Set number = 1
   Host                Owner
   Sunfire-x64-241-02          Yes


  3. From the primary node: Sunfire-x64-241-02, list the DID mappings:

    Sunfire-x64-241-02# cldevice show |grep Device
  === DID Device Instances ===                   
    DID Device Name:                                /dev/did/rdsk/d5
      Full Device Path:                                Sunfire-x64-241-02:/dev/rdsk/c3t60003BACCC90200046264D58000A22E3d0
      Full Device Path:                                Sunfire-x64-241-03:/dev/rdsk/c5t60003BACCC90200046264D58000A22E3d0


  4. Add /dev/did/rdsk/d5 to the MySQL disk set:

Sunfire-x64-241-02# metaset -s mysqlset -a /dev/did/rdsk/d5

  5. Verify that the configuration of the disk set is correct:

Sunfire-x64-241-02# metaset -s mysqlset

  6. On both nodes, create the /etc/lvm/ file with the following entries:

     mysqlset/d0 -m mysqlset/d10
     mysqlset/d10 1 1 /dev/did/rdsk/d5s0
     mysqlset/d1 -p mysqlset/d0 50G
     mysqlset/d2 -p mysqlset/d0 50G
     mysqlset/d3 -p mysqlset/d0 50G

7.  From Sunfire-x64-241-02, take ownership of the mysql disk set and activate the volumes:
     Sunfire-x64-241-02# cldevicegroup switch -n Sunfire-x64-241-02 mysqlset
     Sunfire-x64-241-02# metainit -s mysqlset -a
8.  Verify the status of the volumes in the disk set:
     Sunfire-x64-241-02# metastat
      Status: Okay
9. Create the cluster file systems for use by Sun Cluster HA for MySQL. From Sunfire-x64-241-02, create the file systems:
  Sunfire-x64-241-02# newfs /dev/md/mysqlset/rdsk/d1
  Sunfire-x64-241-02# newfs /dev/md/mysqlset/rdsk/d2
  Sunfire-x64-241-02# newfs /dev/md/mysqlset/rdsk/d3

10. On both nodes, create the mount-point directories for the file systems:

      # mkdir -p /global/mysql         - MySQL master and slave servers
      # mkdir -p /global/mysql-data1   - data directory for the MySQL master server
      # mkdir -p /global/mysql-data2   - data directory for the MySQL slave server

11. On both nodes, add entries to the /etc/vfstab file for the above mount points, as sketched below.
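A sketch of the vfstab entries, assuming the metadevices and mount points created above (the mount options mirror the mounted output shown in step 12):

      /dev/md/mysqlset/dsk/d1 /dev/md/mysqlset/rdsk/d1 /global/mysql       ufs 2 yes global,logging
      /dev/md/mysqlset/dsk/d2 /dev/md/mysqlset/rdsk/d2 /global/mysql-data1 ufs 2 yes global,logging,forcedirectio
      /dev/md/mysqlset/dsk/d3 /dev/md/mysqlset/rdsk/d3 /global/mysql-data2 ufs 2 yes global,logging,forcedirectio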

12. On both nodes, mount the file systems, and verify that the file systems are mounted
    # mount
     /global/mysql on /dev/md/mysqlset/dsk/d1  read/write/setuid/devices/intr/largefiles/logging/noquota/global/xattr/nodfratime/onerror=panic/dev=1544001 on Tue Aug 14 17:12:45 2007

/global/mysql-data1 on /dev/md/mysqlset/dsk/d2 read/write/setuid/devices/intr/forcedirectio/largefiles/logging/noquota/global/xattr/nodfratime/onerror=panic/dev=1544002 on Tue Aug 14 17:12:50 2007

    /global/mysql-data2 on /dev/md/mysqlset/dsk/d3 read/write/setuid/devices/intr/forcedirectio/largefiles/logging/noquota/global/xattr/nodfratime/onerror=panic/dev=1544003 on Tue Aug 14 17:12:56 2007
13. Verify the Sun Cluster installation and configuration before registering and configuring Sun Cluster HA for MySQL:
     Sunfire-x64-241-02# cluster check
  • Step 5: Configure Sun Cluster Resource for MySQL

  1. Register the SUNW.gds and SUNW.HAStoragePlus resource types:

    Sunfire-x64-241-02# scrgadm -a -t SUNW.gds

    Sunfire-x64-241-02# scrgadm -a -t SUNW.HAStoragePlus

  2. Create a MySQL resource group named MySQL-failover-resource-group:

    Sunfire-x64-241-02# scrgadm -a -g MySQL-failover-resource-group

  3. Create the HAStoragePlus resource named MySQL-has-resource in the MySQL-failover-resource-group resource group for the MySQL disk storage:

    Sunfire-x64-241-02# scrgadm -a -j MySQL-has-resource -g MySQL-failover-resource-group -t SUNW.HAStoragePlus -x FilesystemMountPoints=/global/mysql-data1,/global/mysql-data2

  4. Create a logical hostname resource named MySQL-lh-resource:

    Sunfire-x64-241-02# scrgadm -a -L -j MySQL-lh-resource -g MySQL-failover-resource-group -l

    To verify that the logical hostname resource is online, run "ifconfig -a" and check whether the virtual IP address is configured on a network interface.

  5. Enable the failover resource group including the MySQL disk storage and logical hostname resources.

    Sunfire-x64-241-02# scswitch -Z -g MySQL-failover-resource-group
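Once the resource group is brought online, a quick check of the group and its resources (the same "scstat -g" command used later in this post):

    Sunfire-x64-241-02# scstat -g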

  • Step 6: Install and Configure MySQL

  1. On both nodes, install MySQL under /usr/local/mysql, which is a symbolic link to /global/mysql.
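The symbolic link can be created on both nodes along these lines (a sketch; the install commands follow):

    # ln -s /global/mysql /usr/local/mysql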

    Sunfire-x64-241-02# mysql_install_db --datadir=/global/mysql-data1

    Sunfire-x64-241-02# chown -R root .

    Sunfire-x64-241-02# chown -R mysql /global/mysql-data1

    Sunfire-x64-241-02# chgrp -R mysql .

  2. Copy the sample "my.cnf_sample_master" and "my.cnf_sample_slave" files under /opt/SUNWscmys/etc to the MySQL data directories (mysql-data1, mysql-data2) of the MySQL master and slave servers.

  3. Modify the sample my.cnf file to point to the right directories for the data and log files. "bind-address" must be set to the logical hostname, "" in this example. Please note: bind-address needs to be set to the IP number because of an unfixed MySQL bug on the Solaris amd64 OS.
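A sketch of the relevant my.cnf settings for the master, assuming the directories used above (the bracketed values and the server-id/log names are placeholders):

    [mysqld]
    bind-address = <logical-host IP>
    datadir      = /global/mysql-data1
    log-bin      = mysql-bin
    server-id    = 1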

  • Step 7: Modify MySQL Configuration Files

  1. Go to the directory /opt/SUNWscmys and add the cluster's information to the mysql_config file:

     Sunfire-x64-241-02# vi mysql_config
    MYSQL_HOST=    -> logical hostname (IP)
    MYSQL_NIC_HOSTNAME="Sunfire-x64-241-02 Sunfire-x64-241-03"    -> physical hostnames
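The rest of mysql_config follows the same pattern; a sketch based on the values used in this setup (the bracketed values are placeholders):

    MYSQL_BASE=/usr/local/mysql
    MYSQL_USER=root
    MYSQL_PASSWD=<root password>
    MYSQL_SOCK=/tmp/<logical-host>.sock
    MYSQL_DATADIR=/global/mysql-data1
    FMUSER=fmuser
    FMPASS=<fault-monitor password>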
  2. Add the cluster's information to the ha_mysql_config file:

    Sunfire-x64-241-02# vi ha_mysql_config
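The ha_mysql_config variables tie together the resources from Step 5; a sketch using this example's names (the GDS resource name, mysql-rs, and the bracketed values are illustrative):

    RS=mysql-rs
    RG=MySQL-failover-resource-group
    PORT=3306
    LH=MySQL-lh-resource
    HAS_RS=MySQL-has-resource
    BASEDIR=/usr/local/mysql
    DATADIR=/global/mysql-data1
    MYSQLUSER=mysql
    FMUSER=fmuser
    FMPASS=<fault-monitor password>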
  • Step 8: Enable Sun Cluster HA for MySQL

  1. Start the MySQL server with the "--skip-grant-tables" option:

    Sunfire-x64-241-02# mysqld_safe --defaults-file=/global/mysql-data1/my.cnf --datadir=/global/mysql-data1 --skip-grant-tables --user=mysql &

  2. Change the password for root to "password":

    Sunfire-x64-241-02# mysql -S /tmp/ -u root
    mysql> UPDATE mysql.user SET Password=PASSWORD('password')
        -> WHERE User='root';
    mysql> FLUSH PRIVILEGES;
  3. Shut down and restart the MySQL server without the "--skip-grant-tables" option on the node where the resource group is online (check it with the "scstat -g" command), as sketched below.
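For example, reusing the start command from step 1 without the flag:

    Sunfire-x64-241-02# mysqladmin -S /tmp/ -p shutdown
    Sunfire-x64-241-02# mysqld_safe --defaults-file=/global/mysql-data1/my.cnf --datadir=/global/mysql-data1 --user=mysql &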

  4. Prepare the Sun Cluster-specific test database:

    Sunfire-x64-241-02# cd /opt/SUNWscmys
    Sunfire-x64-241-02# ./mysql_register -f /opt/SUNWscmys/util/mysql_config
    sourcing /opt/SUNWscmys/util/mysql_config and create a working copy under /opt/SUNWscmys/util/
MySQL version 5 detected on 5.10/SC3.2
Add faulmonitor user (fmuser) with password (fmuser) with Process-,Select-, Reload- and Shutdown-privileges to user table for mysql database for host Sunfire-x64-241-02
Add SUPER privilege for fmuser@Sunfire-x64-241-02
Add faulmonitor user (fmuser) with password (fmuser) with Process-,Select-, Reload- and Shutdown-privileges to user table for mysql database for host Sunfire-x64-241-03
Add SUPER privilege for fmuser@Sunfire-x64-241-03
Create test-database sc3_test_database
Grant all privileges to sc3_test_database for faultmonitor-user fmuser for host Sunfire-x64-241-02
Grant all privileges to sc3_test_database for faultmonitor-user fmuser for host Sunfire-x64-241-03
Flush all privileges
Mysql configuration for HA is done
  5. Shut down the MySQL server:

    Sunfire-x64-241-02# mysqladmin -S /tmp/ -p shutdown

  6. Register the resource:

     Sunfire-x64-241-02# cd /opt/SUNWscmys
     Sunfire-x64-241-02# ./ha_mysql_register -f /opt/SUNWscmys/util/ha_mysql_config
  7. Enable each MySQL resource:

     scswitch -e -j MySQL-has-resource
     scswitch -e -j MySQL-lh-resource
  • Step 9: Verify the Sun Cluster HA for MySQL Configuration

Once all the MySQL resources are created, configured, and online (check with the "scstat -g" command), verify that the MySQL database can successfully fail over to each node configured in the resource group (MySQL-failover-resource-group). Do this by running "scswitch" to switch the MySQL resource group to another node:

# scswitch -z -g MySQL-failover-resource-group -h Sunfire-x64-241-03

If you can successfully migrate the database to each node in the cluster, you now have a highly available MySQL database.

At this point, by following the above nine steps, I have completed the basics of deploying a highly available MySQL database. To deploy the Sun Cluster MySQL data service for maximum availability in a production environment, I highly recommend that you read through the Sun Cluster documents and the MySQL data service guides, and verify everything in a test environment in advance.

Friday May 25, 2007

DTrace Probes In MySQL


Inserting user-defined DTrace probes into the MySQL source code is very useful for identifying performance problems at the application level and in the database server. In addition, the cost of a USDT probe is basically negligible. Each probe inserted into the source can be enabled by adding code like the sketch below.
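A minimal sketch of the guard-and-fire pattern, assuming the query-execute probes defined in Step 2 below (the _ENABLED() macro lets the server skip argument setup when the probe is inactive):

    if (MYSQL_QUERY_EXECUTE_START_ENABLED())
            MYSQL_QUERY_EXECUTE_START(thd->thread_id);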







The steps to add DTrace probes into MySQL are very straightforward.


Step 1: Figure out what probes are needed to insert into the source code

This is the difficult part: it requires you to understand the MySQL implementation details. Generally, it is good to insert probes that clarify the distribution of DB response time, including processing queries, waiting on locks and latches, doing disk I/O, and receiving/sending back data. You can certainly define more probes deep in each of the MySQL engines (for example, probes to measure the cost of InnoDB sync spin waits).


Step 2: Define Provider and probes

Create a mysqlprovider.d file as:


provider mysql {
        probe query__execute__start(int);
        probe query__execute__finish(int);
};

Give the probes easy-to-understand names. The double underscore (__) is translated to a hyphen (-) in the D script file, so the above two probes are called query-execute-start and query-execute-finish.


Step 3: Define header file for probes

Create the mysqlprovider.h file along the following lines:















#ifndef _MYSQLPROVIDER_H
#define _MYSQLPROVIDER_H

#ifdef HAVE_DTRACE      /* guard name is illustrative; set when built with --enable-dtrace */

extern void __dtrace_mysql__query_execute__start(int);
extern int  __dtraceenabled_mysql__query_execute__start(void);
extern void __dtrace_mysql__query_execute__finish(int);
extern int  __dtraceenabled_mysql__query_execute__finish(void);

#define MYSQL_QUERY_EXECUTE_START(arg0) \
        __dtrace_mysql__query_execute__start(arg0)
#define MYSQL_QUERY_EXECUTE_START_ENABLED() \
        __dtraceenabled_mysql__query_execute__start()
#define MYSQL_QUERY_EXECUTE_FINISH(arg0) \
        __dtrace_mysql__query_execute__finish(arg0)
#define MYSQL_QUERY_EXECUTE_FINISH_ENABLED() \
        __dtraceenabled_mysql__query_execute__finish()

#else

/* Unless DTrace is explicitly enabled with --enable-dtrace,
 * the MYSQL macros will expand to no-ops. */
#define MYSQL_QUERY_EXECUTE_START(arg0)
#define MYSQL_QUERY_EXECUTE_START_ENABLED() (0)
#define MYSQL_QUERY_EXECUTE_FINISH(arg0)
#define MYSQL_QUERY_EXECUTE_FINISH_ENABLED() (0)

#endif  /* HAVE_DTRACE */

#endif  /* _MYSQLPROVIDER_H */


Step 4: Insert the probes into source code

You need to include the header file created for the DTrace probes before inserting the probe macros. And in order to monitor the server behavior as expected, it requires knowledge of the MySQL source code to add the probe macros in the right places, as in the sketch below.


#include <mysqlprovider.h>

/*
 * mysql_parse() dispatches to mysql_execute_command() in sql_parse.cc;
 * the placement below is a sketch, and the exact spot depends on the
 * MySQL source version.
 */
bool
mysql_execute_command(THD *thd)
{
        /* ... */
        if (MYSQL_QUERY_EXECUTE_START_ENABLED())
                MYSQL_QUERY_EXECUTE_START(thd->thread_id);

        /* ... execute the parsed statement ... */

        if (MYSQL_QUERY_EXECUTE_FINISH_ENABLED())
                MYSQL_QUERY_EXECUTE_FINISH(thd->thread_id);
        /* ... */
}

Step 5: Build MySQL with DTrace

You will need to specify "--enable-dtrace" as a configure option to make the DTrace probes available in MySQL on Solaris 10 and above. On other operating systems without the DTrace facility, the DTrace probes are disabled by default.


In the Makefile, you can compile 64-bit MySQL with the DTrace probes as below:


mysqlprovider.o: mysqlprovider.d $(mysqld_OBJECTS)
	dtrace -G -64 -s mysqlprovider.d $(mysqld_OBJECTS)
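Once mysqld is rebuilt and running, a quick way to confirm the probes registered is to list them (the USDT provider name carries the mysqld process id, hence the wildcard):

    # dtrace -l -n 'mysql*:::'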



Now, at this point, you have completed inserting the DTrace probes into MySQL, and the probes are ready to use. For example, to use the query-execute-start and query-execute-finish probes, you can write a simple D script (query-execute.d) to measure the time spent on query execution for each session.






#!/usr/sbin/dtrace -s

/* mysql*::: matches the mysql provider of any running mysqld pid */

mysql*:::query-execute-start
{
        self->init = timestamp;
}

mysql*:::query-execute-finish
/self->init/
{
        @inittime[args[0]] = sum(timestamp - self->init);
        self->init = 0;
}

END
{
        printf("Date: %Y\n", walltimestamp);
        printf("Query execution time\n");
        printa(@inittime);
}
Now, you can execute the script with dtrace -s and stop it with Ctrl-C to get the data for query execution:

# dtrace -s query-execute.d
^C
Date: 2007 May 25 19:18:59

Query execution time


      149       4542802785

      146       4577178817

      148       4586742308

      147       4602289846


Please let me know if you find this useful, along with any suggestions on which probes would be useful in the MySQL server and client applications, and where. You can contact me by email or comment on this blog.






