Wednesday Nov 26, 2008

Deploying MySQL with Solaris Cluster

Check out the following blueprint for deploying the MySQL database with Solaris Cluster for high availability:


http://wikis.sun.com/display/BluePrints/Deploying+MySQL+Database+in+Solaris+Cluster++Environments

Tuesday Nov 04, 2008

Advantages of deploying MySQL database with Solaris Cluster


The primary advantage of deploying the MySQL database in a Solaris Cluster environment is high availability. The Solaris Cluster environment provides fault monitoring and failover capabilities not only for the MySQL software, but also for the entire infrastructure including servers, storage, interconnects, and the operating system. If any component of the entire infrastructure fails, that failure is isolated and managed independently with no impact on availability.

MySQL Master-Slave configurations deployed outside of a Solaris Cluster environment provide limited availability: if the master fails, the slave can be assigned master status and take over operation. However, this process is not automatic; it requires manual intervention by a system administrator. Solaris Cluster removes this limitation by failing over automatically when a master node fails. In addition, Solaris Cluster provides high availability for slaves as well as for masters. Because highly available slaves can be kept up to date with the masters throughout database transactions, they support the scalability of MySQL database services. Solaris Cluster also provides both failover and scalable Apache Web server instances, extending high availability coverage across the SAMP stack. Thus, MySQL Master-Slave configurations deployed in a Solaris Cluster environment become highly available in the true sense.

Solaris Cluster deployments provide additional benefits beyond high availability. The Solaris Cluster environment can simplify administration by enabling clustered systems to be managed as if they were a single system. Data services, such as the MySQL database, can be deployed in Solaris Containers, providing the benefits of consolidation (as provided by Solaris Containers) as well as high availability (as provided by Solaris Cluster). Finally, both the MySQL database and Solaris Cluster are free and open source software, helping to contain costs and provide a low-cost solution for highly available databases.

If this is of interest to you, stay tuned here for a pointer to the blueprint "Deploying MySQL Databases in Solaris Cluster Environments for Increased High Availability".

Monday Feb 25, 2008

MySQL Benchmark: UltraSPARC T2 beats Xeon on Consolidation of OLTP & Web

Recently we put together a consolidation benchmark to see how an open-source stack performs against the proprietary stack from Microsoft. Solaris, MySQL, and Sun Web Server running on the open-source UltraSPARC T2 processor were pitted against a Microsoft software stack running on a 4-socket quad-core Xeon server. This benchmark highlights the continued trend toward incorporating the open-source MySQL database, and shows how it performs under virtualization (Solaris Zones).

The Sun SPARC Enterprise T5220 (1.4 GHz UltraSPARC T2 processor), with Solaris Containers managing a consolidation of open-source software components (MySQL database and Sun Java System Web Server), provided 2.4 times better performance than the HP DL580 system (four quad-core Xeon processors) running a major virtualization product with Microsoft Windows Server 2003 EE, the Microsoft SQLserver database, and the Microsoft IIS webserver.

The Sun SPARC Enterprise T5220 using the MySQL database in Solaris Zones delivered over 3.2 times the performance of the HP DL580 using the Microsoft SQLserver database running with the leading virtualization software.

The Sun SPARC Enterprise T5220 using the Sun Java System Web Server with Solaris Zones is 53% faster than the HP DL580 using the Microsoft IIS webserver running with the leading virtualization software.

The database performance and the web performance were normalized and equally weighted for an overall performance metric.

The Sun SPARC Enterprise T5220 (1.4 GHz UltraSPARC T2 processor), including hardware and open-source software costs, has 3.7 times better price/performance than the HP DL580 system (four quad-core Xeon processors) with the Microsoft software stack. The Sun solution costs 56% less than the HP/Microsoft/virtualization configuration.

The Sun SPARC Enterprise T5220 (1.4 GHz UltraSPARC T2 processor) had 4.1 times better watts/performance than the HP DL580 (four quad-core Xeon processors). The Sun system used 73% fewer watts in total.

Sun's solution requires half the rack space of the HP DL580. Sun has an 8.1x SWaP advantage over the HP DL580 (SWaP = Perf / [Space (RU) x Watts]).
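Plugging the normalized performance, rack units, and measured watts from the tables below into that formula reproduces the advantage:

SWaP (T5220) = 2.36 / (2 RU x 480 W) ≈ 2.46e-3
SWaP (DL580) = 1.00 / (4 RU x 830 W) ≈ 3.01e-4
Advantage = 2.46e-3 / 3.01e-4 ≈ 8.1x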

To obtain these results, the Sun SPARC Enterprise T5220 used the open-source MySQL 5.0 database software and Sun Java System Web Server 7, both of which are free to download.

Performance Comparison:

System Configuration (both with 64 GB memory)

System                              CPU                  RUs   OS                 DB SW              Webserver SW
Sun SPARC Enterprise T5220 1.4GHz   1 chip, 8 cores      2 RU  Solaris 10 8/07    MySQL 5.0.51a      Sun Java System Web Server 7
HP DL580 G5 2.93GHz                 4 QC Xeon, 16 cores  4 RU  MS Server 2003 EE  MS SQLserver 2005  MS IIS 6.0

System Performance & Cost (server, 64GB, disks, SW)

System                              Web ops/s  OLTP txns/m  Watts  HW+SW $  Norm Perf  Watt/perf  $/perf
Sun SPARC Enterprise T5220 1.4GHz   9546       224K         480W   $131k    2.36       203        55
HP DL580 G5 2.93GHz                 6250       70K          830W   $204k    1.0        830        204
Sun's Advantage                     53%        3.2x         73%    56%      2.4x       4.1x       3.7x

Note: Watts are measured server watts during the run.

Benchmark Description

The Full-Stack Consolidated workload consists of 2 primary components: 1) a Web workload consisting of static HTTP requests, and 2) an OLTP database workload using a mix of common SQL transactions executed on a mix of tables in an RDBMS.

The Web workload generates static requests for web pages using a mix of file sizes ranging from 100 bytes to 900K bytes, with an average file size of 16K bytes. The metric for this workload is the number of web operations per second. In this Web workload, a set of 6 client systems emulating thousands of users generated HTTP requests in parallel to all 3 webservers running on the server under test.

This performance comparison used an ad-hoc OLTP workload, called iGen OLTP 1.6, which was developed from a realistic customer workload. iGen OLTP avoids problems that plague other OLTP benchmarks such as TPC-C: in particular, it is completely random in its table row selections, which makes artificial optimizations difficult. iGen OLTP stresses process and thread creation, process scheduling, and database commit processing. The database holds 10 million customers and is approximately 15 GB in size at time of creation. The workload comprises various SQL operations: read-only selects, joins, inserts, and updates. For this OLTP workload, 1 client system was used to emulate hundreds of users, generating SQL transactions in parallel to all 3 database instances running on the server under test.

On the Sun SPARC Enterprise T5220, the 3 webserver instances and 3 MySQL database instances were isolated in 6 Solaris Containers. Each webserver zone shared a single storage array but had a separate Gbit network interface. The 3 MySQL database zones shared a single network interface but used separate storage arrays for the database files and transaction logs. On the HP DL580, the 3 webserver instances and 3 SQLserver database instances were isolated in 6 virtual machines. Each webserver VM shared a single storage array but had a separate Gbit network interface. The 3 SQLserver database VMs shared a single network interface but used separate storage arrays for the database files and transaction logs.

The results listed above were obtained while running both the Web and OLTP workloads concurrently. The performance results for each workload were normalized against the results obtained from the HP DL580 as follows:

Sun SPARC Enterprise T5220 performance factor = Web + OLTP = 0.76 + 1.60 = 2.36

  1. Web factor = 9546 ops/sec / (2 x 6250 ops/sec) = 0.76
  2. OLTP factor = 224K txns/min / (2 x 70K txns/min) = 1.60

HP DL580 performance factor = Web + OLTP = 0.50 + 0.50 = 1.00

  1. Web factor = 6250 ops/sec / (2 x 6250 ops/sec) = 0.50
  2. OLTP factor = 70K txns/min / (2 x 70K txns/min) = 0.50

The Sun Enterprise T5220 used 3 Solaris Containers to host separate MySQL database instances and 3 Containers to host separate Sun Web Server instances.

Each MySQL and webserver instance ran in an isolated zone, not visible to any processes or users in the other containers.

The combination of OLTP and Web workloads drove an average CPU utilization of 95% when running concurrently.

System Configuration & Results Summary

Results (as of Feb 22, 2008):

Sun SPARC Enterprise T5220 (1 chip, 8 cores): 9546 Web ops/sec, 224K OLTP txns/min
HP DL580 G5 (4 chips, 16 cores): 6250 Web ops/sec, 70K OLTP txns/min

Systems: Sun SPARC Enterprise T5220; HP DL580 G5
Processor/GHz of server: 1x UltraSPARC T2 1.4 GHz; 4x Intel Xeon X7350 2.93 GHz
Operating system/virtualization: Solaris 10 8/07 & Solaris Containers; Microsoft Windows Server 2003 EE & a leading virtualization platform

Sun SPARC Enterprise T5220 with:

  • 1x UltraSPARC T2, 1.4 GHz 8-core processor
  • 64 GB of memory
  • 4x Internal 146GB SAS disks
  • 2x 750W Power Supplies
  • 4x Sun StorageTek ST2540 arrays (12x 146GB disks per array)
  • 2x Sun StorageTek 4Gb Dual-Port Fibre Channel PCI-E HBA (from Emulex)
  • 1x Sun PCI-E Dual Giga Ethernet Adapter UTP
  • Solaris 10 8/07
  • Solaris Containers
  • MySQL 5.0.51a
  • Sun Java System Web Server 7 u1

HP DL580 G5 with:

  • 4x Intel Xeon X7350, 2.93 GHz 4-core processors
  • 64 GB of memory
  • 5x Internal 146GB SAS disks & Smart Array P400i SAS Ctlr
  • 4x 800/1200W Power Supplies
  • 4x Sun StorageTek ST2540 arrays (12x 146GB disks per array)
  • 2x QLogic SANblade QLE2462 PCI-E Dual Port FC adapter
  • 3x HP NC360T PCI Express Dual Port Gigabit Server Adapter
  • Leading Virtualization Platform
  • Microsoft Windows Server 2003 EE 64-bit
  • Microsoft SQLserver 2005
  • Microsoft IIS 6.0 Webserver

Monday Feb 11, 2008

Optimal Sun Studio compiler options for MySQL


We recently integrated MySQL 5.0.45 into OpenSolaris; it is available in SXDE 01/08. Sun Studio was used to compile MySQL on both the Solaris SPARC and Solaris AMD64 platforms.

This blog entry describes the changes that were made to improve MySQL performance using Sun Studio:

  • Enable function inlining when Sun Studio is used. The header file univ.i has the following code:

    #if !defined(__GNUC__) && !defined(__WIN__)
    #undef UNIV_MUST_NOT_INLINE /* Remove compiler warning */
    #define UNIV_MUST_NOT_INLINE
    #endif

    Modify it to add the Sun Studio compiler as follows:

    #if !defined(__GNUC__) && !defined(__WIN__) && !defined(__SUNPRO_C)
    #undef UNIV_MUST_NOT_INLINE /* Remove compiler warning */
    #define UNIV_MUST_NOT_INLINE
    #endif

  • The following compiler options boost performance on both SPARC and AMD64: -xbuiltin=%all and -xprefetch=auto -xprefetch_level=3 (see the example configure invocation after this list).
  • On SPARC, setting the optimization level to 4 (-xO4) helps performance.
  • On the AMD64 platform, the compilation of NDB fails with Sun Studio due to the following snippet in configure.in:

case $SYSTEM_TYPE-$MACHINE_TYPE-$ac_cv_prog_gcc-$have_ndbcluster in
  *solaris*-i?86-no-yes)
  # ndb fail for whatever strange reason to link Sun Forte/x86
  # unless using incremental linker
  CXXFLAGS="$CXXFLAGS -xildon"
  ;;

The -xildon option is added for the Solaris AMD64 platform, but this option is no longer available. In previous releases, it forced the compiler to use the incremental linker (ild) by default instead of the linker (ld) for link-only invocations of the compiler. That is, with -g, the compiler's default behavior was to automatically invoke ild in place of ld whenever you used the compiler to link object files, unless you specified -G or source files on the command line. This is no longer the case: the incremental linker has been removed. Refer to the following for more details: http://docs.sun.com/source/819-3690/Comp_Options_App.html

After removing this option, the compilation of NDB works fine.

  • On 32-bit builds, the release binary uses the option -features=no%except. It is recommended not to use this option. Removing it results in the following link error:

    Undefined                       first referenced
     symbol                             in file
    __1cG__CrunMex_rethrow_q6F_v_       ../libmysql/.libs/libmysqlclient.a(ssl.o)

    This error can be resolved by adding the -lCrun library at link time.
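Putting these options together, a Sun Studio build might be configured roughly as follows. This is a minimal sketch rather than the exact command used for the SXDE integration; the configure switches for your source tree may differ, and -xO4 is included per the SPARC note above:

CC=cc CXX=CC \
CFLAGS="-xO4 -xbuiltin=%all -xprefetch=auto -xprefetch_level=3" \
CXXFLAGS="-xO4 -xbuiltin=%all -xprefetch=auto -xprefetch_level=3" \
./configure --prefix=/usr/local/mysql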



How to configure MySQL to run with the Solaris Service Management Facility (SMF)

MySQL 5.0.45 is integrated into OpenSolaris build 79 and is available in Solaris Express Developer Edition (SXDE) 01/08. It is also integrated with the Solaris Service Management Facility (SMF).

This blog entry describes the steps that were taken to integrate MySQL with SMF.

First, a quick recap of what SMF is:

SMF is the core component of the predictive self-healing technology available in Solaris 10, providing automatic recovery from software and hardware failures as well as administrative errors.
Some of the advantages of using SMF are as follows:

  • Failed services are automatically restarted in dependency order, whether they failed as the result of administrator error or a software bug, or were affected by an uncorrectable hardware error.

  • More information is available about misconfigured or misbehaving services, including an explanation of why a service isn't running, as well as individual, persistent log files for each service.

  • Problems during the boot process are easier to debug, as boot verbosity can be controlled, service startup messages are logged, and console access is provided more reliably during startup failures.

  • Administrators can securely delegate tasks to non-root users more easily, including the ability to configure, start, stop, or restart services.

  • Large systems boot faster by starting services in parallel according to their dependencies.

Below are the SMF service manifest and accompanying shell script needed to integrate MySQL with Solaris SMF.

Perform the following steps to import the manifest into the SMF repository.

  1. Save the following XML code to a file called "mysql.xml" in /var/svc/manifest/application/database. You need to create the directory if it doesn't exist, and you must have the appropriate privileges to perform this action.

<?xml version='1.0'?>
<!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
<service_bundle type='manifest' name='mysql'>

<service
    name='application/database/mysql'
    type='service'
    version='1'>

<!--
Wait for network interfaces to be initialized.
-->

    <dependency
        name='network'
        grouping='require_all'
        restart_on='none'
        type='service'>
        <service_fmri value='svc:/milestone/network:default' />

    </dependency>

<!--
Wait for all local filesystems to be mounted.
-->

<dependency
    name='filesystem-local'
    grouping='require_all'
    restart_on='none'
    type='service'>
    <service_fmri value='svc:/system/filesystem/local:default' />
</dependency>

<exec_method
    type='method'
    name='start'
    exec='/lib/svc/method/mysql start'
    timeout_seconds='60' />

<exec_method
    type='method'
    name='stop'
    exec='/lib/svc/method/mysql stop'
    timeout_seconds='60' />

<instance name='version_50' enabled='false'>

    <method_context>
    <method_credential user='mysql' group='mysql' />
    </method_context>

    <property_group name='mysql' type='application'>
        <propval name='bin' type='astring'      
            value='/usr/mysql/5.0/bin' />
        <propval name='data' type='astring'
            value='/var/mysql/5.0/data' />
    </property_group>

</instance>

<stability value='Evolving' />

<template>
    <common_name>
        <loctext xml:lang='C'>
            MySQL RDBMS
        </loctext>
    </common_name>

    <documentation>
        <manpage title='MySQL 5.0.45' section='1' />
        <doc_link name='mysql.com' uri='http://dev.mysql.com/docs' />
    </documentation>
</template>

</service>

</service_bundle>


The default instance of the manifest assumes that the database user is mysql and the database directory is /var/mysql/5.0/data. If either is different, update the above XML accordingly.
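Before importing, you can sanity-check the edited manifest with svccfg's validate subcommand:

    # /usr/sbin/svccfg validate /var/svc/manifest/application/database/mysql.xml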
  2. Save the following shell script to a file called "mysql".

#!/sbin/sh
#
# Pull in the SMF helper definitions (SMF_EXIT_OK, SMF_EXIT_ERR_CONFIG, ...).
. /lib/svc/share/smf_include.sh

getproparg() {
    val=`svcprop -p $1 $SMF_FMRI`
    [ -n "$val" ] && echo $val
}

if [ -z "$SMF_FMRI" ]; then
    echo "SMF framework variables are not initialized."
    exit $SMF_EXIT_ERR_NOSMF
fi

MYSQLBIN=`getproparg mysql/bin`
MYSQLDATA=`getproparg mysql/data`
PIDFILE=${MYSQLDATA}/`/usr/bin/uname -n`.pid

if [ -z "${MYSQLDATA}" ]; then
    echo "mysql/data property not set"
    exit $SMF_EXIT_ERR_CONFIG
fi

if [ ! -d ${MYSQLDATA} ]; then
    echo "mysql/data directory ${MYSQLDATA} is not a valid MySQL data directory"
    exit $SMF_EXIT_ERR_CONFIG
fi

# Initialize the system tables on the first start.
if [ ! -d ${MYSQLDATA}/mysql ]; then
    ${MYSQLBIN}/mysql_install_db --user=mysql --datadir=${MYSQLDATA}
fi

mysql_start() {
    echo ${MYSQLBIN}/mysqld --user=mysql --datadir=${MYSQLDATA} --pid-file=${PIDFILE}
    ${MYSQLBIN}/mysqld --user=mysql --datadir=${MYSQLDATA} --pid-file=${PIDFILE} > /dev/null &
}

mysql_stop() {
    # Stop only this instance, using the PID recorded at startup.
    if [ -f ${PIDFILE} ]; then
        kill `cat ${PIDFILE}`
    fi
}

case "$1" in

    'start')

        mysql_start
        ;;

    'stop')

        mysql_stop
        ;;

    *)
        echo "Usage: $0 {start|stop}"
        exit 1
        ;;

esac

exit $SMF_EXIT_OK


  3. Place the shell script "mysql" in /lib/svc/method.

  4. Change the permissions to 555. You need to have the appropriate write privileges to copy files into this directory.

  5. Import the SMF manifest by executing the following commands:

    # cd /var/svc/manifest/application/database 
    # /usr/sbin/svccfg import mysql.xml
  6. Initially the service instance is disabled. Use the following command to see its state:
    # svcs mysql

  7. Start the service for the default instance by executing the following command:

    # /usr/sbin/svcadm enable mysql

    From this point on, the MySQL process is controlled by Solaris SMF.
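    Once enabled, day-to-day management goes through the standard SMF commands. For example (a sketch; the version_50 instance name and the mysql/data property come from the manifest above, and the data path shown is just its default value):

    # svcs -l mysql
    # svcadm restart mysql
    # svccfg -s svc:/application/database/mysql:version_50 setprop mysql/data = astring: /var/mysql/5.0/data
    # svcadm refresh mysql

    svcs -l shows the service state and its log file; after changing a property with svccfg, run svcadm refresh so the running instance picks up the new value.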

    For more details on how to use SMF, refer to the following BigAdmin site:

    http://www.sun.com/bigadmin/content/selfheal/



Wednesday Oct 03, 2007

MySQL Cluster installation

This blog entry describes how to install MySQL Cluster on Solaris.

A MySQL Cluster consists of 3 separate types of nodes:
  • SQL nodes
  • Storage nodes
  • Management nodes

The SQL nodes are the nodes that applications connect to. Internally, SQL nodes connect to storage nodes to process queries and return the result set to the end client.

The storage nodes are controlled by the management nodes. They do most of the work in processing the queries.

Management nodes manage the entire cluster. They start and stop the data and SQL nodes and manage backups.

Let's start with the simplest installation, where all the nodes of the cluster are on the same box. Of course, this is not how you would do a typical MySQL Cluster installation, but it gives a feel for what is involved.

The following steps are required to create a MySQL Cluster on a single machine:


1) Create a config.ini file in the /var/lib/mysql-cluster directory. The contents of config.ini are as follows:

[NDBD DEFAULT]
NoOfReplicas= 1

[MYSQLD DEFAULT]
[NDB_MGMD DEFAULT]
[TCP DEFAULT]

[NDB_MGMD]
HostName= <hostname>

[NDBD]
HostName= <hostname>
DataDir= /var/lib/mysql-cluster

[MYSQLD]
HostName= <hostname>

Replace <hostname> with your hostname. The [NDBD], [MYSQLD], and [NDB_MGMD] sections denote settings for the storage, SQL, and management nodes respectively. As indicated in the configuration file above, all the nodes are on the same host.

2) Start the management server by invoking the ndb_mgmd command. This binary reads config.ini to get its configuration information.

3) Start the storage (data) node by running ndbd.

4) Add the cluster options to my.cnf.

Essentially, add the ndbcluster option to the mysqld section of the my.cnf file:
[mysqld]
ndbcluster
datadir=/var/lib/mysql-cluster

5) Start the MySQL server as usual.

6) Ensure MySQL is using the NDB engine by issuing the SHOW ENGINES command.

7) To ensure everything is working, log in to the management client and issue the SHOW command:
ndb_mgm> show
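Putting the steps together, the single-host bring-up looks roughly like this (a minimal sketch, assuming the config.ini and my.cnf shown above):

cd /var/lib/mysql-cluster
ndb_mgmd -f /var/lib/mysql-cluster/config.ini   # start the management server
ndbd                                            # start the storage node
mysqld_safe &                                   # start the SQL node
mysql -e "SHOW ENGINES"                         # confirm that ndbcluster is enabled
ndb_mgm -e "SHOW"                               # confirm all nodes are connected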

Detailed documentation can be found at http://dev.mysql.com/doc/refman/5.0/en/mysql-cluster-quick.html

Now, let's install MySQL Cluster on three separate machines.
Assume that <mgmt_host> is the hostname of the management node, <sql_host> is the hostname of the SQL node, and <storage_host> is the hostname of the storage node.

1) Create the config.ini file under the /var/lib/mysql-cluster directory on the management node <mgmt_host>:

[NDBD DEFAULT]
NoOfReplicas=1
DataDir= /var/lib/mysql-cluster

# Management Node
[NDB_MGMD]
HostName=<mgmt_host>
DataDir= /var/lib/mysql-cluster

# Storage Nodes
[NDBD]
HostName=<storage_host>

# SQL Nodes
[MYSQLD]
HostName=<sql_host>

2) Start the management server using ndb_mgmd from the /var/lib/mysql-cluster directory.

3) Log in to the storage node <storage_host> and start the storage node by invoking ndbd as follows:

ndbd --connect-string=<mgmt_host> --initial

or put the following in the /etc/my.cnf file on the storage node:

[mysql_cluster]
ndb-connectstring=<mgmt_host>

4) Add the following lines to /etc/my.cnf on the SQL node <sql_host>:

[mysqld]
ndbcluster
ndb-connectstring=<mgmt_host>

5) Start the MySQL server as usual.

6) Ensure MySQL is using the NDB engine by issuing the SHOW ENGINES command.

7) To ensure everything is working, log in to the management client and issue the SHOW command:
ndb_mgm> show
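As a quick functional check, create a table that uses the NDB engine from the SQL node (a minimal sketch; the database and table names here are arbitrary):

mysql -h <sql_host> -e "CREATE DATABASE IF NOT EXISTS clustertest"
mysql -h <sql_host> -e "CREATE TABLE clustertest.t1 (i INT) ENGINE=NDBCLUSTER"
mysql -h <sql_host> -e "INSERT INTO clustertest.t1 VALUES (1)"
mysql -h <sql_host> -e "SELECT * FROM clustertest.t1"

If the SELECT returns the row, the table is being stored in the data node and served through the cluster.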



Generating trace files for MySQL

MySQL must be compiled with the debug option enabled to generate trace files. Issue the mysqld -V command to find out whether it has been compiled with the debug option: if the version number ends with -debug, then the binary is compiled with support for trace files.

The default way of generating trace files is to invoke mysqld as follows:
  • mysqld --debug
The trace file is huge with this option. You can limit the output by invoking mysqld as follows (see the sketch after this list):
  • mysqld --debug=d,info,error,query,general,where:o,/tmp/mysqld.trace
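For example, a typical tracing session might look like this (a minimal sketch; it assumes a debug build of mysqld is on your PATH):

mysqld -V                   # the version string should end in -debug
mysqld --debug=d,info,error,query,general,where:o,/tmp/mysqld.trace &
tail -f /tmp/mysqld.trace   # watch the trace as the server runs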

For more information, refer to the following URLs:
  • http://www.tol.it/doc/MySQL/appendixC.html
  • http://dev.mysql.com/doc/refman/5.1/de/the-dbug-package.html