Friday Oct 16, 2009

New White Paper: Practicing Solaris Cluster using VirtualBox

For developers, it is often convenient to have all the tools necessary for their work in one place, ideally on a laptop for maximum mobility.

For system administrators, it is often critical to have a test system on which to try things out and learn about new features. Of course, the system needs to be low-cost and transportable to wherever it is needed.

HA clusters are often perceived as complex to set up and resource-hungry in terms of hardware requirements.

This white paper explains how to set up a single x86-based system (like a laptop) with OpenSolaris as a training and development environment for Solaris 10 / Solaris Cluster 3.2, using VirtualBox to set up a two-node cluster. The configuration can then be used to practice various technologies:

OpenSolaris technologies like Crossbow (to create virtual network adapters), COMSTAR (to export iSCSI targets from the host, which the Solaris Cluster nodes then use as shared storage and quorum device through their iSCSI initiators), ZFS (to export ZFS volumes as iSCSI targets and to provide a failover file system within the cluster), and IPsec (to secure the cluster private interconnect traffic) are used on the host system and in the VirtualBox guests to configure Solaris 10 / Solaris Cluster 3.2.
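To give a rough idea of the host-side preparation, here is a minimal sketch using these technologies. The etherstub, VNIC, pool, and volume names are assumptions for illustration, not values from the white paper; the paper itself contains the complete and authoritative command sequence.

Crossbow - virtual network adapters for the cluster private interconnects:

    # dladm create-etherstub stub0
    # dladm create-vnic -l stub0 vnic1
    # dladm create-vnic -l stub0 vnic2

ZFS and COMSTAR - create a volume and export it as an iSCSI target, to be used by the cluster nodes as shared storage and quorum device:

    # zfs create -V 1g rpool/cluster-quorum
    # svcadm enable stmf
    # svcadm enable -r svc:/network/iscsi/target:default
    # sbdadm create-lu /dev/zvol/rdsk/rpool/cluster-quorum
    # stmfadm add-view 600144f0...        (use the GUID printed by sbdadm)
    # itadm create-target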

Solaris Cluster technologies like software quorum and zone clusters are used to set up HA MySQL and HA Tomcat as failover services running in one virtual cluster. A second virtual cluster is used to show how to set up Apache as a scalable service.
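As a hedged sketch of how a zone cluster is created with the clzonecluster(1CL) command (the cluster name, zone path, and host names below are assumptions, not values from the white paper):

    # clzonecluster configure zc1
    clzc:zc1> create
    clzc:zc1> set zonepath=/zones/zc1
    clzc:zc1> add node
    clzc:zc1:node> set physical-host=phys-host1
    clzc:zc1:node> set hostname=zc1-host1
    clzc:zc1:node> end
    clzc:zc1> commit
    clzc:zc1> exit
    # clzonecluster install zc1
    # clzonecluster boot zc1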

The instructions can be used as a step-by-step guide for any x86 64-bit based system that is capable of running OpenSolaris. A CPU that supports hardware virtualization is recommended, as well as at least 3GB of main memory. To check whether your system works, simply boot the OpenSolaris live CD-ROM and confirm with the Device Driver Utility (DDU) that all required components are supported. The hardware compatibility list can be found at http://www.sun.com/bigadmin/hcl/. The reference system for the paper is the Toshiba Tecra M10 with 4GB of main memory.

If you have ever missed the possibility to just try things out with Solaris 10 and Solaris Cluster 3.2 and explore new features - this is your chance :-)

Thursday May 14, 2009

Second Blueprint: Deploying Oracle Real Application Clusters (RAC) on Solaris Zone Clusters

Some time ago I blogged about the blueprint that explains how zone clusters work in general. Dr. Ellard Roush and Gia-Khanh Nguyen have now created a second blueprint that specifically explains how to deploy Oracle RAC in zone clusters.

This paper addresses the following topics:

  • "Zone cluster overview" provides a general overview of zone clusters.
  • "Oracle RAC in zone clusters" describes how zone clusters work with Oracle RAC.
  • "Example: Zone clusters hosting Oracle RAC" steps through an example configuring Oracle RAC on a zone cluster.
  • "Oracle RAC configurations" provides details on the various Oracle RAC configurations supported on zone clusters.
Note that you need to log in with your Sun Online Account in order to access it.

Monday Nov 24, 2008

Some Blueprints and White Papers for Solaris Cluster

Recently, some blueprints and white papers have been made available that cover Solaris Cluster within various topics:

  1. Blueprint: Deploying MySQL Database in Solaris Cluster Environments for Increased High Availability
  2. White paper: High Availability in the Datacenter with the Sun SPARC Enterprise Server Line
  3. Community-Submitted Article on BigAdmin: Installing HA Containers With ZFS Using the Solaris 10 5/08 OS and Solaris Cluster 3.2 Software
Hope you find them useful!

Thursday Oct 16, 2008

Business Continuity and Disaster Recovery Webcast

In the past months I talked in various presentations about Open HA Cluster and Solaris Cluster. The emphasis was on giving an introduction to the Solaris Cluster architecture and to the fact that this product is now fully open source, describing the various possibilities to contribute and giving an overview of already existing projects.

Most talks started with a note that in order to achieve high availability for a given service, it is not enough to simply deploy a product like Solaris Cluster. The same is true when looking for business continuity and disaster recovery solutions. Beyond the service stack in the backend, it is necessary not only to analyze the infrastructure end-to-end to identify and eliminate single points of failure (SPOFs), but also to take a close look at people (education), processes, policies, and clearly defined service level agreements.

Thus I am happy to see a webcast hosted by Hal Stern about Business Continuity and Disaster Recovery, which gives a nice introduction to this holistic topic. More information can be found on a dedicated page about Sun Business Continuity & Disaster Recovery Services.

Start with a Plan, Not a Disaster! :-)

Monday Jun 23, 2008

Solaris 8 and 9 Containers on Solaris Cluster

If you are still running applications on Solaris 8 using SPARC hardware, and maybe even using Sun Cluster 3.0, then you should get a plan ready to upgrade to more recent releases like Solaris 10 and Solaris Cluster 3.2 02/08.

As you might know, the last ship date for Solaris 8 was 02/16/07, and the end of Phase 1 support is scheduled for 03/31/09.

Sun Cluster 3.0 is also reaching its end of life, as announced in the Sun Cluster 3.2 Release Notes for Solaris OS.

In case you cannot immediately upgrade to a newer Solaris release, Sun recently announced the Solaris 8 Container, which introduces the solaris8 brand type for non-global zones on Solaris 10. The packages can be downloaded freely for evaluation; a subscription is required for the right to use (RTU) and support.

While the solaris8 brand type does NOT extend the support life of Solaris 8, it allows a phased approach for migrating to Solaris 10 and leveraging new hardware platforms while the application still runs within a Solaris 8 runtime environment.

The Sun Cluster Data Service for Solaris Containers supports the solaris8 brand type on Sun Cluster 3.1 08/05 with Patch 120590-06 and on Solaris Cluster 3.2 with Patch 126020-02 and newer.

Before going through the physical-to-virtual (p2v) migration, the existing Sun Cluster 3.0 configuration and packages must be removed; see the Sun Cluster 3.0 System Administration Guide for more details on how to achieve that. This also means that no cluster framework is running within the solaris8 branded zone, so existing standard agents cannot be used. However, the sczsh component of the HA Container agent can be used to manage an application running within that solaris8 branded zone.
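As a rough sketch of such a p2v migration (the zone name, network settings, and flash archive path are assumptions; the Solaris 8 Containers documentation has the authoritative steps), the Solaris 8 system is first archived with flarcreate and the archive is then installed into a solaris8 branded zone:

    # zonecfg -z s8-zone
    zonecfg:s8-zone> create -t SUNWsolaris8
    zonecfg:s8-zone> set zonepath=/zones/s8-zone
    zonecfg:s8-zone> add net
    zonecfg:s8-zone:net> set physical=e1000g0
    zonecfg:s8-zone:net> set address=192.168.10.20
    zonecfg:s8-zone:net> end
    zonecfg:s8-zone> commit
    zonecfg:s8-zone> exit
    # zoneadm -z s8-zone install -p -a /export/archives/s8-system.flar
    # zoneadm -z s8-zone boot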

Of course, any migration should be carefully planned.

The same approach works for the recently announced Solaris 9 Containers: Patch 126020-03 introduces support for the solaris9 brand type in the HA Container agent on Solaris Cluster 3.2.

Wednesday Apr 02, 2008

Detailed Deployment and Failover Study of HA MySQL on a Solaris Cluster

Krish Shankar from ISV engineering published a very nice and detailed blog illustrating the deployment process of MySQL on a Solaris Cluster configuration. It also focuses on regression and failover testing of HA MySQL and explains in detail the tests that were performed. MySQL is fully supported on Solaris 10, as is the HA cluster application agent for MySQL on Solaris Cluster.

Monday Jan 21, 2008

BigAdmin article on how to implement IBM DB2 on Solaris Cluster 3.2

My colleague Neil Garthwaite from Availability Engineering and Cherry Shu from ISV Engineering wrote an article on BigAdmin about implementing IBM DB2 UDB V9 HA in a Solaris Cluster 3.2 environment.

This paper provides step-by-step instructions on how to install, create, and enable DB2 Universal Database (UDB) V9 for high availability (HA) in a two-node Solaris Cluster 3.2 environment. The article demonstrates how to use ZFS as a failover file system for a DB2 instance and how to implement DB2 failover across Solaris Containers in the Solaris 10 Operating System.
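As a hedged sketch of the ZFS failover file system part (the resource group, resource, and pool names are assumptions, not taken from the article), the ZFS pool is put under the control of a SUNW.HAStoragePlus resource so that it fails over together with the DB2 instance:

    # clresourcegroup create db2-rg
    # clresourcetype register SUNW.HAStoragePlus
    # clresource create -g db2-rg -t SUNW.HAStoragePlus \
          -p Zpools=db2pool db2-hasp-rs
    # clresourcegroup online -M db2-rg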

Friday Nov 09, 2007

SWIFTAlliance Access and SWIFTAlliance Gateway 6.0 support on SC 3.2 / S10 in global and non-global zones

The Solaris 10 packages for the Sun Cluster 3.2 Data Service for SWIFTAlliance Access and the Sun Cluster 3.2 Data Service for SWIFTAlliance Gateway are available from the Sun Download page. They introduce support for SWIFTAlliance Access and SWIFTAlliance Gateway 6.0 on Sun Cluster 3.2 with Solaris 10 11/06 or newer. It is now possible to configure the data services in resource groups that can fail over between the global zones of the nodes or between non-global zones. For more information, consult the updated documentation, which is part of the PDF file in the downloadable tar archive.

The data services were tested and verified in a joint effort between SWIFT and Sun Microsystems at the SWIFT labs in Belgium. Many thanks to the SWIFT engineering team and our Sun colleagues in Belgium for the ongoing help and support!

For completeness, here is the support matrix for SWIFTAlliance Access and SWIFTAlliance Gateway with Sun Cluster 3.1 and 3.2 software:

Failover Services for Sun Cluster 3.1 SPARC

Application            Version  Solaris            SC Version  Comments
SWIFTAlliance Access   5.0      8, 9               3.1 SPARC   Requires Patch 118050-05 or newer
                       5.5      9
                       5.9      9
                       6.0      10 11/06 or newer
SWIFTAlliance Gateway  5.0      9                  3.1 SPARC   Requires Patch 118984-04 or newer
                       6.0      10 11/06 or newer

Failover Services for Sun Cluster 3.2 SPARC

Application            Version  Solaris            SC Version  Comments
SWIFTAlliance Access   5.9      9                  3.2 SPARC   Requires Patch 126085-01 or newer;
                       6.0      10 11/06 or newer              package available for download
SWIFTAlliance Gateway  5.0      9                  3.2 SPARC   Package available for download
                       6.0      10 11/06 or newer

If you want to study the data services source code, you can find it online for SWIFTAlliance Access and SWIFTAlliance Gateway on the community page for Open High Availability Cluster.

Thursday Oct 18, 2007

Oracle certifies Sun Cluster 3.2 for RAC 10gR2 64 & 32-bit on x86-64

Finally, Oracle has now also certified RAC 10gR2 64- and 32-bit for Sun Cluster 3.2 running on the Solaris 10 x86/x64 platform. You can verify this if you have a Metalink account: in the "Certify" section, click "View Certifications by Platform", select "Solaris Operating System x86-x64", and then select "Real Application Clusters".

Together with the certification for the SPARC platform mentioned in my previous blog, this allows you to fully leverage the recently documented "Maximum Availability Architecture (MAA) from Sun and Oracle" solution. Details are available in the published white paper and presentation.

The BigAdmin document "Installation Guide for Solaris Cluster 3.2 Software and Oracle 10g Release 2 Real Application Clusters" provides a detailed, step-by-step guide for installing the Solaris 10 11/06 Operating System, Solaris Cluster (formerly Sun Cluster) 3.2 software, the QFS 4.5 cluster file system, and Oracle 10g Release 2 Real Application Clusters (Oracle 10gR2 RAC). It also provides detailed instructions on how to configure QFS and Solaris Volume Manager so they can be used with Oracle 10gR2 RAC. Those instructions are valid for both SPARC and x86-x64.

Saturday Jul 14, 2007

"Secure by default" and Sun Cluster 3.2

If you choose the "Secure by default" option when installing Solaris 10 11/06 (which is equivalent to running "netservices limited" later on), you need to perform the following steps prior to installing Sun Cluster 3.2:

  1. Ensure that the local_only property of rpcbind is set to false:
    # svcprop network/rpc/bind:default | grep local_only

    If local_only is not set to false, run:

    # svccfg
    svc:> select network/rpc/bind
    svc:/network/rpc/bind> setprop config/local_only=false
    svc:/network/rpc/bind> quit
    # svcadm refresh network/rpc/bind:default

     This setting is needed for cluster communication between the nodes.

  2. Ensure that the tcp_listen property of webconsole is set to true:
    # svcprop /system/webconsole:console | grep tcp_listen

    If tcp_listen is not true, run:

    # svccfg
    svc:> select system/webconsole
    svc:/system/webconsole> setprop options/tcp_listen=true
    svc:/system/webconsole> quit
    # svcadm refresh svc:/system/webconsole:console
    # /usr/sbin/smcwebserver restart


    This setting is needed for Sun Cluster Manager communication.

    To verify that the port is listening on *.6789, you can execute:
    # netstat -a | grep 6789


Wednesday Jul 11, 2007

Oracle certifies Sun Cluster 3.2 for RAC 9i/10g on S9/S10 SPARC

Finally, Oracle has officially certified RAC 9.2/10gR1/10gR2 64-bit on Solaris 9 and Solaris 10 SPARC running with Sun Cluster 3.2. You can verify this if you have a Metalink account: in the "Certify" section, search under "Product Version and Other Selections: RAC for Unix On Solaris Operating System (SPARC)".

An Installation Guide for Solaris Cluster 3.2 Software and Oracle 10g Release 2 Real Application Clusters has also been published on the BigAdmin System Administration Portal.

It is a detailed, step-by-step guide for installing the Solaris 10 11/06 Operating System, Sun Cluster 3.2 software, the QFS 4.5 cluster file system, and Oracle 10g Release 2 Real Application Clusters (Oracle 10gR2 RAC). It also provides detailed instructions on how to configure QFS and Solaris Volume Manager so they can be used with Oracle 10gR2 RAC.

Last but not least, I want to reference the white paper Making Oracle Database 10G R2 RAC Even More Unbreakable, which explains many reasons why the combination of Sun Cluster and Oracle RAC is very beneficial.

Thursday Jul 05, 2007

Tricking applications that bind only to the node name with libschost.so.1

If you read through the Sun Cluster Data Services Developer's Guide for Solaris OS, you will find the requirements for non-cluster aware applications in Appendix E:

  1. Multihosted Data
  2. Host Names
  3. Multihomed Hosts
  4. Binding to INADDR_ANY as Opposed to Binding to Specific IP Addresses
  5. Client Retry

You can also read about analyzing the application for suitability.

For this blog, number 2 is of special interest: if your application somehow depends on the physical node name of a server (i.e. the name returned by hostname or uname -n) and does not offer the possibility to configure a logical host name instead, then the libschost.so.1 library provided with Sun Cluster 3.2 might help you out.

The referenced man page has all the information needed, with examples of how to use it in C source and shell-based agent code.
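As a minimal shell sketch (the logical host name lhost1 is an assumption; the man page has the authoritative usage), the library is preloaded and the desired logical host name is set via the SC_LHOSTNAME environment variable:

    # SC_LHOSTNAME=lhost1
    # export SC_LHOSTNAME
    # LD_PRELOAD_32=$LD_PRELOAD_32:/usr/cluster/lib/libschost.so.1
    # LD_PRELOAD_64=$LD_PRELOAD_64:/usr/cluster/lib/64/libschost.so.1
    # export LD_PRELOAD_32 LD_PRELOAD_64
    # uname -n
    lhost1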

You can also find examples of its usage in the Open HA Cluster source code, within the Oracle E-Business Suite and Oracle Application Server agents, by using the search interface and browsing through the results.

About

This Blog is about my work at Availability Engineering: Wine, Cluster and Song :-) The views expressed on this blog are my own and do not necessarily reflect the views of Sun and/or Oracle.
