Friday Oct 16, 2009

New White Paper: Practicing Solaris Cluster using VirtualBox

For developers, it is often convenient to have all the tools necessary for their work in one place, ideally on a laptop for maximum mobility.

For system administrators, it is often critical to have a test system on which to try things out and learn about new features. Of course, the system needs to be low cost and transportable to wherever they need to be.

HA clusters are often perceived as complex to set up and resource-hungry in terms of hardware requirements.

This white paper explains how to set up a single x86-based system (like a laptop) with OpenSolaris, configuring a training and development environment for Solaris 10 / Solaris Cluster 3.2 and using VirtualBox to set up a two-node cluster. The configuration can then be used to practice various technologies:

OpenSolaris technologies like Crossbow (to create virtual network adapters), COMSTAR (to export iSCSI targets from the host, which the Solaris Cluster nodes access as iSCSI initiators and use as shared storage and quorum device), ZFS (to export a ZFS volume as an iSCSI target and to provide a failover file system within the cluster) and IPsec (to secure the cluster private interconnect traffic) are used on the host system and in the VirtualBox guests to configure Solaris 10 / Solaris Cluster 3.2.
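
To give a flavor of the host-side setup, here is a minimal sketch (with an example pool, volume name and size I chose just for illustration; the white paper contains the exact steps) of how a ZFS volume can be exported as an iSCSI target with COMSTAR:

# create a ZFS volume on the host that will back the shared storage
zfs create -V 1g rpool/clusterdisk1

# enable the COMSTAR framework and the iSCSI target service
svcadm enable stmf
svcadm enable -r svc:/network/iscsi/target:default

# register the volume as a SCSI logical unit and make it visible to all initiators
sbdadm create-lu /dev/zvol/rdsk/rpool/clusterdisk1
stmfadm add-view <GUID reported by sbdadm create-lu>

# create an iSCSI target for the cluster nodes to connect to
itadm create-target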

Solaris Cluster technologies like software quorum and zone clusters are used to set up HA MySQL and HA Tomcat as failover services running in one virtual cluster. A second virtual cluster is used to show how to set up Apache as a scalable service.

The instructions can be used as a step-by-step guide for any 64-bit x86 system that is capable of running OpenSolaris. A CPU which supports hardware virtualization is recommended, as well as at least 3GB of main memory. To check whether your system works, simply boot the OpenSolaris live CD and confirm with the Device Driver Utility (DDU) that all required components are supported. The hardware compatibility list can be found at http://www.sun.com/bigadmin/hcl/. The reference system for the paper is a Toshiba Tecra M10 with 4GB of main memory.

If you have ever missed an opportunity to simply try things out with Solaris 10 and Solaris Cluster 3.2 and explore new features - this is your chance :-)

Monday Oct 05, 2009

Security Aspects of High Availability Clusters

Recently, I had the opportunity to contribute an article to the newsletter of the IT security trade fair it-sa.

The article gives an overview of methods for security hardening and minimization of HA systems based on Solaris and Solaris Cluster.

Monday Aug 24, 2009

High Availability with a Minimal Cluster

I had the opportunity to give my talk "Hochverfügbarkeit mit minimalem Cluster" (high availability with a minimal cluster) at the following events:

Here is the current version as a PDF.

The talk has a theoretical part and a live demo from my laptop. I have described the configuration used in a white paper.

In my personal view, the talk in Berlin went best. At the Solaris iX Day I stepped in spontaneously after a scheduled speaker had to cancel on short notice - for that, I think it was OK. At FrOSCon I had some trouble finding my flow and ordering my thoughts - so I was a bit off my game. I hope the talk was nevertheless useful and interesting for some of you :-)

Friday Jun 19, 2009

Running Open HA Cluster with VirtualBox

My presentation at the Open HA Cluster Summit 2009 in San Francisco explained how to set up a system with OpenSolaris, serving as a host for at least two VirtualBox OpenSolaris guests, and showed how to set up a two-node Open HA Cluster with them by using technologies like Crossbow and COMSTAR. Such a system can be used to build, develop and test Open HA Cluster, which was demonstrated live during the session.

The video recording for this presentation is now available (thanks to Deirdré Straughan):

You can also download the slides in order to follow the video more easily. Additionally, I created a white paper with the following abstract:

For system administrators, it is often critical to have a test system on which to try things out and learn about new features. Of course, the system needs to be low cost and transportable to wherever they need to be.

HA clusters are often perceived as complex to set up and resource-hungry in terms of hardware requirements.

This white paper explains step by step how to set up a single x86-based system (like a Toshiba M10 laptop) with OpenSolaris, configuring a build environment for Open HA Cluster and using VirtualBox to set up a two-node cluster.

OpenSolaris technologies like Crossbow (to create virtual network adapters), COMSTAR (to export non-shared storage as iSCSI targets, which are then accessed via iSCSI initiators), ZFS (to mirror the iSCSI targets), Clearview (the new architecture for IPMP), and IPsec (to secure the cluster private interconnect traffic) are used on the host system and in the VirtualBox guests to configure Open HA Cluster. The Image Packaging System (IPS) is used to deploy the build packages into the guests. Open HA Cluster technologies like weak membership (to avoid the need for an extra quorum device) and the integration with OpenSolaris technologies are leveraged to set up three typical FOSS applications: HA MySQL, HA Tomcat and a scalable Apache web server.
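
As a small taste of the Crossbow part, the following sketch shows how VNICs on top of an etherstub could provide one private interconnect link for the two guests; the link names are made up for illustration, and the white paper has the authoritative commands:

# create an internal virtual switch (etherstub) for one private interconnect
dladm create-etherstub stub0

# create one VNIC per cluster node on top of that etherstub
dladm create-vnic -l stub0 vnic1
dladm create-vnic -l stub0 vnic2

# the VNICs can then be assigned to the VirtualBox guests as network interfaces
dladm show-vnic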

Enjoy watching, reading and trying it out!

Monday Jun 08, 2009

Best kept secret - or - taking availability for granted

Dr. David Cheriton made a very good point in his keynote at the Open HA Cluster Summit last week in San Francisco: we take availability for granted and only notice when it is missing.

It seems the announcement for OpenSolaris 2009.06 took that to heart. Fortunately, there is the "What's New in OpenSolaris 2009.06" document, which gives a hint about Open HA Cluster 2009.06. The features section has a nice description of the availability options with Open HA Cluster on OpenSolaris.

Finally, I recommend having a look at the interview about the next generation high availability cluster.

If you are keen to try out Open HA Cluster on OpenSolaris, there is a white paper available that describes step by step how to set up a two-node cluster using VirtualBox on a single x86 system, like a laptop.

The official release notes and installation guide are available on the HA Clusters community web page.

I want to thank the whole team behind Open HA Cluster 2009.06 for their hard work and a great release!

Thursday May 14, 2009

Second Blueprint: Deploying Oracle Real Application Clusters (RAC) on Solaris Zone Clusters

Some time ago I blogged about the blueprint that explains how zone clusters work in general. Dr. Ellard Roush and Gia-Khanh Nguyen have now created a second blueprint that specifically explains how to deploy Oracle RAC in zone clusters.

This paper addresses the following topics:

  • "Zone cluster overview" provides a general overview of zone clusters.
  • "Oracle RAC in zone clusters" describes how zone clusters work with Oracle RAC.
  • "Example: Zone clusters hosting Oracle RAC" steps through an example configuring Oracle RAC on a zone cluster.
  • "Oracle RAC configurations" provides details on the various Oracle RAC configurations supported on zone clusters.
Note that you need to log in with your Sun Online Account in order to access it.

Monday Mar 16, 2009

Open HA Cluster on OpenSolaris - a milestone to look at

You might have noticed the brief message to the ha-clusters-discuss mailing list that a new Colorado source drop is available. If you have read my blog about why getting Open HA Cluster running on OpenSolaris is not just a "re-compile and run" experience, then it is worth mentioning that the team working on project Colorado has made some great progress:

  • The whole source can be compiled using the latest Sun Studio Express compiler on OpenSolaris.
  • Logic has been implemented to create the IPS manifest content within the source gate (usr/src/ipsdefs/) and to send the IPS packages to a configurable repository as part of the build process. That way one can easily install one's own set of IPS packages on any OpenSolaris system that can reach that repository.
  • IPS package dependencies have been analysed and are now defined explicitly at a finer granularity.
  • scinstall has been enhanced to do all the required steps at initial configuration time, which were previously done within individual postinstall/preremove SVR4 package scripts.
  • Shell scripts have either been verified to work with ksh93 or been changed to use /usr/xpg4/bin/sh.
  • While the framework gate still has a build dependency on JATO (which is part of webconsole), any run-time dependency has been removed.
  • pconsole has been made available for OpenSolaris; it can be used instead of the Solaris Cluster adminconsole (which uses Motif).
  • Changes have been made to work with the new networking features introduced by the Crossbow, Clearview and Volo projects. In particular, this means that VNICs can be used to set up a minimal cluster on systems that have just one physical network interface.
  • Changes have been made to improve the DID layer so that it works with non-shared storage exported as Solaris iSCSI targets, which in turn can be used by configuring the Solaris iSCSI initiator on both nodes. This can be combined with ZFS to achieve failover for the corresponding zpool (see the sketch after this list). Details can be found in the iSCSI design document.
  • Changes have been made to implement a new feature called weak membership. This allows a two-node cluster to be formed without requiring a quorum device (neither a quorum disk nor a quorum server). To better understand this new functionality, read the weak membership design document.
  • Changes have been made to the HA Containers agent to work with the ipkg brand type for non-global zones on OpenSolaris.
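
To illustrate the iSCSI and ZFS combination mentioned above, here is a minimal sketch of what the node-side configuration could look like; the addresses, device names and pool name are placeholders I made up, and the iSCSI design document describes the real procedure:

# enable the Solaris iSCSI initiator and discover the targets exported by the peer nodes
svcadm enable svc:/network/iscsi/initiator:default
iscsiadm add discovery-address 192.168.10.11
iscsiadm add discovery-address 192.168.10.12
iscsiadm modify discovery --sendtargets enable

# make the new iSCSI devices known to Solaris and to the cluster DID layer
devfsadm -i iscsi
scdidadm -r

# on one node, mirror the two iSCSI-backed disks into a zpool that can fail over
zpool create hapool mirror <iscsi-disk-1> <iscsi-disk-2>
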
The source has been verified to compile on OpenSolaris 2009.06 build 108 on x86. Instructions on how to compile the framework and agent source code should enable you to try out the various new possibilities on your OpenSolaris system. Give it a try and report back your experience! Of course, be aware that this is still work in progress.

Friday Mar 06, 2009

Zone Clusters Blueprint available - How to deploy virtual clusters and why

If you have read the introductory blog about the new Zone Clusters feature, which was released with Solaris Cluster 3.2 1/09, then you might also be interested to know that a new blueprint has been published: Zone Clusters - How to deploy virtual clusters and why, again written by the tech lead himself: Dr. Ellard Roush.

Note that you need to log in with your Sun Online Account.

The following topics are covered:

  • Some background on virtualization technologies
  • High level overview of Zone Cluster
  • Zone Cluster customer use cases
  • Zone Cluster design overview
  • Creation of an example Zone Cluster
  • Detailed examples for the many clzonecluster commands (a small taste is sketched after this list)
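
To give a first impression of what working with clzonecluster looks like, here is a minimal sketch of configuring, installing and booting a small zone cluster; the cluster name, zonepath, node names and addresses are invented for illustration, and the blueprint has the detailed examples:

# configure a zone cluster named zc1 (interactive, zonecfg-like syntax)
clzonecluster configure zc1
clzc:zc1> create
clzc:zc1> set zonepath=/zones/zc1
clzc:zc1> add node
clzc:zc1:node> set physical-host=phys-node1
clzc:zc1:node> set hostname=zc1-node1
clzc:zc1:node> add net
clzc:zc1:node:net> set address=192.168.20.11
clzc:zc1:node:net> set physical=e1000g0
clzc:zc1:node:net> end
clzc:zc1:node> end
# repeat the "add node" block for each additional node, then commit
clzc:zc1> commit
clzc:zc1> exit

# install and boot the zone cluster on all configured nodes, then check its status
clzonecluster install zc1
clzonecluster boot zc1
clzonecluster status zc1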

Monday Feb 16, 2009

Participate: Colorado Phase 1 Open Design review

Colorado is the OpenSolaris project to get Open HA Cluster running on the OpenSolaris binary distribution. In a past blog I explained why this is not just a "compile and run" experience. Last year, in September and October 2008, there were two CLARC (CLuster Architecture Review Committee) slots assigned to review the Colorado requirements specification.

This week, on Thursday 19th February 2009, you can participate in an Open CLARC review from 9am to 11am PT (Pacific Time). Free dial-in information can be found at the Open CLARC wiki page.

Prior to the call, it makes sense to read the design documents first:

  • The main document explains how generic requirements are implemented, such as the necessary (ksh) script changes, modifications to some agents (specifically support for the ipkg brand type within the HA Containers agent), details on the IPS support implementation, build changes to support compilation with Sun Studio Express, etc.
  • The document about weak membership explains the modifications needed to achieve a two-node cluster without requiring an extra quorum device; instead, a new mechanism based on health checks is implemented. The behavioral differences from the existing strong membership model are explained, as well as the modifications to the command line interface to switch between the two membership models.
  • The networking design document describes the changes necessary to work with the OpenSolaris projects Clearview and Crossbow. Especially the latter also requires changes within the cluster CLI to support the configuration of virtual network interfaces (VNICs).
  • The iSCSI design document describes the changes required within the DID (device ID) driver and its interaction with the Solaris iSCSI subsystem.

You can also send your comments to the ha-clusters-discuss-at-opensolaris-dot-org mailing list.

Don't miss this interesting opportunity to participate and learn a lot about the Open HA Cluster internals. Hope to hear you at the CLARC slot or read you on the mailing list :-)

Wednesday Jan 28, 2009

Unsung features of the Solaris Cluster 3.2 1/09 (U2) release

By now I am sure you have seen that Solaris Cluster 3.2 1/09 was released yesterday. The set of new features is impressive, and the Solaris Containers Cluster for Oracle RAC clearly stands out. Read the Release Notes and the new set of product documentation for more details.

But I also want to mention the additional agent qualifications that are part of this new update release:

  • Informix Dynamic Server (IDS) versions 9.4, 10, 11 and 11.5 on Solaris 10 (SPARC and x64) - this is a new agent which was developed within a project of the HA Clusters Community Group for Open HA Cluster
  • PostgreSQL agent support for PostgreSQL WAL shipping - again developed within the corresponding project for Open HA Cluster
  • SAP version 7.1
  • MaxDB version 7.7
  • WebLogic Server version 9.2, 10.0 and 10.2 in Solaris Container
  • SwiftAlliance Access version 6.2
  • SwiftAlliance Gateway version 6.1
  • Sun Java System Message Queue version 4.1 and 4.2
  • Sun Java System Application Server version 9.1UR2, Glassfish V2 UR2
  • Sun Java System Web Server version 7.0u4
  • MySQL version 5.1 - have also a look at the corresponding project page for Open HA Cluster
  • Apache Proxy Server version 2.2.5 and versions bundled with Solaris 10 10/08 and 5/08
  • Apache Web Server version 2.2.5 and versions bundled with Solaris 10 10/08 and 5/08
  • Agfa IMPAX version 6.3
  • IBM Websphere MQ version 7
  • Solaris 9 Container support within the HA Containers agent - again developed as part of the corresponding project for Open HA Cluster

Clearly, anyone deploying Solaris Cluster wants to achieve high availability for an application or set of applications. Therefore, I believe we can be proud of our rich data services portfolio for various standard applications!

Tuesday Jan 20, 2009

Interview about Open HA Cluster and MySQL

Together with Detlef Ulherr, I had the pleasure of being interviewed by Lenz Grimmer, a member of the MySQL Community Relations team at Sun Microsystems.

You can read the full interview at the MySQL Developer Zone. I like the graphics at the top of the page :-)

Monday Jan 19, 2009

Unexpected ksh93 behavior with built-in commands sent to the background

While working on modifying the HA Containers agent to support the ipkg zone brand type from OpenSolaris, I stumbled over a difference in behavior between ksh88 and ksh93 to be aware of, since it will at least impact the GDS-based agents.

The difference affects commands that exist both as userland binaries and as shell built-ins, like sleep, and how they then appear in the process table.

Example: Consider the following script called "/var/tmp/mytesting.ksh":


#!/bin/ksh

# send one sleep into background - it will run longer than the script itself
sleep 100 &

# another sleep, this time not in the background
sleep 20
echo "work done"
exit 0

If you then invoke the script on OpenSolaris, and while the second sleep runs you invoke "ps -ef | grep mytesting" in a different window, it will show something like:


    root  8185  8159   0 06:48:13 pts/4       0:00 /bin/ksh /var/tmp/mytesting.ksh
    root  8248  8178   0 06:48:32 pts/5       0:00 grep mytesting
    root  8186  8185   0 06:48:13 pts/4       0:00 /bin/ksh /var/tmp/mytesting.ksh

Note the two processes with the same name.

After the second sleep 20 has finished, you will see that the script has terminated, printing "work done".

However, "ps -ef | grep mytest" will show:


    root  8262  8178   0 06:48:37 pts/5       0:00 grep mytesting
    root  8186     1   0 06:48:13 pts/4       0:00 /bin/ksh /var/tmp/mytesting.ksh

until the first sleep has also finished.

What is interesting is that the sleep sent into the background appears under the script's name in the process table: since sleep is a built-in in ksh93, the background job is just a forked copy of the shell itself rather than a separate /usr/bin/sleep process.

On a system where /bin/ksh is ksh88-based, you would instead see a process called "sleep 100", and "/bin/ksh /var/tmp/mytesting.ksh" just once.

If, on the other hand, you create the following script called "/var/tmp/mytesting2.ksh":


#!/bin/ksh

# send one sleep into background - it will run longer than the script itself
/usr/bin/sleep 100 &

# another sleep, this time not in the background
/usr/bin/sleep 20
echo "work done"
exit 0

and repeat the test above, you will see:


# ps -ef | grep mytesting
    root  8276  8159   0 06:57:31 pts/4       0:00 /bin/ksh /var/tmp/mytesting2.ksh
    root  8292  8178   0 06:57:36 pts/5       0:00 grep mytesting

That is, the script appears just once. And you can see:


# ps -ef | grep sleep
    root  8278  8276   0 06:57:31 pts/4       0:00 /usr/bin/sleep 20
    root  8296  8178   0 06:57:43 pts/5       0:00 grep sleep
    root  8277  8276   0 06:57:31 pts/4       0:00 /usr/bin/sleep 100

while the second sleep is still running and the script has not yet finished.

Once it has finished, you just see:


# ps -ef | grep sleep
    root  8306  8178   0 06:57:55 pts/5       0:00 grep sleep
    root  8277     1   0 06:57:31 pts/4       0:00 /usr/bin/sleep 100

and no longer the /var/tmp/mytesting2.ksh process.

This makes a difference in our logic within the GDS-based agents, where we disable the PMF action script. Before doing that, we invoke a sleep with the length of START_TIMEOUT to ensure that at least one process remains within the PMF tag until the action script is disabled.

And in our probe script, the wait_for_online mechanism checks whether the start_command is still running. If it is, the probe returns 100 to indicate that the resource is not online yet.

So far, much of our code invokes sleep instead of /usr/bin/sleep - and in combination with the wait_for_online mechanism above, this causes our start commands to always run into a timeout, even though the script itself terminates and everything works fine. Manual testing will not show anything obvious either.

The fix is to always invoke /usr/bin/sleep, not the shell built-in sleep.
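
A quick way to see which variant a given shell will pick is the whence built-in (the exact output wording may differ between ksh releases; this is what I would expect from ksh93):

# whence -v sleep
sleep is a shell builtin
# whence -v /usr/bin/sleep
/usr/bin/sleep is /usr/bin/sleep

On a ksh88-based /bin/ksh, the first command would point to /usr/bin/sleep instead.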

It took me a while to understand this, and I am writing it down here so others are not left scratching their heads as I did ;-)

Monday Jan 12, 2009

White paper on implementing Oracle 11g RAC with Solaris Cluster Geographic Edition and EMC

EMC² published a white paper, "Implementing Oracle 11g RAC on Sun Solaris Cluster Geographic Edition Integrated with EMC SRDF". The abstract reads:

 "This white paper  documents a proof of concept for running Oracle 11g RAC in a Sun Solaris Cluster Geographic Edition (SCGE) framework. The paper outlines the steps in configuring the test environment and also describes the system's functionality and corresponding administrative task."

It is the result of a collaboration between Sun and EMC² at Oracle Open World 2008.

Wednesday Dec 10, 2008

GDS template - browse the source online or check out the Subversion repository

It is often convenient to refer directly to some code when you discuss a certain portion of it, or try to explain it.

Therefore I put the GDS coding template (which we had so far published as a compressed tar file) into a Subversion repository under the HA Cluster Utilities project page. You can browse it online using OpenGrok.

The repository should be available via
svn+ssh://<username>@svn.opensolaris.org/svn/ha-utilities/GDS-template

Instructions on how to use Subversion can be found here.
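
For example, a checkout of the template could look like this (assuming your SSH key is registered for <username> at opensolaris.org):

# check out the GDS template into a local working copy
svn checkout svn+ssh://<username>@svn.opensolaris.org/svn/ha-utilities/GDS-template GDS-template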

If you want to enhance or improve the GDS template, feel free to send code changes for review to the ha-clusters-discuss mailing list.

In order to receive commit rights to this repository, your user ID needs to be registered as a "project observer" at the HA Utilities project page. Only then can I select the username and grant it write access.

Monday Nov 24, 2008

Some Blueprints and White Papers for Solaris Cluster

Recently, some blueprints and white papers have been made available that also cover Solaris Cluster within their various topics:

  1. Blueprint: Deploying MySQL Database in Solaris Cluster Environments for Increased High Availability
  2. Whitepaper: High Availability in the Datacenter with the Sun SPARC Enterprise Server Line
  3. Community-Submitted Article on BigAdmin: Installing HA Containers With ZFS Using the Solaris 10 5/08 OS and Solaris Cluster 3.2 Software
Hope you find them useful!

About

This Blog is about my work at Availability Engineering: Wine, Cluster and Song :-) The views expressed on this blog are my own and do not necessarily reflect the views of Sun and/or Oracle.
