Friday Apr 18, 2008

Improving Sun Java System Application Server availability with Solaris Cluster


Sun Java System Application Server is one of the leading middleware products on the market, with its robust architecture, stability, and ease of use.  The design of the Application Server itself has some high availability (HA) features in the form of node agents (NA), which are spread across multiple nodes to avoid a single point of failure (SPoF).  A simple illustration of the design:

However, as we can see from the above block diagram, the Domain Administration Server (DAS) is not highly available. If the DAS goes down, administrative tasks cannot be performed.  Although client connections are redirected to other instances of the cluster when an instance or NA fails or becomes unavailable, automated recovery is desirable to reduce the load on the remaining instances of the cluster.  There are also hardware, OS, and network failure scenarios that need to be accounted for in critical deployments, in which uptime is one of the main requirements.

Why is a High Availability Solution Required?

A high availability solution is required to handle those failures that Application Server, or for that matter any user-land application, cannot recover from: network, hardware, and operating system failures, and human errors. Beyond these, there are other scenarios, such as providing continuous service while OS or hardware upgrades and maintenance are performed.

Apart from handling failures, a high availability solution helps the deployment take full advantage of other operating system features, such as network-level load distribution, link failure detection, and virtualization.

How to decide on the best solution?

Once customers decide that their deployment is better served by a high availability solution, they need to decide which solution on the market to choose.  The answers to the following questions will help in the decision making:

Is the solution very mature and robust?

Does the vendor provide an Agent that is specifically designed for Sun Java System Application Server?

Is the solution very easy to use and deploy?

Is the solution cost effective?

Is the solution complete? Can it provide high availability for associated components like
Message Queue?

And importantly, can they get very good support in the form of documentation, customer service and a single point of support?

Why Solaris Cluster?

Solaris Cluster is the best high availability solution available for the Solaris platform. It offers excellent integration with the Solaris Operating System and helps customers make use of new features introduced in Solaris without modifying their deployments.  Solaris Cluster supports applications running in containers and offers a wide choice of file systems and processor architectures.  Some of the highlights include:

Kernel level integration to make use of Solaris features like containers, ZFS, FMA, etc.

A wide portfolio of agents to support the most widely used applications in the market.

A robust and fast failure detection mechanism that remains stable even under very high loads.

IPMP - based network failure detection and load balancing.

The same agent can be used for both Sun Java Application Server and Glassfish.

Data Services Configuration Wizards for most common Solaris Cluster tasks.

Sophisticated fencing mechanism to avoid data corruption.

Detection of loss of access to storage by monitoring the disk paths.

How does Solaris Cluster Provide High Availability?

Solaris Cluster provides high availability by using redundant components.  The storage, servers, and network cards are redundant.  The following figure illustrates a simple two-node cluster with the recommended redundant interconnects, storage accessible to both nodes, and redundant public network interfaces on each node. It is important to note that this is the recommended configuration; a minimal configuration can have just one shared storage device, one interconnect, and one public network interface.  Solaris Cluster even provides the flexibility of a single-node cluster, based on individual needs.

LH = logical hostname, a type of virtual IP address used for moving IP addresses across NICs.

RAID = any suitable software- or hardware-based RAID mechanism that provides both redundancy and performance.

One can opt to provide high availability for the DAS alone or for the node agents as well; the choice depends on the environment. Scalability of the node agents is not a problem in high availability deployments, since multiple node agents can be deployed on a single Solaris Cluster installation. These node agents are configured in multiple resource groups, with each resource group containing a single logical hostname, an HAStoragePlus resource, and a node agent resource. Since node agents are spread over multiple nodes in a normal deployment, no additional hardware is needed just because a highly available architecture is being used.  Storage can be made redundant with either software- or hardware-based RAID.
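As a sketch, a node agent resource group of this shape might be created with the Sun Cluster 3.2 command-line interface. The group, resource, and hostname names and the mount point are illustrative, and the node agent resource is shown with only a generic dependency property; consult the Sun Cluster Data Service guide for the extension properties the agent actually requires.

```shell
# Hypothetical names throughout: na1-rg, na1-lh, na1-hasp, na1-rs.
# Register the storage and node agent resource types (once per cluster).
clresourcetype register SUNW.HAStoragePlus
clresourcetype register SUNW.jsas_na

# One resource group per node agent.
clresourcegroup create na1-rg

# Logical hostname the node agent binds to; "na1-lh" must resolve
# to an address on the public network.
clreslogicalhostname create -g na1-rg na1-lh

# File system on shared storage holding the node agent's files.
clresource create -g na1-rg -t SUNW.HAStoragePlus \
    -p FilesystemMountPoints=/global/appserver na1-hasp

# The node agent resource itself, dependent on storage and hostname.
clresource create -g na1-rg -t SUNW.jsas_na \
    -p Resource_dependencies=na1-hasp,na1-lh na1-rs

# Bring the group online on its primary node.
clresourcegroup online -M na1-rg
```

The same pattern is repeated, one resource group per node agent, so that each agent can fail over independently.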

Solaris Cluster Failover Steps in Case of a Failure

Solaris Cluster provides a set of sophisticated algorithms that determine whether to restart an application or to fail it over to the redundant node. Typically the IP address, the file system on which the application binaries and data reside, and the application resource itself are grouped into a logical entity called a resource group (RG).  As the name implies, the IP address, file system, and application are each viewed as a resource, and each is identified by a resource type (RT), typically referred to as an agent. The recovery mechanism, i.e., restart or failover to another node, is determined by a combination of timeouts, number of restarts, and history of failovers. An agent typically has start, stop, and validate methods that are used to start and stop the application and to verify prerequisites every time the application changes state.  It also includes a probe that is executed at predetermined intervals to determine application availability.

Solaris Cluster has two RTs, or agents, for the Sun Java System Application Server: the resource type SUNW.jsas is used for the DAS, and SUNW.jsas_na for the node agents. The probe mechanism involves executing the “asadmin list-domains” and “asadmin list-node-agents” commands and interpreting the output to determine whether the DAS and the node agents are in the desired state.  The Application Server software, file system, and IP address are moved to the redundant node in case of a failover. Please refer to the Sun Cluster Data Service guide for more details.
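To illustrate the idea, the sketch below interprets "asadmin list-domains"-style output the way a probe might. This is not the actual SUNW.jsas probe code, and the output format assumed here (lines of the form "domain1 running" or "domain1 not running") is an assumption based on typical asadmin behavior.

```shell
#!/bin/sh
# Sketch: classify a domain's state from "asadmin list-domains" output.
domain_state() {
    # $1 = domain name; reads the command output on stdin.
    # Prints "online" if the domain is reported running, "offline" if
    # it is listed but not running, "unknown" if it is absent.
    target=$1
    while read name state _; do
        [ "$name" = "$target" ] || continue
        if [ "$state" = "running" ]; then
            echo online
        else
            echo offline
        fi
        return 0
    done
    echo unknown
    return 1
}

# In a real probe the input would come from the command itself:
#   asadmin list-domains | domain_state domain1
# and a non-"online" result would trigger a restart or failover decision.
```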

The following is a simple illustration of a failover in case of a server crash.

In the previously mentioned setup, Application Server is not failed over to the second node if only one of the NICs fails. The redundant NIC, which is part of the same IPMP group, hosts the logical hostname that the DAS and NA use. A temporary network delay will be noticed until the logical hostname is moved from nic1 to nic2.

The Global File System (GFS) is recommended for Application Server deployments, since there is very little write activity other than logs on the file system in which the configuration files, and in some deployments the binaries, are installed. Because GFS is always mounted on all nodes, it results in better failover times and quicker startup of Application Server in case of a node crash or similar problems.

Maintenance and Upgrades

The same properties that help Solaris Cluster provide recovery during failures can be used to provide service continuity in case of maintenance and upgrade work. 

During any planned OS maintenance or upgrade, the RGs are switched over to the redundant node and the node that needs maintenance is rebooted into non-cluster mode. The planned actions are performed and the node is then rebooted into the cluster.  The same procedure can be repeated for all the remaining nodes of the cluster.
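Sketched with the Sun Cluster 3.2 CLI (node and group names are illustrative), the procedure for one node might look like:

```shell
# Evacuate all resource groups and device groups from the node
# about to be maintained.
clnode evacuate phys-node-1

# Alternatively, move a specific resource group to the other node:
clresourcegroup switch -n phys-node-2 das-rg

# On phys-node-1, reboot into non-cluster mode for the maintenance
# work (SPARC: "boot -x" at the OK prompt; x86: edit the boot entry).
reboot -- -x

# ...perform the OS maintenance or upgrade, then reboot the node
# back into the cluster:
reboot

# Optionally move the resource group back to its primary node.
clresourcegroup switch -n phys-node-1 das-rg
```

Repeating this node by node keeps the service online throughout the maintenance window.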

Application Server maintenance or upgrade procedures depend on the way in which the binaries, data, and configuration files are stored.

1.) Storing the binaries on the node's internal hard disk and storing the domain and node agent related files on the shared storage.  This method is preferable for environments in which frequent updates are necessary. The downside is the possibility of inconsistency in the application binaries due to differences in patches or upgrades.

2.) Storing both the binaries and the data on the shared storage.  This method provides consistent data at all times but makes upgrades and maintenance without outages difficult.

The choice has to be made by taking into account the procedures and processes followed in the organization.

Other Features

Solaris Cluster also provides features for co-locating services based on the concept of affinities. For example, you can use a negative affinity to evacuate the test environment when a production environment is switched to a node, or a positive affinity to move the Application Server resources to the same node that hosts the database server for better performance.
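As a brief sketch of how such affinities might be declared with the Sun Cluster 3.2 CLI (the group names are hypothetical): in the RG_affinities property, "++" declares a strong positive affinity and "--" a strong negative one.

```shell
# Keep the Application Server group on the same node as the
# database group (strong positive affinity).
clresourcegroup set -p RG_affinities=++db-rg appserver-rg

# Evacuate the test group from whatever node hosts production
# (strong negative affinity).
clresourcegroup set -p RG_affinities=--prod-rg test-rg
```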

Solaris Cluster has an easy-to-use and intuitive GUI management tool called Sun Cluster Manager, which can be used to perform most management tasks.

Solaris Cluster has a built-in telemetry feature that can be used to monitor the usage of resources such as CPU and memory.

Sun Java System Application Server doesn't require any modification to work with Solaris Cluster, as the agent is designed with this scenario in mind.

The same agent can be used for Glassfish as well.

The Message Queue Broker can be made highly available as well with the HA  for Sun Java Message Queue agent.

Consistent with Sun's philosophy, the product is being open sourced in phases and the agents are already available under the CDDL license.

An open-source product based on the same code base, called Open High Availability Cluster, is available for OpenSolaris releases.  For more details on the product and community, please visit the community web site.

The open-source product also has a comprehensive test suite that helps users test their deployments satisfactorily.


For mission-critical environments, availability in the face of all types of failures is a very important criterion.  Solaris Cluster is well designed to provide the highest availability for Application Server by virtue of its integration with the Solaris OS, its stability, and its agent specifically designed for Sun Java System Application Server.

Madhan Kumar
Solaris Cluster Engineering

Friday Feb 15, 2008

Recent Open HA Cluster Activity

There have been some exciting developments in the Open HA Cluster open-source community over the past couple of months. Specifically:

Nicholas Solter, Sun Cluster Developer and OpenSolaris HA Clusters Core Contributor

Thursday Nov 29, 2007

Sun Tech Days in Rome and Milan, Italy

Sun Tech Days are a series of developer-focused events organized worldwide by Sun. The main focus of these events is Java and OpenSolaris, with the emphasis less on product features and more on engaging developers to work with the products.

There were a couple of "tracks" (series of talks with a specific theme) in Rome and Milan: OpenSolaris, NetBeans, Java, and Solaris development. The Java/NetBeans tracks ran in parallel with the Solaris ones. In addition, in Milan there were also some booths where people could come in, look at demos, get Solaris installed on their laptops, and so on.

Our Solaris Cluster talk, titled "Using and Contributing To Open High Availability Cluster", was part of the OpenSolaris track. It also included a demo of Cluster Express on a laptop running OpenSolaris, demonstrating failover of an application (Apache) between two zones on the same machine. Here are some of my impressions of the events:

OpenSolaris Day

The OpenSolaris welcome was given by Chris Armes, Solaris marketing director. Chris made a few jokes to get the audience to start paying attention (he seems to be good at that!) and also pointed out a bunch of goodies (T-shirts, books, and Solaris Cluster demo DVDs) available for the audience members.

There were several presentations before the Cluster talk, covering OpenSolaris, Nevada plans, the Solaris virtualization strategy, and Solaris networking enhancements. The virtualization topics generated the most comments and questions from the audience, with many people asking about the plans around xVM and OpenSolaris.

My presentation on "Using and Contributing To Open High Availability Cluster" was scheduled after the networking presentation. Before starting, I did a show of hands, and not many people in the audience had prior knowledge of Open HA Cluster.

Given that, I spent a bit more time than I had planned introducing the basic concepts of availability and Solaris Cluster itself. I talked about the concept of SPOFs and how Sun Cluster deals with failures in storage, network, server, application, and so on. I moved quickly through the details of what an SC agent is and what it does. Next I showed a demo of SC with Apache as the application, failing over between two zones on the same single-node cluster (my laptop). The demo used the "keno" demo client, which continuously contacts the server and displays a square whose color depends on which node responds; after a failover, the client starts showing squares of a different color. I demoed killing the Apache daemons and quick restarts of the application, as well as halting the zone. The demo was well received by the audience, although the only comment I received was to use the "uadmin 1 0" command to reboot a zone instead of the "reboot" command I had planned to use. Hey... at least it shows the audience wasn't sleeping! :-) :-)

After the demo I talked about the Solaris Cluster plans for open sourcing the code: all the agents are already open sourced, and there is a roadmap for more. I closed with (and spent some time on) the slide listing all the Open HA Cluster related resources. I am hoping that, at the very least, the audience members will remember the OHAC community web page.

I talked a bit about the contents of the Solaris Cluster demo DVD and encouraged people to install it on their laptops. I made the point that the presentation laptop itself was running Cluster Express on OpenSolaris, which seemed to get the point across that Open HA Cluster is easy to set up and run on common, off-the-shelf hardware.

OpenSolaris Installfest

In Milan we had an OpenSolaris booth where people could come over and get OpenSolaris installed on their laptops. I had booth duty all of Friday. Thanks to feedback from Boston about the lack of software to partition people's hard drives to create empty partitions for Solaris, I was ready with my own copies of the partitioning software.

Lots of people came over to the booth simply to chat about Solaris, not really to install it on their laptops. The most common question was "Why should I use Solaris?" We tried various ways to tackle that question, such as pointing out the unique features and strengths of Solaris, and arguing that it is the most reliable and scalable OS (I got some traction on the reliability part, although, strangely enough, from the point of view of code quality). Another point that (I thought) connected with some people was: because it is fun and new, and because you have never tried it before.

There were lots of questions on whether OpenSolaris can be run via VMware; the answer is yes. I took pains to remind people that if they run Solaris under VMware, they should attribute any slowness or lack of stability to VMware, not to Solaris.

Most people were wary of actually repartitioning their hard drives. Some who did go through with it were pleasantly surprised to find that once the right partition is specified to the installer, it goes through the install very quickly and painlessly. I think this is one area where OpenSolaris has improved a lot recently, and it shows. I engaged others (not yet ready to repartition their hard drives) by putting the OpenSolaris DVD into their laptops and running the Solaris Device Detection tool. For everyone who tried that, the device detection tool determined that Solaris would run fine on the laptop. The only missing driver was the sound driver, which is not essential, but I suggested that the OpenSound drivers would solve that problem as well, as they have for me. Hopefully a couple of them will try the OpenSolaris install by themselves.

If you missed the event in Italy, check out the schedule; you may just find one near your home town. You can find out more about Open HA Cluster at the OHAC community web page. If you want to join the conversation, or just listen to what people in the community are talking about, visit the Open HA Cluster forum, which also has a link to the discussion archives.

Ashutosh Tripathi
Solaris Cluster Engineering

Friday Aug 17, 2007

Eyes Wide Open

Well, this week really has been a case of eyes wide open: either the roller-coaster ride of the Dow and Nasdaq or the amazing IPO of VMware will do it. However, I'd argue that in these cases "eyes wide open" simply reflects the fixed-gaze observer in us.

Instead, I'd argue that "eyes wide open" correctly reflects the recent Sun and IBM agreement, "IBM Expands Support for the Solaris OS on x86 Systems". In this scenario, I'd argue we are not fixed-gaze observers but serious players making their moves.

Today we are releasing to the OpenSolaris Open HA Cluster community (OHAC) the source code for the IBM WebSphere MQ and IBM WebSphere Message Broker agents. Within Solaris Cluster and OHAC we take these IBM products seriously, ensuring high availability support for WebSphere MQ V6.0 and WebSphere Message Broker V6.0 on Solaris 10 SPARC and Solaris 10 x86-64, including within Solaris Containers.

I think it's important to try to demonstrate the purpose of OHAC. As a relatively new community, OHAC has already endorsed developing an agent for IBM Informix v11; please see the HA-Informix project. Once it is developed, it is our intention to putback HA-Informix into Solaris Cluster, thereby ensuring high availability support for IBM Informix v11 on Solaris 10 SPARC and Solaris 10 x86-64, including within Solaris Containers.

While I'm here, I also think it's worthwhile to briefly outline the steps to propose a project to the OHAC community.

1. Anyone can propose a project to the forum. Here you simply need to outline your proposal; see the recent HA-Xen proposal for an example.

2. The proposal then gets reviewed by anyone who is interested, but perhaps more importantly by the HA Clusters core contributors, who will vote either positively (+1), negatively (-1), or by abstaining (+/- 0). Please see Article VIII for the gory details, but note that a consensus must be reached for the proposal to be approved.

3. If approved, you then need to submit your proposal to the OpenSolaris Governing Board (OGB). Please see an example used by the HA-Informix proposal.

4. The OGB then sets you up with a project page where you can store your design docs and general project details. As the sponsor of the project, you will then be the project leader. Again, please look at the HA-Informix project page for an example.

The next steps are to interact with the community on the design and any open questions, develop the agent code, and submit a code review. We're in the process of providing a checklist of things to do.

Once all this is done, one of the HA Clusters core contributors will putback your code into the Open HA Cluster (OHAC) gate. Once it's there, we have the "small" task of getting it putback into Solaris Cluster. This process is not yet open, so an internal SC committee reviews new features to be included in SC. Nevertheless, by this time most of the hard work has been done, and if we believe that "the more supported agents we have, the more appealing SC is to customers", then putback into SC is just a formality.

Finally, within Solaris Cluster we're focused on virtualization. This means working to provide solutions that use Solaris Containers. (By the way, did you know that lx branded zones are now supported in "failover" Solaris Containers? Please check out SC 3.1 patch 120590-05 on SunSolve; the support will soon be available in SC 3.2, although that is a whole new blog topic.) Other virtualization areas we're addressing in Solaris Cluster are LDoms, VMware, and Xen, although these are also further blog topics.

In fact, we've recently refreshed all of our code, which includes, among many changes, the putback to support lx branded zones in the failover HA-Zones agent. We believe that doing our development through OpenSolaris ensures that whatever is developed is well thought out, meets the need, and ultimately gets supported in Solaris.

I'd argue most people would agree with Larry Ellison's recent quote, "Open source is not something to be feared. Open source is something to be explained. Open source wins not because it's open and not because it's free. Open source wins only when it's better."

Sounds good to me.

Neil Garthwaite
Solaris Cluster Engineering


Oracle Solaris Cluster Engineering Blog
