By hhnguyen on Apr 18, 2008
Sun Java System Application Server is one of the leading middleware products in the market, with its robust architecture, stability, and ease of use. The design of the Application Server itself includes some high availability (HA) features in the form of node agents (NAs), which are spread across multiple nodes to avoid a single point of failure (SPoF). A simple illustration of the design:
However, as we can notice from the above block diagram, the Domain Administration Server (DAS) is not highly available. If the DAS goes down, administrative tasks cannot be performed. Although client connections are redirected to other instances of the cluster when an instance or NA fails or becomes unavailable, an automated recovery is desirable to reduce the load on the remaining instances of the cluster. There are also hardware, OS, and network failure scenarios that need to be accounted for in critical deployments, in which uptime is one of the main requirements.
Why is a High Availability Solution Required?
A high availability solution is required to handle those failures that the Application Server, or for that matter any user-land application, cannot recover from: network, hardware, and operating system failures, and human errors. Apart from these, there are other scenarios to consider, such as providing continuous service while OS or hardware upgrades and maintenance are performed.
In addition, a high availability solution helps the deployment take full advantage of other operating system features, such as network-level load distribution, link failure detection, and virtualization.
How to decide on the best solution?
Once customers decide that their deployment is better served by a high availability solution, they need to choose a solution from the market. The answers to the following questions will help in the decision making:
Is the solution very mature and robust?
Does the vendor provide an Agent that is specifically designed for Sun Java System Application Server?
Is the solution very easy to use and deploy?
Is the solution cost effective?
Is the solution complete? Can it provide high availability for associated components, such as the Message Queue broker?
And, importantly, can they get good support in the form of documentation, customer service, and a single point of support?
Why Solaris Cluster?
Solaris Cluster is the best high availability solution available for the Solaris platform. It offers excellent integration with the Solaris Operating System and lets customers take advantage of new features introduced in Solaris without modifying their deployments. Solaris Cluster supports applications running in containers and offers a wide choice of file systems and processor architectures. Some of the highlights include:
Kernel level integration to make use of Solaris features like containers, ZFS, FMA, etc.
A wide portfolio of agents to support the most widely used applications in the market.
A very robust and quick failure detection mechanism, and stability even under very high load.
IPMP-based network failure detection and load balancing.
The same agent can be used for both Sun Java System Application Server and GlassFish.
Data Services Configuration Wizards for most common Solaris Cluster tasks.
Sophisticated fencing mechanism to avoid data corruption.
Disk-path monitoring to detect loss of access to storage.
How does Solaris Cluster Provide High Availability?
Solaris Cluster provides high availability by using redundant components: the storage, servers, and network cards are all redundant. The following figure illustrates a simple two-node cluster with the recommended redundant interconnects, storage accessible to both nodes, and redundant public network interfaces on each node. It is important to note that this is the recommended configuration; the minimal configuration can have just one shared storage device, one interconnect, and one public network interface. Solaris Cluster even provides the flexibility of a single-node cluster, based on individual needs.
LH = logical hostname, a type of virtual IP address used for moving IP addresses across NICs.
RAID = any suitable software or hardware based RAID mechanism that provides both redundancy and performance.
One can opt to provide high availability for the DAS alone or for the node agents as well; the choice depends on the environment. Scalability of the node agents is not a problem in high availability deployments, since multiple node agents can be deployed on a single Solaris Cluster installation. These node agents are configured in multiple resource groups, with each resource group containing a logical hostname, an HAStoragePlus resource, and a node agent resource. Since node agents are spread over multiple nodes in a normal deployment, no additional hardware is needed just because a highly available architecture is being used. Storage can be made redundant with either software- or hardware-based RAID.
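As a rough sketch, a failover resource group for the DAS could be assembled along the following lines. All names here (das-rg, das-lh, the mount point, and the resource names) are assumptions for illustration, and the SUNW.jsas extension properties should be set according to the data service guide:

```shell
# Hypothetical sketch; group, resource, and hostname names are assumed.

# Register the resource types (once per cluster)
clresourcetype register SUNW.HAStoragePlus SUNW.jsas

# Create the failover resource group for the DAS
clresourcegroup create das-rg

# Logical hostname resource: the virtual IP that follows the DAS
clreslogicalhostname create -g das-rg -h das-lh das-lh-rs

# File system on shared storage holding the domain directory
clresource create -g das-rg -t SUNW.HAStoragePlus \
    -p FilesystemMountPoints=/global/appserver das-hasp-rs

# The DAS resource itself; set its extension properties per the
# Sun Cluster Data Service guide for Application Server
clresource create -g das-rg -t SUNW.jsas \
    -p Resource_dependencies=das-lh-rs,das-hasp-rs das-rs

# Bring the group online with monitoring enabled
clresourcegroup online -M das-rg
```

A node agent group would follow the same pattern with the SUNW.jsas_na resource type, one group per node agent.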
Solaris Cluster Failover Steps in Case of a Failure
Solaris Cluster provides a set of sophisticated algorithms that determine whether to restart an application or to fail it over to the redundant node. Typically the IP address, the file system on which the application binaries and data reside, and the application itself are grouped into a logical entity called a resource group (RG). As the name implies, the IP address, file system, and application are each viewed as a resource, and each is identified by a resource type (RT), typically referred to as an agent. The recovery mechanism, i.e. restart or failover to another node, is determined by a combination of timeouts, the number of restarts, and the history of failovers. An agent typically has start, stop, and validate methods that are used to start, stop, and verify prerequisites every time the application changes state. It also includes a probe, which is executed at a predetermined interval to determine application availability.
Solaris Cluster has two RTs, or agents, for the Sun Java System Application Server: SUNW.jsas is used for the DAS, and SUNW.jsas_na for the node agents. The probe mechanism involves executing the "asadmin list-domains" and "asadmin list-node-agents" commands and interpreting the output to determine whether the DAS and the node agents are in the desired state. The Application Server software, file system, and IP address are moved to the redundant node in case of a failover. Please refer to the Sun Cluster Data Service guide (http://docs.sun.com/app/docs/doc/819-2988) for more details.
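To make the probe idea concrete, here is a hedged sketch (not the actual agent code) of how such a probe might interpret the command output. The sample output strings below are assumptions for illustration; the real agent parses the actual asadmin output:

```shell
# Hypothetical probe logic: classify the state of a domain based on
# output resembling that of "asadmin list-domains".
probe_das() {
    # $1: a line of asadmin output (assumed format: "<domain> running"
    # or "<domain> not running")
    case "$1" in
        *"not running"*) echo "PROBE_FAIL" ;;   # domain is stopped
        *running*)       echo "PROBE_OK"   ;;   # domain is healthy
        *)               echo "PROBE_FAIL" ;;   # unknown state
    esac
}

# Example with assumed sample output:
probe_das "domain1 running"       # prints PROBE_OK
```

On a PROBE_FAIL, the framework would then apply the restart/failover policy described above.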
The following is a simple illustration of a failover in case of a server crash.
In the previously mentioned setup, the Application Server does not fail over to the second node if only one of the NICs fails. The redundant NIC, which is part of the same IPMP group, hosts the logical hostname that the DAS and NA make use of. Only a temporary network delay will be noticed until the logical hostname is moved from nic1 to nic2.
The Global File System (GFS) is recommended for Application Server deployments, since there is very little write activity other than logging on the file system on which the configuration files and, in specific deployments, the binaries are installed. Because the GFS is always mounted on all nodes, it results in better failover times and quicker startup of the Application Server after a node crash or similar failure.
Maintenance and Upgrades
The same properties that help Solaris Cluster provide recovery during failures can be used to provide service continuity in case of maintenance and upgrade work.
During any planned OS maintenance or upgrade, the RGs are switched over to the redundant node, and the node that needs maintenance is rebooted into non-cluster mode. The planned actions are performed, and the node is then rebooted back into the cluster. The same procedure can be repeated for the remaining nodes of the cluster.
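The procedure above might look roughly like the following; the node and group names are assumptions for illustration:

```shell
# Hypothetical maintenance walkthrough (node1 is to be patched).

# Move the resource group(s) to the redundant node
clresourcegroup switch -n node2 das-rg
# (alternatively, "clnode evacuate node1" moves everything off node1)

# On node1: reboot into non-cluster mode for maintenance
reboot -- -x

# ...apply OS patches or perform the upgrade on node1...

# Reboot normally so node1 rejoins the cluster
reboot
```

Repeating the same steps on node2 completes a rolling maintenance cycle with the service online throughout.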
Application Server maintenance or upgrade depends on the way in which the binaries and the data and configuration files are stored.
1.) Storing the binaries on the node's internal hard disk and storing the domain and node agent related files on shared storage. This method is preferable for environments in which frequent updates are necessary. The downside is the possibility of inconsistency in the application binaries due to differences in patches or upgrades.
2.) Storing both the binaries and the data on shared storage. This method provides consistent data at all times but makes upgrades and maintenance without outages difficult.
The choice has to be made by taking into account the procedures and processes followed in the organization.
Solaris Cluster also provides features for co-locating services, based on the concept of affinities. For example, you can use a negative affinity to evacuate the test environment when a production environment is switched to a node, or use a positive affinity to move the Application Server resources to the same node on which the database server is hosted, for better performance.
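Affinities are expressed through the RG_affinities resource group property. The group names below are assumptions for illustration:

```shell
# Hypothetical affinity examples (group names assumed).

# Strong positive affinity: keep the app server group on the same
# node as the database group
clresourcegroup set -p RG_affinities=++db-rg appserver-rg

# Strong negative affinity: push the test group off any node that
# the production group is switched or failed over to
clresourcegroup set -p RG_affinities=--prod-rg test-rg
```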
Solaris Cluster has an easy-to-use and intuitive GUI management tool called Sun Cluster Manager, which can be used to perform most management tasks.
Solaris Cluster has a built-in telemetry feature that can be used to monitor the usage of resources like CPU, memory, etc.
Sun Java System Application Server doesn't require any modification to work with Solaris Cluster, as the agent is designed with this scenario in mind.
The same agent can be used for Glassfish as well.
The Message Queue Broker can be made highly available as well with the HA for Sun Java Message Queue agent.
Consistent with Sun's philosophy, the product is being open-sourced in phases, and the agents are already available under the CDDL license.
An open-source product based on the same code base, called Open High Availability Cluster, is available for OpenSolaris releases. For more details on the product and community, please visit http://www.opensolaris.org/os/communities/ohac .
The open-source product also has a comprehensive test suite that helps users test their deployments satisfactorily. For more details, please read http://opensolaris.org/os/community/ha-clusters/ohac/Documentation/Tests/.
For mission-critical environments, availability in the face of all types of failures is a very important criterion. Solaris Cluster is well designed to provide the highest availability for the Application Server by virtue of its integration with the Solaris OS, its stability, and an agent specifically designed for Sun Java System Application Server.
Solaris Cluster Engineering