Thursday Aug 07, 2014

Solaris Unified Archive Support in Zone Clusters

Terry Fu

terry.fu@oracle.com

Introduction

Oracle Solaris Cluster version 4.2 software supports using Solaris Unified Archives to configure and install zone clusters. This new feature makes deploying a zone cluster with your customized configuration and application stack much easier and faster. You can now create a unified archive from a zone that has your application stack installed, and then easily deploy that archive into a zone cluster.

This blog introduces how to create unified archives for zone clusters, and how to configure and install a zone cluster from a unified archive.

How to Create Unified Archives for Zone Clusters

Solaris 11.2 introduced the unified archive, a new native archive file type. You create unified archives from a deployed Solaris instance, and the archive can include or exclude any zones or zone clusters within that instance. Unified archives are created with the archiveadm(1m) utility. You can check out the blog on Unified Archives in Solaris 11.2 for more information.

There are two types of Solaris Unified Archives: clone archives and recovery archives. A clone archive created for a global zone contains, by default, every system present on the host, which includes the global zone itself and every non-global zone. A recovery archive contains a single deployable system, which can be the global zone without any non-global zones, a non-global zone, or the global zone together with its existing non-global zones.

Both clone and recovery archives are supported for zone cluster configuration and installation. The unified archive used for a zone cluster can be created from either a global zone or a non-global zone.
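For example, here is a minimal sketch of creating each archive type, assuming a source zone named myzone and hypothetical archive paths (see archiveadm(1m) for the full option set):

phys-schost-1# archiveadm create -z myzone /var/tmp/myzone-clone.uar
phys-schost-1# archiveadm create -r -z myzone /var/tmp/myzone-recovery.uar

Here the -z option limits the archive to the named zone, and -r creates a recovery archive instead of the default clone archive.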

How to Configure and Install a Zone Cluster from Unified Archive

The clzonecluster(1cl) utility is used to configure and install zone clusters. To configure a zone cluster from a unified archive, use the -a option with the create subcommand of clzonecluster configure. This option is supported in both interactive and non-interactive modes.

Though the unified archive contains the configuration of the archived zone, you will need to adjust some properties to make the zone cluster configuration valid. When configuring a zone cluster from a unified archive, you must set a new zone path and new node-scope properties, because the old values in the unified archive may not be valid on the destination system. If the unified archive was created from a non-clustered zone, you must also set the cluster attribute and set the enable_priv_net property to true. Beyond that, you can change any other zone property as needed.

You can use the info command in the interactive mode to view the current configuration of the zone cluster.

The following shows an example of configuring a zone cluster from a unified archive.

phys-schost-1# clzonecluster configure sczone
sczone: No such zone cluster configured
Use 'create' to begin configuring a new zone cluster.
clzc:sczone> create -a absolute_path_to_archive -z archived_zone_1
clzc:sczone> set zonepath=/zones/sczone
clzc:sczone> info
zonename: sczone
zonepath: /zones/sczone
autoboot: true
hostid:
brand: solaris
bootargs:
pool:
limitpriv:
scheduling-class:
ip-type: exclusive
enable_priv_net: true
attr:
    name: cluster
    type: boolean
    value: true
clzc:sczone> add node
clzc:sczone:node> set physical-host=psoft1
clzc:sczone:node> set hostname=zc-host-1
clzc:sczone:node> add net
clzc:sczone:node:net> set address=vzsoft1a
clzc:sczone:node:net> set physical=sc_ipmp0
clzc:sczone:node:net> end
clzc:sczone:node> end
clzc:sczone> add node
clzc:sczone:node> set physical-host=psoft2
clzc:sczone:node> set hostname=zc-host-2
clzc:sczone:node> add net
clzc:sczone:node:net> set address=vzsoft2a
clzc:sczone:node:net> set physical=sc_ipmp0
clzc:sczone:node:net> end
clzc:sczone:node> end
clzc:sczone> commit
clzc:sczone> exit

The zone cluster is now configured. The following command, run on a global-cluster node, installs the zone cluster from a unified archive.

phys-schost-1# clzonecluster install -a absolute_path_to_archive -z archived-zone sczone

The zone cluster is now installed. If the unified archive used for the zone cluster installation was created from a non-clustered zone, you will need to install the Oracle Solaris Cluster packages on the zone nodes before forming a zone cluster, as sketched below.
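As a rough sketch, assuming each zone node can reach an IPS repository that provides the cluster packages, you could log in to each zone node and install a cluster group package (the exact package to use, for example ha-cluster-framework-full, depends on which services you plan to run):

zc-host-1# pkg install ha-cluster-framework-full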

Note that configuring and installing a zone cluster are independent of each other, so you can use the Solaris Unified Archive feature in either or both steps. Also, the unified archive used to configure the zone cluster does not have to be the same one used to install it.

Next Steps

After configuring and installing the zone cluster from unified archives, you can now boot the zone cluster.
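For example, to boot the zone cluster from the earlier example and verify its state on each node:

phys-schost-1# clzonecluster boot sczone
phys-schost-1# clzonecluster status sczone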

If you would like to learn more about the use of Solaris Unified Archives for configuring and installing zone clusters in Solaris Cluster 4.2, see clzonecluster(1cl) and Oracle Solaris Cluster System Administration Guide.

Oracle's Siebel 8.2.2 support on Oracle Solaris Cluster software

Swathi Devulapalli

The Oracle Solaris Cluster 4.1 data service for Siebel on SPARC now supports Oracle's Siebel 8.2.2. It is now possible to configure the Siebel Gateway Server and Siebel Server components of the Siebel 8.2.2 software for failover and high availability.

What it is

Siebel is a leading CRM solution that delivers a combination of transactional, analytical, and engagement features to manage all customer-facing operations. Siebel 8.2.2 is the first Siebel version that is certified and released for Oracle Solaris 11 software on the SPARC platform.

The Oracle Solaris Cluster 4.1 SRU3 software on Oracle Solaris 11 provides a high-availability (HA) data service for Siebel 8.2.2. The Oracle Solaris Cluster data service for Siebel provides fault monitoring and automatic failover of the Siebel application. The data service makes two essential components of the Siebel application highly available: the Siebel Gateway Server and the Siebel Server. A resource of type SUNW.sblgtwy monitors the Siebel Gateway Server, and a resource of type SUNW.sblsrvr monitors the Siebel Server.

With the support of Siebel 8.2.2 on Oracle Solaris 11, the features of Oracle Solaris 11 and Oracle Solaris Cluster 4.1 are available to the Siebel 8.2.2 HA agent. The HA solution for the Siebel stack can be configured on a complete Oracle product stack, with an Oracle Solaris Cluster HA solution available on each tier: for example, the Oracle Solaris Cluster HA Oracle agent in the database tier, the Oracle Solaris Cluster HA Siebel agent in the application tier, and the Oracle Solaris Cluster HA Oracle iPlanet Web Server agent in the web tier.

What’s new?

1. A new extension property, Siebel_version, has been introduced. This property indicates the Siebel server version number, for example, 8.2.2.

The following examples illustrate the usage of the Siebel_version property when creating the Siebel Gateway Server and Siebel Server resources.

Creation of Siebel Gateway Server resource:
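A minimal sketch, assuming a failover resource group named siebel-rg has already been created; the resource name is hypothetical, and other required resource properties are omitted for brevity:

phys-schost-1# clresource create -g siebel-rg -t SUNW.sblgtwy \
-p Siebel_version=8.2.2 siebel-gtwy-rs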


Creation of Siebel Server resource:
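A similar sketch for the Siebel Server resource, under the same assumptions:

phys-schost-1# clresource create -g siebel-rg -t SUNW.sblsrvr \
-p Siebel_version=8.2.2 siebel-srvr-rs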


2. Encryption facility for the HA-Siebel configuration files

The HA-Siebel solution uses the database user/password and the Siebel user/password to execute its start, stop, and monitor methods. These passwords are stored in the scsblconfig and scgtwyconfig files, located under the Siebel Server installation directory and the Siebel Gateway Server installation directory, respectively. The new HA-Siebel data service provides an option to encrypt these files; the agent decrypts them before use.

The Oracle Solaris Cluster administrator encrypts the configuration files by following the steps provided in the HA-Siebel documentation. The HA-Siebel agent decrypts these files and uses the entries while executing the start, stop, and monitor methods.

For detailed information on the configuration of encrypted files, refer to Configuring the Oracle Solaris Cluster Data Service for Siebel (Article 1509776.1) posted on My Oracle Support at http://support.oracle.com. You must have an Oracle support contract to access the site. Alternatively, you can also refer to the Oracle Solaris Cluster Data Service for Siebel Guide.

HA-LDOM live migration in Oracle Solaris Cluster 4.2

Most Oracle Solaris Cluster resource types clearly separate their stopping and starting actions. However, the HA-LDom agent (HA Logical Domain, the HA for Oracle VM Server for SPARC data service) is capable of performing live migration, in which the entire switchover of the resource is effectively performed while the cluster framework is stopping the resource. Suppose that the LDom resource starts out on node A and the administrator executes a switchover to node B. While the cluster framework is stopping the resource on node A, LDom live migration is actually migrating the LDom from node A to node B.

A problem previously arose with this approach if another resource group declared a strong negative (SN) affinity for the LDom resource's group, or if load limits were in use. The LDom actually migrates onto the target node before the cluster framework knows that it has started there, so the cluster framework will not have had a chance to evict any resource groups due to strong negative affinities or load limits. There will be an interval of time in which the hard load limit or strong negative affinity is violated. This may overload the node and consequently cause the LDom live migration attempt to fail.

Oracle Solaris Cluster 4.2 eliminates this problem by providing a mechanism in the cluster framework by which excess workload can be migrated off of the target node of the switchover before the live migration of the LDom is executed. This is accomplished through the following two enhancements:

  1. A new resource property, Pre_evict, which ensures seamless live migration of the LDom. The Pre_evict property allows the cluster framework to move excess workload off of the target node before the switchover begins, which allows the live migration to succeed more often.

  2. A new scha_resourcegroup_get query tag, SCHA_TARGET_NODES, which allows a data service to perform the live migration by informing it of the target node to be used for the migration.

For the convenience of users and administrators, the HA-LDom agent in Oracle Solaris Cluster 4.2 sets Pre_evict to True by default, and the agent is enhanced to take advantage of these new features (the Pre_evict property and the SCHA_TARGET_NODES query), as sketched below.
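As a sketch, assuming an HA-LDom resource named ldom-rs in resource group ldom-rg (both names hypothetical): you can confirm the property with clresource, and a data service method can query the switchover target with scha_resourcegroup_get, where the command-line optag conventionally drops the SCHA_ prefix:

phys-schost-1# clresource show -p Pre_evict ldom-rs
phys-schost-1# scha_resourcegroup_get -O TARGET_NODES -G ldom-rg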

For more information, refer to the following links:

- Harish Mallya