Tuesday Sep 16, 2014

Oracle Solaris Cluster and Oracle GoldenGate

Detlef Ulherr


With Oracle Solaris Cluster we provide an agent for Oracle GoldenGate. This agent provides high availability for the GoldenGate application. With the Oracle Solaris Cluster GoldenGate agent you can configure the GoldenGate software to replicate databases controlled by Oracle Solaris Cluster. The replication partner can be located on a cluster node of the same cluster, on a different cluster, or on a non-clustered system. The Oracle Solaris Cluster agent for GoldenGate does not impose any restriction on the possible Oracle GoldenGate replication topologies, so with regard to replication topologies there is no difference between a clustered and a non-clustered system.

With this new agent you can integrate Oracle GoldenGate together with the replication source or replication target database in Oracle Solaris Cluster. To accomplish this, you must make the database highly available using an Oracle Solaris Cluster agent for your database. If no agent is available for this database, you can easily create one using the Generic Data Service version 2 shipped with Oracle Solaris Cluster 4.2.
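For example, a minimal Generic Data Service setup for such a database could look like the sketch below. ORCL.gds is the GDS version 2 resource type shipped with Oracle Solaris Cluster 4.2; the resource group name, resource name and the start, stop and probe scripts under /opt/mydb/bin are placeholders that you would replace with your own.

# clresourcetype register ORCL.gds
# clresourcegroup create mydb-rg
# clresource create -g mydb-rg -t ORCL.gds \
    -p Start_command="/opt/mydb/bin/db_start" \
    -p Stop_command="/opt/mydb/bin/db_stop" \
    -p Probe_command="/opt/mydb/bin/db_probe" \
    mydb-rs
# clresourcegroup online -M mydb-rg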

The Oracle Solaris Cluster agent for GoldenGate can be configured in failover and multiple-master configurations. Besides the obvious shared-storage topologies for failover configurations, you can implement shared-nothing topologies with multiple-master configurations, if the Oracle GoldenGate replication satisfies your requirements.
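The difference between the two setups shows up mainly in how the resource group hosting the GoldenGate resource is created. As a rough sketch, with hypothetical resource group names and a two-node cluster assumed:

Create a failover resource group (one primary node at a time, shared storage):
# clresourcegroup create gg-failover-rg
Create a multiple-master resource group (online on both nodes, shared-nothing storage):
# clresourcegroup create -p Maximum_primaries=2 -p Desired_primaries=2 gg-mm-rg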

The documentation for the GoldenGate agent is available at:


One possible configuration would be:

Friday Sep 12, 2014

Oracle Database 12c Agent: new features

The Oracle Solaris Cluster 4.2 release contains several enhancements to the Oracle Database-related agents. These enhancements improve availability by allowing finer-grained dependencies and by enabling new Oracle Database features.

Support for Policy-Managed Databases

Policy-managed databases are a feature that was introduced in Oracle Real Application Clusters 11g release 2. This feature enables the database administrator to establish policies that govern how many database instances are running in the cluster and on which nodes.

We have enhanced the existing SUNW.scalable_rac_server_proxy resource type to support policy-managed databases, in addition to the traditional administrator-managed databases. These changes require that existing resources of the SUNW.scalable_rac_server_proxy resource type be upgraded. Also, a manual step must be performed prior to the resource-type upgrade.

See "Upgrading Resources in Support for Oracle RAC" in Oracle Solaris Cluster Data Service for Oracle Real Application Clusters (http://docs.oracle.com/cd/E39579_01/html/E39656/gdwbm.html#scrolltoc) for procedures to recreate the Oracle Grid Infrastructure sun.storage_proxy_type resource type and resources, and then upgrade the SUNW.scalable_rac_server_proxy resource type.

Support for the Oracle RAC Services Feature

Enhancements have been made to the existing HA for Oracle External Proxy agent to enable the agent to work with the services feature of Oracle RAC. These changes enable an Oracle RAC service to be represented by a proxy resource in Oracle Solaris Cluster, so that other resources can establish dependencies at a finer granularity than the database.
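For example, an application resource that only needs one particular RAC service, rather than the whole database, can declare a dependency on the proxy resource that represents that service. The resource names below are hypothetical:

# clresource set -p Resource_dependencies_offline_restart=hr-svc-proxy-rs app-rs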

Support for CDBs and PDBs

Another benefit of the enhancement to support Oracle RAC services is the ability to support multitenant container databases (CDBs) and pluggable databases (PDBs), new features that were introduced in Oracle Database 12c. A PDB is represented as a database service, which means that it can be supported by creating an HA for Oracle External Proxy resource to represent it in the Oracle Solaris Cluster configuration.
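As a sketch, assuming a container database cdb1 with a pluggable database pdb1, you would first define an Oracle RAC service for the PDB and then create an HA for Oracle External Proxy resource for that service as described in the data service documentation; the database, service and instance names below are placeholders.

# srvctl add service -db cdb1 -service pdb1svc -pdb pdb1 -preferred cdb11 -available cdb12
# srvctl start service -db cdb1 -service pdb1svc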

Bob Bart
Oracle Solaris Cluster Engineering

Wednesday Aug 20, 2014

Oracle Solaris Cluster 4.2 Event and its SNMP Interface


The cluster event SNMP interface was first introduced in the Oracle Solaris Cluster 3.2 release. The details of the SNMP interface are described in the Oracle Solaris Cluster System Administration Guide and the Cluster 3.2 SNMP blog.

Prior to the Oracle Solaris Cluster 4.2 release, when the event SNMP interface was enabled, it took effect only on WARNING or higher severity events. The events with WARNING or higher severity are usually for the status change of a cluster component from ONLINE to OFFLINE. The interface worked like an alert/alarm interface when some components in the cluster were out of service (changed to OFFLINE). The consumers of this interface could not get notifications for all status changes and configuration changes in the cluster.

Cluster Event and its SNMP Interface in Oracle Solaris Cluster 4.2

The user model of the cluster event SNMP interface is the same as what was provided in the previous releases. The cluster event SNMP interface is not enabled by default on a freshly installed cluster; you can enable it by using the cluster event SNMP administration commands on any cluster node. Usually, you only need to enable it on one cluster node or on a subset of the cluster nodes because all cluster nodes get the same cluster events. When it is enabled, it is responsible for two basic tasks:
• Logs up to 100 most recent NOTICE or higher severity events to the MIB.
• Sends SNMP traps to the hosts that are configured to receive the above events.

The changes in the Oracle Solaris Cluster 4.2 release are:
1) Introduction of the NOTICE severity for the cluster configuration and status change events.
The NOTICE severity is introduced for the cluster events in the 4.2 release. It is the severity between the INFO and WARNING severities. Now all severities for the cluster events are (from low to high):
• INFO (not exposed to the SNMP interface)
• NOTICE (newly introduced in the 4.2 release)
• WARNING
• ERROR
• CRITICAL
• FATAL

In the 4.2 release, the cluster event system is enhanced to make sure that at least one event with the NOTICE or a higher severity is generated when there is a configuration or status change of a cluster component instance. In other words, the cluster events with the NOTICE or higher severities cover all status and configuration changes in the cluster (including all component instances). A cluster component instance here refers to an instance of one of the following cluster components: node, quorum, resource group, resource, network interface, device group, disk, zone cluster and geo cluster heartbeat. For example, pnode1 is an instance of the cluster node component, and oracleRG is an instance of the cluster resource group component.

With the introduction of the NOTICE severity event, when the cluster event SNMP interface is enabled, the consumers of the SNMP interface will get notifications for all status and configuration changes in the cluster. A third-party system management platform with the cluster SNMP interface integration can generate alarms and clear alarms programmatically, because it can get notifications for the status change from ONLINE to OFFLINE and also from OFFLINE to ONLINE.

2) Customization for the cluster event SNMP interface
• The number of events logged to the MIB is 100. When the number of events stored in the MIB reaches 100 and a new qualified event arrives, the oldest event is removed before the new event is stored to the MIB (FIFO, first in, first out). 100 is the default and minimum value for the number of events stored in the MIB. It can be changed by setting the log_number property value using the clsnmpmib command. The maximum number that can be set for the property is 500.

• The cluster event SNMP interface takes effect on NOTICE or higher severity events. The NOTICE severity is also the default and lowest event severity for the SNMP interface. The SNMP interface can be configured to take effect only on higher severity events, such as WARNING or higher severity events, by setting the min_severity property to WARNING. When the min_severity property is set to WARNING, the cluster event SNMP interface behaves the same as in the previous releases (prior to the 4.2 release).

• Set the number of events stored in the MIB to 200
# clsnmpmib set -p log_number=200 event
• Set the interface to take effect on WARNING or higher severity events.
# clsnmpmib set -p min_severity=WARNING event

Administering the Cluster Event SNMP Interface

Oracle Solaris Cluster provides the following three commands to administer the SNMP interface.
clsnmpmib: administers the SNMP interface and the MIB configuration.
clsnmphost: administers the hosts for the SNMP traps.
clsnmpuser: administers the SNMP users (specific to the SNMP v3 protocol).

Only clsnmpmib is changed in the 4.2 release to support the aforementioned customization of the SNMP interface. Here are some simple examples using the commands.

1. Enable the cluster event SNMP interface on the local node
# clsnmpmib enable event
2. Display the status of the cluster event SNMP interface on the local node
# clsnmpmib show -v
3. Configure my_host to receive the cluster event SNMP traps.
# clsnmphost add my_host

The cluster event SNMP interface uses the common agent container SNMP adaptor, which is based on the JDMK SNMP implementation, as its SNMP agent infrastructure. By default, the port number for the SNMP MIB is 11161, and the port number for the SNMP traps is 11162. The port numbers can be changed by using the cacaoadm command. For example:
• Print all changeable parameters. The output includes the snmp-adaptor-port and snmp-adaptor-trap-port properties.
# cacaoadm list-params
• Set the SNMP MIB port number to 1161.
# cacaoadm set-param snmp-adaptor-port=1161
• Set the SNMP trap port number to 1162.
# cacaoadm set-param snmp-adaptor-trap-port=1162

The cluster event SNMP MIB is defined in sun-cluster-event-mib.mib, which is located in the /usr/cluster/lib/mib directory. Its OID is defined in the MIB file and can be used to walk through the MIB data. Again, for more detailed information about the cluster event SNMP interface, please see the Oracle Solaris Cluster 4.2 System Administration Guide.
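To verify the MIB from a management station, you can walk the agent on the MIB port with a standard SNMP tool. The host name below is a placeholder and the community string assumes a default read-only setup:

# snmpwalk -v 2c -c public clusternode1:11161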

- Leland Chen 

Monday Aug 18, 2014

SCHA API for resource group failover / switchover history

The Oracle Solaris Cluster framework keeps an internal log of cluster events, including switchovers and failovers of resource groups. These logs can be useful to Oracle support engineers for diagnosing cluster behavior. However, until now, there was no external interface to access the event history. Oracle Solaris Cluster 4.2 provides a new API option for viewing the recent history of resource group switchovers in a program-parsable format.

Oracle Solaris Cluster 4.2 provides a new option tag argument, RG_FAILOVER_LOG, for the existing API command scha_cluster_get, which can be used to list recent failover/switchover events for resource groups.

The command usage is as shown below:

# scha_cluster_get -O RG_FAILOVER_LOG number_of_days

number_of_days : the number of days to be considered for scanning the historical logs.

The command returns a list of events in the following format. Each field is separated by a semi-colon [;]:

resource_group_name;source_nodes;target_nodes;time_stamp

source_nodes: node names from which the resource group failed over or was switched over manually.

target_nodes: node names to which the resource group failed over or was switched over manually.

There is a corresponding enhancement in the C API function scha_cluster_get() which uses the SCHA_RG_FAILOVER_LOG query tag.

In the example below, geo-infrastructure (a failover resource group), geo-clusterstate (a scalable resource group), oracle-rg (a failover resource group), asm-dg-rg (a scalable resource group) and asm-inst-rg (a scalable resource group) are part of a Geographic Edition setup.

# /usr/cluster/bin/scha_cluster_get -O RG_FAILOVER_LOG 3
geo-infrastructure;schost1c;;Mon Jul 21 15:51:51 2014
geo-clusterstate;schost2c,schost1c;schost2c;Mon Jul 21 15:52:26 2014
oracle-rg;schost1c;;Mon Jul 21 15:54:31 2014
asm-dg-rg;schost2c,schost1c;schost2c;Mon Jul 21 15:54:58 2014
asm-inst-rg;schost2c,schost1c;schost2c;Mon Jul 21 15:56:11 2014
oracle-rg;;schost2c;Mon Jul 21 15:58:51 2014
geo-infrastructure;;schost2c;Mon Jul 21 15:59:19 2014
geo-clusterstate;schost2c;schost2c,schost1c;Mon Jul 21 16:01:51 2014
asm-inst-rg;schost2c;schost2c,schost1c;Mon Jul 21 16:01:10 2014
asm-dg-rg;schost2c;schost2c,schost1c;Mon Jul 21 16:02:10 2014
oracle-rg;schost2c;;Tue Jul 22 16:58:02 2014
oracle-rg;;schost1c;Tue Jul 22 16:59:05 2014
oracle-rg;schost1c;schost1c;Tue Jul 22 17:05:33 2014

Note that in the output some of the entries might have an empty string in the source_nodes field. Such entries correspond to events in which the resource group was switched online manually or was brought online during a cluster boot-up. Similarly, an empty target_nodes list indicates an event in which the resource group went offline.
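Because the output is semi-colon separated, it is easy to post-process with standard tools. For example, to list only the oracle-rg events from the last seven days:

# /usr/cluster/bin/scha_cluster_get -O RG_FAILOVER_LOG 7 | awk -F';' '$1 == "oracle-rg" { print $2 " -> " $3 " (" $4 ")" }'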

- Arpit Gupta, Harish Mallya

Wednesday Aug 13, 2014

Support for Kernel Zones with the Oracle Solaris Cluster 4.2 Data Service for Oracle Solaris Zones

The Oracle Solaris Cluster Data Service for Oracle Solaris Zones is enhanced to support Oracle Solaris Kernel Zones (also called solaris-kz branded zones) with Oracle Solaris 11.2.

This data service provides high availability for Oracle Solaris Zones through three components in a failover or multi-master configuration:

  • sczbt: The orderly booting, shutdown and fault monitoring of an Oracle Solaris zone.
  • sczsh: The orderly startup, shutdown and fault monitoring of an application within the Oracle Solaris zone (managed by sczbt), using scripts or commands.
  • sczsmf: The orderly startup, shutdown and fault monitoring of an Oracle Solaris Service Management Facility (SMF) service within the Oracle Solaris zone managed by sczbt.

With Oracle Solaris Cluster 4.0 and 4.1, the sczbt component supports cold migration (boot and shutdown) of solaris and solaris10 branded zones.

The sczbt component now additionally supports cold and warm migration of kernel zones on Oracle Solaris 11.2. Using warm migration (suspend and resume of a kernel zone) allows you to minimize planned downtime, for example when a cluster node is overloaded or needs to be shut down for maintenance.

By deploying kernel zones under control of the sczbt data service, system administrators can provide highly available virtual machines with the added flexibility of consolidating multiple zones with separate kernel patch levels on a single system. At the same time, the administrator can very efficiently consolidate multiple workloads onto a server.

The three data service components are now implemented as their own dedicated resource types as follows:

  • sczbt - ORCL.ha-zone_sczbt
  • sczsh - ORCL.ha-zone_sczsh
  • sczsmf - ORCL.ha-zone_sczsmf

Resource configuration is still done by amending the component configuration file and providing it to the component register script. There is no longer a need to configure or maintain a parameter file for the sczbt or sczsh component.
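For example, registering an sczbt resource for a kernel zone follows the familiar pattern sketched below. The paths match the layout used by earlier releases of the agent, and the parameter names should be verified against the configuration file shipped with your release; the zone, resource and resource group names are placeholders.

# cd /opt/SUNWsczone/sczbt/util
# cp sczbt_config sczbt_config.kz1
Edit sczbt_config.kz1 and set at least RS, RG, Zonename and Zonebrand (solaris-kz for a kernel zone), plus the migration-related parameters documented for warm migration.
# ./sczbt_register -f ./sczbt_config.kz1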

If existing deployments of the data service components are upgraded to Oracle Solaris Cluster 4.2, there is no need to re-register the resources. The previous SUNW.gds-based component resources continue to run unchanged.

For more details, please refer to the Oracle Solaris Cluster Data Service for Oracle Solaris Zones Guide.

Thorsten Früauf
Oracle Solaris Cluster Engineering

