Thursday Jan 15, 2015

Managing a remote Oracle Database instance with "Geographic Edition"

Another new feature of Oracle Solaris Cluster Geographic Edition

A few weeks ago I wrote about a new feature for Geographic Edition: DR Orchestration, which we added in Oracle Solaris Cluster 4.2.  With the latest update to 4.2, SRU1, we have completed testing of another Geographic Edition feature that makes DR Orchestration even more useful: we can now support Oracle Data Guard replication control for a remote database.

As I described before, a multigroup can combine several protection groups so that they can be switched together in a coordinated manner. That enables a service constructed out of multiple tiers, on multiple clusters, to be managed as a unit.

Because each tier is represented by a protection group, it might seem that every tier must be running Oracle Solaris Cluster, which could be inconvenient if one of the tiers runs only an Oracle RAC database. In fact, there is no absolute requirement to use Oracle Solaris Cluster in that configuration; the systems might not be running any cluster software at all, or might be running only Oracle Clusterware for Oracle RAC.  An example of such a configuration is the Oracle SuperCluster Engineered System.

In practice, however, we can use some features of Oracle Solaris Cluster and the Oracle Database to include such a tier in an orchestrated Geographic Edition configuration.  The two features we're using are:

  • The Oracle Solaris Cluster data service for Oracle External Proxy (HA for Oracle External Proxy)
  • Remote database connectivity

HA for Oracle External Proxy

This is a data service that reflects the status of an Oracle database running on a remote system, so that local Oracle Solaris Cluster resources and resource groups can declare dependencies on it.  You can find more information about it at: http://docs.oracle.com/cd/E39579_01/html/E52343/index.html

Remote Database Connectivity

This is a standard feature of the Oracle Database.  A service name that is configured in the TNSNAMES.ORA file can identify a database running on a remote system, so that control operations from local sqlplus and dgmgrl (Data Guard broker) commands operate on the remote database instance.  To take advantage of this you don't even need to install a full Oracle database locally; installing the Oracle Database Client software is sufficient, see: http://docs.oracle.com/database/121/SSCLI/toc.htm
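For illustration, here is a minimal sketch of such a TNSNAMES.ORA entry. The service name REMOTEDB, the host dbhost1.example.com, and the database service sales.example.com are hypothetical placeholders:

  REMOTEDB =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost1.example.com)(PORT = 1521))
      (CONNECT_DATA =
        (SERVICE_NAME = sales.example.com)
      )
    )

With that entry in place, local commands operate on the remote instance, for example:

# sqlplus sys@REMOTEDB as sysdba
# dgmgrl sys@REMOTEDB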

Put these together, and...

By using these two features together, you can now create a local Geographic Edition protection group (PG) that uses Oracle Data Guard replication but manages a database instance on a remote system.  You just need to provide the name of a resource group that contains an external proxy resource instead of an Oracle RAC or failover database resource.  As far as the cluster configuration is concerned, this behaves just like an ordinary local protection group, so it can be included in a multigroup as part of a DR Orchestration configuration. When you switch over that multigroup, the software contacts all the systems configured in the protection group, local and remote.
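In outline, the protection group is created and populated like any other Oracle Data Guard protection group; only the resource group it manages differs. A hedged sketch, in which the partnership sales-ps, the protection group sales-pg, and the resource group oep-rg (containing the Oracle External Proxy resource) are hypothetical placeholders, and the Data Guard broker configuration properties are omitted:

# geopg create -s sales-ps -o primary -d odg sales-pg
# geopg add-resource-group oep-rg sales-pg
# geopg start -e global sales-pg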

Now you can fully orchestrate all tiers of a service and control them from a system that is part of an Oracle Solaris Cluster configuration, even if the database tier is on a system that isn't running Oracle Solaris Cluster.

Geographic Edition team
Oracle Solaris Cluster Engineering

Tuesday Nov 25, 2014

Disaster Recovery Orchestration

With the increasing use of virtualization and cloud-based services, a service might no longer be confined to a single cluster. Disaster Recovery Orchestration enables an administrator to control multiple service tiers, on multiple clusters, with a single command.

Tuesday Sep 16, 2014

Oracle Solaris Cluster and Oracle GoldenGate

Detlef Ulherr

detlef.ulherr@oracle.com

With Oracle Solaris Cluster we provide an agent for Oracle GoldenGate, which makes the GoldenGate application highly available.  With the Oracle Solaris Cluster GoldenGate agent you can configure the GoldenGate software to replicate databases controlled by Oracle Solaris Cluster. The replication partner can be located on a node of the same cluster, on a different cluster, or on a non-clustered system. The agent does not impose any restrictions on the possible Oracle GoldenGate replication topologies, so with regard to replication topologies there is no difference between a clustered and a non-clustered system.

With this new agent you can integrate Oracle GoldenGate, together with the replication source or replication target database, into an Oracle Solaris Cluster configuration. To accomplish this you must make the database highly available using an Oracle Solaris Cluster agent for your database. If no agent is available for your database, you can easily create one using the Generic Data Service version 2 (GDSv2) shipped with Oracle Solaris Cluster 4.2.
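As a hedged illustration of the GDSv2 route, assuming the GDSv2 resource type is registered as ORCL.gds, and with hypothetical resource group, resource, and script names:

# clresourcetype register ORCL.gds
# clresourcegroup create db-rg
# clresource create -g db-rg -t ORCL.gds \
    -p start_command="/opt/mydb/bin/start_db" \
    -p stop_command="/opt/mydb/bin/stop_db" \
    -p probe_command="/opt/mydb/bin/probe_db" \
    mydb-rs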

The Oracle Solaris Cluster agent for GoldenGate can be configured in failover and multiple-master configurations. Besides the obvious shared-storage topologies for failover configurations, you can implement shared-nothing topologies with multiple-master configurations, if Oracle GoldenGate replication satisfies your requirements.

The documentation for the GoldenGate agent is available at:

http://docs.oracle.com/cd/E39579_01/html/E48593/index.html

One possible configuration is a failover resource group that contains the GoldenGate resource together with the database resource it replicates. A minimal sketch follows; the resource type name ORCL.goldengate is an assumption, and the resource and resource group names are hypothetical placeholders (see the agent documentation above for the actual type and its properties):
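# clresourcegroup create gg-rg
# clresource create -g gg-rg -t ORCL.goldengate \
    -p resource_dependencies=mydb-rs \
    gg-rs
# clresourcegroup online -eM gg-rg

The dependency on the database resource mydb-rs ensures that GoldenGate is started only after its database is online.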

Friday Sep 12, 2014

Oracle Database 12c Agent: new features

The Oracle Solaris Cluster 4.2 release contains several enhancements to the Oracle Database-related agents. These enhancements improve availability by allowing finer-grained dependencies and by enabling new Oracle Database features.

Support for Policy-Managed Databases

Policy-managed databases are a feature that was introduced in Oracle Real Application Clusters 11g release 2. This feature enables the database administrator to establish policies that govern how many database instances run in the cluster and on which nodes.

We have enhanced the existing SUNW.scalable_rac_server_proxy resource type to support policy-managed databases, in addition to the traditional administrator-managed databases. These changes require that existing resources of the SUNW.scalable_rac_server_proxy resource type be upgraded, and a manual step must be performed before the resource-type upgrade.

See "Upgrading Resources in Support for Oracle RAC" in Oracle Solaris Cluster Data Service for Oracle Real Application Clusters (http://docs.oracle.com/cd/E39579_01/html/E39656/gdwbm.html#scrolltoc) for procedures to recreate the Oracle Grid Infrastructure sun.storage_proxy_type resource type and resources, and then upgrade the SUNW.scalable_rac_server_proxy resource type.
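In outline, the upgrade itself follows the standard resource-type upgrade pattern, sketched here with a placeholder type version and a hypothetical resource name rac-proxy-rs; the documented procedure above remains authoritative:

# clresourcetype register SUNW.scalable_rac_server_proxy
# clresource set -p Type_version=<new-version> rac-proxy-rs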

Support for the Oracle RAC Services Feature

Enhancements have been made to the existing HA for Oracle External Proxy agent so that it works with the services feature of Oracle RAC. These changes enable an Oracle RAC service to be represented by a proxy resource in Oracle Solaris Cluster, so that other resources can establish dependencies at a finer granularity than the whole database.
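The Oracle RAC service itself is created with the standard srvctl utility. For example, with hypothetical database, service, and instance names:

# srvctl add service -db orcl -service sales -preferred orcl1 -available orcl2
# srvctl start service -db orcl -service sales

A proxy resource representing the sales service can then carry dependencies from resources that need just that service rather than the whole database.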

Support for CDBs and PDBs

Another benefit of the enhancement to support Oracle RAC services is the ability to support multitenant container databases (CDBs) and pluggable databases (PDBs), new features that were introduced in Oracle Database 12c. A PDB is represented as a database service, which means it can be represented in the Oracle Solaris Cluster configuration by creating an HA for Oracle External Proxy resource for its service.
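A hedged sketch of that pattern; the resource type name shown is an assumption, the resource group and resource names are placeholders, and the required properties that identify the remote database service are omitted (consult the HA for Oracle External Proxy documentation for the actual type and its required properties):

# clresourcegroup create pdb-proxy-rg
# clresource create -g pdb-proxy-rg -t ORCL.oracle_external_proxy pdb1-svc-rs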


Bob Bart
Oracle Solaris Cluster Engineering

Wednesday Aug 20, 2014

Oracle Solaris Cluster 4.2 Event and its SNMP Interface

Background

The cluster event SNMP interface was first introduced in the Oracle Solaris Cluster 3.2 release. The details of the SNMP interface are described in the Oracle Solaris Cluster System Administration Guide and the Cluster 3.2 SNMP blog.

Prior to the Oracle Solaris Cluster 4.2 release, when the event SNMP interface was enabled, it took effect only on events of WARNING or higher severity. Events with WARNING or higher severity usually report the status change of a cluster component from ONLINE to OFFLINE, so the interface worked like an alert/alarm interface that fired when components in the cluster went out of service (changed to OFFLINE). Consumers of this interface could not get notification of all status and configuration changes in the cluster.

Cluster Event and its SNMP Interface in Oracle Solaris Cluster 4.2

The user model of the cluster event SNMP interface is the same as in previous releases. The cluster event SNMP interface is not enabled by default on a freshly installed cluster; you can enable it by using the cluster event SNMP administration commands on any cluster node. Usually you only need to enable it on one cluster node, or on a subset of nodes, because all cluster nodes receive the same cluster events. When it is enabled, it is responsible for two basic tasks:
• Logs up to the 100 most recent events of NOTICE or higher severity to the MIB.
• Sends SNMP traps to the hosts that are configured to receive the above events.


The changes in the Oracle Solaris Cluster 4.2 release are:
1) Introduction of the NOTICE severity for cluster configuration and status change events.
The NOTICE severity is introduced for cluster events in the 4.2 release. It falls between the INFO and WARNING severities. All severities for cluster events are now (from low to high):
• INFO (not exposed to the SNMP interface)
• NOTICE (newly introduced in the 4.2 release)
• WARNING
• ERROR
• CRITICAL
• FATAL

In the 4.2 release, the cluster event system is enhanced to ensure that at least one event of NOTICE or higher severity is generated whenever a configuration or status change occurs on a cluster component instance. In other words, the cluster events with NOTICE or higher severity cover all status and configuration changes in the cluster, including all component instances. A cluster component instance here is an instance of one of the following cluster components: node, quorum, resource group, resource, network interface, device group, disk, zone cluster, and geo cluster heartbeat. For example, pnode1 is an instance of the cluster node component, and oracleRG is an instance of the cluster resource group component.

With the introduction of the NOTICE severity, when the cluster event SNMP interface is enabled, consumers of the SNMP interface get notification of all status and configuration changes in the cluster. A third-party system management platform that integrates with the cluster SNMP interface can raise and clear alarms programmatically, because it receives notifications for status changes from ONLINE to OFFLINE and also from OFFLINE to ONLINE.

2) Customization for the cluster event SNMP interface
• The number of events logged to the MIB is 100 by default. When the number of events stored in the MIB reaches 100 and a new qualified event arrives, the oldest event is removed before the new event is stored (first in, first out). 100 is both the default and the minimum value for the number of events stored in the MIB; it can be changed by setting the log_number property with the clsnmpmib command. The maximum value that can be set for the property is 500.

• The cluster event SNMP interface takes effect on events of NOTICE or higher severity. NOTICE is also the default and lowest event severity for the SNMP interface. The interface can be configured to take effect only on events of higher severity, such as WARNING or above, by setting the min_severity property to WARNING. With min_severity set to WARNING, the cluster event SNMP interface behaves the same as in releases prior to 4.2.

Examples:
• Set the number of events stored in the MIB to 200
# clsnmpmib set -p log_number=200 event
• Set the interface to take effect on WARNING or higher severity events.
# clsnmpmib set -p min_severity=WARNING event

Administering the Cluster Event SNMP Interface

Oracle Solaris Cluster provides the following three commands to administer the SNMP interface.
clsnmpmib: administers the SNMP interface and the MIB configuration
clsnmphost: administers hosts for the SNMP traps
clsnmpuser: administers SNMP users (specific to the SNMP v3 protocol)

Only clsnmpmib has changed in the 4.2 release, to support the customization of the SNMP interface described above. Here are some simple examples using the commands.

Examples:
1. Enable the cluster event SNMP interface on the local node
# clsnmpmib enable event
2. Display the status of the cluster event SNMP interface on the local node
# clsnmpmib show -v
3. Configure my_host to receive the cluster event SNMP traps.
# clsnmphost add my_host

The cluster event SNMP interface uses the common agent container SNMP adaptor, which is based on the JDMK SNMP implementation, as its SNMP agent infrastructure. By default, the port number for the SNMP MIB is 11161, and the port number for the SNMP traps is 11162. The port numbers can be changed by using the cacaoadm command. For example:
# cacaoadm list-params
Prints all changeable parameters. The output includes the snmp-adaptor-port and snmp-adaptor-trap-port properties.
# cacaoadm set-param snmp-adaptor-port=1161
Sets the SNMP MIB port number to 1161.
# cacaoadm set-param snmp-adaptor-trap-port=1162
Sets the SNMP trap port number to 1162.

The cluster event SNMP MIB is defined in sun-cluster-event-mib.mib, which is located in the /usr/cluster/lib/mib directory. Its OID is 1.3.6.1.4.1.42.2.80, which can be used to walk through the MIB data. Again, for more detailed information about the cluster event SNMP interface, please see the Oracle Solaris Cluster 4.2 System Administration Guide.
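For example, the MIB can be walked with the standard Net-SNMP snmpwalk utility; the community string public and the node name here are assumptions, so substitute the values configured for your SNMP agent:

# snmpwalk -v 2c -c public pnode1:11161 1.3.6.1.4.1.42.2.80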

- Leland Chen 
