Sunday May 17, 2015

New white paper available: Providing High Availability to the OpenStack Cloud Controller on Oracle Solaris with Oracle Solaris Cluster

Oracle Solaris delivers a complete OpenStack distribution that is integrated with its core technologies, such as Oracle Solaris Zones, the ZFS file system, and the Image Packaging System (IPS). OpenStack in Oracle Solaris 11.2 helps IT organizations create an enterprise-ready Infrastructure as a Service (IaaS) cloud, so that users can quickly deploy virtual networking and compute resources by using a centralized web-based portal.

Of course any enterprise-class OpenStack deployment requires a highly available OpenStack infrastructure that can sustain individual system failures.

Oracle Solaris Cluster is deeply integrated with Oracle Solaris technologies and delivers high availability to Oracle Solaris based OpenStack services through a rich set of features.

The primary goals of the Oracle Solaris Cluster software are to maximize service availability through fine-grained monitoring and automated recovery of critical services, and to prevent data corruption through proper fencing. The covered services include the networking, storage, and virtualization components used by the OpenStack cloud controller, as well as the cloud controller's own components.

Our team has created a new white paper to specifically explain how to provide high availability to the OpenStack cloud controller on Oracle Solaris with Oracle Solaris Cluster.

After describing an example deployment of a highly available OpenStack infrastructure on physical nodes, the paper provides a detailed and structured walk-through of the high availability cloud controller configuration. It discusses each OpenStack component that runs on the cloud controller, with explicit steps on how to create and configure these components under cluster control. The deployment example achieves secure isolation between services, while defining all the dependencies required for proper startup and shutdown ordering between services, as orchestrated by the cluster framework.

The white paper is linked from the OpenStack Cloud Management page as well as from the Oracle Solaris Cluster Technical Resources page on the Oracle Technology Network portal.

Thorsten Früauf
Oracle Solaris Cluster Engineering

Wednesday Aug 20, 2014

Oracle Solaris Cluster 4.2 Event and its SNMP Interface

Background

The cluster event SNMP interface was first introduced in the Oracle Solaris Cluster 3.2 release. The details of the SNMP interface are described in the Oracle Solaris Cluster System Administration Guide and the Cluster 3.2 SNMP blog.

Prior to the Oracle Solaris Cluster 4.2 release, when the event SNMP interface was enabled, it took effect only on WARNING or higher severity events. Events with WARNING or higher severity usually indicate a status change of a cluster component from ONLINE to OFFLINE, so the interface worked like an alert/alarm interface when some components in the cluster were out of service (changed to OFFLINE). The consumers of this interface could not get notifications for all status changes and configuration changes in the cluster.

Cluster Event and its SNMP Interface in Oracle Solaris Cluster 4.2

The user model of the cluster event SNMP interface is the same as what was provided in the previous releases. The cluster event SNMP interface is not enabled by default on a freshly installed cluster; you can enable it by using the cluster event SNMP administration commands on any cluster node. Usually, you only need to enable it on one cluster node or on a subset of the cluster nodes, because all cluster nodes receive the same cluster events. When enabled, the interface is responsible for two basic tasks.
• Logs the most recent NOTICE or higher severity events to the MIB (up to 100 by default).
• Sends SNMP traps for those events to the hosts that are configured to receive them.


The changes in the Oracle Solaris Cluster 4.2 release are:
1) Introduction of the NOTICE severity for the cluster configuration and status change events.
The NOTICE severity, newly introduced for cluster events in the 4.2 release, falls between the INFO and WARNING severities. All severities for cluster events are now (from low to high):
• INFO (not exposed to the SNMP interface)
• NOTICE (newly introduced in the 4.2 release)
• WARNING
• ERROR
• CRITICAL
• FATAL

In the 4.2 release, the cluster event system is enhanced to ensure that at least one event with NOTICE or higher severity is generated whenever there is a configuration or status change of a cluster component instance. In other words, cluster events with NOTICE or higher severities cover all status and configuration changes in the cluster, including all component instances. A cluster component instance here refers to an instance of one of the following cluster components: node, quorum, resource group, resource, network interface, device group, disk, zone cluster, and geo cluster heartbeat. For example, pnode1 is an instance of the cluster node component, and oracleRG is an instance of the cluster resource group component.

With the introduction of the NOTICE severity event, when the cluster event SNMP interface is enabled, consumers of the SNMP interface get notifications for all status and configuration changes in the cluster. A third-party system management platform integrated with the cluster SNMP interface can raise and clear alarms programmatically, because it receives notifications for status changes from ONLINE to OFFLINE and also from OFFLINE to ONLINE.
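
To see these traps first-hand, you can run a trap listener on a host that has been configured with clsnmphost. Here is a minimal sketch using the Net-SNMP snmptrapd daemon (assuming Net-SNMP is installed on the receiving host; the port must match the trap port of your configuration, 11162 by default as described later in this post).

# /usr/sbin/snmptrapd -f -Lo udp:11162
Run the trap daemon in the foreground and print incoming cluster event traps to standard output.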

2) Customization of the cluster event SNMP interface
• The number of events logged to the MIB is 100 by default. When the number of events stored in the MIB reaches 100 and a new qualified event arrives, the oldest event is removed before the new event is stored in the MIB (FIFO, first in, first out). 100 is the default and minimum value for the number of events stored in the MIB. It can be changed by setting the log_number property value using the clsnmpmib command. The maximum value that can be set for the property is 500.

• The cluster event SNMP interface takes effect on NOTICE or higher severity events. NOTICE is also the default and lowest event severity for the SNMP interface. The SNMP interface can be configured to take effect only on higher severity events, such as WARNING or higher, by setting the min_severity property to WARNING. When the min_severity property is set to WARNING, the cluster event SNMP interface behaves the same as in the previous releases (prior to the 4.2 release).

Examples:
• Set the number of events stored in the MIB to 200
# clsnmpmib set -p log_number=200 event
• Set the interface to take effect on WARNING or higher severity events.
# clsnmpmib set -p min_severity=WARNING event

Administering the Cluster Event SNMP Interface

Oracle Solaris Cluster provides the following three commands to administer the SNMP interface.
clsnmpmib: administers the SNMP interface and the MIB configuration.
clsnmphost: administers hosts for the SNMP traps.
clsnmpuser: administers SNMP users (specific to the SNMP v3 protocol).

Only clsnmpmib is changed in the 4.2 release to support the aforementioned customization of the SNMP interface. Here are some simple examples using the commands.

Examples:
1. Enable the cluster event SNMP interface on the local node
# clsnmpmib enable event
2. Display the status of the cluster event SNMP interface on the local node
# clsnmpmib show -v
3. Configure my_host to receive the cluster event SNMP traps.
# clsnmphost add my_host
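
To review or undo the trap host configuration, the same command family can be used. A short sketch, with my_host being the example host from above:

# clsnmphost list
List the hosts that are currently configured to receive the cluster event SNMP traps.
# clsnmphost remove my_host
Stop sending cluster event traps to my_host.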

The cluster event SNMP interface uses the common agent container SNMP adaptor, which is based on the JDMK SNMP implementation, as its SNMP agent infrastructure. By default, the port number for the SNMP MIB is 11161, and the port number for the SNMP traps is 11162. The port numbers can be changed by using the cacaoadm command. For example,
# cacaoadm list-params
Print all changeable parameters. The output includes the snmp-adaptor-port and snmp-adaptor-trap-port properties.
# cacaoadm set-param snmp-adaptor-port=1161
Set the SNMP MIB port number to 1161.
# cacaoadm set-param snmp-adaptor-trap-port=1162
Set the SNMP trap port number to 1162.
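
Note that the common agent container must be stopped before its parameters can be changed, and restarted afterwards for the new values to take effect. A minimal sequence for the MIB port looks like this:

# cacaoadm stop
Stop the common agent container before changing its parameters.
# cacaoadm set-param snmp-adaptor-port=1161
Set the SNMP MIB port number while the container is stopped.
# cacaoadm start
Restart the common agent container so that the new port number takes effect.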

The cluster event SNMP MIB is defined in sun-cluster-event-mib.mib, which is located in the /usr/cluster/lib/mib directory. Its OID is 1.3.6.1.4.1.42.2.80, which can be used to walk through the MIB data. Again, for more detailed information about the cluster event SNMP interface, please see the Oracle Solaris Cluster 4.2 System Administration Guide.
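
For example, you can browse the MIB data with a standard SNMP tool such as Net-SNMP's snmpwalk. A minimal sketch, where phys-node-1 is a hypothetical cluster node name and public is an assumed community string (adjust both, and the port, to your configuration):

# snmpwalk -v 2c -c public phys-node-1:11161 1.3.6.1.4.1.42.2.80
Walk the cluster event entries currently stored in the MIB on that node.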

- Leland Chen 

Thursday Jul 31, 2014

Oracle Solaris Cluster 4.2 is out!

Oracle Solaris Cluster 4.2 has been released today, together with Oracle Solaris 11.2, Oracle’s flagship operating system! This latest version offers maximized availability and orchestrated disaster recovery for enterprise applications, leveraging Oracle Solaris 11.2's latest virtualization and software life-cycle technologies.

It delivers:

  • Extreme availability for virtualized workloads and applications

    Oracle Solaris Cluster 4.2 adds support for Oracle Solaris Kernel Zones with an updated failover zone agent that provides monitoring, automatic restart, and failover, as well as warm migration of kernel zones. It combines resiliency with the flexibility offered by kernel zones to run independent kernel versions and patch levels.

    Two new agents, for Oracle JD Edwards EnterpriseOne and Oracle GoldenGate, have been added to the portfolio, offering application-specific monitoring and recovery policies and providing increased application uptime for these Oracle solutions.

    This new release also offers support for Oracle 12.1 RAC database options such as Oracle Multitenant and policy-managed databases, as well as service agents for new versions of a set of applications such as Oracle Business Intelligence, Siebel, ...

  • Fast and reliable disaster recovery for multi-tiered services

    The new orchestrated disaster recovery manages the automated and synchronized recovery of multiple applications and their respective replication solution across multiple sites, offering significant gains in terms of reliability, speed of recovery and reduced risk.

  • Agile deployments with Oracle Solaris lifecycle management

    The new Oracle Solaris Unified Archives format can now be used with Oracle Solaris Cluster to easily and rapidly recover or clone either physical or virtual clusters. Also, with support for the new secure Automated Installer, cluster deployments can leverage Oracle Solaris end-to-end secure provisioning.

  • Simplified administration with new GUI

    The new browser-based graphical user interface offers single-point access to status, configuration, and management capabilities. Its topology, tree, and table views offer easy navigation inside cluster instances, in both local and multi-cluster configurations, facilitating operations, monitoring, and diagnostics.

  • Flexible and easy integration for custom applications with the new GDSv2

    Agent (or data service) development allows you to deploy your application within the Oracle Solaris Cluster infrastructure. The Generic Data Service (GDS) has been designed and developed by Oracle Solaris Cluster engineering to reduce the complexity associated with data service development. GDS v2 further increases flexibility, ease of use and security of this already trusted and robust development tool.

  • Designed and tested together with Oracle hardware, infrastructure software and applications to deliver best time to value

    Oracle Solaris Cluster is engineered from the ground up to support the stringent requirements of a multi-tier mission-critical environment. It is thoroughly tested with Oracle servers, storage, and networking components, and integrated with Oracle Solaris. And last but not least, it delivers the application high availability framework for Oracle Optimized Solutions and Oracle SuperCluster, Oracle's premier general purpose engineered system.

If you are interested in knowing more, stay tuned: we have planned a series of posts going into much more detail. In the meantime, download the software and the documentation and try it out!

Looking forward to your comments,
Eve and the Oracle Solaris Cluster engineering team

Wednesday Mar 20, 2013

Configuring HA for Oracle WebLogic Server Managed Servers by Using the clsetup Wizard

This post describes the steps to use the clsetup wizard to configure an Oracle WebLogic Server Managed Servers instance as a highly available failover data service with Oracle Solaris Cluster software.

HA Prerequisites

  • The logical host resource to be used by the data service is configured (see the sketch after this list).

  • The administration server instance of Oracle WebLogic Server software is running. This is to avoid the delay during discovery of the managed servers, which might take 10 minutes or longer.

  • The common agent container (cacao) service is running on the nodes on which Oracle WebLogic Server software will be configured.
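
For the first prerequisite, here is a minimal sketch of creating the logical host resource with the Oracle Solaris Cluster command-line interface. The resource group name wls-rg, the logical hostname wls-lh, and the resource name wls-lh-rs are hypothetical examples; wls-lh must resolve to an IP address on your public network.

# clresourcegroup create wls-rg
Create a failover resource group for the WebLogic Server data service.
# clreslogicalhostname create -g wls-rg -h wls-lh wls-lh-rs
Create the logical hostname resource that clients use to reach the Managed Server.
# clresourcegroup online -M wls-rg
Bring the resource group online in a managed state.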

Application Prerequisites

  • The boot.properties file is in the $DOMAIN_DIR/servers/<server-name>/security folder.

  • The Managed Server to be configured is created.

Points to Note

While using the clsetup wizard:

  • Type a question mark (?) to access the help.

  • Type a left angle bracket (<) or a right angle bracket (>) to move backward or forward, respectively.

  • Each input provided is validated by the wizard.

Procedure

From a command prompt, type clsetup to start the utility. From the clsetup menu, select "Data Services". In the next menu, select “Oracle WebLogic Server”.

The following is an overview of the steps that the clsetup wizard performs to configure the data service.

1. Select Oracle WebLogic Server Location: Select "Global Cluster" or "Zone Cluster". If you select "Zone Cluster", in the next menu select the zone cluster in which to configure the data service.
2. WebLogic Server Configuration: Lists the available types of configuration. Select "Managed Servers".
3. Verify Prerequisites: Make sure all the listed prerequisites are met.
4. Specify Domain Location: Be sure to provide the proper domain location. If you provide the proper domain location and no customized configuration was made in the Oracle WebLogic Server software, the clsetup wizard auto-discovers the parameter values.
5. Specify WebLogic Home Directory: Either select a directory from the discovered values or specify your own home directory by selecting the option "Specify explicitly".
6. Specify WebLogic Managed Server Start Script: Press RETURN to accept the discovered value or type your own value.
7. Specify WebLogic Server Environment File: Press RETURN to accept the discovered value or type your own value.
8. Select Configuration Mode for Managed Servers: Select "Failover".
9. Select Managed Server: Select from the displayed list the Managed Server that you want to configure.
10. Select Logical Hostname for Managed Server <Managed Server Name>: Select the appropriate logical host from the list.
11. Specify Monitoring URI: Specify the URI or leave the field blank. To specify multiple monitoring URIs, separate them with commas. If you specify a URI, it will be validated.
12. Configure Highly Available Storage Resources: Select the appropriate storage from the list.
13. Review Panel: Displays the values entered. To edit the information, choose the corresponding number.
14. Summary Panel: Displays the values you entered. (This panel is read-only.)

Configuring the HA for Oracle WebLogic Server data service by using the clsetup wizard is easy and effective, and it ensures proper configuration of the properties for resources and resource groups.
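
After the wizard finishes, you can verify the resulting configuration with the standard cluster status commands; the exact resource and resource group names depend on the values you entered in the wizard.

# clresourcegroup status
Display the state of the resource groups, including the one created for the Managed Server.
# clresource status
Display the state of the individual resources, such as the WebLogic Server and logical hostname resources.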

For additional information about the Oracle Solaris Cluster 3.3 3/13 release, see the Oracle Solaris Cluster Data Service for Oracle WebLogic Server Guide. For other applicable Oracle Solaris Cluster releases, see the Oracle Systems Software documentation indexes.

- Sabareesh

Friday Oct 26, 2012

Announcing Release of Oracle Solaris Cluster 4.1!

