Friday Oct 30, 2015

Introducing the New Cluster Configuration Wizard

In the past, Oracle Solaris Cluster users have configured new clusters with the scinstall command-line interface. Version 4.3 of the software introduces a new browser-based wizard that makes this process easier: users navigate through a series of panels in a browser to configure a new cluster.

The new cluster configuration wizard in the Oracle Solaris Cluster Manager browser interface offers a typical mode and a custom mode. Both modes provide the same degree of customization as the existing interactive scinstall utility when configuring a new cluster, all in a more intuitive interface.

There are two prerequisites for using this new feature: the Oracle Solaris Cluster packages must be installed on all nodes that are planned to form the new cluster, and, if you have not already done so, you must run the clauth command on all non-control nodes to authenticate the control node. A minimal sketch of these steps is shown below.
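For example, a minimal sketch of the prerequisite steps, assuming phys-schost-1 is the intended control node and the ha-cluster-full package group is wanted on every node (adjust the package and node names to your environment):

# pkg install ha-cluster-full
Install the Oracle Solaris Cluster packages on each node that will join the new cluster.
# clauth enable -n phys-schost-1
Run on each non-control node so that it accepts configuration commands from the control node.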

Then you can open a browser and navigate to the address “hostname:8998/scm/faces/main”. The “hostname” here stands for the host of the web application server that Cluster Manager runs on. Note that the host can be the control node itself (just make sure the Cluster Manager packages are installed on that node) or any machine hosting Cluster Manager that can access the control node. The address leads you to the login page (shown below). Log in to the control node as the root role on this page, and you can then start configuring a new cluster.

The panels guide you through the process of configuring a new cluster. You might find the online help pane on the right side of the page handy. At the end of the process, the wizard invokes a script that configures and reboots each node in sequence automatically. The control node is the last one in the sequence to be configured and rebooted. When all nodes have rebooted, the new cluster configuration process is complete.
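Once all nodes are back up, you can verify the result from any cluster node. A minimal check, assuming the standard cluster commands are in your path:

# cluster status
Display the overall status of the newly formed cluster.
# clnode status
Confirm that every node is reported as online.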

Monday Oct 26, 2015

Announcing Oracle Solaris Cluster 4.3

Today Oracle released Oracle Solaris Cluster 4.3. This new release further simplifies operations with new configuration tools and increases resiliency for business-critical application and platform services by leveraging the new virtualization features of Oracle Solaris 11.3. With an extended application portfolio and more disaster recovery options, it enables orchestrated, reliable, and fast disaster recovery for a larger range of configurations.

Thursday Oct 22, 2015

Oracle Solaris Cluster at Oracle OpenWorld 2015

Oracle OpenWorld is a great opportunity to meet with the experts and learn about the latest product news around Oracle Solaris Cluster. Whether you prefer checking out sessions, experiencing hands-on labs, or visiting demo pods, we have something for you:
  • Meet architects, tech leads or product managers at our demo pods located in the Oracle DEMOgrounds in Moscone South (Center):
    • High Availability and Disaster Recovery for the Enterprise Cloud (SC-026)
    • Single-Command Disaster Recovery Orchestration on Oracle SuperCluster (SC-008)
    • JD Edwards for Availability, Performance, and Security: Ready for the Cloud (SC-023)

  • Learn how to deploy Oracle Solaris Cluster with the Oracle Solaris Automated Installer:
    • Tuesday Oct 27, 10:15 AM at the Hotel Nikko - Monterey (3rd Floor) [HOL1931]

  • Listen to our specialists talking about business-critical solutions embedding Oracle Solaris Cluster, such as:
    • Expert Insights on Orchestrating Extreme High Availability on Oracle SuperCluster [CON3303]
    • JD Edwards EnterpriseOne: Manage High Availability and Deliver High Performance [CON1643]
    • How to Securely Consolidate High-Performance SAP Landscapes [CON5743]
    • How to Securely Consolidate High-Performance PeopleSoft Environments [CON5721]

Hoping to meet you there,

Eve Kleinknecht 

Sunday May 17, 2015

New white paper available: Providing High Availability to the OpenStack Cloud Controller on Oracle Solaris with Oracle Solaris Cluster

Oracle Solaris delivers a complete OpenStack distribution that is integrated with its core technologies, such as Oracle Solaris Zones, the ZFS file system, and its Image Packaging System (IPS). OpenStack in Oracle Solaris 11.2 helps IT organizations create an enterprise-ready Infrastructure as a Service (IaaS) cloud, so that users can quickly deploy virtual networking and compute resources by using a centralized web-based portal.

Of course any enterprise-class OpenStack deployment requires a highly available OpenStack infrastructure that can sustain individual system failures.

Oracle Solaris Cluster is deeply integrated with Oracle Solaris technologies and delivers high availability to Oracle Solaris based OpenStack services through a rich set of features.

The primary goals of the Oracle Solaris Cluster software are to maximize service availability through fine-grained monitoring and automated recovery of critical services, and to prevent data corruption through proper fencing. The services covered include the networking, storage, and virtualization services used by the OpenStack cloud controller, as well as its own components.

Our team has created a new white paper to specifically explain how to provide high availability to the OpenStack cloud controller on Oracle Solaris with Oracle Solaris Cluster.

After describing an example of a highly available physical-node OpenStack infrastructure deployment, the paper provides a detailed and structured walk-through of the highly available cloud controller configuration. It discusses each OpenStack component that runs on the cloud controller, with explicit steps on how to create and configure these components under cluster control. The deployment example achieves secure isolation between services while defining all the required dependencies for proper startup and stopping between services, which is orchestrated by the cluster framework.

The white paper is linked from the OpenStack Cloud Management page as well as from the Oracle Solaris Cluster Technical Resources page on the Oracle Technology Network portal.

Thorsten Früauf
Oracle Solaris Cluster Engineering

Wednesday Aug 20, 2014

Oracle Solaris Cluster 4.2 Event and its SNMP Interface


The cluster event SNMP interface was first introduced in the Oracle Solaris Cluster 3.2 release. The details of the SNMP interface are described in the Oracle Solaris Cluster System Administration Guide and the Cluster 3.2 SNMP blog.

Prior to the Oracle Solaris Cluster 4.2 release, the event SNMP interface, when enabled, took effect only on WARNING or higher severity events. Events with WARNING or higher severity usually indicate a status change of a cluster component from ONLINE to OFFLINE. The interface therefore worked like an alert/alarm interface when some components in the cluster went out of service (changed to OFFLINE). Consumers of this interface could not get notification for all status changes and configuration changes in the cluster.

Cluster Event and its SNMP Interface in Oracle Solaris Cluster 4.2

The user model of the cluster event SNMP interface is the same as in previous releases. The cluster event SNMP interface is not enabled by default on a freshly installed cluster; you can enable it by using the cluster event SNMP administration commands on any cluster node. Usually you only need to enable it on one cluster node, or on a subset of the cluster nodes, because all cluster nodes receive the same cluster events. When it is enabled, it is responsible for two basic tasks:
• Logging up to the 100 most recent NOTICE or higher severity events to the MIB.
• Sending SNMP traps to the hosts that are configured to receive the above events.

The changes in the Oracle Solaris Cluster 4.2 release are:
1) Introduction of the NOTICE severity for the cluster configuration and status change events.
The NOTICE severity is introduced for cluster events in the 4.2 release. It sits between the INFO and WARNING severities. All severities for cluster events are now (from low to high):
• INFO (not exposed to the SNMP interface)
• NOTICE (newly introduced in the 4.2 release)
• WARNING
• ERROR
• CRITICAL
• FATAL

In the 4.2 release, the cluster event system is enhanced to make sure that at least one event with NOTICE or a higher severity is generated when there is a configuration or status change of a cluster component instance. In other words, the cluster events with NOTICE or higher severities cover all status and configuration changes in the cluster (including all component instances). A cluster component instance here refers to an instance of one of the following cluster components: node, quorum, resource group, resource, network interface, device group, disk, zone cluster, and geo cluster heartbeat. For example, pnode1 is an instance of the cluster node component, and oracleRG is an instance of the cluster resource group component.

With the introduction of the NOTICE severity event, when the cluster event SNMP interface is enabled, the consumers of the SNMP interface get notification for all status and configuration changes in the cluster. A third-party system management platform integrated with the cluster SNMP interface can generate and clear alarms programmatically, because it receives notifications for status changes from ONLINE to OFFLINE and also from OFFLINE to ONLINE.

2) Customization for the cluster event SNMP interface
• The number of events logged to the MIB is 100 by default. When the number of events stored in the MIB reaches 100 and a new qualified event arrives, the oldest event is removed before the new event is stored in the MIB (FIFO, first in, first out). 100 is both the default and the minimum value for the number of events stored in the MIB. It can be changed by setting the log_number property value using the clsnmpmib command. The maximum value that can be set for the property is 500.

• The cluster event SNMP interface takes effect on NOTICE or higher severity events. NOTICE is also the default and lowest event severity for the SNMP interface. The SNMP interface can be configured to take effect only on higher severity events, such as WARNING or higher, by setting the min_severity property to WARNING. When the min_severity property is set to WARNING, the cluster event SNMP interface behaves the same as in previous releases (prior to the 4.2 release).

For example:
• Set the number of events stored in the MIB to 200
# clsnmpmib set -p log_number=200 event
• Set the interface to take effect on WARNING or higher severity events.
# clsnmpmib set -p min_severity=WARNING event
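To confirm that the new property values took effect, you can display the MIB configuration again (the exact output format may vary by release):
# clsnmpmib show -v event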

Administering the Cluster Event SNMP Interface

Oracle Solaris Cluster provides the following three commands to administer the SNMP interface:
• clsnmpmib: administer the SNMP interface and the MIB configuration
• clsnmphost: administer hosts for the SNMP traps
• clsnmpuser: administer SNMP users (specific to the SNMP v3 protocol)

Only clsnmpmib was changed in the 4.2 release to support the aforementioned customization of the SNMP interface. Here are some simple examples using the commands.

1. Enable the cluster event SNMP interface on the local node
# clsnmpmib enable event
2. Display the status of the cluster event SNMP interface on the local node
# clsnmpmib show -v
3. Configure my_host to receive the cluster event SNMP traps.
# clsnmphost add my_host
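You can also list or remove trap hosts; a brief sketch, with the host name as a placeholder:
# clsnmphost list
List the hosts that are configured to receive the cluster event SNMP traps.
# clsnmphost remove my_host
Stop sending the cluster event SNMP traps to my_host.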

The cluster event SNMP interface uses the common agent container SNMP adaptor, which is based on the JDMK SNMP implementation, as its SNMP agent infrastructure. By default, the port number for the SNMP MIB is 11161, and the port number for the SNMP traps is 11162. The port numbers can be changed by using the cacaoadm command. For example,
# cacaoadm list-params
Print all changeable parameters. The output includes the snmp-adaptor-port and snmp-adaptor-trap-port properties.
# cacaoadm set-param snmp-adaptor-port=1161
Set the SNMP MIB port number to 1161.
# cacaoadm set-param snmp-adaptor-trap-port=1162
Set the SNMP trap port number to 1162.
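As a sketch of the usual workflow, changing common agent container parameters typically requires the container to be stopped first; check the exact requirement for your release:
# cacaoadm stop
Stop the common agent container before changing the port parameters.
# cacaoadm start
Restart the container so that the new port numbers take effect.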

The cluster event SNMP MIB is defined in sun-cluster-event-mib.mib, which is located in the /usr/cluster/lib/mib directory. Its OID can be used to walk through the MIB data. Again, for more detailed information about the cluster event SNMP interface, please see the Oracle Solaris Cluster 4.2 System Administration Guide.
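For illustration only, assuming the net-snmp command-line tools are installed and the default public community string is in use, you could browse the MIB data on the default port as follows (the node name is a placeholder):
# snmpwalk -v2c -c public clusternode1:11161
Walk the MIB exposed by the SNMP adaptor on port 11161 of clusternode1.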

- Leland Chen 

