Sunday May 17, 2015
New white paper available: Providing High Availability to the OpenStack Cloud Controller on Oracle Solaris with Oracle Solaris Cluster
By T.F.-Oracle on May 17, 2015
Oracle Solaris delivers a complete OpenStack distribution that is integrated with core Oracle Solaris technologies such as Oracle Solaris Zones, the ZFS file system, and the Image Packaging System (IPS). OpenStack in Oracle Solaris 11.2 helps IT organizations create an enterprise-ready Infrastructure as a Service (IaaS) cloud, so that users can quickly deploy virtual networking and compute resources through a centralized web-based portal.
Of course any enterprise-class OpenStack deployment requires a highly available OpenStack infrastructure that can sustain individual system failures.
Oracle Solaris Cluster is deeply integrated with Oracle Solaris technologies and delivers high availability to Oracle Solaris based OpenStack services through a rich set of features.
The primary goals of the Oracle Solaris Cluster software are to maximize service availability through fine-grained monitoring and automated recovery of critical services, and to prevent data corruption through proper fencing. The services covered include the networking, storage, and virtualization services used by the OpenStack cloud controller, as well as the cloud controller's own components.
Our team has created a new white paper to specifically explain how to provide high availability to the OpenStack cloud controller on Oracle Solaris with Oracle Solaris Cluster.
After describing an example deployment of a highly available OpenStack infrastructure on physical nodes, the paper provides a detailed, structured walk-through of the highly available cloud controller configuration. It discusses each OpenStack component that runs on the cloud controller, with explicit steps for creating and configuring these components under cluster control. The deployment example achieves secure isolation between services while defining all the required startup and shutdown dependencies between services, orchestrated by the cluster framework.
Oracle Solaris Cluster Engineering
Wednesday Aug 20, 2014
By Leland Chen-Oracle on Aug 20, 2014
The cluster event SNMP interface was first introduced in Oracle Solaris Cluster 3.2 release. The details of the SNMP interface are described in the Oracle Solaris Cluster System Administration Guide and the Cluster 3.2 SNMP blog.
Prior to the Oracle Solaris Cluster 4.2 release, when the event SNMP interface was enabled, it took effect only on events of WARNING or higher severity. Events with WARNING or higher severity usually indicate a status change of a cluster component from ONLINE to OFFLINE. The interface therefore worked like an alert/alarm interface when components in the cluster went out of service (changed to OFFLINE); consumers of this interface could not get notifications for all status changes and configuration changes in the cluster.
Cluster Event and its SNMP Interface in Oracle Solaris Cluster 4.2
The user model of the cluster event SNMP interface is the same as in previous releases. The cluster event SNMP interface is not enabled by default on a freshly installed cluster; you can enable it by using the cluster event SNMP administration commands on any cluster node. Usually you only need to enable it on one cluster node, or on a subset of the cluster nodes, because all cluster nodes receive the same cluster events. When enabled, the interface is responsible for two basic tasks:
• Logs up to 100 most recent NOTICE or higher severity events to the MIB.
• Sends SNMP traps to the hosts that are configured to receive the above events.
The changes in the Oracle Solaris Cluster 4.2 release are:
1) Introduction of the NOTICE severity for the cluster configuration and status change events.
The NOTICE severity is introduced for cluster events in the 4.2 release. It sits between the INFO and WARNING severities. All severities for cluster events are now (from low to high):
• INFO (not exposed to the SNMP interface)
• NOTICE (newly introduced in the 4.2 release)
• WARNING and higher severities (as in previous releases)
In the 4.2 release, the cluster event system is enhanced to ensure that at least one event of NOTICE or higher severity is generated whenever there is a configuration or status change of a cluster component instance. In other words, the cluster events of NOTICE or higher severity cover all status and configuration changes in the cluster (including all component instances). A cluster component instance here refers to an instance of one of the following cluster components: node, quorum, resource group, resource, network interface, device group, disk, zone cluster, and geo cluster heartbeat. For example, pnode1 is an instance of the cluster node component, and oracleRG is an instance of the cluster resource group component.
With the introduction of the NOTICE severity, when the cluster event SNMP interface is enabled, consumers of the SNMP interface get notifications for all status and configuration changes in the cluster. A third-party system management platform integrated with the cluster SNMP interface can raise and clear alarms programmatically, because it receives notifications for status changes from ONLINE to OFFLINE and also from OFFLINE to ONLINE.
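The raise/clear alarm behavior described above can be sketched in a few lines of Python. This is a toy illustration, not Oracle code: the event shape (component name plus status string) and the component names are assumptions for the example.

```python
# Toy model of a management platform consuming cluster status-change
# notifications: raise an alarm on OFFLINE, clear it on ONLINE.

alarms = set()  # components with an outstanding alarm

def handle_status_trap(component, status):
    """Process one status-change notification for a cluster component."""
    if status == "OFFLINE":
        alarms.add(component)       # raise an alarm
    elif status == "ONLINE":
        alarms.discard(component)   # clear the alarm, if any

# A resource group goes down, then recovers:
handle_status_trap("oracleRG", "OFFLINE")
assert "oracleRG" in alarms      # alarm raised
handle_status_trap("oracleRG", "ONLINE")
assert "oracleRG" not in alarms  # alarm cleared automatically
```

Because the 4.2 interface reports both the OFFLINE and the ONLINE transitions, the platform never needs an operator to clear stale alarms by hand.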
2) Customization for the cluster event SNMP interface
• By default, the number of events logged to the MIB is 100. When the number of events stored in the MIB reaches 100 and a new qualified event arrives, the oldest event is removed before the new event is stored (FIFO, first in, first out). 100 is both the default and the minimum value for the number of events stored in the MIB; it can be changed by setting the log_number property with the clsnmpmib command. The maximum value that can be set for the property is 500.
• The cluster event SNMP interface takes effect on NOTICE or higher severity events. NOTICE is both the default and the lowest event severity for the SNMP interface. The interface can be configured to take effect only on higher severity events, such as WARNING or higher, by setting the min_severity property to WARNING. When min_severity is set to WARNING, the cluster event SNMP interface behaves the same as in releases prior to 4.2.
• Set the number of events stored in the MIB to 200
# clsnmpmib set -p log_number=200 event
• Set the interface to take effect on WARNING or higher severity events.
# clsnmpmib set -p min_severity=WARNING event
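The combined effect of the log_number and min_severity properties can be modeled with a short Python sketch. This is a toy illustration, not Oracle code; only the severities named in this post are included, and the event descriptions are made up.

```python
from collections import deque

# Severities from low to high, as described in the post
# (higher severities above WARNING are omitted from this sketch).
SEVERITIES = ["INFO", "NOTICE", "WARNING"]

class EventMibLog:
    """Toy model of the event MIB log: keeps at most log_number
    events (FIFO eviction) whose severity is at least min_severity."""

    def __init__(self, log_number=100, min_severity="NOTICE"):
        self.events = deque(maxlen=log_number)       # oldest dropped first
        self.min_rank = SEVERITIES.index(min_severity)

    def add(self, severity, description):
        # Only events at or above min_severity are logged to the MIB.
        if SEVERITIES.index(severity) >= self.min_rank:
            self.events.append((severity, description))

# Small log so the FIFO eviction is visible:
log = EventMibLog(log_number=3, min_severity="NOTICE")
log.add("INFO", "internal detail")       # below NOTICE: not logged
log.add("NOTICE", "oracleRG online")
log.add("NOTICE", "pnode1 joined")
log.add("WARNING", "oracleRG offline")
log.add("WARNING", "disk d1 failed")     # evicts the oldest entry
assert len(log.events) == 3
assert log.events[0] == ("NOTICE", "pnode1 joined")
```

The real MIB enforces 100 as the minimum and 500 as the maximum for log_number; the sketch uses 3 only so the FIFO behavior is easy to see.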
Administering the Cluster Event SNMP Interface
Oracle Solaris Cluster provides the following three commands to administer the SNMP interface:
• clsnmpmib: administers the SNMP interface and the MIB configuration
• clsnmphost: administers hosts for the SNMP traps
• clsnmpuser: administers SNMP users (specific to the SNMP v3 protocol)
Only clsnmpmib was changed in the 4.2 release to support the aforementioned customization of the SNMP interface. Here are some simple examples of using the commands.
1. Enable the cluster event SNMP interface on the local node
# clsnmpmib enable event
2. Display the status of the cluster event SNMP interface on the local node
# clsnmpmib show -v
3. Configure my_host to receive the cluster event SNMP traps.
# clsnmphost add my_host
The cluster event SNMP interface uses the common agent container (cacao) SNMP adaptor, which is based on the JDMK SNMP implementation, as its SNMP agent infrastructure. By default, the port number for the SNMP MIB is 11161, and the port number for SNMP traps is 11162. The port numbers can be changed by using the cacaoadm command. For example:
# cacaoadm list-params
Print all changeable parameters. The output includes the snmp-adaptor-port and snmp-adaptor-trap-port properties.
# cacaoadm set-param snmp-adaptor-port=1161
Set the SNMP MIB port number to 1161.
# cacaoadm set-param snmp-adaptor-trap-port=1162
Set the SNMP trap port number to 1162.
The cluster event SNMP MIB is defined in sun-cluster-event-mib.mib, which is located in the /usr/cluster/lib/mib directory. Its OID is 1.3.6.1.4.1.42.2.80, which can be used to walk through the MIB data. Again, for more detailed information about the cluster event SNMP interface, see the Oracle Solaris Cluster 4.2 System Administration Guide.
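Once the adaptor is running, the MIB can be queried with any standard SNMP client, for example the net-snmp tools. This is an illustrative sketch: the node name and community string are placeholders for your site's configuration, and the walk starts at the Sun Microsystems enterprise OID subtree (.1.3.6.1.4.1.42) rather than a specific cluster OID.

```shell
# Walk the cluster event MIB on node pnode1 via the default MIB port 11161.
# "public" is a placeholder community string; substitute your own.
snmpwalk -v 2c -c public pnode1:11161 .1.3.6.1.4.1.42
```

The walk returns the event entries currently stored in the MIB, up to the configured log_number.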
- Leland Chen
Thursday Jul 31, 2014
By Eve Kleinknecht-Oracle on Jul 31, 2014
Oracle Solaris Cluster 4.2 has been released today, together with Oracle Solaris 11.2, Oracle’s flagship operating system! This latest version offers maximized availability and orchestrated disaster recovery for enterprise applications, leveraging Oracle Solaris 11.2's latest virtualization and software life-cycle technologies.
- Extreme availability for virtualized workloads and applications
Oracle Solaris Cluster 4.2 adds support for Oracle Solaris Kernel Zones with an updated failover zone agent, which provides monitoring, automatic restart, failover, and warm migration of kernel zones. It combines resiliency with the flexibility offered by kernel zones to run independent kernel versions and patch levels.
Two new agents, for Oracle JD Edwards EnterpriseOne and GoldenGate, have been added to the portfolio, offering application-specific monitoring and recovery policies and providing increased application uptime for these Oracle solutions.
This new release also offers support for Oracle 12.1 RAC database options such as Oracle Multitenant, service agents, and policy-managed databases, as well as new versions of a set of applications such as Oracle Business Intelligence, Siebel, ...
- Fast and reliable disaster recovery for multi-tiered services
The new orchestrated disaster recovery manages the automated and synchronized recovery of multiple applications and their respective replication solution across multiple sites, offering significant gains in terms of reliability, speed of recovery and reduced risk.
- Agile deployments with Oracle Solaris lifecycle management
The new Oracle Solaris Unified Archives format can now be used with Oracle Solaris Cluster to easily and rapidly recover or clone either physical or virtual clusters. Also, with support for the new secure Automated Installer, cluster deployments can leverage Oracle Solaris end-to-end secure provisioning.
- Simplified administration with new GUI
The new browser-based graphical user interface offers single-point access to status, configuration, and management capabilities. Its topology, tree, and table views offer easy navigation within cluster instances, in both local and multi-cluster configurations, facilitating operations, monitoring, and diagnostics.
- Flexible and easy integration for custom applications with the new GDSv2
Agent (or data service) development allows you to deploy your application within the Oracle Solaris Cluster infrastructure. The Generic Data Service (GDS) has been designed and developed by Oracle Solaris Cluster engineering to reduce the complexity associated with data service development. GDS v2 further increases flexibility, ease of use and security of this already trusted and robust development tool.
- Designed and tested together with Oracle hardware, infrastructure software and applications to deliver best time to value
Oracle Solaris Cluster is engineered from the ground up to support the stringent requirements of a multi-tier mission-critical environment. It is thoroughly tested with Oracle servers, storage, and networking components, and is integrated with Oracle Solaris. And last but not least, it delivers the application high availability framework for Oracle Optimized Solutions and Oracle SuperCluster, Oracle's premier general purpose engineered system.
Looking forward to your comments,
Eve and the Oracle Solaris Cluster engineering team
Friday Mar 08, 2013
By User9159196-Oracle on Mar 08, 2013
Over the years, the SAP kernel changed a lot, and eventually the architecture of our SAP agents reached its limits. It was clearly time for a rewrite of the SAP agent. To accommodate the changes in the SAP kernel, we created the new HA for NetWeaver agent. We also took this chance to support both central instance and traditional HA deployments within this one agent. This integration reduces resource complexity.
Most recently, with SAP kernel 7.20_EXT, SAP introduced the ability to integrate HA frameworks. Of course, we did not miss the opportunity to integrate our new SAP NetWeaver agent with this functionality. With the new HA for NetWeaver agent, it is now possible to use the SAP commands and the SAP Management Console to manage SAP instances under cluster control, without having root access. This improves the ease of use of SAP in an Oracle Solaris Cluster environment. We made these functionalities available on Oracle Solaris 11 starting with Oracle Solaris Cluster 4.0 SRU 4.
One of the larger cities in Europe moved their SAP system from Oracle Solaris 10 to Oracle Solaris 11 as soon as the HA for NetWeaver agent was available. They reduced their system administration costs thanks to the benefits of Oracle Solaris 11 and Oracle Solaris Cluster 4.0. Information about this customer use case is available at:
We look forward to your feedback and inputs !