Wednesday Oct 28, 2015

Oracle Solaris Cluster Manager: Getting Started

Oracle Solaris Cluster Manager (Cluster Manager) is a browser interface available starting in Oracle Solaris Cluster 4.2 running on Oracle Solaris 11.

Like its predecessor in Oracle Solaris Cluster 3.3 for Oracle Solaris 10, Cluster Manager displays all cluster objects with their status and details and allows all management actions including creating and deleting objects.

Getting Started

Making Sure Cluster Manager Is Installed

The browser interface is delivered as two IPS packages.

These packages are automatically installed as part of the ha-cluster-full incorporation, or can be manually installed later if necessary with this command:

   # pkg install ha-cluster/system/manager
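If you want to confirm what is already present before (or after) running that command, the standard pkg and svcs commands can be used; the package name below simply matches the install command above:

   # pkg list ha-cluster/system/manager
   # svcs svc:/system/cluster/manager:default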

After package installation, these two SMF services will be started automatically and should come online:
    online         Aug_06   svc:/system/cluster/manager-glassfish3:default
    online         Aug_06   svc:/system/cluster/manager:default

See Troubleshooting below if either service does not come online.

Running Cluster Manager

Point your browser at a cluster node where the browser interface services are online.

Only https is supported, which means that the first time accessing Cluster Manager you'll be given a certificate warning. Take a moment to assure yourself that you really are talking to the correct machine, then allow your browser to add this name to its exceptions list for certificates.


By default, the target machine to authenticate into is "localhost".

You may enter the name of any machine where at least Oracle Solaris Cluster 4.2 is installed, even if it's in a different cluster than the machine you browsed to. In version 4.3, if the target machine isn't yet a cluster node but the cluster software is present on it, you'll be able to connect and then bring up the cluster configuration wizard.

A future post will go into detail on using this wizard.


Cluster Manager in version 4, as in version 3, is laid out with a navigation pane on the left and the details pane in the center. New in version 4 is a right-side online help pane with links to more documentation.

Both the navigation and online help panes can be pushed aside by clicking on the little black triangle halfway down the center border, and restored by clicking it again. Alternatively, the panes can be resized wider or narrower by grabbing that border and dragging.

New in version 4 is a "cluster landing page" showing an overview of the state of important cluster objects, such as resource groups and nodes as a group, as well as "grid boxes", which display information on individual objects. Hovering over the pie graphs and grid boxes brings up tool tips with information. Clicking on the pie graphs or grid boxes is a speedy way to navigate to the overview or detail pages for these objects.

Each folder in the navigation pane (except for Tasks, which opens a page of buttons for launching various data service configuration wizards) is dedicated to a specific part of Oracle Solaris Cluster. The folders open to show lists of objects such as resource groups or zone clusters by name, as well as the "landing page" for that type of object.

Also new in version 4, Geographic Edition partnerships are included in the main browser interface, and in version 4.3 the "disaster recovery orchestration" feature is supported via the Sites folder in the navigation pane.

Each folder of the navigation pane has an overview page with tables listing instances of the objects along with their status and other details.

Most of the tables contain buttons for management actions that can be taken on one selected row in the table, and sometimes multiple selected rows. Click a row in the leftmost column to select it. Shift and control clicks allow selection of multiple rows. Most tables can be re-sorted by clicking in the column headers. Other customization is available by clicking on the View drop-down list in the upper left of each table. Many tables have a Create or New button, which will launch wizards for creating new objects such as resource groups, resources, zone clusters, and more.

Clicking an individual object's name in the table drills into the details for that object. Some of these details pages have a Properties tab as well as the default Status tab. The Properties tabs start in read-only mode and can be converted to edit mode by clicking on the Edit button.


In addition to the various object-creation wizards available on some tables, the Tasks page hosts wizards for data services ranging from HA Storage and Logical Hostname to HA for Oracle Database. Cluster Manager 4.3 also offers wizards to create HA for Oracle Solaris Zone and HA for Oracle VM Server for SPARC. Future posts will describe these new wizards in detail.

Resource Groups Topology View

First available in 4.2 is a completely new topology view of resource groups, showing their resources and how they are laid out, both across global-cluster and zone-cluster nodes and within global and zone clusters as entities. To reach it, open the Resource Groups folder's overview (Status) page, then click the Topology tab.

Multi-Cluster Management

Another feature first delivered in 4.2 is the ability to authenticate into multiple clusters at once, then switch back and forth quickly between them. At the top of the navigation pane, next to the "Cluster" heading, is a drop-down list that, when closed, shows the name of the "current cluster". This drop-down list contains the names of all clusters authenticated into during this session, as well as the names of Geographic Edition partners.

At the bottom is the "Other..." choice, which pops up a dialog to allow authenticating into additional clusters. In a Partnership detail page, clicking on the name of a partner allows authenticating into that cluster.


Troubleshooting

1) Is the Cluster Manager service available?

The browser interface depends upon two SMF services:
   online         Aug_06   svc:/system/cluster/manager-glassfish3:default
   online         Aug_06   svc:/system/cluster/manager:default

If either of these services is not online, the browser interface will not be available. The very first start of cluster/manager takes a bit longer than subsequent starts because the Cluster Manager application is being deployed into the application server. Following an ungraceful shutdown, the manager-glassfish3 service might also take longer than usual to start.

If either service goes into the maintenance state, first check the SMF logs to see if any useful error messages are present. Often, simply clearing the service will be enough to bring it online. Make sure the manager-glassfish3 service is online first, and if not, clear it:
     # svcadm clear -s svc:/system/cluster/manager-glassfish3:default

Then check that the cluster/manager service is online, and if not, clear that service:
     # svcadm clear -s svc:/system/cluster/manager:default
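To get a quick diagnosis, including the path to each service's log file, you can also ask SMF to explain why a service isn't running. This uses the standard svcs command, shown here as a sketch:

     # svcs -xv svc:/system/cluster/manager-glassfish3:default svc:/system/cluster/manager:default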

2) Sometimes an error might cause a loss of connection to the application server. If this should happen, the easiest recovery is to remove the browser cookie for the machine you browsed to, then log in again. In some rare cases, it might be necessary to cycle the application server by performing:
     # svcadm restart -s svc:/system/cluster/manager-glassfish3:default

If the cluster/manager service does not come online automatically, enable it as well:
     # svcadm enable -s svc:/system/cluster/manager:default

If you ever need to contact Oracle Support about an issue with Cluster Manager, please provide copies of the server log: /var/cluster/ClusterManager/glassfish3/domains/domain1/logs/server.log, and possibly the next older log as well if the log has rolled over recently.

3) Cluster Manager depends upon the common agent container. This service will come online when packages are installed. Some additional configuration is done at first cluster boot. If an error message indicates that there may be a problem with the common agent container, you can check it this way:
     # svcs -a | grep common-agent-container

     # /usr/sbin/cacaoadm status

If the service is not online, use the svcadm command to start, clear, or restart it as appropriate.
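As a sketch, clearing the common agent container service might look like this; the instance name below is typical for Oracle Solaris 11 but may differ on your system, so use the FMRI reported by the svcs command above:

     # svcadm clear svc:/application/management/common-agent-container-1:default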

See also "Troubleshooting Oracle Solaris Cluster Manager" in the Oracle Solaris Cluster documentation.

Monday Oct 26, 2015

Announcing Oracle Solaris Cluster 4.3

Today Oracle released Oracle Solaris Cluster 4.3. This new release further simplifies operations with new configuration tools, and increases resiliency for business-critical application and platform services by leveraging the new virtualization features of Oracle Solaris 11.3. With an extended application portfolio and more disaster recovery options, it enables orchestrated, reliable, and fast disaster recovery for a larger range of configurations.

Thursday Oct 22, 2015

Oracle Solaris Cluster at Oracle OpenWorld 2015

Oracle OpenWorld is a great opportunity to meet with the experts and learn about the latest product news around Oracle Solaris Cluster. Whether you prefer checking out sessions, experiencing a hands-on lab, or visiting demo pods, we have something for you:
  • Meet architects, tech leads or product managers at our demo pods located in the Oracle DEMOgrounds in Moscone South (Center):
    • High Availability and Disaster Recovery for the Enterprise Cloud (SC-026)
    • Single-Command Disaster Recovery Orchestration on Oracle SuperCluster (SC-008)
    • JD Edwards for Availability, Performance, and Security: Ready for the Cloud (SC-023)

  • Learn how to deploy Oracle Solaris Cluster with the Oracle Solaris Automated Installer:
    • Tuesday Oct 27, 10:15 AM at the Hotel Nikko - Monterey (3rd Floor) [HOL1931]

  • Listen to our specialists talking about business-critical solutions embedding Oracle Solaris Cluster, such as:
    • Expert Insights on Orchestrating Extreme High Availability on Oracle SuperCluster [CON3303]
    • JD Edwards EnterpriseOne: Manage High Availability and Deliver High Performance [CON1643]
    • How to Securely Consolidate High-Performance SAP Landscapes [CON5743]
    • How to Securely Consolidate High-Performance PeopleSoft Environments [CON5721]

Hoping to meet you there,

Eve Kleinknecht 

Sunday May 17, 2015

New white paper available: Providing High Availability to the OpenStack Cloud Controller on Oracle Solaris with Oracle Solaris Cluster

Oracle Solaris delivers a complete OpenStack distribution which is integrated with its core technologies such as Oracle Solaris Zones, the ZFS file system, and its image packaging system (IPS). OpenStack in Oracle Solaris 11.2 helps IT organizations to create an enterprise-ready Infrastructure as a Service (IaaS) cloud, so that users can quickly deploy virtual networking and compute resources by using a centralized web-based portal.

Of course any enterprise-class OpenStack deployment requires a highly available OpenStack infrastructure that can sustain individual system failures.

Oracle Solaris Cluster is deeply integrated with Oracle Solaris technologies and delivers high availability to Oracle Solaris based OpenStack services through a rich set of features.

The primary goals of the Oracle Solaris Cluster software are to maximize service availability through fine-grained monitoring and automated recovery of critical services, and to prevent data corruption through proper fencing. The services covered include the networking, storage, and virtualization services used by the OpenStack cloud controller, as well as the controller's own components.

Our team has created a new white paper to specifically explain how to provide high availability to the OpenStack cloud controller on Oracle Solaris with Oracle Solaris Cluster.

After describing an example of a highly available physical-node OpenStack infrastructure deployment, the paper provides a detailed and structured walk-through of the highly available cloud controller configuration. It discusses each OpenStack component that runs on the cloud controller, with explicit steps on how to create and configure these components under cluster control. The deployment example achieves secure isolation between services while defining all the required startup and shutdown dependencies between them, orchestrated by the cluster framework.

The white paper is linked from the OpenStack Cloud Management page as well as from the Oracle Solaris Cluster Technical Resources page on the Oracle Technology Network portal.

Thorsten Früauf
Oracle Solaris Cluster Engineering

Thursday Jan 15, 2015

Managing a remote Oracle Database instance with "Geographic Edition"

Another new feature of Oracle Solaris Cluster Geographic Edition

A few weeks ago I wrote about a new feature for Geographic Edition: DR Orchestration, which we added in Oracle Solaris Cluster 4.2.  With the latest update to 4.2, SRU1, we've completed the testing of another Geographic Edition feature which can make DR Orchestration even more useful - we can now support Oracle Data Guard replication control for a remote database.

As I described before, a multigroup can combine several protection groups so that they can be switched together in a coordinated manner. That enables a service constructed out of multiple tiers, on multiple clusters, to be managed as a unit.

Since each tier is represented by a protection group, it might seem that each tier must be running Oracle Solaris Cluster, which could be inconvenient if one of the tiers is running only an Oracle RAC database. In fact there is no absolute requirement to use Oracle Solaris Cluster in that tier; the systems might not be running any cluster software at all, or might be running only Oracle RAC Clusterware. An example of such a configuration is the Oracle SuperCluster Engineered System.

In practice, however, we can use some features of Oracle Solaris Cluster and the Oracle Database to include such a tier in an orchestrated Geographic Edition configuration.  The two features we're using are:

  • The Oracle Solaris Cluster data service for Oracle External Proxy (HA for Oracle External Proxy)
  • Remote database connectivity

HA for Oracle External Proxy

This is a data service which can be used to reflect the status of an Oracle database which is running on a remote system, so that local Oracle Solaris Cluster resources and resource groups can associate dependencies with it.  You can find more information about it at:

Remote Database Connectivity

This is a standard feature of the Oracle Database.  A service name that is configured in the TNSNAMES.ORA file can identify a database that is running on a remote system, so that control operations from local sqlplus and dgmgrl (Data Guard broker) commands will operate on the remote database instance.  To take advantage of this you don't even need to install a full Oracle database locally, you can just install the Oracle Database Client software, see:
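As a sketch, a tnsnames.ora entry identifying a remote database might look like the following; the net service name, host, and database service name here are all hypothetical:

     REMOTEDB =
       (DESCRIPTION =
         (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
         (CONNECT_DATA = (SERVICE_NAME = sales.example.com))
       )

With such an entry in place, connecting with sqlplus or dgmgrl using the @REMOTEDB suffix operates on the remote instance just as if it were local.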

Put these together, and...

By using these two features together you can now create a local Geographic Edition protection group (PG) that uses Oracle Data Guard replication, but which manages a database instance on a remote system.  You just need to provide the name of a resource group that contains an external proxy resource, instead of an Oracle RAC or failover database resource.  As far as the cluster configuration is concerned, this behaves just like an ordinary local protection group, and so it can be included in a multigroup as part of a DR Orchestration configuration. When you switch over that multigroup, the software will contact all the systems configured in the protection group, local and remote.

Now you can fully orchestrate all tiers of a service and control them from a system that is part of an Oracle Solaris Cluster configuration, even if the database tier is on a system that isn't running Oracle Solaris Cluster.

Geographic Edition team
Oracle Solaris Cluster Engineering

Oracle Solaris Cluster Engineering Blog

