Oracle on Sun Cluster

Oracle is by far the most popular service running on Sun Cluster 3.x. Sun Cluster supports highly available (HA) Oracle, Oracle Parallel Server (OPS) and Oracle Real Application Clusters (RAC), giving users a very wide choice. Here it is the breadth of release, operating system and platform coverage that drives its appeal.

The HA Oracle agent on SPARC supports a long list of Oracle releases from 8.1.6.x on Solaris 8 to 10.2.0.x on Solaris 10, with numerous options in between. Additionally, the Sun Cluster 3.1u4 for x86 (64-bit) HA Oracle agent supports Oracle 10g R1 (32-bit) and 10g R2 (64-bit).

The parallel database coverage is similarly extensive, with the SPARC platform supporting a broad set of volume managers (Solaris Volume Manager and Veritas Volume Manager) and Oracle releases from 8.1.7 up to 10.2.0.x. In addition, Oracle 10g R2 (10.2.0.x) is also supported on the 64-bit x86 platform.

There is also a wide set of Oracle data storage options: raw disk, highly available local file systems and global file systems for HA Oracle; raw disk or network attached storage for Oracle OPS; and raw disk, network attached storage or the shared QFS file system for Oracle RAC.

But why even mention that Sun supports these particular releases? Why doesn't Sun support every release in every hardware and software combination? The answer is that high availability is Sun Cluster's number one goal, and achieving it doesn't happen by accident. It demands careful design and implementation of the software, extensive peer review of all code changes, and extremely thorough testing.

Having joined the engineering group only in the last year or so, I was staggered by the sheer volume of testing that is performed. It was also encouraging to see how close the engineering relationship with Oracle is. For the recent release of Oracle 10g R2 on 64-bit x86 Solaris, the team I work with ran numerous Oracle-designed tests on the product. These checked the installation process, its 'flexing' capability (i.e. adding or removing nodes), and its co-existence with previous releases, each for the various storage options. The tests numbered in the hundreds, often requiring re-runs when bugs were found, and these were just the Oracle-mandated tests. In addition, the Sun Cluster QA team performed extensive load and fault injection testing.

It's these latter two items that set Sun Cluster apart in the robustness stakes. What makes an insurance policy worth the investment is the degree of confidence the user has that it will 'do the right thing' when a failure occurs. When a system is sick or under load, user-land processes often don't respond, or respond only after a long delay. It may also be difficult to determine whether other cluster nodes are alive or dead. Here Sun Cluster comes into its own: the kernel-based membership monitor very quickly determines whether cluster nodes are alive and takes action, i.e. failure fencing, to ensure that failed or failing nodes do not corrupt critical customer data.

By using automated test harnesses, Sun Cluster's Quality Assurance (QA) team are able to simulate a wide variety of fault conditions, e.g. killing critical processes or aborting nodes. These can be performed repeatably at any point during the test cycle. Faults are also injected even while the cluster is recovering from previous faults. In addition, the QA team perform a comprehensive set of manual, physical fault injections, such as disconnecting network cables and storage connections. All of this helps ensure that the cluster survives and continues to provide service, even in the event of cascading failures, and under extreme load.

This level of "certification", rather than simple functional regression testing, means that Sun Cluster has the capability to achieve levels of service availability that competing products may struggle to match.

Tim Read
Staff Engineer


The new Oracle HA agent for Sun Cluster 3.2 now has an option for the Oracle DataGuard replication software. As we can read in the Sun Cluster 3.2 Release Notes: "Oracle DataGuard Support: Customers are now able to operate Oracle DataGuard data replication configurations under Sun Cluster control. Sun Cluster software now offers improved usability for Oracle deployments including DataGuard data replication software. For more information, see Sun Cluster Data Service for Oracle RAC Guide for Solaris OS." Will this agent be backported to Sun Cluster 3.1? Is there a way to address this kind of configuration in Sun Cluster 3.1?

Posted by Gustavo on April 11, 2007 at 03:01 AM PDT #

First, apologies for the delay in posting a reply.

Second, I should make it clear that the agent simply allows the Sun Cluster HA-Oracle agent to inter-operate with Oracle instances that are part of an Oracle Data Guard (ODG) configuration. This means the start methods understand that a standby database should not be started with just the 'startup' command. Instead they use 'startup nomount ; alter database mount standby database'. Notice that recovery is not started; that is up to the DBA to initiate.
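In SQL*Plus terms, the sequence quoted above looks roughly like this. This is only an illustration of the commands Tim quotes, not the agent's actual start method code; the managed-recovery command in the comment is the standard Data Guard one a DBA would run separately.

```sql
-- Sketch of the standby startup sequence described above (illustrative only).
startup nomount
alter database mount standby database;

-- Recovery is deliberately NOT started by the agent. The DBA would
-- initiate managed recovery separately, for example:
-- alter database recover managed standby database disconnect from session;
```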

I'm not aware of any plans to backport this functionality to Sun Cluster 3.1. If you need this functionality under that release you'd need to either write a SUNW.gds based agent to manage Oracle yourself or put in a request to have the features backported. Unless there is substantial demand for the latter, I doubt if it will happen.

Posted by Tim Read on April 23, 2007 at 06:49 PM PDT #

Is there a nice easy way to determine what versions of Oracle are supported with which versions of Sun Cluster and Solaris?


Posted by Mick Scott on November 06, 2007 at 10:25 AM PST #

There are two sides to this question: there is what Oracle support and there is what Sun support. The two don't always entirely coincide.

Support for Oracle's products is primarily Oracle's concern. Their support matrix is given on the MetaLink site. Sun work with Oracle to certify these products on Solaris Cluster, but once we (Sun) have certified a combination we don't usually withdraw support for it. Consequently, Sun's matrix is usually a superset of Oracle's.

I'm not sure if that qualifies as an easy way to find out whether your combination is supported or not!


Posted by Tim Read on November 06, 2007 at 05:35 PM PST #

I recently read the documentation, "Installation Guide for Solaris Cluster 3.2 SW and Oracle 10g Rel2 RAC". My project is in the process of setting up a RAC environment, and we're using both Sun Cluster and Oracle Clusterware. I would like to know which cluster controls the VIPs: Sun Cluster or Oracle Clusterware?

Posted by Dawkins on February 04, 2008 at 10:27 PM PST #

Oracle clusterware controls the Oracle VIP resources.

Posted by Tim Read on February 04, 2008 at 11:09 PM PST #

Tim, thanks for your response. That means we will need to unregister the VIPs from Sun Cluster, place them only in /etc/hosts, and register them in DNS. When we get to that point during the Oracle Clusterware install, we will enter the VIPs and Oracle will configure/activate them at that time?

Posted by Dawkins on February 04, 2008 at 11:15 PM PST #

Correct. As you are using Oracle 10g RAC, Oracle Clusterware itself controls all its own Oracle related resources: VIPs, listeners, database instances, services, ASM, etc, etc. Solaris Cluster works with Oracle Clusterware to provide the necessary infrastructure support: DIDs, membership and fencing, clprivnet, knowledge of storage status, etc, etc.

So yes, you are correct. Put the VIPs in /etc/hosts and register them in DNS (if required). Then supply them when installing Oracle RAC. Make sure that when you come to the private networks you *only* choose clprivnet; all others should be marked unused or public.
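As a concrete illustration of the /etc/hosts step (the hostnames and addresses below are invented for the example), each node gets a public hostname plus a VIP on the same public subnet:

```
# Hypothetical /etc/hosts fragment for a two-node RAC cluster.
192.168.10.11   node1          # public hostname, node 1
192.168.10.12   node2          # public hostname, node 2
192.168.10.21   node1-vip      # Oracle VIP for node 1 (public subnet)
192.168.10.22   node2-vip      # Oracle VIP for node 2 (public subnet)
```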

Hope that clarifies things.


Posted by Tim Read on February 04, 2008 at 11:58 PM PST #

Have you seen environments using the combination of Sun Cluster and Oracle Clusterware hosting multiple databases with their own VIPs? If so, how were the additional VIPs registered with Oracle Clusterware?

Posted by Dawkins on February 05, 2008 at 10:16 PM PST #

So you want a VIP per database instance? If so, it's not something I've tried. I don't know if it is done by customers either, though I can't see why it shouldn't be possible. I would expect you would just register the additional VIPs using crs_register. Furthermore, I would expect that to be documented in the Oracle manuals.

I think this is really a question of the capabilities of Oracle Clusterware rather than Solaris Cluster. We certainly don't restrict what Clusterware can do.

Just curious - why are you going for separate VIPs and not using separate ports on the VIP to control access to the databases? I would have thought you could set up various listeners on different ports and have suitable TNS name entries to map to them.
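For what it's worth, the single-VIP, multiple-listener approach suggested above would look roughly like this in tnsnames.ora. The service names, ports and VIP hostname are all invented for illustration:

```
# Hypothetical tnsnames.ora entries: one VIP, listeners on different ports.
DB1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = db1svc))
  )

DB2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1526))
    (CONNECT_DATA = (SERVICE_NAME = db2svc))
  )
```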


Posted by Tim Read on February 05, 2008 at 10:37 PM PST #

I am currently waiting for Oracle Support to respond. The databases are controlled by different contractors and will run on different ports/listeners, which is why we want to use separate VIPs. Scenario: if the three DB instances use the same VIP and instance 2 goes down, what happens to instances 1 and 3 when Clusterware fails the VIP over to node 2? That's our concern. If we use different VIPs and instance 2 goes down, Clusterware will only fail over the VIP associated with instance 2, leaving instances 1 and 3 alone.

Posted by Dawkins on February 05, 2008 at 11:50 PM PST #

Why should the instance failing cause the VIP to migrate? Normally there is no dependency of the VIP on the instance!

Certainly the listener resource depends on the VIP and without the listener the database is inaccessible.

If you use "crs_stat -p <resource>" you can see its properties.
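For example (the resource names below are typical 10g defaults, but check your own with 'crs_stat -t'), you can inspect a resource's dependencies via its REQUIRED_RESOURCES property. The instance normally depends on the VIP only indirectly, through the listener:

```shell
# List all CRS resources and their current states.
crs_stat -t

# Show the properties of a VIP resource; REQUIRED_RESOURCES shows what
# it depends on - normally nothing instance-related.
crs_stat -p ora.node1.vip

# Compare with the listener, which does require the VIP:
crs_stat -p ora.node1.LISTENER_NODE1.lsnr
```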


Posted by Tim Read on February 06, 2008 at 12:28 AM PST #

The Oracle documentation states that the clusterware will move the VIP over to the available node.

Posted by Dawkins on February 07, 2008 at 02:51 AM PST #

Sorry, I'm a bit dim. I can't find that in the documentation. Could you point me to the relevant section of the docs?

The only thing I could find was that the VIP would fail over if a node failed, and that is to allow a rapid "connection refused".


Posted by Tim Read on February 07, 2008 at 05:26 PM PST #

What is the maximum number of nodes supported for an Oracle 10g RAC cluster? I am finding that this number depends on your storage array, whereas VCS just flatly says 32 nodes regardless of what infrastructure you run it on.

Posted by John Franklin on May 23, 2008 at 04:27 AM PDT #

Sun Cluster supports up to 16 nodes for SPARC Solaris RAC 10g and up to 8 nodes for Solaris x64 RAC 10g. See the Oracle certification page for the certified storage management options and associated node counts.

Posted by Gia-Khanh on May 23, 2008 at 05:17 AM PDT #

That is why I was confused: looking at the Sun Cluster Open Storage certification, the numbers are actually quite a bit lower, i.e. only 4 nodes maximum if you are running on Hitachi storage with the T2000s that I have.

Posted by john franklin on May 23, 2008 at 05:46 AM PDT #

The certified configuration information given earlier is more from a SW compatibility point of view. For a given choice of HW components the supported node count could be lower. Quoting another SC OSP config, 16 nodes (including T2000s) are supported for certain EMC storage products; look toward the end of the table.

Posted by Gia-Khanh on May 23, 2008 at 12:28 PM PDT #

OK, then it looks like the number of nodes supported depends on the storage array being used. My understanding is that this is due to persistent group reservation (PGR) support across the arrays. Veritas supports SCSI-3 reservations, so I guess that is why, as long as you run on their list of supported arrays, you can do the maximum of 32 nodes, whereas Sun ranges all the way from 2 nodes to 64 nodes depending on the PGR support in the array.

Posted by John Franklin on May 27, 2008 at 02:21 AM PDT #

Hi John,

To address your question: the number of supported nodes being dependent on the storage array is driven by the business requirements of each vendor. Some vendors believe that their customer base will use no more than four-node connectivity, and will therefore only test up to that maximum. Others may opt for six or eight and decide this is what is best for their business. Technically the capability is there to go much higher, but we let the business dictate how much resourcing is applied to a particular certification. I hope this addresses your question.

Roger Autrand
Senior Manager
Solaris Cluster Open Storage Program

Posted by Roger Autrand on June 03, 2008 at 10:44 PM PDT #

Thanks for the response, Roger. This helps. So what you are saying is that it could be possible to go higher than the stated max nodes in the Open Storage docs for a particular storage array; it is just that the max node count was the highest that vendor had tested, based on their customers' assumptions.

Posted by John Franklin on June 04, 2008 at 12:42 AM PDT #


Yes, you are correct!


Posted by roger Autrand on June 04, 2008 at 12:54 AM PDT #


I had a lot of problems designing an Oracle RAC without any issues on the RAC interconnect. In the two-node case, Solaris 10 link aggregation is the obvious solution, but with more than two nodes IPMP is only half a solution, so I'm considering revising the design. I found the private interconnect technology in Sun Cluster 3.x interesting for more than two nodes, since it gives us a virtual interface name, which solves my issues with the Oracle interconnect.

But more than a year ago I decided to avoid using Sun Cluster, because it's not a free product and it adds more configuration/administration, which for the customer means COMPLEXITY.

I'm wondering if the package used for the cluster private interconnect could be installed alone, without installing the other Sun Cluster packages?

This may help me; on the net I didn't find anyone who has tried this solution!

Any comments?


Posted by Mourad on November 06, 2008 at 01:38 AM PST #

You cannot install just the Solaris Cluster private interconnect functionality. So unfortunately, that's not an option.

You could argue that Oracle RAC adds complexity compared with HA Oracle or standalone Oracle, but because it has functionality that you need, you use it. I would suggest that the same is true of Solaris Cluster. It may add a small amount of complexity, but the benefits are substantial. It's not just the private interconnects, it's the consistent (automatically maintained) global device namespace, the support for volume managers and shared QFS file system, etc.

For more details on the benefits, see our whitepaper on the subject.

As for cost, well I think you'll find that the Solaris Cluster and RAC agent licenses are pretty reasonably priced, but I'll agree, they are not zero.


Posted by Tim Read on November 06, 2008 at 01:56 AM PST #

Hi Tim,

First thank you for your quick reply.

Yes, I know about the benefits; I read the document you supplied last year, during a flight. But I can't decide based on the cost; I just want to solve the interconnect issue. Until now everything has been free on Solaris 10 (aggregation is not free on Solaris 8/9).

Is there another option which can do what clprivnet does?

That is, a virtual interface built on two or more real interfaces connected to more than one Ethernet switch.

PS: even installing clprivnet alone is not free?? Just kidding.


Posted by Mourad on November 06, 2008 at 02:10 AM PST #

There is no other option (that I know of) that gives you exactly the same functionality as clprivnet - that's why we added it to Solaris Cluster.


Posted by Tim Read on November 06, 2008 at 07:10 PM PST #

Hi people,

I recently set up an Oracle RAC 10g on Sun Cluster 3.2 for a Certification Authority, following the "Installation Guide for Solaris(TM) Cluster 3.2 Software and Oracle(R) 10g Release 2 Real Application Clusters" by Fernando Castano (and succeeded).

My question is: Is the resource group and resources setup pointed out in the document sufficient? I am storing my db files on an SVM managed multi-owner metaset, i.e. I do have the resource group "rac-framework-rg" and the three resources "rac-framework-rs", "rac-udlm-rs", and "rac-svm-rs".

To be more concrete: Do I need an additional HA StoragePlus resource, which ensures that the SVM mount point is really "there" before the RAC "monster" is started?

Cheers, Ingo.

Posted by Ingo Kubbilun on February 27, 2009 at 02:23 AM PST #


I'm not quite sure what your phrase 'the SVM mount point is really "there" ...' means. An SVM disk set doesn't have a mount point; only a file system has a mount point. If you mean how do you ensure that the SVM disk set is imported, then that function is performed by the rac-svm-rs.

I'm assuming that:
'storing my db files on an SVM managed multi-owner metaset...'

means that you have a shared QFS file system. If so, there should be a couple of other Sun Cluster resources configured including a QFS metadata server resource and a scalable mount point resource. These can be created using the RAC configuration wizard in clsetup or via the Solaris Cluster Manager GUI.
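As a sanity check, you can also confirm what the wizard created with the Sun Cluster 3.2 CLI. The resource and group names below follow the examples in the question and the installation guide; yours may differ:

```shell
# Show the state of the RAC framework resource group and list its resources.
clresourcegroup status rac-framework-rg
clresource list -g rac-framework-rg

# Show the properties of the SVM proxy resource that manages the
# multi-owner disk set:
clresource show rac-svm-rs
```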

If you need clarification of any of this, please post further questions or email me directly.

Tim Read
Solaris Availability Engineering

Posted by Tim Read on March 01, 2009 at 05:07 PM PST #

Dear Tim,

sorry for the confusing email; I was already a little bit tired.

No, I do not deploy QFS. I concatenated two LUNs of my Sun StorEdge into one entity using SVM. It can be mounted on /global/oraracdata (the file system is UFS).

Maybe I misunderstood the HAStoragePlus resource type: I thought another resource of type HAStoragePlus with "FileSystemMountPoints=/global/oraracdata" was needed to ensure that it is mounted before the RAC group becomes operational?

Am I wrong?

Thanks in advance and kind regards, Ingo.

Posted by Ingo Kubbilun on March 01, 2009 at 05:26 PM PST #


You cannot put UFS or VxFS on a multi-owner diskset. Furthermore, you cannot install Oracle RAC data files on UFS or VxFS file systems mounted globally. Only shared QFS is supported for Oracle RAC data.

Your options for storing the various Oracle RAC structures are given in table 1-2 (page 22) of the "Sun Cluster Data Service for Oracle RAC Guide for Solaris OS".

I hope that helps. If not, please post again.

Tim Read
Solaris Availability Engineering

Posted by Tim Read on March 01, 2009 at 05:47 PM PST #


Oracle Solaris Cluster Engineering Blog

