News, tips, partners, and perspectives for the Oracle Solaris operating system

Oracle on Sun Cluster

Guest Author
Oracle is by far the most popular service running on Sun Cluster 3.x. Sun Cluster supports highly available (HA) Oracle, Oracle Parallel Server (OPS) and Oracle Real Application Clusters (RAC), giving users a very wide choice. Here it is the breadth of release, operating system and platform coverage that drives its appeal.
The HA Oracle agent on SPARC supports a long list of Oracle releases, from 8.1.6.x on Solaris 8 to 10.2.0.x on Solaris 10, and numerous options in between. Additionally, the Sun Cluster 3.1u4 HA Oracle agent for x86 (64-bit) supports Oracle 10g R1 (32-bit) and 10g R2 (64-bit).
The parallel database coverage is similarly extensive, with the SPARC platform supporting a broad set of volume managers (Solaris Volume Manager and Veritas Volume Manager) and Oracle releases from 8.1.7 up to 10.2.0.x. Oracle 10g R2 (10.2.0.x) is also supported on the 64-bit x86 platform.
There is also a wide set of Oracle data storage options: raw disk, highly available local file systems and global file systems for HA Oracle; raw disk or network-attached storage for Oracle OPS; and raw disk, network-attached storage or the shared QFS file system for Oracle RAC.
But why even mention that Sun supports these particular releases? Why not support every release in every hardware and software combination? The answer is that high availability is Sun Cluster's number one goal, and achieving it doesn't happen by accident. It demands careful design and implementation of the software, extensive peer review of all code changes, and extremely thorough testing.
Having joined the engineering group only in the last year or so, I was staggered by the sheer volume of testing that is actually performed. It was also encouraging to see how close the engineering relationship with Oracle is. For the recent release of Oracle 10g R2 on 64-bit x86 Solaris, the team I work with performed numerous Oracle-designed tests on the product. These checked the installation process, its 'flexing' capability (i.e. adding or removing nodes) and its co-existence with previous releases, each for the various storage options. These tests numbered in the hundreds and often required re-runs when bugs were found, and these were just the Oracle-mandated tests. In addition, the Sun Cluster QA team performed extensive load and fault-injection tests.
It's these latter two items that set Sun Cluster apart in the robustness stakes. What makes an insurance policy worth the investment is the degree of confidence the user has that it will 'do the right thing' when a failure occurs. When a system is sick or under load, userland processes often don't respond, or respond only after a long delay. It may also be difficult to determine whether other cluster nodes are alive or dead. Here, Sun Cluster comes into its own: the kernel-based membership monitor very quickly determines whether cluster nodes are alive and takes action, i.e. failure fencing, to ensure that failed or failing nodes do not corrupt critical customer data.
By using automated test harnesses, Sun Cluster's Quality Assurance (QA) team is able to simulate a wide variety of fault conditions, e.g. killing critical processes or aborting nodes, repeatably and at any point during the test cycle. Faults are also injected while the cluster is still recovering from previous faults. In addition, the QA team performs a comprehensive set of manual, physical fault injections, such as disconnecting network cables and storage connections. All of this helps ensure that the cluster survives and continues to provide service even in the event of cascading failures and under extreme load.
This level of "certification", rather than simple functional regression testing, means that Sun Cluster has the capability to achieve levels of service availability that competing products may struggle to match.
Tim Read
Staff Engineer


Comments (23)
  • Tim Read Tuesday, April 24, 2007
    First, apologies for the delay in posting a reply.

    Second, I should make it clear that this agent simply allows the Sun Cluster HA-Oracle agent to inter-operate with Oracle instances that are part of an Oracle Data Guard (ODG) configuration. This means that the start methods understand that a standby database should not be started with just the 'startup' command. Instead the agent uses 'startup nomount ; alter database mount standby database'. Notice that recovery is not started; that is up to the DBA to initiate.
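
    As a sketch, the SQL*Plus sequence for bringing up a physical standby looks roughly like this (exact syntax can vary by Oracle release, so treat this as illustrative rather than the agent's literal code):

    ```sql
    -- Sketch: mount a physical standby without opening it.
    STARTUP NOMOUNT;
    ALTER DATABASE MOUNT STANDBY DATABASE;
    -- Recovery is deliberately NOT started here; the DBA initiates it,
    -- typically with something like:
    -- ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
    ```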

    I'm not aware of any plans to backport this functionality to Sun Cluster 3.1. If you need it under that release you'd need to either write a SUNW.gds-based agent to manage Oracle yourself or put in a request to have the features backported. Unless there is substantial demand for the latter, I doubt it will happen.
  • Mick Scott Tuesday, November 6, 2007

    Is there a nice easy way to determine which versions of Oracle are supported with which versions of Sun Cluster and Solaris?



  • Dawkins Tuesday, February 5, 2008

    I recently read the document "Installation Guide for Solaris Cluster 3.2 SW and Oracle 10g Rel2 RAC". My project is in the process of setting up a RAC environment and we're using both Sun Cluster and Oracle Clusterware. I would like to know which cluster controls the VIPs: Sun Cluster or Oracle Clusterware?

  • Tim Read Tuesday, February 5, 2008

    Oracle clusterware controls the Oracle VIP resources.

  • Dawkins Tuesday, February 5, 2008

    Tim, thanks for your response. That means we will need to unregister the VIPs from Sun Cluster, place them only in /etc/hosts and register them in DNS. Then, when we get to that point during the Oracle Clusterware install, we will enter the VIPs and Oracle will configure/activate them at that time?

  • Tim Read Tuesday, February 5, 2008

    Correct. As you are using Oracle 10g RAC, Oracle Clusterware itself controls all of its own Oracle-related resources: VIPs, listeners, database instances, services, ASM, etc. Solaris Cluster works with Oracle Clusterware to provide the necessary infrastructure support: DIDs, membership and fencing, clprivnet, knowledge of storage status, etc.

    So yes, you are correct. Put the VIPs in /etc/hosts and register them in DNS (if required). Then supply them when installing Oracle RAC. Make sure that when you come to the private networks you choose *only* clprivnet. All others should be marked unused or public.
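
    For illustration, the /etc/hosts entries on each node might look like this (the hostnames and addresses below are made up):

    ```text
    # Physical hostnames plus one VIP per cluster node (hypothetical values)
    192.168.10.11   node1
    192.168.10.12   node2
    192.168.10.21   node1-vip
    192.168.10.22   node2-vip
    ```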

    Hope that clarifies things.



  • Dawkins Wednesday, February 6, 2008


    Have you seen environments using the combination of Sun Cluster and Oracle Clusterware hosting multiple databases with their own VIPs? If so, how were the additional VIPs registered with Oracle Clusterware?

  • Tim Read Wednesday, February 6, 2008

    So you want a VIP per database instance? If so, it's not something I've tried, and I don't know whether customers do it either, though I can't see why it shouldn't be possible. I would expect you would just register the additional VIPs using crs_register, and I would expect that to be documented in the Oracle manuals.

    I think this is really a question of the capabilities of Oracle Clusterware rather than Solaris Cluster. We certainly don't restrict what Clusterware can do.

    Just curious: why are you going for separate VIPs rather than using separate ports on the VIP to control access to the databases? I would have thought you could set up listeners on different ports and have suitable TNS name entries map to them.
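
    Such TNS entries might look something like the following tnsnames.ora sketch, where the two databases share the node VIPs but are reached through listeners on different ports (all names, ports and service names here are hypothetical):

    ```text
    DB1 =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
        (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
        (LOAD_BALANCE = yes)
        (CONNECT_DATA = (SERVICE_NAME = db1.example.com))
      )

    DB2 =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1522))
        (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1522))
        (LOAD_BALANCE = yes)
        (CONNECT_DATA = (SERVICE_NAME = db2.example.com))
      )
    ```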



  • Dawkins Wednesday, February 6, 2008

    I am currently waiting for Oracle Support to respond to me. The databases are controlled by different contractors and will run on different ports/listeners; this is why we want to use separate VIPs. Scenario: if the three DB instances are using the same VIP and Instance2 goes down, what happens to Instance1 and Instance3 when Clusterware fails the VIP over to node2? That's our concern. If we're using different VIPs and Instance2 goes down, then Clusterware will only fail over the VIP associated with Instance2 to the other node, leaving Instance1 and Instance3 alone.

  • Tim Read Wednesday, February 6, 2008

    Why should the instance failing cause the VIP to migrate? Normally there is no dependency of the VIP on the instance!

    Certainly the listener resource depends on the VIP and without the listener the database is inaccessible.

    If you use "crs_stat -p <resource>" you can see its properties.
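
    For example, something like the following can be used to inspect the dependency chain (the resource names shown are the usual 10g defaults, but check your own with "crs_stat -t"):

    ```shell
    # List all Clusterware resources and their current state
    crs_stat -t

    # Show the properties of the VIP resource for node1
    crs_stat -p ora.node1.vip

    # Show the listener resource; its REQUIRED_RESOURCES property
    # should name the VIP, not the database instance
    crs_stat -p ora.node1.LISTENER_NODE1.lsnr
    ```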



  • Dawkins Thursday, February 7, 2008

    The Oracle documentation states that the clusterware will move the VIP over to the available node.

  • John Franklin Friday, May 23, 2008

    What is the maximum number of nodes supported for an Oracle 10g RAC cluster? I am finding that this number depends on your storage array, whereas VCS just flatly says 32 nodes regardless of what infrastructure you run it on.

  • John Franklin Tuesday, May 27, 2008

    OK, then it looks like the number of nodes supported depends on the storage array being used. My understanding is this is due to persistent group reservation (PGR) support across the arrays. Veritas supports SCSI-3 reservations, so I guess that is why, as long as you run on their list of supported arrays, you can do the maximum of 32 nodes, whereas Sun ranges all the way from 2 nodes to 64 nodes depending on the PGR support in the array.

  • Roger Autrand Wednesday, June 4, 2008

    Hi John,

    To address your question: the number of supported nodes does depend on the storage arrays being used, and that is driven by the business requirements of each vendor. Some vendors believe that their customer base will use no more than four-node connectivity and will therefore only test up to that number. Others may opt for six or eight and decide this is what is best for their business. Technically the capability is there to go much higher, but we let the business dictate how much resourcing will be applied to a particular certification. I hope this addresses your question.

    Roger Autrand

    Senior Manager

    Solaris Cluster Open Storage Program

  • John Franklin Wednesday, June 4, 2008

    Thanks for the response, Roger. This helps. So what you are saying is that it could be possible to go higher than the stated maximum node count in the Open Storage docs for a particular storage array; it is just that the maximum node count is the highest that vendor had tested, based on their customers' assumptions.

  • Roger Autrand Wednesday, June 4, 2008


    Yes, you are correct!


  • Mourad Thursday, November 6, 2008


    I have had a lot of problems designing an Oracle RAC interconnect that is free of issues. In the two-node case it's obvious that Solaris 10 link aggregation is the solution, but with more than two nodes IPMP is only a half solution, so I'm considering reviewing the design. I found the private interconnect technology in Sun Cluster 3.x interesting for more than two nodes, since it gives us a virtual interface name, which solves my issues with the Oracle interconnect.

    But more than a year ago I decided to avoid using Sun Cluster because it's not a free product and it adds more configuration/administration, which for the customer means COMPLEXITY.

    I'm wondering if the package used for the cluster private interconnect could be installed alone, without installing the other Sun Cluster packages?

    This may help me; on the net I didn't find anyone who has tried this solution!

    Any comments?


  • Mourad Thursday, November 6, 2008

    Hi Tim,

    First thank you for your quick reply.

    Yes, I know about the benefits; I read the document you supplied me last year, during a flight.

    But I can't decide because of the cost. I just want to solve the issue on the interconnect; until now everything is free on Solaris 10 (aggregation is not free on Solaris 8/9).

    Is there another option which can do what clprivnet does?

    That is, a virtual interface over two or more real interfaces connected to more than one Ethernet switch.

    PS: is even installing clprivnet alone not free? Kidding.


  • Tim Read Friday, November 7, 2008

    There is no other option (that I know of) that gives you exactly the same functionality as clprivnet - that's why we added it to Solaris Cluster.



  • Ingo Kubbilun Friday, February 27, 2009

    Hi people,

    I recently set up Oracle RAC 10g on a Sun Cluster 3.2 for a Certification Authority, following the "Installation Guide for Solaris(TM) Cluster 3.2 Software and Oracle(R) 10g Release 2 Real Application Clusters" by Fernando Castano (and succeeded).

    My question is: is the resource group and resources setup described in the document sufficient? I am storing my DB files on an SVM-managed multi-owner metaset, i.e. I have the resource group "rac-framework-rg" and the three resources "rac-framework-rs", "rac-udlm-rs", and "rac-svm-rs".

    To be more concrete: Do I need an additional HA StoragePlus resource, which ensures that the SVM mount point is really "there" before the RAC "monster" is started?

    Cheers, Ingo.

  • Tim Read Monday, March 2, 2009


    I'm not quite sure what your phrase:

    'the SVM mount point is really "there" ...'

    means. An SVM disk set doesn't have a mount point, only a file system has a mount point. If you mean how do you ensure that the SVM disk set is imported, then that function is performed by the rac-svm-rs.

    I'm assuming that:

    'storing my db files on an SVM managed multi-owner metaset...'

    means that you have a shared QFS file system. If so, there should be a couple of other Sun Cluster resources configured including a QFS metadata server resource and a scalable mount point resource. These can be created using the RAC configuration wizard in clsetup or via the Solaris Cluster Manager GUI.

    If you need clarification of any of this, please post further questions or email me directly.

    Tim Read

    Solaris Availability Engineering

  • Ingo Kubbilun Monday, March 2, 2009

    Dear Tim,

    sorry for the confusing email; I was already a little bit tired.

    No, I do not deploy QFS. I concatenated two LUNs of my Sun StorEdge into one entity using SVM. It can be mounted on /global/oraracdata (the file system is UFS).

    Maybe I misunderstood the HAStoragePlus resource type: I thought that another resource of type HAStoragePlus with "FileSystemMountPoints=/global/oraracdata" is needed to ensure that the file system is mounted before the RAC group may become operational?

    Am I wrong?

    Thanks in advance and kind regards, Ingo.

  • Tim Read Monday, March 2, 2009


    You cannot put UFS or VxFS on a multi-owner diskset. Furthermore, you cannot install Oracle RAC data files on UFS or VxFS file systems mounted globally. Only shared QFS is supported for Oracle RAC data.

    Your options for storing the various Oracle RAC structures are given in table 1-2 (page 22) of the "Sun Cluster Data Service for Oracle RAC Guide for Solaris OS".

    I hope that helps. If not, please post again.

    Tim Read

    Solaris Availability Engineering
