An Oracle blog about Exadata

  • September 19, 2009

11gR2 Grid Infrastructure Installation

Rene Kundersma
Software Engineer

There is so much to tell about the new features that come with 11gR2; this release will give me input for years! Since the "Oracle Database New Features Guide 11g Release 2" already does a good job there, I will not even try to cover all of it.

I will, however, try to discuss some highlights and cool new things that changed since the previous (11gR1) release. 11gR2 Grid Infrastructure is one of them.

11gR2 Grid Infrastructure combines Clusterware and ASM in one Oracle home and can be described as the next step in Grid Computing. If you are familiar with previous Clusterware and ASM releases, you will recognize the new functionality and way of working, and realize this is indeed the next step in what we need for enabling the Enterprise Grid. Deployment is simpler and faster, and we are no longer talking about nodes, but about services that live on resources.

One of the new features of 11gR2 is Grid Plug and Play, also called GPnP. Let me repeat what the documentation says about GPnP:

"Grid Plug and Play (GPnP) eliminates per-node configuration data and the need for explicit add and delete nodes steps. This allows a system administrator to take a template system image and run it on a new node with no further configuration. This removes many manual operations, reduces the opportunity for errors, and encourages configurations that can be changed easily. Removal of the per-node configuration makes the nodes easier to replace, because they do not need to contain individually-managed state.

Grid Plug and Play reduces the cost of installing, configuring, and managing database nodes by making their per-node state disposable. It allows nodes to be easily replaced with regenerated state."

Some of the key enablers for GPnP are GNS and DHCP. GNS, the Grid Naming Service, is described in the documentation.

Since all of the requirements for a Grid Infrastructure installation are clearly documented in the "Grid Infrastructure installation guide", there is no need to discuss them here.

This posting, however, is meant to demo how to do an "Advanced Installation" of the Grid Infrastructure yourself, and to show how to do an installation for education purposes, for example a situation at home where you want to test the setup of Oracle Grid Infrastructure with your own DNS and DHCP server. In real life, at customer sites, DNS and DHCP servers are already in place and Oracle Grid Infrastructure can leverage these existing services.

Since most steps of the Oracle Grid Infrastructure installation are easy, I will only focus on the details I want to discuss regarding GNS and DHCP.

Oracle Grid Infrastructure can be downloaded from Oracle's website, and once you have made sure all prerequisites are met, you can start the installation by executing runInstaller.

It does make sense to install the Oracle Grid Infrastructure under a different user id than the Oracle database software. For this, the Oracle documentation again has some sound examples. Because of this, I had to make sure that permissions for directories and, for example, the ASM disks are set up for the 'grid' user instead of 'oracle' (both with oinstall as group).
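On Linux, that ownership change on the ASM candidate disks is often done with a udev rule; a minimal sketch (the rule file name and device names here are hypothetical examples, not the ones from this install):

```
# /etc/udev/rules.d/99-oracle-asmdevices.rules -- sketch, hypothetical device names
KERNEL=="xvdb1", OWNER="grid", GROUP="oinstall", MODE="0660"
KERNEL=="xvdc1", OWNER="grid", GROUP="oinstall", MODE="0660"
```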

I used the user "grid" to install the Oracle Grid Infrastructure, and since I wanted to both install and configure it, I chose the first option.


A typical installation does not use GNS, and since the purpose of this posting is to explain the setup with GNS, the "Advanced Installation" option was chosen.


The language selection speaks for itself.


Okay, this is basically the most important step of the setup. Here you have to define the name for your cluster. In my case "cluster01"; that is an easy one, as nothing else depends on it.

The SCAN name, the "Single Client Access Name", will be set up by the Oracle Grid Infrastructure. This SCAN name will resolve to three IP addresses within the cluster. The good news is that you don't have to do much for it: just make up a name that your clients will use later to access databases in your cluster. SCAN port 1521 is the default port we always use for SQL*Net. The SCAN name has to be in the GNS sub domain, as explained below:


The option "Configure GNS" was checked. If this box had not been checked, SCAN could still be used, but then I would have had to set up the SCAN entries in DNS myself, with the SCAN name resolving to three different IP addresses.
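For completeness: without GNS, such a manual DNS setup would look roughly like the zone file sketch below (the addresses come from the RFC 5737 example range and are placeholders only):

```
; manual SCAN entries in DNS -- sketch with placeholder addresses
scan.cluster01.nl.oracle.com. IN A 192.0.2.21
scan.cluster01.nl.oracle.com. IN A 192.0.2.22
scan.cluster01.nl.oracle.com. IN A 192.0.2.23
```

Most DNS servers will then hand the three addresses out in round-robin order.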

However, since GNS is checked, the Grid Naming Service will be configured and GNS will set up my SCAN name. The only requirement is that a GNS sub domain must be created and the DNS must be configured so that each request for this sub domain is delegated to GNS, so that GNS can handle the request.

The GNS VIP address is the IP address of the server that will host the GNS. You need to make sure this address is available for use.

You may ask yourself why all this is needed. Well, imagine a cluster where nodes are added and removed dynamically. In this situation, the complete administration of IP address management and name resolution is done by the cluster itself. There is no need for any manual work updating connection strings, configuring IP addresses, etc.


So how does it work?

First, my (named, linux) DNS is running on

This DNS does the naming for cluster01.nl.oracle.com and pts.local.

For cluster01.nl.oracle.com a "delegation" is made, so that every request for a machine in the domain cluster01.nl.oracle.com is delegated to the GNS (via the GNS VIP).


cluster01.nl.oracle.com NS gns.cluster01.nl.oracle.com
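In a BIND zone file for nl.oracle.com, this delegation plus the required glue record could be sketched as follows (192.0.2.10 is a placeholder for the GNS VIP, not the address used in this setup):

```
; delegation of the GNS sub domain -- sketch, placeholder GNS VIP
cluster01.nl.oracle.com.      IN NS  gns.cluster01.nl.oracle.com.
gns.cluster01.nl.oracle.com.  IN A   192.0.2.10   ; glue record for the GNS VIP
```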


So, once the cluster installation is done, the GNS in the cluster will be started, and a request for scan.cluster01.nl.oracle.com will be forwarded to the GNS. The GNS will then take care of the request and answer with the three addresses in the cluster that will serve as SCAN listeners:

[root@gridnode01pts05 ~]# nslookup scan.cluster01.nl.oracle.com



Non-authoritative answer:

Name: scan.cluster01.nl.oracle.com


Name: scan.cluster01.nl.oracle.com


Name: scan.cluster01.nl.oracle.com


Also, with dig, you can see all information coming from GNS:

[root@dns-dhcp ~]# dig scan.cluster01.nl.oracle.com

; <<>> DiG 9.3.4-P1 <<>> scan.cluster01.nl.oracle.com

;; global options: printcmd

;; Got answer:

;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 46016

;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 10, ADDITIONAL: 10


;scan.cluster01.nl.oracle.com. IN A


scan.cluster01.nl.oracle.com. 6 IN A

scan.cluster01.nl.oracle.com. 6 IN A

scan.cluster01.nl.oracle.com. 6 IN A


oracle.com. 10732 IN NS dns2.us.oracle.com.

oracle.com. 10732 IN NS dns3.us.oracle.com.

oracle.com. 10732 IN NS dns4.us.oracle.com.

oracle.com. 10732 IN NS dns1-us.us.oracle.com.

oracle.com. 10732 IN NS dnsmaster1.oracle.com.

oracle.com. 10732 IN NS dnsmaster2.oracle.com.

oracle.com. 10732 IN NS dnsmaster3.oracle.com.

oracle.com. 10732 IN NS dnsmaster4.oracle.com.

oracle.com. 10732 IN NS dnsmaster5.oracle.com.

oracle.com. 10732 IN NS dnsmaster6.oracle.com.


dns2.us.oracle.com. 3984 IN A

dns3.us.oracle.com. 3984 IN A

dns4.us.oracle.com. 3984 IN A

dns1-us.us.oracle.com. 3984 IN A

dnsmaster1.oracle.com. 1060 IN A

dnsmaster2.oracle.com. 1060 IN A

dnsmaster3.oracle.com. 1060 IN A

dnsmaster4.oracle.com. 1060 IN A

dnsmaster5.oracle.com. 1060 IN A

dnsmaster6.oracle.com. 1060 IN A

;; Query time: 0 msec


;; WHEN: Sat Sep 19 17:15:47 2009

;; MSG SIZE rcvd: 486

Later, when the database is installed, you can use the SCAN with SQL*Net EZConnect to connect to the database. I can't wait; I just have to demo it now:

[oracle@gridnode01pts05 ~]$ sqlplus system/oracle@scan.cluster01.nl.oracle.com:1521/dbpts05.pts.local

SQL*Plus: Release Production on Sat Sep 19 17:11:32 2009

Copyright (c) 1982, 2009, Oracle. All rights reserved.

Connected to:

Oracle Database 11g Enterprise Edition Release - Production

With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,

Data Mining and Real Application Testing options


After specifying the GNS details, the details of the nodes in the cluster have to be entered. The node names you enter here also have to be resolvable. The management of the virtual IP addresses is done automatically, as long as a working DHCP service is available to serve IP addresses within that network. You can see here that the nodes can be in another domain than the GNS sub domain.



My DHCP config:
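A minimal dhcpd.conf serving such a network could look like the sketch below (subnet, range and lease times are assumptions for illustration; my actual values are not shown):

```
# /etc/dhcpd.conf -- sketch with placeholder addresses (RFC 5737 range)
subnet 192.0.2.0 netmask 255.255.255.0 {
    range 192.0.2.100 192.0.2.150;                # pool for node VIPs and SCAN VIPs
    option domain-name "cluster01.nl.oracle.com";
    default-lease-time 600;
    max-lease-time 7200;
}
```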

New in 11gR2 is the ability to let the installer configure ssh for you! Great!


Even if it says that it will take several minutes, most of the time the ssh setup is done within a minute.
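What the installer automates here is essentially the classic manual key exchange; as a sketch (run as the 'grid' user; the node name is an assumption):

```
# manual ssh user equivalency -- sketch, hypothetical node name
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa   # key pair without passphrase
ssh-copy-id grid@gridnode02                # install the public key on the other node
ssh grid@gridnode02 date                   # should now work without a password prompt
```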


As in 10g and 11gR1, this step is used to specify the public and private interfaces.

Same as in 10g and 11gR1, the private interface will be used for the interconnect.


11gR2 has the new option to place the OCR and Voting disks on ASM storage, so that is what I will do.


After choosing ASM, the next step is specifying the disks that will be used for the ASM diskgroup. This step also existed in 10g/11gR1, but at that time it was part of the DBCA.


I chose 6 disks of 2 GB with external redundancy.


You can see the installer complaining about the not-so-complicated password that I chose (since this is not for production purposes).



Intelligent Platform Management Interface (IPMI) support is really cool; look at this. You can choose to let the Grid Infrastructure work with it.


In the real world it does make sense to separate the three groups!



This screen is used to specify the Oracle base and software location for the Grid Infrastructure. Remember, this location has to exist (with read and write permissions) on all nodes that you plan to install the software on.


Location of the inventory


This is really great: the prerequisite checker now has the ability to generate a "fix" script.

So some (not all) requirements that are not set up correctly can be corrected by running a fix-up script generated by the prerequisite checker.
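The same checks, including fix-up script generation, can also be run from the command line before starting the installer; the stage name and flags below follow the documented 11gR2 cluvfy syntax (the node names are placeholders):

```
# pre-installation check with fix-up script generation -- node names are placeholders
./runcluvfy.sh stage -pre crsinst -n gridnode01,gridnode02 -fixup -verbose
```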


In my situation, some kernel parameters needed to be changed. That could be done easily with this tool.


Run the script on both nodes.


My swap space seems to be 1 KB too small. This is because I used an Oracle VM template (yes, although not officially certified yet, I am running virtualized). I think I will manage with 1 KB less than recommended.






Software being transferred to the other node:


And running the root scripts to setup permissions and configure the cluster.


Running oraInstRoot.sh on both nodes:


Running root.sh on node 1, this is where the cluster configuration is done.


Watch the Voting File on the first ASM disk in diskgroup DATA:
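Once the clusterware stack is up, the voting file location can be verified from the command line as well; a sketch of the command (no output shown, since it is system-specific):

```
# list the voting files -- should show a file on one of the DATA disks
$GRID_HOME/bin/crsctl query css votedisk
```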


And root.sh on node 1 finished...


Don't forget node 2:



After running the root scripts, you have to click OK. After this, some small post-configuration steps are performed by the installer, and cluvfy also runs. If this all runs fine, as it did here, the screen will quickly continue to the last page, telling you the installation was successful.


In my next postings I will show the installation and creation of an 11gR2 RAC database, and also adding nodes to the cluster and adding instances to the RAC database.

Rene Kundersma

Oracle Technology Services, The Netherlands

Comments (14)
  • Peter Sunday, September 20, 2009
    Awesome, as always!
    Haven't had time to do an install myself yet, but I cant wait to try this new beast out.
    Any chance of an ACFS-post?
  • Rene Thursday, September 24, 2009
    Peter thanks !
    Yes, I will. I was also thinking about DBFS, did you check out that ? awesome !
  • Dave Saturday, December 12, 2009
    regarding the scan-listener, supposing you have 3 nodes in your cluster, will there be 1 scan-listener on each node or will there be 1 listener that will active on one node at the time? Haven't figured that out yet.
  • Rene Saturday, December 12, 2009
    With this configuration you will have 3 scan listeners, even if you had 3 or more nodes.
  • Gilles Haro Wednesday, April 7, 2010
    This is a great job, Rene. Once again !!
    Thank you for these clarifications.
    The point is that, even for testing purposes, we are now forced to have DNS entries. /etc/hosts definitions are no longer sufficient.
  • João Rodolfo Friday, November 19, 2010
    Hi Rene
    I found your document yesterday and I thought it very helpful.
    I just don't get the SCAN information.
    Why it has three IPs for two nodes?
    In my head each node will answer one IP. That means on DNS entry for two IPs.
    Will the SCAN entry have one IP for itself?
  • rene.kundersma Friday, November 19, 2010
    Hi João
    Please see this document:
  • guest Sunday, May 29, 2011
    Hi Rene ,
    Its awesome very helpful...i am trying to do upgrade from 11gr2 to 11gr2 on windows 2 node RAC.
    Thank you very much for this posting.
    It helped me a lot.
    Best Regards,
    Abid Ali
    Oracle Dba
  • guest Monday, July 18, 2011

    I have quick question.

    Do we need a dynamic DNS, so that whatever ips are taken by the vips and scan from the DHCP should be correctly registered with DNS?

  • Rene Monday, July 18, 2011

    When you use GNS, Oracle does this for you. You just need to 'delegate' the DNS queries to the existing DNS to the GNS. You can find this in the manual.

  • guest Monday, July 18, 2011

    Thanks a lot for the quick reply. One more favor that i need from you.

    Can you post the link to the Oracle documentation here for delegating the DNS queries to the existing GNS? I looked in the Clusterware management book and also in the Real Application Clusters books, but of course I am not looking in the right manual.


  • guest Monday, August 8, 2011

    Nice doc. Thanks.

  • Karthik Tuesday, November 12, 2013

    Great info with screenshots!!! Do you have similar post for "Installation and creation of a 11gR2 RAC database" with screenshots.

  • guest Tuesday, November 12, 2013

    Sorry Karthik - no, I don't. However a quick google (images) on '11.2 rac install captures' and you will find them.
