11gR2 Grid Infrastructure Installation
By Rene Kundersma on Sep 19, 2009
There is so much to tell about the new features that come with 11gR2 that this release will give me input for years! Since the "Oracle Database New Features Guide 11g Release 2" already does a good job there, I will not even try to cover all of it.
I will however try to discuss some highlights or cool new things that changed since the previous (11gR1) release. 11gR2 Grid Infrastructure is one of those things.
11gR2 Grid Infrastructure combines Clusterware and ASM in one Oracle home and can be described as the next step in Grid Computing. If you are familiar with previous Clusterware and ASM releases, you will recognize the new functionality and way of working, and realize this is indeed the next step in enabling the Enterprise Grid. Deployment is simpler and faster, and we are no longer talking about nodes, but about services that live on resources.
One of the new features of 11gR2 is Grid Plug and Play, also called GPnP. Let me repeat what the documentation says about GPnP:
"Grid Plug and Play (GPnP) eliminates per-node configuration data and the need for explicit add and delete nodes steps. This allows a system administrator to take a template system image and run it on a new node with no further configuration. This removes many manual operations, reduces the opportunity for errors, and encourages configurations that can be changed easily. Removal of the per-node configuration makes the nodes easier to replace, because they do not need to contain individually-managed state.
Grid Plug and Play reduces the cost of installing, configuring, and managing database nodes by making their per-node state disposable. It allows nodes to be easily replaced with regenerated state"
Some of the key enablers for GPnP are GNS and DHCP. GNS, the Grid Naming Service is described here.
Since all of the requirements for a Grid Infrastructure installation are clearly documented in the "Grid Infrastructure Installation Guide", there is no need to discuss them here.
This posting, however, demonstrates how to do an "Advanced Installation" of the Grid Infrastructure yourself, for education purposes: for example a situation at home where you want to test the setup of Oracle Grid Infrastructure with your own DNS and DHCP server. In real life, at customer sites, DNS and DHCP servers are already in place and Oracle Grid Infrastructure can leverage these existing services.
Since most steps of the Oracle Grid Infrastructure installation are easy, I will only focus on the details I want to discuss regarding GNS and DHCP.
Oracle Grid Infrastructure can be downloaded here, and once you have made sure all prerequisites are met, you can start the installation by executing runInstaller.
It does make sense to install the Oracle Grid Infrastructure under a different user id than the Oracle Database software. Here, too, the Oracle documentation has some sound examples. Because of this I had to make sure permissions for directories and, for example, ASM disks are set up with 'grid' instead of 'oracle' as the owner (and oinstall as the group for both).
I used user "grid" to install the Oracle Grid Infrastructure and since I wanted to install and configure the Oracle Grid Infrastructure I chose the first option.
A typical installation does not use GNS, and since the purpose of this posting is to explain the setup with GNS, the "Advanced Installation" option was chosen.
Language, speaks for itself.
Okay, this is basically the most important step of the setup. Here you have to define the name of your cluster. In my case "cluster01"; that is an easy one, as nothing else depends on it.
The SCAN name, however, is the "Single Client Access Name" and will be set up by the Oracle Grid Infrastructure. This SCAN name will resolve to three IP addresses within the cluster. The good news is that you don't have to do much for it: just make up a name that your clients will later use to access databases in your cluster. SCAN port 1521 is the default port we always use for SQL*Net. The SCAN name has to be in the GNS subdomain, as explained below:
The option "Configure GNS" was checked. If this box were not checked, SCAN could still be used, but then I would have had to set up the SCAN entries in DNS myself, with the SCAN name resolving to three different IP addresses.
However, since GNS is checked, the Grid Naming Service will be configured and GNS will set up my SCAN name. The only requirement is that a GNS subdomain be created and the DNS configured so that each request for this subdomain is delegated to GNS, which can then handle the request.
The GNS VIP address is the IP address of the server that will host GNS. You need to make sure this address is available for use.
You may ask yourself why all this is needed. Well, imagine a cluster where nodes are added and removed dynamically. In this situation, the complete administration of IP addresses and name resolution is done by the cluster itself. There is no need for any manual work updating connection strings, configuring IP addresses, and so on.
So how does it work?
First, my DNS server (BIND's named, on Linux) is running on 10.161.102.40.
This DNS does the naming for cluster01.nl.oracle.com and pts.local.
For cluster01.nl.oracle.com a "delegation" is made, so that every request for a machine in the domain cluster01.nl.oracle.com is delegated to GNS (via the GNS VIP):
cluster01.nl.oracle.com NS gns.cluster01.nl.oracle.com
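In BIND terms, this delegation amounts to a fragment like the following in the zone file for nl.oracle.com. The NS record is the one shown above; the glue A record must point at the GNS VIP entered in the installer (the address below is a placeholder, not my actual VIP):

```
; delegate the GNS subdomain to the cluster's GNS
cluster01.nl.oracle.com.      IN NS   gns.cluster01.nl.oracle.com.
; glue record: gns resolves to the GNS VIP (placeholder address)
gns.cluster01.nl.oracle.com.  IN A    10.161.102.XX
```

The glue record is required because the name server for the delegated subdomain lives inside that same subdomain.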
So, once the cluster installation is done, the GNS in the cluster will be started, and a request for scan.cluster01.nl.oracle.com will be forwarded to GNS. GNS will then take care of the request and answer with the three addresses in the cluster that serve as SCAN listeners:
[root@gridnode01pts05 ~]# nslookup scan.cluster01.nl.oracle.com
Also, with dig, you can see all information coming from GNS:
[root@dns-dhcp ~]# dig scan.cluster01.nl.oracle.com
; <<>> DiG 9.3.4-P1 <<>> scan.cluster01.nl.oracle.com
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 46016
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 10, ADDITIONAL: 10
;; QUESTION SECTION:
;scan.cluster01.nl.oracle.com. IN A
;; ANSWER SECTION:
scan.cluster01.nl.oracle.com. 6 IN A 10.161.102.78
scan.cluster01.nl.oracle.com. 6 IN A 10.161.102.79
scan.cluster01.nl.oracle.com. 6 IN A 10.161.102.77
;; AUTHORITY SECTION:
oracle.com. 10732 IN NS dns2.us.oracle.com.
oracle.com. 10732 IN NS dns3.us.oracle.com.
oracle.com. 10732 IN NS dns4.us.oracle.com.
oracle.com. 10732 IN NS dns1-us.us.oracle.com.
oracle.com. 10732 IN NS dnsmaster1.oracle.com.
oracle.com. 10732 IN NS dnsmaster2.oracle.com.
oracle.com. 10732 IN NS dnsmaster3.oracle.com.
oracle.com. 10732 IN NS dnsmaster4.oracle.com.
oracle.com. 10732 IN NS dnsmaster5.oracle.com.
oracle.com. 10732 IN NS dnsmaster6.oracle.com.
;; ADDITIONAL SECTION:
dns2.us.oracle.com. 3984 IN A 184.108.40.206
dns3.us.oracle.com. 3984 IN A 220.127.116.11
dns4.us.oracle.com. 3984 IN A 18.104.22.168
dns1-us.us.oracle.com. 3984 IN A 22.214.171.124
dnsmaster1.oracle.com. 1060 IN A 126.96.36.199
dnsmaster2.oracle.com. 1060 IN A 188.8.131.52
dnsmaster3.oracle.com. 1060 IN A 184.108.40.206
dnsmaster4.oracle.com. 1060 IN A 220.127.116.11
dnsmaster5.oracle.com. 1060 IN A 18.104.22.168
dnsmaster6.oracle.com. 1060 IN A 22.214.171.124
;; Query time: 0 msec
;; SERVER: 10.161.102.40#53(10.161.102.40)
;; WHEN: Sat Sep 19 17:15:47 2009
;; MSG SIZE rcvd: 486
Later, when the database is installed, you can use the SCAN with SQLNet EZ connect to connect to the database. Can't wait, I just have to demo it now:
[oracle@gridnode01pts05 ~]$ sqlplus <user>@scan.cluster01.nl.oracle.com:1521/dbpts05.pts.local
SQL*Plus: Release 11.2.0.1.0 Production on Sat Sep 19 17:11:32 2009
Copyright (c) 1982, 2009, Oracle. All rights reserved.
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
After specifying the GNS details, the details of the nodes in the cluster have to be entered. The node names you enter here also have to be resolvable. The management of the virtual IP addresses is done automatically, as long as a working DHCP service is available to serve IP addresses in that network. You can see here that the nodes can be in a different domain than the GNS subdomain.
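For completeness, a minimal dhcpd.conf along these lines is enough for the cluster to lease its node VIPs and SCAN addresses. The lease range and router shown are assumptions consistent with the 10.161.102 network above, not my exact settings:

```
# minimal dhcpd.conf so the cluster can lease its VIPs and SCAN addresses
ddns-update-style none;
subnet 10.161.102.0 netmask 255.255.255.0 {
    # pool from which node VIPs and SCAN VIPs are handed out
    # (range and router are placeholders; adjust to your network)
    range 10.161.102.50 10.161.102.100;
    option routers 10.161.102.1;
    default-lease-time 86400;
    max-lease-time 86400;
}
```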
New in 11gR2 is the ability to let the installer configure SSH for you! Great!
Even though it says it will take several minutes, most of the time the SSH setup is done within a minute.
As in 10g and 11gR1, this step is used to specify the public and private interfaces.
Same as in 10g and 11gR1, the private interface will be used for the interconnect.
New in 11gR2 is the option to place the OCR and voting disks on ASM storage, so that is what I will do.
After choosing ASM, the next step is specifying the disks that will be used for the ASM diskgroup. This also resembles 10g/11gR1, except that back then this step was part of DBCA.
Choosing 6 disks of 2GB with external redundancy.
You see the installer complaining about the not-so-complicated password I chose (since this is not for production purposes).
Intelligent Platform Management Interface (IPMI) support is really cool; look at this. You can choose whether to let the Grid Infrastructure work with it.
In the real world it does make sense to separate the three groups!
Screen to specify the Oracle base and software location for the Grid Infrastructure. Remember, this location has to exist (with read and write permissions) on all nodes on which you plan to install the software.
Location of the inventory
This is really great. The Prerequisite checker now has the ability to generate a "fix" script.
So some (not all) of the requirements that are not set up correctly can be corrected by running a fix-up script generated by the prerequisite checker.
In my situation, some kernel parameters needed to be changed. That could be done easily with this tool.
Run the script on both nodes.
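For reference, the kernel parameters such a fix-up script adjusts correspond to the documented 11gR2 minimums for Linux; in /etc/sysctl.conf they look roughly like this (values taken from the install guide, so verify them for your platform; kernel.shmmax additionally depends on the amount of RAM and is omitted here):

```
# 11gR2 Grid Infrastructure minimum kernel settings (Linux)
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 2097152
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
```

Running sysctl -p afterwards applies the new values without a reboot.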
My swap space seems to be 1KB too small. This is because I used an Oracle VM template (yes, although it is not officially certified yet, I am running virtualized). I think I will manage with 1KB less than recommended.
Software being transferred to the other node:
And running the root scripts to setup permissions and configure the cluster.
Running oraInstRoot.sh on both nodes:
Running root.sh on node 1, this is where the cluster configuration is done.
Watch the Voting File on the first ASM disk in diskgroup DATA:
And root.sh on node 1 finished...
Don't forget node 2:
After running the root scripts, you click OK. The installer then performs some small post-configuration steps and runs cluvfy. If this all runs fine, as it did here, the screen quickly continues to the last page, telling you the installation was successful.
In my next postings I will show the installation and creation of an 11gR2 RAC database, as well as adding nodes to the cluster and adding instances to the RAC database.
Oracle Technology Services, The Netherlands