Installing and testing RAC One Node
By Rene Kundersma on Apr 10, 2010
Okay, time to write something nice about RAC One Node.
To test RAC One Node on my laptop, I just:
- installed Oracle VM 2.2
- created two OEL 5.3 images
Both images are fully prepared for Oracle 11gR2 Grid Infrastructure and 11gR2 RAC, including four shared disks for ASM and private NICs.
How could I have tested all of this so quickly without virtualization?!
To view the screen captures best, I recommend zooming in a couple of times (Ctrl-+).
After installing Oracle 11gR2 Grid Infrastructure and doing a "software only" installation of 11gR2 RAC, I installed patch 9004119, as you can see in the opatch lsinventory output:
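Verifying the patch can be sketched like this; the ORACLE_HOME path below is an assumption for illustration, not my actual environment:

```shell
# Assumed RDBMS home; adjust to your environment.
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
# List the inventory and confirm patch 9004119 is applied.
$ORACLE_HOME/OPatch/opatch lsinventory | grep 9004119
```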
This patch contains the scripts required to administer RAC One Node; you will see them later.
At the moment the patch is available for Linux and Solaris.
After installing the patch, I created a RAC database with an instance on one node.
Please note that the "Global Database Name" has to be the same as the SID prefix and must be less than or equal to 8 characters:
When the database creation is done, I first create a service, because RAC One Node needs to be "initialized" each time you add a service:
The service configuration details are:
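Creating such a service can be sketched with srvctl; the database name RKDB, instance RKDB_1, and service name rk_srv are assumptions for illustration:

```shell
# Hypothetical names: database RKDB, preferred instance RKDB_1, service rk_srv.
srvctl add service -d RKDB -s rk_srv -r RKDB_1
srvctl start service -d RKDB -s rk_srv
# Show the resulting service configuration.
srvctl config service -d RKDB
```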
After creating the service, a script called raconeinit needs to be run from $RDBMS_HOME/bin. This script is supplied by the patch; I can imagine the next major 11gR2 patch set will include these scripts by default. The script configures the database to run on other nodes:
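Running the initialization might look like this; the script is interactive, so the sketch only shows the invocation:

```shell
# raconeinit is dropped into $ORACLE_HOME/bin by patch 9004119.
cd $ORACLE_HOME/bin
# Interactive: select the database and the candidate nodes it may run on.
./raconeinit
```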
After initialization, if you ran raconeinit again, you would see:
So, now the configuration is ready and we can run 'Omotion' to move the service from one node to the other (yes, VM competitor: the service stays available during the migration, nice right?).
Omotion is started by running Omotion; with Omotion -v you get verbose output:
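A minimal invocation can be sketched as follows; like raconeinit, the script prompts interactively:

```shell
# Omotion lives alongside raconeinit in $ORACLE_HOME/bin.
cd $ORACLE_HOME/bin
# -v gives verbose output; the script prompts for the database and target node.
./Omotion -v
```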
So, during the migration with Omotion you will see the two instances active (RKDB_1 and RKDB_2):
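One way to watch this yourself is a quick query against gv$instance, sketched here as a sqlplus here-document run as SYSDBA on either node:

```shell
# During the Omotion window both transient instances should be listed.
sqlplus -s / as sysdba <<'EOF'
set linesize 120
select inst_id, instance_name, status from gv$instance;
EOF
```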
And, after the migration, there is only one instance left on the new node:
Of course, regular node failures will also initiate a failover, all covered by the standard Grid Infrastructure functionality. One thing I noticed: if you kill a node that runs an active instance, the instance is failed over nicely by RAC One Node, but the name of the failed-over instance stays the same, so this behavior differs from a migration:
Another odd thing I noticed is that after installing 11gR2 Grid Infrastructure, Oracle removes some essential lines from grub.conf. When you then try to start a VM, you get: "error: boot loader didn't return any data".
Luckily, Oracle creates a backup of grub.conf in /boot/grub, named grub.conf.orabackup.
So you need to restore that file inside the VM image itself.
This can be done with the great lomount option.
First, make sure you create the entry in the fstab of your Oracle VM server:
/var/ovs/mount/8BF866167A1746FE8FFA0EAA20939C55/running_pool/GRIDNODE01/System.img /tmp/q ext3 defaults 1 1
Then execute: lomount -diskimage ./System.img -partition 1
This mounts the image at /tmp/q so you can restore the file.
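Put together, the restore might look like this, assuming partition 1 of System.img is the /boot partition (as in my images), so grub.conf appears under /tmp/q/grub:

```shell
# Mount partition 1 of the image; with the fstab entry above it lands on /tmp/q.
lomount -diskimage ./System.img -partition 1
# Restore the grub configuration from the backup Oracle made.
cp /tmp/q/grub/grub.conf.orabackup /tmp/q/grub/grub.conf
umount /tmp/q
```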
I would like to demo some of this; hopefully I can do so at the next Planboard Symposium on June 8. See this link.
Oracle Technology Services, The Netherlands