Installing and testing RAC One Node

Okay, time to write something nice about RAC One Node.

In order to test RAC One Node on my laptop, I just:
- installed Oracle VM 2.2
- created two OEL 5.3 images

The two images are fully prepared for Oracle 11gR2 Grid Infrastructure and 11gR2 RAC, including four shared disks for ASM and private NICs.
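The shared ASM disks can be created as plain image files on the Oracle VM server and attached to both guests. A minimal sketch, assuming file names and a 2 GB size of my own choosing (not from the original setup):

```shell
# Create four sparse 2 GB disk images to serve as shared ASM disks.
# Names and sizes are illustrative; each file would then be attached
# to both guest configurations as a shared disk.
for i in 1 2 3 4; do
  dd if=/dev/zero of=asmdisk${i}.img bs=1M count=0 seek=2048
done
ls -l asmdisk*.img
```

The `seek` trick makes the files sparse, so they take almost no space on the server until ASM actually writes to them.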

How else could I test all this so quickly without virtualization?!

In order to view the screenshots best, I recommend zooming in a couple of times (Ctrl-+).

After installation of Oracle 11gR2 Grid Infrastructure and a "software only" installation of 11gR2 RAC, I installed patch 9004119, as you can see in the opatch lsinv output:

opatch.JPG
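For reference, applying and then verifying such a one-off patch typically looks like the sketch below. The paths are my own assumptions; the actual patch directory name comes from the downloaded 9004119 zip:

```shell
# Point OPatch at the RDBMS home the patch applies to
# (the home path is an example, not from the original setup)
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export PATH=$ORACLE_HOME/OPatch:$PATH

# Apply the patch from its unzipped staging directory
cd /tmp/9004119
opatch apply

# Confirm the patch is registered in the inventory
opatch lsinventory | grep 9004119
```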

This patch contains the scripts required to administer RAC One Node; you will see them later.
At the moment they are available for Linux and Solaris.

After installation of the patch, I created a RAC database with an instance on one node.
Please note that the "Global Database Name" has to be the same as the SID prefix and must be less than or equal to 8 characters:

cr_db.JPG
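The 8-character limit is easy to check up front. A trivial sketch, using the RKDB database name that appears later in this post:

```shell
# RAC One Node requires the global database name / SID prefix
# to be at most 8 characters; RKDB easily fits.
db_name=RKDB
if [ ${#db_name} -le 8 ]; then
  echo "OK: ${db_name} is ${#db_name} characters"
else
  echo "Too long: ${db_name}" >&2
fi
```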

When the database creation is done, I first create a service. This is because RAC One Node needs to be "initialized" each time you add a service:

oem_service_created.jpg

The service configuration details are:

service_config_details.jpg
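The same service can also be created from the command line with srvctl. A sketch, assuming the RKDB database name from this post; the service name and preferred instance are illustrative:

```shell
# Add a service to the RAC database RKDB, preferring instance RKDB_1
# (service name rkdb_srv is an example)
srvctl add service -d RKDB -s rkdb_srv -r RKDB_1

# Start the service and check where it runs
srvctl start service -d RKDB -s rkdb_srv
srvctl status service -d RKDB -s rkdb_srv
```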

After creating the service, a script called raconeinit needs to be run from $RDBMS_HOME/bin. This script is supplied by the patch; I can imagine the next major 11gR2 patch set will have these scripts available by default. The script configures the database to be able to run on other nodes:

racnodeoininit-success.jpg
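Invoking the script is straightforward; it prompts interactively for the database to initialize and the candidate nodes it may run on (a sketch, assuming the standard patch location):

```shell
# raconeinit is delivered by patch 9004119 into the RDBMS home's
# bin directory; it is an interactive script.
cd $ORACLE_HOME/bin
./raconeinit
```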

After initialization, if you run raconeinit again, you will see:

after_raconeinit.jpg

So, now the configuration is ready and we are ready to run 'Omotion' and move the service from one node to the other (yes, VM competitor: this service stays available during the migration, nice right?).

Omotion is started by running Omotion; with Omotion -v you get verbose output:

omotion.jpg
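A sketch of the invocation, assuming Omotion sits next to raconeinit in the RDBMS home's bin directory, as delivered by the patch:

```shell
# Run the migration interactively with verbose output; Omotion
# prompts for the database and the target node to move it to.
cd $ORACLE_HOME/bin
./Omotion -v
```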

So, during the migration with Omotion you will see two instances active (RKDB_1 and RKDB_2):

status_during_omotion.jpg

And after the migration, there is only one instance left, on the new node:

status_after_omotion.jpg
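Instead of OEM, the instance placement before, during and after the move can also be followed with srvctl (database name RKDB as in the screenshots):

```shell
# Shows which instance (RKDB_1 or RKDB_2) runs on which node;
# during the Omotion run both instances are briefly reported.
srvctl status database -d RKDB
```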

Of course, regular node failures will also initiate a failover, all covered by the default Grid Infrastructure functionality. One thing I noticed: if you kill a node that runs an active instance, the instance is failed over nicely by RAC One Node, but the name of the failed-over instance stays the same, so this is different behavior than the migration:

racnodeonefailover.jpg

p.s. 1:
Another funny thing I noticed is that after installing 11gR2 Grid Infrastructure, Oracle removes some essential lines from grub.conf. What you get when you then try to start a VM is: "error: boot loader didn't return any data".
Luckily Oracle creates a backup of grub.conf in /boot/grub, named grub.conf.orabackup.

So, you need to restore that file inside the VM image itself.

This can be done with the great lomount option.

First make sure you create this entry in the fstab of your Oracle VM server:

/var/ovs/mount/8BF866167A1746FE8FFA0EAA20939C55/running_pool/GRIDNODE01/System.img /tmp/q ext3 defaults 1 1

Then execute: lomount -diskimage ./System.img -partition 1

This mounts the image to /tmp/q so you can restore the file.
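Put together, repairing a guest that no longer boots looks roughly like this. I am assuming partition 1 is the guest's /boot partition, so the grub files appear directly under the mount point:

```shell
# Mount the first partition of the guest's system image on /tmp/q
# (lomount is part of the Oracle VM server tools; the pool GUID
# path is the one from the fstab entry above)
mkdir -p /tmp/q
cd /var/ovs/mount/8BF866167A1746FE8FFA0EAA20939C55/running_pool/GRIDNODE01
lomount -diskimage ./System.img -partition 1

# Restore the backup Oracle left behind
cp /tmp/q/grub/grub.conf.orabackup /tmp/q/grub/grub.conf

# Unmount before starting the guest again
umount /tmp/q
```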

p.s. 2:
I would like to demo some of this; hopefully I can do that at the next Planboard Symposium, June 8. See this link

Rene Kundersma
Oracle Technology Services, The Netherlands

Comments:

Pretty nice article. Was wondering when we were going to see the RAC One Node demo. Similar features to VMotion but services not interrupted? Would clients still rely on TAF/FCF to minimize service disruptions for active sessions?

Posted by Leighton on April 10, 2010 at 08:29 AM PDT #

Exactly, this 'motion' functionality is what no other vendor can do with a database and yes we require usage of FAN/FCF for this. Thanks Rene

Posted by rene.kundersma on April 10, 2010 at 03:52 PM PDT #

Thank you so much for your solution regarding "error: boot loader didn't return any data". It appears the differences in the grub.conf files for the EL5U4_X86_64_PVM template are:
$ diff grub.conf grub.conf.orabackup
15,17c15,16
<       kernel /xen.gz-2.6.18-164.0.0.0.1.el5
<       module /vmlinuz-2.6.18-164.0.0.0.1.el5xen ro root=LABEL=/
<       module /initrd-2.6.18-164.0.0.0.1.el5xen.img
---
>       kernel /vmlinuz-2.6.18-164.0.0.0.1.el5xen ro root=LABEL=/
>       initrd /initrd-2.6.18-164.0.0.0.1.el5xen.img
Not really sure why this wouldn't have been detected and/or resolved prior to releasing the VM template?

Posted by ebrian on June 06, 2010 at 06:42 AM PDT #

