Antony Reynolds' Blog

  • September 28, 2010

Off the RAC

Antony Reynolds
Senior Director Integration Strategy

Configuring a RAC Cluster for SOA

To get the highest availability for a SOA cluster, the backend database needs to be highly available.  So in this post I will go through the minimum requirements to get a RAC cluster up and running, ready for use by SOA.  Note that this configuration is not suitable for production, but it is useful for developing and testing in an environment that is similar to production.


I decided to go for an 11gR2 RAC cluster running on Oracle Enterprise Linux 5.5.  I used two Linux servers as database machines and OpenFiler as the NFS server providing shared storage.  I created all of these as images under VirtualBox.


NFS Preparation

I brought up OpenFiler and, after initial configuration to use the internal RAC LAN, I created a single volume group (rac) and then created the following volumes with associated shares.

Volume    Size    Share Location           Description
db        10GB    /mnt/rac/db/share        RAC Database Software
grid      10GB    /mnt/rac/grid/share      RAC Grid Software
cluster   1.5GB   /mnt/rac/cluster/share   RAC Cluster Files
data      10GB    /mnt/rac/data/share      RAC Data Files

The shares were configured with public guest access and RW access permissions.  UID/GID mapping was set to no_root_squash, I/O mode to sync, write delay to no_wdelay, and request origin port to insecure (>1024).
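Outside of OpenFiler's GUI, the same shares could be exported from any Linux NFS server with /etc/exports entries along these lines; this is a sketch, and the client subnet 192.168.2.0/24 is a hypothetical placeholder for the internal RAC LAN's actual subnet.

```text
# /etc/exports -- sketch of equivalent exports; the subnet is hypothetical
/mnt/rac/db/share       192.168.2.0/24(rw,sync,no_wdelay,insecure,no_root_squash)
/mnt/rac/grid/share     192.168.2.0/24(rw,sync,no_wdelay,insecure,no_root_squash)
/mnt/rac/cluster/share  192.168.2.0/24(rw,sync,no_wdelay,insecure,no_root_squash)
/mnt/rac/data/share     192.168.2.0/24(rw,sync,no_wdelay,insecure,no_root_squash)
```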

OS Preparation

The first step was to install the OS and configure it to use yum.  After updating packages to the latest revisions I could then apply the packages needed by RAC.  The easiest way to apply the required packages was to install the oracle-validated package (yum install oracle-validated), as this automatically installs all the packages required by RAC and sets the necessary system parameters.

I also modified the /etc/sysconfig/ntpd file to add a -x flag at the start of the options to allow clock slewing.
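As a sketch, that edit can be scripted; the demonstration below applies the sed expression to a sample copy of the stock OEL 5 options line rather than to the live /etc/sysconfig/ntpd.

```shell
# Sample of the default OPTIONS line from /etc/sysconfig/ntpd on OEL 5
echo 'OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid -g"' > ntpd.sample

# Insert -x at the start of the options so ntpd slews rather than steps the clock
sed -i 's/^OPTIONS="/OPTIONS="-x /' ntpd.sample

cat ntpd.sample
```

On the real system you would run the same sed against /etc/sysconfig/ntpd and then restart the service with service ntpd restart.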


I then created the following user and appropriate groups.

User     Default Group   Groups
oracle   oinstall        oinstall, oracle, dba
I also added the following to the .bash_profile, changing the ORACLE_HOSTNAME and ORACLE_SID as appropriate.

# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR

ORACLE_HOSTNAME=rac1.soa.oracle.com; export ORACLE_HOSTNAME
ORACLE_SID=rac1; export ORACLE_SID
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE

PATH=/usr/sbin:$PATH; export PATH

if [ $USER = "oracle" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
fi
I actually set up three network cards on my Linux servers.

  • eth0 – DHCP configured to allow access to outside world
  • eth1 – dedicated to the external RAC LAN; this is only reachable by SOA servers, has fixed IP addresses, and also hosts the floating IP addresses required by the RAC listeners.
  • eth2 – dedicated to the internal RAC LAN; this is only reachable by RAC servers and has fixed IP addresses.  It also provides access to the storage device.

So each RAC server had a DHCP address, a fixed IP address on the external RAC LAN and a fixed IP address on the internal RAC LAN.
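For reference, a fixed-address interface on OEL 5 is configured through an ifcfg file; a sketch for eth1 is shown below, where the address and netmask are hypothetical placeholders rather than the values I actually used.

```text
# /etc/sysconfig/network-scripts/ifcfg-eth1 -- external RAC LAN (sketch)
DEVICE=eth1
BOOTPROTO=static
IPADDR=192.168.1.101
NETMASK=255.255.255.0
ONBOOT=yes
```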

I provided the following hosts file, with each entry preceded by the appropriate fixed IP address for its network.

# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       localhost.localdomain localhost
# ::1            localhost6.localdomain6 localhost6

# RAC #
#######
nas1.soa.oracle.com      nas1
rac1.soa.oracle.com      rac1
rac2.soa.oracle.com      rac2
rac-scan.soa.oracle.com  rac-scan
rac1-vip.soa.oracle.com  rac1-vip
rac2-vip.soa.oracle.com  rac2-vip

The last three addresses are dynamically registered by the grid services layer and are used by the network listeners.  These are the addresses that RAC will expose to the SOA Suite.
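For example, a SOA Suite data source (or a tnsnames.ora entry) would point at the SCAN address rather than an individual node.  A sketch, assuming the database service is named rac.soa.oracle.com (the service name here is an illustrative assumption):

```text
RAC =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan.soa.oracle.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = rac.soa.oracle.com)
    )
  )
```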

File Structure

I created the following file structure on the Linux servers.  Folders marked (shared) are mount points for shared files.

/u01
  /app
    /11gR2
      /grid                  (shared)
    /oracle
      /product
        /11gR2
          /db                (shared)
  /cluster                   (shared)
  /oradata                   (shared)
Ownership of the entire /u01 sub-tree was given to oracle in group oinstall (chown -R oracle:oinstall /u01).  Permissions were set to 775 (chmod -R 775 /u01).

NFS Client

I added the following entries to the /etc/fstab file to enable the RAC servers to mount the shared NFS file system.

nas1:/mnt/rac/grid/share     /u01/app/11gR2/grid                nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0
nas1:/mnt/rac/db/share       /u01/app/oracle/product/11gR2/db   nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0
nas1:/mnt/rac/cluster/share  /u01/cluster                       nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0
nas1:/mnt/rac/data/share     /u01/oradata                       nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0

After mounting the NFS directories it was necessary to rerun the chown and chmod commands executed earlier to set permissions correctly on the NFS folders.


My final OS preparation step was to set the desktop background differently on each machine so that I knew which machine I was on just by glancing at the background.  This helps to avoid unfortunate incidents of doing the wrong thing on the wrong machine.


Having prepared everything, I shut down the three virtual machines (nas1, rac1 and rac2) and took a snapshot of each virtual image, labeling them pre-grid.  Then, if there were problems later, I could revert to the configuration just before installing any software.  When starting the virtual machines I always started OpenFiler first so that the RAC servers would be able to find it.
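From the host, the snapshotting can be scripted with VBoxManage; a sketch, assuming the VMs are named nas1, rac1 and rac2 and are already powered off:

```shell
# Take a pre-grid snapshot of each VM (run on the VirtualBox host)
for vm in nas1 rac1 rac2; do
  VBoxManage snapshot "$vm" take pre-grid
done
```

Reverting later is then a matter of VBoxManage snapshot "$vm" restore pre-grid.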

Grid Install

With the OS prepared I logged in as the oracle user and kicked off the grid install, choosing the advanced install option.  I identified my nodes as rac1 and rac2, with the internal RAC network as the private interface and the external RAC network as the public interface.  I used the shared file system storage option and claimed external redundancy, setting the OCR file location to /u01/cluster/storage/ocr and the voting disk location to /u01/cluster/storage/vdsk.  I installed the software onto the shared disk at /u01/app/11gR2/grid.  The installer automatically installs the software on both RAC nodes.

During the verification step you may find you are still missing a couple of packages, and some settings may not be correct.  The packages can be added using yum without aborting the install, and the installer generates root scripts to adjust any parameters that need modifying.
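The same checks can be re-run by hand using the cluster verification utility shipped in the grid installation media; a sketch, run as the oracle user from the staging directory:

```shell
# Verify both nodes are ready for the clusterware install
./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose
```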


After installing the grid software I again shut down all the servers and took a snapshot of each of them, labeling it grid.

DB Install

With the cluster services installed and running I logged in as the oracle user and kicked off the database install, choosing the "database software only" option and selecting a RAC install on the rac1 and rac2 nodes.  I identified the software location as /u01/app/oracle/product/11gR2/db.


After installing the database software I again shut down all the servers and took a snapshot of each of them, labeling it db.

Database Creation

With the database software installed I ran the $ORACLE_HOME/bin/dbca utility to create a RAC database.  I chose the advanced install option, selected the rac1 and rac2 nodes, and chose the AL32UTF8 character set.  On my machine the database configuration wizard took about 10 hours to complete, but it did finish successfully.
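For the record, a similar database can be created non-interactively with dbca in silent mode; this is a sketch only, assuming the General Purpose template and a hypothetical password (not what I ran):

```shell
# Silent RAC database creation across both nodes (sketch; password is hypothetical)
$ORACLE_HOME/bin/dbca -silent -createDatabase \
  -templateName General_Purpose.dbc \
  -gdbName rac -sid rac \
  -nodelist rac1,rac2 \
  -characterSet AL32UTF8 \
  -sysPassword welcome1 -systemPassword welcome1
```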


After creating the database I shut it down using srvctl stop database -d rac, then shut down all the servers and took a snapshot of each of them, labeling it rac.  At this point I deleted some of the earlier snapshots to reduce disk usage and potentially improve virtual machine performance a little.

Next Steps

With a RAC database available I am now ready to install and configure a SOA cluster which I will cover in the next few postings.


