Off the RAC

Configuring a RAC Cluster for SOA

To get the highest availability for a SOA cluster, the backend database needs to be highly available.  So in this post I will go through the minimum requirements to get a RAC cluster up and running, ready for use by SOA.  Note that this configuration is not suitable for production, but it is useful for developing and testing in an environment that is similar to production.

Target

I decided to go for an 11gR2 RAC cluster running on Oracle Enterprise Linux 5.5.  I used two Linux servers as the database machines and OpenFiler as the NFS server to provide shared storage.  I created all of these as images under VirtualBox.


NFS Preparation

I brought up OpenFiler and, after initial configuration to use the internal RAC LAN, I created a single volume group (rac) and then created the following volumes with associated shares.

Volume    Size    Share Location           Description
db        10GB    /mnt/rac/db/share        RAC Database Software
grid      10GB    /mnt/rac/grid/share      RAC Grid Software
cluster   1.5GB   /mnt/rac/cluster/share   RAC Cluster Files
data      10GB    /mnt/rac/data/share      RAC Data Files

The shares were configured with public guest access and RW access permissions.  UID/GID Mapping was set to no_root_squash, I/O Mode to sync, Write delay to no_wdelay and Request Origin Port to insecure (>1024).
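For reference, the export that OpenFiler generates for one of these shares should look roughly like the line below.  This is only a sketch; OpenFiler maintains /etc/exports itself, and the 10.0.4.0/24 internal RAC subnet is an assumption based on the addresses used later in this post.

/mnt/rac/db/share  10.0.4.0/24(rw,sync,no_wdelay,insecure,no_root_squash)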

OS Preparation

The first step was to install the OS and configure it to use yum.  After updating the packages to the latest revisions I could then apply the packages needed by RAC.  The easiest way to apply the required packages was to install the oracle-validated package (yum install oracle-validated), as this automatically installs all the packages required for RAC and sets the necessary system parameters.
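In other words, something along these lines, run as root:

yum update                    # bring the installed packages up to the latest revisions
yum install oracle-validated  # pulls in the RAC prerequisite packages and sets kernel parameters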

I also modified the /etc/sysconfig/ntpd file to add a -x flag at the start of the options so that the clock is slewed rather than stepped.
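The resulting line looks something like the one below; the remaining options are the Enterprise Linux 5 defaults and only the -x is added.  Restart ntpd afterwards so the change takes effect.

# /etc/sysconfig/ntpd
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"

service ntpd restart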

Users

I then created the following user and the appropriate groups.

User      Default Group   Groups
oracle    oinstall        oinstall, oracle, dba
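A minimal sketch of the commands to create them, run as root, with the group and user IDs left to the system defaults:

groupadd oinstall
groupadd oracle
groupadd dba
useradd -g oinstall -G oracle,dba oracle   # primary group oinstall, supplementary oracle and dba
passwd oracle                              # set a password for the new user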

I also added the following to the .bash_profile, changing the ORACLE_HOSTNAME and ORACLE_SID as appropriate.

# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR

ORACLE_HOSTNAME=rac1.soa.oracle.com; export ORACLE_HOSTNAME
ORACLE_UNQNAME=rac; export ORACLE_UNQNAME
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE
ORACLE_HOME=$ORACLE_BASE/product/11gR2/db; export ORACLE_HOME
ORACLE_SID=rac1; export ORACLE_SID
ORACLE_TERM=xterm; export ORACLE_TERM
PATH=/usr/sbin:$PATH; export PATH
PATH=$ORACLE_HOME/bin:$PATH; export PATH

LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib; export LD_LIBRARY_PATH
CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib; export CLASSPATH

# Raise process and file descriptor limits for the oracle user
if [ $USER = "oracle" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
fi


Network

I actually set up three network cards on my Linux servers.


  • eth0 – DHCP configured, to allow access to the outside world
  • eth1 – dedicated to the external RAC LAN; this is only reachable by the SOA servers, has fixed IP addresses, and also carries the floating IP addresses required by the RAC listeners
  • eth2 – dedicated to the internal RAC LAN; this is only reachable by the RAC servers and has fixed IP addresses.  It also provides access to the storage device

So each RAC server had a DHCP address, a fixed IP address on the external RAC LAN and a fixed IP address on the internal RAC LAN.
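As an illustration, the internal RAC interface on rac1 ends up with a static configuration along the lines of the file below.  This is only a sketch: the address matches the hosts file further down, but the /24 netmask is an assumption.

# /etc/sysconfig/network-scripts/ifcfg-eth2 on rac1 (sketch)
DEVICE=eth2
BOOTPROTO=static
IPADDR=10.0.4.210
NETMASK=255.255.255.0
ONBOOT=yes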

I provided the following hosts file.


# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       localhost.localdomain   localhost
# ::1           localhost6.localdomain6 localhost6

#######
# RAC #
#######
10.0.4.200      nas1.soa.oracle.com     nas1
10.0.4.210      rac1.soa.oracle.com     rac1
10.0.4.220      rac2.soa.oracle.com     rac2
10.0.3.200      rac-scan.soa.oracle.com rac-scan
10.0.3.201      rac1-vip.soa.oracle.com rac1-vip    # rac1 is on this network at 10.0.3.210
10.0.3.202      rac2-vip.soa.oracle.com rac2-vip    # rac2 is on this network at 10.0.3.220
The last three addresses are dynamically registered by the grid services layer and are used by the network listeners.  These are the addresses that RAC will expose to the SOA Suite.

File Structure

I created the following file structure on the Linux servers.  The folders marked below as mount points are where the shared NFS volumes are attached.

/u01
  app
    11gR2
      grid              <- mount point: RAC Grid Software
    oracle
      product
        11gR2
          db            <- mount point: RAC Database Software
  cluster               <- mount point: RAC Cluster Files
  oradata               <- mount point: RAC Data Files

Ownership of the entire /u01 sub-tree was given to oracle in group oinstall (chown -R oracle:oinstall /u01).  Permissions were set to 775 (chmod -R 775 /u01).

NFS Client

I added the following entries to the /etc/fstab file to enable the RAC servers to mount the shared NFS file system.


nas1:/mnt/rac/grid/share     /u01/app/11gR2/grid               nfs   rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0   0 0
nas1:/mnt/rac/db/share       /u01/app/oracle/product/11gR2/db  nfs   rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0   0 0
nas1:/mnt/rac/cluster/share  /u01/cluster                      nfs   rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0   0 0
nas1:/mnt/rac/data/share     /u01/oradata                      nfs   rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0   0 0


After mounting the NFS directories it was necessary to rerun the chown and chmod commands executed earlier to set permissions correctly on the NFS folders.
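Something like the following, run as root on each RAC server:

mount -a                        # mount the NFS shares defined in /etc/fstab
chown -R oracle:oinstall /u01   # re-apply ownership now that the shares are mounted
chmod -R 775 /u01               # re-apply permissions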

Background

My final OS preparation step was to set the desktop background differently on each machine so that I knew which machine I was on just by seeing the background.  This helps to avoid unfortunate incidents of doing the wrong thing on the wrong machine.

Snapshot

Having prepared everything, I shut down the three virtual machines (nas1, rac1 and rac2) and took a snapshot of the virtual images, labeling them pre-grid.  Then if there were problems later I could revert to the configuration just before installing any software.  When starting the virtual machines I always started the OpenFiler first so that the RAC servers would be able to find it.

Grid Install

With the OS prepared I logged in as the oracle user and kicked off the grid install, choosing the advanced install option.  I identified my nodes as rac1 and rac2, with the internal RAC network as the private interface and the external RAC network as the public interface.  I used the shared file system storage option and claimed external redundancy, setting the OCR file location to /u01/cluster/storage/ocr and the voting disk location to /u01/cluster/storage/vdsk.  I installed the software onto the shared disk at /u01/app/11gR2/grid.  The installer automatically installs the software on both RAC nodes.
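If you want to check the nodes by hand before relying on the installer's own checks, the cluster verification utility shipped with the grid media can be run as the oracle user.  A sketch, where the staging directory is just an example path for the unzipped media:

cd /stage/grid                                            # example location of the unzipped grid media
./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose   # pre-install checks against both nodes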

During the verification you may find that you are still missing a couple of packages and that some settings are not correct.  The packages can be added using yum without aborting the install, and the installer generates root scripts to adjust any parameters that need modifying.
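For example (the package names below are illustrations only; the installer lists exactly what it is missing and reports where it has written the fixup script):

yum install sysstat unixODBC unixODBC-devel   # example package names only
# then, as root on each node, run the fixup script the installer generates,
# from the location it reports (typically under /tmp/CVU_<version>_oracle/)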

Snapshot

After installing the grid software I again shut down all the servers and took a snapshot of each of them, labeling it grid.

DB Install

With the cluster services installed and running I logged in as the oracle user and kicked off the database install, choosing the database software only option and selecting a RAC install on the rac1 and rac2 nodes.  I identified the software location as /u01/app/oracle/product/11gR2/db.
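For reference, whether the cluster services really are up on both nodes can be confirmed from the grid home before kicking off the database install, for example:

/u01/app/11gR2/grid/bin/crsctl check cluster -all   # CSS/CRS/EVM status on every node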

Snapshot

After installing the database software I again shut down all the servers and took a snapshot of each of them, labeling it db.

Database Creation

With the database software installed I ran the $ORACLE_HOME/bin/dbca utility to create a RAC database.  I chose the advanced install option, selected the rac1 and rac2 nodes and chose the AL32UTF8 character set.  On my machine the database configuration wizard took about 10 hours to complete, but it did finish successfully.
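Once dbca finishes, the state of the new RAC database can be checked with srvctl, for example:

srvctl status database -d rac    # should show an instance running on each node
srvctl config database -d rac    # shows the registered database configuration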

Snapshot

After creating the database I shut it down using srvctl stop database -d rac, then shut down all the servers and took a snapshot of each of them, labeling it rac.  At this point I deleted some of the earlier snapshots to reduce disk usage and potentially improve performance a little in the virtual machines.

Next Steps

With a RAC database available I am now ready to install and configure a SOA cluster which I will cover in the next few postings.

