Antony Reynolds' Blog

  • September 30, 2010

Installing an 11g SOA Cluster – Part I Preparation

Antony Reynolds
Senior Director Integration Strategy


In this post I will go through the initial steps required to create a SOA cluster.  I will use the RAC database created in the previous posting ‘Off the RAC’.  We will follow the Enterprise Deployment Guide and along the way give some explanation as to why things are being done the way they are.


The target configuration we are aiming at is shown below, with the SOA servers running on Oracle Enterprise Linux 5.5.  We will use the same OpenFiler NAS device as we used for the RAC database.


We will create two SOA servers, SOA-Cluster1 and SOA-Cluster2.  The two SOA servers use the internal LAN for access to shared file storage; the external LAN is used for access to the RAC cluster, for inter-cluster communication and for access to the SOA servers by the load balancer (LB).  The public WAN is used by all clients of the SOA cluster.

I am using two physical machines to host six virtual machines running under Oracle VirtualBox.  The two physical machines have 8GB of memory each.  The RAC cluster and NAS device run on one physical machine; the load balancer and SOA cluster run on the other.

NFS Preparation

I wanted to keep the software on a shared disk.  In addition, the EDG requires that SOA cluster files such as transaction logs and JMS queue files are kept on shared storage, as are the Admin Server domain files.  To support this I created three shares on the OpenFiler NAS device, each in its own logical volume.

Volume      Size   Share Location          Description
fmw         10GB   /mnt/soa/fmw/share      Middleware Software
aserver     2GB    /mnt/soa/aserver/share  Admin Server Domain Config
soacluster  2GB    /mnt/soa/cluster/share  SOA Shared Cluster Files

The shares were configured with public guest access and RW access permissions.  UID/GID Mapping was set to no_root_squash, I/O Mode set to sync, Write delay set to no_wdelay and Request Origin Port set to insecure(>1024).
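For reference, those OpenFiler settings correspond to /etc/exports entries along the following lines.  This is a sketch of what the filer generates, not copied from my environment, and the client subnet shown is an assumption – substitute your internal LAN:

```
# Approximate /etc/exports entries behind the OpenFiler settings above.
# 192.168.2.0/24 is an assumed internal LAN subnet.
/mnt/soa/fmw/share      192.168.2.0/24(rw,sync,no_wdelay,no_root_squash,insecure)
/mnt/soa/aserver/share  192.168.2.0/24(rw,sync,no_wdelay,no_root_squash,insecure)
/mnt/soa/cluster/share  192.168.2.0/24(rw,sync,no_wdelay,no_root_squash,insecure)
```

The no_root_squash option matters later: without it the chown/chmod commands run as root against the mounted shares will fail.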

OS Preparation

The first step was to install the OS and configure it to use yum.  After updating packages to the latest revisions I can then apply the packages needed by SOA.

yum install binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel gcc gcc-c++ glibc \
    glibc-common glibc-devel libaio libaio-devel libgcc libstdc++ libstdc++-devel make sysstat

I also modified the /etc/sysconfig/ntpd file to add a -x flag at the start of the options to allow clock slewing.
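On Oracle Enterprise Linux the resulting line in /etc/sysconfig/ntpd looks like this (the -u and -p options are the distribution defaults, shown for context):

```
# The -x flag makes ntpd slew the clock instead of stepping it, so time
# never jumps backwards under running WebLogic or RAC processes.
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
```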


I then created the following user and the appropriate groups:

User    Default Group  Groups
oracle  oinstall       oinstall, oracle

I also added the following to the .bash_profile.

# Oracle Settings
TMP=/tmp; export TMP
TMPDIR=$TMP; export TMPDIR

if [ $USER = "oracle" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
  else
    ulimit -u 16384 -n 65536
  fi
fi


I actually set up three network cards on my Linux servers.

eth0 – DHCP configured to allow access to the outside world.

eth1 – dedicated to the external LAN.  This has fixed IP addresses, is used to reach the SOA servers and the RAC cluster, and also carries the floating IP addresses required by the SOA servers.

eth2 – dedicated to the internal LAN.  This has fixed IP addresses and is only used to access the NAS filer.

So each SOA server had a DHCP address, a fixed IP address on the external LAN and a fixed IP address on the internal LAN.
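For illustration, the two fixed-address cards use standard RHEL/OEL ifcfg files along these lines.  The address shown is a placeholder, not one from my environment:

```
# /etc/sysconfig/network-scripts/ifcfg-eth1 - external LAN, static address
DEVICE=eth1
BOOTPROTO=static
IPADDR=10.0.0.11          # placeholder - use your external LAN address
NETMASK=255.255.255.0
ONBOOT=yes
```

eth2 is configured the same way with an address on the internal LAN, while eth0 keeps BOOTPROTO=dhcp.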

I provided the following hosts file.

# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1   localhost.localdomain localhost

soa-cluster1.soa.oracle.com soa-cluster1
soa-cluster2.soa.oracle.com soa-cluster2


# RAC #
#######
nas1.soa.oracle.com     nas1
rac1.soa.oracle.com     rac1
rac2.soa.oracle.com     rac2
rac-scan.soa.oracle.com rac-scan
rac1-vip.soa.oracle.com rac1-vip
rac2-vip.soa.oracle.com rac2-vip


# SOA #
#######

# FIXED
lb.soa.oracle.com   lb
wsm1.soa.oracle.com wsm1
wsm2.soa.oracle.com wsm2
bam2.soa.oracle.com bam2
osb1.soa.oracle.com osb1
osb2.soa.oracle.com osb2
web1.soa.oracle.com web1
web2.soa.oracle.com web2

# FLOATING
admin.soa.oracle.com admin
bam1.soa.oracle.com  bam1
soa1.soa.oracle.com  soa1
soa2.soa.oracle.com  soa2

# VIRTUAL
soa-cluster.soa.oracle.com          soa-cluster
soa-cluster-admin.soa.oracle.com    soa-cluster-admin
soa-cluster-internal.soa.oracle.com soa-cluster-internal

The RAC section provides the names of the RAC servers; only the rac1-vip and rac2-vip addresses are actually needed.  The SOA section provides the addresses needed by the clustered SOA environment.  The fixed addresses provide hostnames for the web servers, web services policy managers, BAM report servers and OSB servers.  The floating IP address for the admin server must be manually managed and allows the admin server to be moved between machines.  The other floating IP addresses are managed by the node manager and are used to support whole server migration of the SOA servers and the BAM active data cache.  The virtual IP addresses are those used by the load balancer to provide access to the SOA cluster for admin users, internal users and external users.  Using different virtual addresses allows the load balancer to restrict access to certain services based on the source of the user.

I use separate hostnames for all the different managed servers to make it easier to move them between machines.  To run a managed server on a different machine I just need to change the target machine in WebLogic and change the IP address in the /etc/hosts file.

Note that the IP addresses used by the RAC and SOA Suite components are only accessible on the internal and external LANs; they are not routable outside of that environment.  This is good security practice.  The only way to reach the SOA and RAC servers is to be on the external LAN or to go through the load balancer.  The virtual addresses used by the load balancer should be routable (I have replaced the real IP addresses used by my load balancer with different addresses to avoid exposing Oracle internal addresses).

File Structure

I created the following file structure on the Linux servers.  The folders marked as mount points hold the shared files.

/u01
└── app
    └── oracle
        ├── product
        │   └── fmw               (mount point - fmw share)
        └── admin
            └── soa-domain
                ├── aserver       (mount point - aserver share)
                └── soa-cluster   (mount point - soacluster share)

Ownership of the entire /u01 sub-tree was given to oracle in group oinstall (chown -R oracle:oinstall /u01).  Permissions were set to 775 (chmod -R 775 /u01).

The aserver folder is used to hold master cluster config and is used by the admin server, putting it on shared storage allows the admin server to be run on any host in the cluster.

The fmw folder is used to hold all software.  Putting it into shared storage means that the software only needs to be installed once for the cluster (or twice if you follow the recommendation to have two shared volumes for software to allow for shared storage failure).

The soa-cluster folder is used to hold transaction logs, JMS queues, deployment descriptors and file adapter control files.   Putting these items onto shared storage allows for whole server migration (JMS and transaction logs), allows for co-ordination of adapters accessing shared resources (file adapter) and simplifies adapter configuration (deployment plans are accessible to all nodes).

NFS Client

I added the following entries to the /etc/fstab file to enable the SOA servers to mount the shared NFS file systems.

nas1:/mnt/soa/fmw/share      /u01/app/oracle/product/fmw                   nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0
nas1:/mnt/soa/aserver/share  /u01/app/oracle/admin/soa-domain/aserver      nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0
nas1:/mnt/soa/cluster/share  /u01/app/oracle/admin/soa-domain/soa-cluster  nfs  rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0  0 0

After mounting the NFS directories it was necessary to rerun the chown and chmod commands executed earlier to set permissions correctly on the NFS folders.  If you get a permission denied error make sure that you set no_root_squash on all the shares.
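Before starting any servers it is worth sanity-checking that each mount carries the options SOA Suite depends on.  The following is my own sketch (not from the EDG): it pulls the options field out of an fstab-style entry and checks for the settings that matter most:

```shell
#!/bin/sh
# Verify that an fstab-style NFS entry carries the critical options:
#   hard      - never silently fail I/O if the filer goes away
#   actimeo=0 - disable attribute caching so all nodes see consistent state
#   vers=3    - force NFSv3, as used throughout this setup
fstab_line='nas1:/mnt/soa/fmw/share /u01/app/oracle/product/fmw nfs rw,bg,hard,nointr,tcp,vers=3,timeo=600,rsize=32768,wsize=32768,actimeo=0 0 0'
opts=$(echo "$fstab_line" | awk '{print $4}')
for required in hard actimeo=0 vers=3; do
    case ",$opts," in
        *",$required,"*) echo "OK: $required" ;;
        *)               echo "MISSING: $required" ;;
    esac
done
```

Run against the entries above this prints three "OK" lines; in real use you would read the lines from /etc/fstab (or the output of mount) rather than a literal.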


I am using 64-bit Linux and so I want to use a 64-bit JDK.  Oracle Fusion Middleware only ships with 32-bit JDKs, so it is necessary to install the 64-bit JDK separately and use the generic WebLogic installer.  I downloaded JRockit and installed it as the oracle user in /u01/app/oracle/product/fmw/jrrt-4.0.1-1.6.0.  It only needs to be installed on one node as we are installing it to a shared location.  Instead of JRockit I could have installed the Oracle HotSpot JVM.

In addition to installing JRockit I also installed JRockit Mission Control as the oracle user in /u01/app/oracle/product/fmw/jrmc-4.0.1-1.6.0 to assist in diagnosing any Java related problems.  Again it only needs to be installed once to be available to all nodes in the cluster.

After installing the JDK I edited the jre/lib/security/java.security file and changed the reference to /dev/urandom to be /dev/./urandom.  This can significantly improve SOA startup times: without the change the JVM special-cases /dev/urandom and silently falls back to the blocking /dev/random generator.
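The edit can be scripted with sed.  This sketch demonstrates the change on a scratch copy of the file (java.security.sample is an illustrative name, not a real JDK path); in a real install the target is jre/lib/security/java.security under the JDK home:

```shell
#!/bin/sh
# Demonstrate the java.security change on a scratch file.
SCRATCH=java.security.sample
printf 'securerandom.source=file:/dev/urandom\n' > "$SCRATCH"

# Inserting "/./" stops the JVM from special-casing /dev/urandom and
# quietly substituting the blocking /dev/random.
sed -i 's|file:/dev/urandom|file:/dev/./urandom|' "$SCRATCH"

cat "$SCRATCH"   # prints securerandom.source=file:/dev/./urandom
```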


My final OS preparation step was to set the desktop background differently on each machine so that I would know which machine I was on just by seeing the background.  This helps avoid unfortunate incidents of doing the wrong thing on the wrong machine.

Load Balancer

I used the Zeus Traffic Manager as a load balancer.  I added the following to the /etc/hosts file on the load balancer:

lb.soa.oracle.com lb
web1.soa.oracle.com web1
web2.soa.oracle.com web2
soa-cluster.soa.oracle.com soa-cluster
soa-cluster-admin.soa.oracle.com soa-cluster-admin
soa-cluster-internal.soa.oracle.com soa-cluster-internal

Normally the soa-cluster internal, external and admin virtual hosts would have separate virtual IP addresses; however, I was limited in the number of fixed IP addresses I had, so I used the traffic manager to treat the three hostnames as three different sites.

Server Pools

I configured a pool called “SOA Cluster Pool” with two servers, web1:7777 and web2:7777.  This server pool consists of the web servers configured to front end the SOA Suite.


I then created some rules to control access to back end URLs.

Restrict Host Names

This rule only allows requests that have the correct hostname to be forwarded to the SOA cluster.  If the wrong hostname is used it will reply with a message indicating which host names can be used and how to add them to a Windows machine.  This rule enforces the internal/external/admin separation by making sure that the request is targeted only at one of these three hostnames.

$headerHost = http.getHostHeader();
if( $headerHost != "soa-cluster.soa.oracle.com"
    && $headerHost != "soa-cluster"
    && $headerHost != "soa-cluster-internal.soa.oracle.com"
    && $headerHost != "soa-cluster-internal"
    && $headerHost != "soa-cluster-admin.soa.oracle.com"
    && $headerHost != "soa-cluster-admin" ) {
    http.sendResponse( "403 Permission Denied", "text/html",
        "Access not allowed using hostname ".$headerHost.".<BR>\n".
        "Please use <A href=\"https://soa-cluster.soa.oracle.com".http.getPath()."\">soa-cluster.soa.oracle.com</A>, ".
        "<A href=\"http://soa-cluster-internal.soa.oracle.com".http.getPath()."\">soa-cluster-internal.soa.oracle.com</A> or ".
        "<A href=\"http://soa-cluster-admin.soa.oracle.com".http.getPath()."\">soa-cluster-admin.soa.oracle.com</A> as appropriate.<BR>\n".
        "To access these host names add the following to your hosts file (Linux /etc/hosts or Windows C:\\windows\\system32\\drivers\\etc\\hosts).<BR>\n<HR>\n".
        "\tsoa-cluster.soa.oracle.com soa-cluster<BR>\n".
        "\tsoa-cluster-admin.soa.oracle.com soa-cluster-admin<BR>\n".
        "\tsoa-cluster-internal.soa.oracle.com soa-cluster-internal<BR>\n",
        "" );
}


Deny console and em

This rule allows access to the /em and /console paths only if the target host is soa-cluster-admin.  In a real deployment the soa-cluster-admin address would only be available internally, and even then it might be restricted to an admin LAN.

$hostname = http.getHostHeader();
if( !string.startswith( $hostname, "soa-cluster-admin" ) ) {
    $path = http.getPath();
    if( string.startswith( $path, "/em" )
        || string.startswith( $path, "/console" ) ) {
        http.sendResponse( "403 Permission Denied", "text/html",
            "No access to admin functions on this host.", "" );
    }
}


Redirect External to SSL

This rule forces all access to the external hostname to be SSL by redirecting all non-SSL traffic sent to the external hostname to the SSL port.  In a real deployment the firewall would only allow SSL traffic through from external clients.

$headerHost = http.getHostHeader();
if( $headerHost == "soa-cluster.soa.oracle.com"
    || $headerHost == "soa-cluster" ) {
    http.changeSite( "https://soa-cluster.soa.oracle.com:443" );
}


Virtual Servers

I then created the following virtual servers:

Virtual Server Name   Listen Address                          Rules                     Pool
External SOA Cluster  soa-cluster.soa.oracle.com:443          Restrict Host Names       SOA Cluster Pool
                                                              Deny console and em
Internal SOA Cluster  soa-cluster-internal.soa.oracle.com:80  Restrict Host Names       SOA Cluster Pool
                                                              Redirect External to SSL
                                                              Deny console and em

Note there is no virtual server for Admin SOA Cluster.  This is because virtual servers in Zeus are IP address/port number based and the Admin and Internal SOA cluster use the same IP and port number.  The Deny console and em rule prevents requests to the internal or external SOA clusters from accessing the em and console paths, and hence denies them access to admin functions.

DB Preparation

The RAC database must be configured for use by the SOA cluster as outlined in the EDG.

Service Creation

To do this we first create two database services: soaedg.soa.oracle.com and bamedg.soa.oracle.com.  This allows us to control the database resources allocated to the SOA Suite.  We create the services with the following SQL commands:

EXECUTE DBMS_SERVICE.CREATE_SERVICE(SERVICE_NAME => 'soaedg.soa.oracle.com', NETWORK_NAME => 'soaedg.soa.oracle.com');
EXECUTE DBMS_SERVICE.CREATE_SERVICE(SERVICE_NAME => 'bamedg.soa.oracle.com', NETWORK_NAME => 'bamedg.soa.oracle.com');

After adding the service to the database we then assign it to the instances and start it using srvctl:

srvctl add service -d rac -s soaedg -r rac1,rac2

srvctl add service -d rac -s bamedg -r rac1,rac2

srvctl start service -d rac -s soaedg
srvctl start service -d rac -s bamedg

Once added the services will automatically start with the database.
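These service names, rather than the database SID, are what the SOA data sources will reference later.  As a sketch, a data source for the soaedg service would use a JDBC URL along these lines (port 1521 is an assumption – use your listener port):

```
jdbc:oracle:thin:@(DESCRIPTION=
  (ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=rac1-vip.soa.oracle.com)(PORT=1521)))
  (CONNECT_DATA=(SERVICE_NAME=soaedg.soa.oracle.com)))
```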

Process & Session Limits

The SOA Suite is a database session hog; to run efficiently it needs a large number of sessions.  This is configured by setting the processes parameter (assuming you are not using MTS; if you are using MTS then set the sessions parameter rather than the processes parameter).  Alter the number of processes using the following SQL command, then restart the database for the change to take effect:

ALTER SYSTEM SET PROCESSES=400 SCOPE=SPFILE;

The Enterprise Deployment Guide recommends 300 processes for SOA and another 100 for BAM, hence 400 for SOA & BAM together.  Note that this is in addition to any other process requirements.

Repository Creation

With the database set up we can run the Repository Creation Utility (RCU) (rcuHome/bin/rcu) to create the schemas required by the SOA Suite.  Select the SOA Infrastructure component and it should also select the AS Common Schemas by default.  If you don’t use BAM you can deselect that option, but it doesn’t hurt to install it just in case you change your mind.

When asked for the database details, provide either the rac-scan address of the RAC cluster or one of the RAC VIPs, rac1-vip or rac2-vip.  The service name should be the newly created service name (soaedg.soa.oracle.com).  You will need an account with SYSDBA privileges to run this.

When asked to provide a prefix it is easiest to use the DEV prefix as this is what is assumed in the SOA domain creation wizard.  The prefix is provided to allow you to have multiple SOA installations in the same database.  If you don’t need to do this then stick with DEV as your prefix.

I found that it took 2 minutes to create the tablespaces on my cluster and 13 minutes to create the schemas.

XA Support

It is necessary to grant transactional management privileges to the soainfra user with the following SQL commands which must be run with sysdba privileges:

GRANT SELECT ON sys.dba_pending_transactions TO dev_soainfra;

GRANT FORCE ANY TRANSACTION TO dev_soainfra;

XA is heavily used in the SOA Suite and failing to set this will cause problems recovering transactions after a crash.


After configuring our NAS device, our load balancer, the host OS and the database we are now ready to install and configure our SOA cluster.  I will look at that in my next post.


Join the discussion

Comments ( 4 )
  • guest Sunday, August 21, 2011

    hi Antony,

    I am trying to set up a cluster on Linux (Linux is running on VirtualBox and the host is Windows 7), so I am just wondering how you set up three network cards? Did you set up physical network cards? I wanted to set up two network cards, one for DHCP and another for the floating IP. Would appreciate your input.


  • David Tildesley Tuesday, September 6, 2011

    Is NFS performant enough? I would have thought shared file cluster via shared scsi with fibre attached storage would be the way to go?

    What is the recommendation for active-active across two or more data centres that don't have specific low-latency backlinks for cluster traffic?

    If the recommendation is for a active-passive replicated cold cluster, how do we sort out the fact that (v)ip addresses get replicated as well and this would be a problem in a layer 3 connected data centre topology? Is it possible to script a "search and replace" for all the ip address and host name references in the wls config?

  • Wiz Tuesday, September 13, 2011


    Thanks for the detailed posts. I have a few questions about NAS:

    Have you had any slowness issues while starting the AdminServer or Managed Server after configuring the domain using NAS shares?

    For us, the AdminServer takes 15 mins to start and SOA server takes close to an hour to start.

    I see that you mentioned some specific remarks around NAS:

    "UID/GID Mapping was set to no_root_squash, I/O Mode set to sync, Write delay set to no_wdelay and Request Origin Port set to insecure(>1024)."

    Are these out of the box NAS filer/NFS mount settings? or did you come up with these after some discoveries.

    Thanks for the response


  • guest Wednesday, May 16, 2012

    This is excellent
