In this post we will continue setting up our SOA cluster. Previously I covered setting up the environment with a Web Services Manager Policy Manager cluster. We will now extend the domain created there to include the SOA and BPM components, as shown in the diagram above.
We will extend the domain to support SOA/BPM managed servers in a cluster. (EDG)
Because the SOA managed servers will later be configured to run on any machine through whole server migration, and we have not set up automatic migration of IP addresses, we need to bind their IP addresses to physical machines ourselves. We do this by running the ifconfig and arping commands shown below, once for each SOA server, on the physical machine we first want that server to run on. In the example below I use sub-adapter 8 on both machines for consistency, avoiding sub-adapter 9 which we used for the admin server. 10.0.3.111 and 10.0.3.112 are the IP addresses for soa1 and soa2 respectively.
sudo /sbin/ifconfig eth1:8 10.0.3.111 netmask 255.255.255.0
sudo /sbin/arping -q -U -c 3 -I eth1 10.0.3.111
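On the second machine we run the equivalent commands to bind soa2's address (I am reusing sub-adapter 8 as above; adjust the interface name if yours differs):

```shell
# On the machine that will initially host soa2 (10.0.3.112).
# eth1:8 matches the sub-adapter used for soa1 above; change if needed.
sudo /sbin/ifconfig eth1:8 10.0.3.112 netmask 255.255.255.0
sudo /sbin/arping -q -U -c 3 -I eth1 10.0.3.112
```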
Check that you can ping SOA server soa1 from server 2 and SOA server soa2 from server 1. (EDG)
Because we have an existing domain we will extend it to support the SOA and BPM components.
From the Oracle common home $MW_HOME/oracle_common, run the configuration wizard at common/bin/config.sh and select “Extend an existing WebLogic domain”. Select the WebLogic domain in the aserver directory (/u01/app/oracle/admin/soa-domain/aserver/soa_domain). When requested, make sure you have the following checked, then proceed:
If you do not need or are not licensed for BPM Suite then just select SOA Suite.
The following should be selected but grayed out:
We need to configure all the JDBC component schemas as multi data source schemas. Select the SOA Infrastructure, User Messaging Service and SOA MDS schemas and set the following:
If you selected a schema prefix of “DEV” then there is no need to set the individual schema names. If you chose a different prefix then you need to select each schema individually and change the username.
This will set the schemas to use the RAC database and fail over between nodes if necessary.
Because we are installing a cluster we need to customize the following sections:
In the Select JMS Distributed Destination Type screen we make sure that we are not using weighted distributed destinations but instead are using uniform distributed destinations (UDD) for all the JMS resources.
SOA and BPM run in the same managed servers, so in the Configure Managed Servers screen we change the name of the new server from soa_server1 to SOA1 and add a second SOA server, SOA2. The SOA servers need to listen on addresses soa1.soa.oracle.com and soa2.soa.oracle.com as appropriate. This restricts them to listening on the IP addresses that are configured to float between machines. Both servers can be set to listen on port 8001; having all servers of the same kind listening on the same port makes it easier to see what is happening.
| Server Name | Listen Address | Listen Port |
|---|---|---|
| SOA1 | soa1.soa.oracle.com | 8001 |
| SOA2 | soa2.soa.oracle.com | 8001 |
On the Configure Clusters screen we can add a new cluster called SOA_Cluster and assign the two SOA servers to that cluster on the Assign Servers to Clusters screen.
On the Configure Machines screen, delete the LocalMachine that has been created; we previously created the machines we need on the Unix tab when originally creating the domain.
We then assign the new servers to physical machines SOAHost1 and SOAHost2 on the Assign Servers to Machines screen.
Finally we need to make the following changes to the deployments and services to make sure that they are correctly targeted.
| Deployment/Service | Target |
|---|---|
| OWSM Startup class | WSM_Cluster |
All other items are left as they are set up by the wizard.
The domain has now been extended to support SOA and BPM. (EDG)
To get our changes to the domain to take effect we need to restart the admin server. We are then ready to make some further config changes.
Coherence is Oracle's data grid software, and the higher levels of the Oracle stack are moving away from JGroups and the Java Object Cache to use Coherence for both cluster membership decisions and distributed object caching. A Coherence cluster has no fixed master node; any machine can be the master. There are two ways to locate a Coherence cluster: through a broadcast request (the default), or through well known addresses (WKA). The table below compares the two approaches:
| | Broadcast (multicast) | Well Known Addresses (WKA) |
|---|---|---|
| Configuration | Same for all machines. | Different setting for the Coherence localhost property on each machine. |
| Discovery Mechanism | Multicast IP packets. | Unicast IP packets to a list of servers specified in the config (the well known addresses, or WKAs). |
| Startup Order | No order required. | At least one of the servers in the WKA list must be the first to start. |
| Impact of Adding Servers | No impact. | Ideally update the config of all servers to add the new WKA to their lists. |
| Impact of Routers Between Servers | Unless configured, routers will drop multicast packets. | No impact. |
| Impact of Multiple Coherence Clusters on the Network | Each cluster must be configured with a unique multicast address. | No impact as long as the WKA lists are separate for each cluster. |
When using well known addresses, Coherence goes through the list of WKAs in an attempt to find an existing cluster. If it does not find a server, it will start a cluster only if its own localhost setting matches one of the WKAs; otherwise it gives up on the cluster. When using multicast, Coherence broadcasts a message asking, in effect, “is there a cluster out there?”; if it gets a response it joins that cluster, otherwise it creates one. Once a cluster has been joined, the two models behave in the same way.
Because of the problems with putting multicast messages through routers, the EDG recommends using well known addresses. The impact of adding additional servers can be mitigated by creating hostnames for future SOA servers ahead of time and adding them to the WKA list on each server. If a server is unable to find any of the well known addresses and is not itself configured to listen on one of them, then the server will not start. (Coherence Wiki)
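The WKA startup rule above can be sketched as a small check; this is illustrative only, and the hostnames soa1/soa2/soa3 are this post's examples:

```shell
# Sketch of the WKA startup rule: a node that finds no existing cluster
# may only form one if its own localhost setting is in the WKA list.
can_start_alone() {
  local localhost=$1; shift
  local wka
  for wka in "$@"; do
    if [ "$wka" = "$localhost" ]; then
      echo "forms cluster (is a WKA)"
      return 0
    fi
  done
  echo "gives up (not in WKA list)"
  return 1
}

can_start_alone soa1 soa1 soa2 soa3          # a WKA member may form the cluster
can_start_alone soa9 soa1 soa2 soa3 || true  # a non-WKA node gives up
```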
If you decide to use multicast because your servers are not separated by a router, then I recommend configuring a cluster address different from the default by changing the coherence.clusteraddress and coherence.clusterport settings. Valid values for the cluster address are 224.0.0.0 to 239.255.255.255 (the multicast range) and for the cluster port are 1 to 65535. (Coherence Wiki)
To change the multicast settings we change the EXTRA_JAVA_PROPERTIES in setDomainEnv.sh.
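For example, a fragment along these lines in setDomainEnv.sh; the address and port values here are illustrative only, so choose values unique to your cluster:

```shell
# Illustrative only: append the Coherence cluster address/port system
# properties to the JVM arguments. The tangosol.coherence.* prefix
# matches the property names used for the WKA settings in this post.
EXTRA_JAVA_PROPERTIES="${EXTRA_JAVA_PROPERTIES} -Dtangosol.coherence.clusteraddress=239.192.0.10 -Dtangosol.coherence.clusterport=9778"
export EXTRA_JAVA_PROPERTIES
```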
Note that if you change the multicast settings, this needs to be done for each set of domain files; see the note later on using pack and unpack.
To apply the Coherence WKA configuration we edit the startup command arguments for each SOA managed server in the WebLogic console (managed server -> Configuration -> Server Start tab, Arguments field), setting them as follows:
| Managed Server | WKA Config |
|---|---|
| SOA1 | -Dtangosol.coherence.wka1=soa1 -Dtangosol.coherence.wka2=soa2 -Dtangosol.coherence.wka3=soa3 -Dtangosol.coherence.localhost=soa1 |
| SOA2 | -Dtangosol.coherence.wka1=soa1 -Dtangosol.coherence.wka2=soa2 -Dtangosol.coherence.wka3=soa3 -Dtangosol.coherence.localhost=soa2 |
Note that the multicast config is simpler but will probably not work across routers. Also notice that I have added a dummy third server to the list of WKAs so that if I decide to add a third server to my cluster I don’t need to do anything to the existing two servers and can start the servers in any sequence. Finally make sure that there are no newlines in the Arguments box! (EDG)
Because of the way B2B uses queues and topics it is necessary to create specific destination identifiers as outlined below:
|Queue||Create Destination Identifier|
We need to disable hostname verification for the SOA1 and SOA2 servers as we previously did for the AdminServer and the WSM1 and WSM2 servers.
We need to make sure that the managed server domains have the latest changes, by using the pack command from the oracle_common/common/bin directory to bundle up a new domain template; as before, we use the aserver shared directory to move the domain template:
We then run the unpack command on each host to unpack the propagated template to the domain directory of the managed server:
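A sketch of the two commands; the template filename, template name, and the mserver domain directory below follow this post's directory conventions but are my assumptions, so adjust them to your layout:

```shell
# On the host holding the admin (aserver) domain: pack a managed-server
# template onto the shared aserver directory.
cd $MW_HOME/oracle_common/common/bin
./pack.sh -managed=true \
  -domain=/u01/app/oracle/admin/soa-domain/aserver/soa_domain \
  -template=/u01/app/oracle/admin/soa-domain/aserver/soa_domain_template.jar \
  -template_name=soa_domain_template

# On each managed-server host: unpack the template into the local
# (mserver) domain directory used by that host's managed servers.
./unpack.sh \
  -domain=/u01/app/oracle/admin/soa-domain/mserver/soa_domain \
  -template=/u01/app/oracle/admin/soa-domain/aserver/soa_domain_template.jar
```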
If you changed the Coherence clusteraddress and clusterport in setDomainEnv.sh then you need to make the same changes in every machine's managed domain where you ran unpack, as changes to setDomainEnv.sh are not propagated by the pack/unpack commands.
For each additional copy of the SOA binaries that you have (one on each machine if you are not using shared binaries) you will need to unpack the B2B XEngine files. These files are unpacked for you on the machine on which you ran the domain configuration wizard.
tar -xzvf XEngine.tar.gz
Note that if you have shared binaries then you only need to do this for the shared binaries that were not mounted on the machine that ran the domain configuration wizard. (EDG)
At this point I would recommend shutting down all servers and node managers and then bringing up the node managers, the Admin Server and then using the admin server to start the WSM_Cluster servers followed by the SOA_Cluster servers. If there are problems in starting then check that you have correctly targeted all the SOA components and that the Coherence cluster in SOA managed servers does not show any errors.
You can then validate that you can access the following URLs
Note that weblogic/<password> will let you log in to all these URLs. If you can reach all these components then you are well on your way to a working SOA cluster. Note that the EDG WSM URL is wrong. (EDG)
I amended my httpd.conf configuration as shown below to add routing to the SOA and BPM Suite. Obviously if you did not deploy BPM Suite you do not need the bpm mappings.
MatchExpression /wsm-pm WebLogicCluster=wsm1:7010,wsm2:7010
# SOA soa-infra app
MatchExpression /soa-infra WebLogicCluster=soa1:8001,soa2:8001
MatchExpression /integration WebLogicCluster=soa1:8001,soa2:8001
MatchExpression /b2bconsole WebLogicCluster=soa1:8001,soa2:8001
# UMS prefs
MatchExpression /sdpmessaging/userprefs-ui WebLogicCluster=soa1:8001,soa2:8001
# Default to-do taskflow
MatchExpression /DefaultToDoTaskFlow WebLogicCluster=soa1:8001,soa2:8001
MatchExpression /workflow WebLogicCluster=soa1:8001,soa2:8001
#Required if attachments are added for workflow tasks
MatchExpression /ADFAttachmentHelper WebLogicCluster=soa1:8001,soa2:8001
# SOA composer application
MatchExpression /soa/composer WebLogicCluster=soa1:8001,soa2:8001
MatchExpression /bpm/composer WebLogicCluster=soa1:8001,soa2:8001
MatchExpression /bpm/workspace WebLogicCluster=soa1:8001,soa2:8001
Note that I used MatchExpression rather than Location because I had problems with Location when using virtual hosts. In my configuration I use virtual hosts to restrict access to the WebLogic console and EM. These changes need to be made to all OHS instances in /u01/app/oracle/admin/OHSN/config/OHS/ohsN/httpd.conf. We then use opmnctl restartproc to restart the HTTP servers.
/u01/app/oracle/admin/OHS1/bin/opmnctl restartproc ias-component=ohs1
/u01/app/oracle/admin/OHS2/bin/opmnctl restartproc ias-component=ohs2
We should now have access to our SOA services through the HTTP server and load balancer. (EDG)
We can then check the following URLs to ensure that the web servers and load balancer are working
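One quick way to spot-check the front-end routing is to request a few of the paths configured above through the load balancer; the hostname is this post's example front end, and the paths are taken from the httpd.conf entries above:

```shell
# Spot-check a few routed paths through the front end. -k skips
# certificate verification, since the load balancer certificate may
# not be publicly certified.
for path in /soa-infra /integration /b2bconsole /soa/composer; do
  code=$(curl -k -s -o /dev/null -w '%{http_code}' "https://soa-cluster.soa.oracle.com${path}")
  echo "${path}: HTTP ${code}"
done
```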
If all the above URLs work then our web servers and load balancer are configured correctly. (EDG)
We need to make the SOA cluster aware of how it is being accessed. We do this by selecting the SOA cluster from the Clusters summary in the WebLogic console, selecting the HTTP tab, and setting the frontend host to soa-cluster.soa.oracle.com and the frontend HTTPS port to 443.
This also effectively sets the callback URL for SOA so that it will correctly generate callback addresses for the cluster rather than the individual node that is generating the reference. (EDG)
To take advantage of the cluster for direct binding (RMI rather than SOAP over HTTP), we need to set the cluster address to the list of machines in the cluster. This is set from the SOA cluster General tab in the WebLogic console. We need to set the Cluster Address field to a comma-separated list of the managed servers in our SOA/BPM cluster.
This will enable the SOA Suite to take advantage of the cluster when using optimized message calls. (EDG)
If you are using SSL for your front-end host (as recommended by the EDG) and are not using publicly certified certificates, then you need to import the certificate from your load balancer into your Java cacerts file. Otherwise you will not be able to test your processes, and some components that make loopback calls may fail because they cannot verify the host. To import the certificate, first export it from the load balancer using your browser, then add the saved file using keytool:
<JAVA_HOME>/bin/keytool -importcert -file DownLoadedCert -keystore <JAVA_HOME>/jre/lib/security/cacerts -alias soa-cluster
This adds the certificate as a trusted certificate to the trust store and identifies it with the name soa-cluster.
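You can verify the import with keytool's list option; the alias matches the one used above, and the cacerts store password defaults to “changeit” unless you have changed it:

```shell
# List the imported certificate to confirm it is in the trust store.
<JAVA_HOME>/bin/keytool -list \
  -keystore <JAVA_HOME>/jre/lib/security/cacerts \
  -alias soa-cluster -storepass changeit
```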
The SOA infrastructure uses JMS queues and because these are set up as uniform distributed queues they need to be made available on shared storage for server failover. Distributed queues allow part of each queue to be managed by individual managed servers. If a managed server fails then any messages in its portion of the queue will get stuck until the server is restarted. So that we can restart the server on a different node, to cope with hardware failure, we need to put the associated queue files onto shared storage.
We change all the stores from the WebLogic Console by going to the Services -> Persistent Stores page; for each store we go into the Configuration tab and change the Directory field to /u01/app/oracle/admin/soa-domain/soa-cluster/jms, which is the shared directory we set up. Each managed server will then create a unique file in that directory to store its messages. (EDG)
The SOA Infrastructure makes use of XA transactions and so uses the WebLogic transaction coordinator. In the event of a managed server failing, the in-flight transactions are persisted in a file. This file must be accessible from all nodes so that, in the event of machine failure, a managed server can be restarted and still find its in-flight transactions; it can then either roll them forward or back as appropriate. Leaving in-flight transactions active can cause problems for resource managers such as a database, which may maintain locks.
To place the transaction logs in a shared location, for each of the SOA managed servers go to the Configuration -> Services tab and set the Default Store to a shared location such as /u01/app/oracle/admin/soa-domain/soa-cluster/tlogs, which is in our shared cluster directory. Each managed server will then store its transaction logs here and will be able to find them in the event of being started on a different node. (EDG)
The file and FTP adapters do not deal with transactional resources, so to avoid race conditions the adapters can be configured to coordinate through locks in the database and through a shared file structure. The FTP configuration for HA is basically the same as the file adapter configuration, except that you will do additional configuration of the FTP adapter for each specific FTP server. To configure the adapters to use the database for HA coordination, go to the FileAdapter or FTPAdapter under Deployments and choose the Configuration -> Outbound Connection Pools tab. Expand the ConnectionFactory entry and, for the FileAdapter, select the eis/HAFileAdapter outbound connection properties. The data sources are already set up to point to the SOA repository. You just need to change the controlDir to point to a shared location available to all servers, such as /u01/app/oracle/admin/soa-domain/soa-cluster/fadapter.
After saving your changes you will be prompted for a plan location. The plan needs to be available to all managed servers, so set the plan location to /u01/app/oracle/admin/soa-domain/soa-cluster/dd/fadapter/Plan.xml. (EDG)
Anytime that you create a plan file you need to store it in shared storage that is accessible from all nodes because it does not get propagated with the rest of the WebLogic configuration.
We have now done basic configuration of our SOA cluster for HA, but we have not yet enabled it for automatic server failover between nodes and we have not yet added the BAM components to our cluster. We will perform these activities in the next couple of blog entries.