Spinning Up a Coherence Cluster with WebLogic Scripting (WLST)

The WebLogic scripting and management features available with Coherence 12c Managed Servers make it easy to create Coherence clusters and manage applications. Using the WebLogic Scripting Tool (WLST), the whole lifecycle of Managed Coherence Servers can be controlled, from creating and starting a Coherence cluster to deploying Coherence applications.

WLST scripts are written in Jython and can manipulate WebLogic JMX MBeans to manage WebLogic and Coherence. The flexibility and power they provide make it easy to create, configure and start up a complete Coherence environment - in just a few minutes. This post will outline how to do just this, using some sample WLST scripts.

Installing Coherence

So let's get started. If you haven't already done so, you need to install the Java JDK 1.8 and the zipped distribution of WebLogic, which also contains Coherence. You can find these here:
For the JDK installation just follow the instructions. To keep things really simple for the Coherence installation we will be using the zip installer for WebLogic (and Coherence). This avoids the need to have Administrator rights on Windows etc. The directory you unzip Coherence into will be referred to as MW_HOME.

Note: Use a zip utility like 7-Zip rather than the Windows zip tool as the path for some files is too long for Windows to handle.
  • Download the WebLogic zip installation to the directory where you want to install WebLogic and Coherence and unzip it
  • Update the env.sh/cmd script to reflect your installation parameters (the Java Home directory if not using Windows, and the MW_HOME directory where you have unzipped WebLogic, e.g. MW_HOME=<directory created by unzipping WLS>)
  • Change into the directory created by unzipping the installation and run the script configure.sh/cmd with the -silent option from a console, to do a silent installation of WebLogic and Coherence.
Note: Before running configure.sh/cmd you will need to set your MW_HOME and JAVA_HOME environment variables first; on Windows, for instance, SET JAVA_HOME="c:\Program Files\Java\jdk1.8.0_24". The MW_HOME can be set by running env.sh/cmd. All this information is in the README for the zip distribution here.

Creating the Coherence Cluster

Once you've done this, we can begin creating our cluster and deploying a very simple Coherence application. Here we are only going to set up the Coherence cluster on a local machine, but it's easy to expand this across multiple servers. The cluster also includes a selection of server types and components. These are shown below:

[Figure: example architecture]

OK, that's enough theory; let's get started with setting up the Coherence cluster.

Now there are a number of ways to do this:
  • Use the configuration wizard (MW_HOME/wlserver/common/bin/config.sh), which takes you through the process
  • Create a basic WebLogic domain (a configuration environment) using the configuration wizard and then use the WebLogic Admin Console to create the Coherence cluster
  • Or use WLST, which is what we are going to do here
As this is an introduction to WLST scripting, we'll just create a basic cluster on one machine and leave the process of replicating this across multiple machines to another post. To ensure the scripts run smoothly on platforms without native support for encryption (including a Mac), we'll disable this feature if necessary and use HTTP. On Windows and Linux this isn't necessary.

A WLST script can be run using the Java WLST scripting application bundled with WebLogic, so no additional tools are required. To run a WLST script you need to do the following:

Set up your environment, for instance (on a Mac):

  # Setup some environment variables to make it easier to call other scripts
  export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.7.0_51.jdk/Contents/Home
  export MW_HOME=/Users/Dave/apps/Oracle/Middleware1213/Oracle_Home

  # Setup other Weblogic environment variables used by WLST
  source $MW_HOME/wlserver/server/bin/setWLSEnv.sh

Then call the WLST scripting tool, passing your script as a parameter:

  $JAVA_HOME/bin/java weblogic.WLST createManagedCohServers.py

To make this easier, the environment settings are contained in a script, env.sh/cmd (which you will need to adjust to reflect your environment), and the WLST tool can be called from another script, runWLST.sh/cmd.

First we are going to run createCoherenceCluster.py, to create a Coherence cluster. It's simplified to make it easy to read, but additional error checking etc. could be added to make it more robust. It does the following:

Sets up the environment for creating the Coherence cluster
  # Setup environment
It does this by calling another script that loads some parameters from a property file and creates variables that will be used in the installation. It also declares a couple of simple functions. Maintaining settings in a properties file (setup.properties) allows parameters like the number of storage nodes to be easily changed and the cluster re-created.
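For illustration, here is a minimal sketch of how such a property-file loader might look in plain Python/Jython; the function name and file handling here are assumptions, not the actual setEnv.py code:

```python
# Minimal property-file loader - a sketch, not the actual setEnv.py code.
def load_properties(path):
    """Parse simple key=value lines, skipping blanks and # comments."""
    props = {}
    f = open(path)
    for line in f:
        line = line.strip()
        if line and not line.startswith('#'):
            key, _, value = line.partition('=')
            props[key.strip()] = value.strip()
    f.close()
    return props
```

A script could then pick out settings such as props['numStorageServers'] before creating the Managed Servers.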

Then it creates a WebLogic domain. A domain is a configuration environment for a number of other WebLogic components. WebLogic domains are created from templates - here we use a default template as a starting point, to which we will add the other components. To start with we configure the Admin Server and security settings.
  # Read base template
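  # Hedged sketch of the elided step - the template path and the mwHome
  # variable name are assumptions and may differ in your installation:
  readTemplate(mwHome + '/wlserver/common/templates/wls/wls.jar')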

  # Set Admin Server parameters
  cd('Servers/' + adminServerName)
  set('ListenPort', int(adminServerListenPort))
  setOption('OverwriteDomain', overwriteDomain)
  printInStyle('Configured Admin Server')

  # Configure security for the domain
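  # Hedged sketch of the elided step - the security path and variable
  # names are assumptions:
  cd('/Security/' + domainName + '/User/' + adminUser)
  cmo.setPassword(adminPassword)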
  printInStyle('Created password')

  # Create domain and write to file system
  setOption('OverwriteDomain', 'true')
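  # Hedged sketch of the elided step - write the domain to disk and
  # close the template:
  writeDomain(domainLoc)
  closeTemplate()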
  printInStyle('Created domain')

Then it starts a Node Manager, which can remotely start up and shut down Coherence/WebLogic instances.

Note: A Node Manager enables WebLogic Server instances (or managed servers) to be controlled remotely. A Node Manager is not tied to a domain but to a machine/host.

  # Start Node Manager
  startNodeManager(debug = 'false', verbose = 'true', NodeManagerHome = nmDir, ListenPort = '5556', SecureListener = useSecurity, NativeVersionEnabled = useSecurity, ListenAddress = host, QuitEnabled = 'true')
  printInStyle('Started node manager')
Next it starts the Admin Server for the domain. This provides a single point for controlling and administering the domain. It only needs to be running for administration tasks (but here it will also be the Coherence management node).
  # Start Admin Server
  startServer(adminServerName, domainName, connUri, adminUser, adminPassword, domainLoc, jvmArgs='-XX:MaxPermSize=128m, -Xmx512m, -XX:+UseConcMarkSweepGC')
  printInStyle('Started the Admin Server')
Now that the Administration Server and Node Manager are running, the other WebLogic and Coherence resources can be created.
First we associate the new domain with the Node Manager.
  # Create Managed Servers and Coherence cluster

  # Connect to Admin Server
  connect(adminUser, adminPassword, connUri)

  # Begin editing session
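  # Hedged sketch of the elided step - switch to the edit tree and start
  # an edit session:
  edit()
  startEdit()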

  # Tell Node Manager about the new domain we have created
  printInStyle('Enrolling this domain with node manager')
  nmEnroll(domainLoc, nmDir)

Next it creates a machine resource to associate the WebLogic and Coherence resources with a Node Manager.
  # Create a machine for everything to be managed by
  create(machineName, 'Machine')
  machine = cd('/Machines/' + machineName)
  cd('NodeManager/' + machineName)
  set('ListenAddress', host)
  set('NMType', nmType)
  printInStyle('Created machine')

Then it creates a Coherence cluster. A Coherence cluster in a WebLogic environment, like other WebLogic resources, is defined by a number of JMX MBeans, primarily the CoherenceClusterSystemResource MBean.
  # Create the Coherence cluster
  cohSR = create(cohClusterName, 'CoherenceClusterSystemResource')
  cohBean = cohSR.getCoherenceClusterResource()
  cohCluster = cohBean.getCoherenceClusterParams()
Next it creates two WebLogic clusters, one for storage nodes and one for proxy nodes. These are not the same as a Coherence cluster; they just make it easier to manage Coherence nodes as a group - as we'll see in a minute.
  # Create a WebLogic cluster for storage members
  clu1 = create(cacheClusterName, 'Cluster')

  # Create a WebLogic cluster for proxy servers
  clu2 = create(proxyClusterName, 'Cluster')
  cohTier = clu2.getCoherenceTier()
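  # Hedged sketch of the elided settings - the method names below are
  # assumptions based on the WebLogic ClusterMBean and CoherenceTierMBean
  # APIs: associate both WebLogic clusters with the Coherence cluster and
  # make the proxy tier storage-disabled
  clu1.setCoherenceClusterSystemResource(cohSR)
  clu2.setCoherenceClusterSystemResource(cohSR)
  cohTier.setLocalStorageEnabled(false)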


Add the Admin Server to the Coherence cluster as a storage-disabled node and make it the management node (BTW 'cmo' is the Current Managed Object).

  # Add Admin Server to cluster - for management purposes
  cd('Servers/' + adminServerName)
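  # Hedged sketch of the elided settings - these method names are
  # assumptions based on the ServerMBean API: make the Admin Server a
  # storage-disabled member of the Coherence cluster
  cmo.setCoherenceClusterSystemResource(cohSR)
  cmo.getCoherenceMemberConfig().setLocalStorageEnabled(false)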


Next the script creates four Coherence cluster nodes, i.e. WebLogic Managed Servers (the number depending on the settings in the setup.properties file). A Jython function, createServer(), in the setEnv.py script is called here to simplify the setup. These are WebLogic instances that will host Coherence applications and are equivalent to traditional Coherence nodes.
  # Create storage enabled MCS
  port = int(startPort)
  unicastPort = int(cacheUnicastListenPort)

  for id in range(startId, int(numStorageServers) + 1):
    serverName = storageServerName + '_' + machineName + '_' + str(id)
    createServer(serverName, port, clu1, 'StorageServer', storageArgs + ' -Dtangosol.coherence.member=' + serverName, true, unicastPort)
    port = port + 1
    unicastPort = unicastPort + 2

  # Create storage disabled MCS for proxy servers
  unicastPort = int(proxyUnicastListenPort)

  for id in range(startId, int(numProxyServers) + 1):
    serverName = proxyServerName + '_' + machineName + '_' + str(id)
    createServer(serverName, port, clu2, 'ProxyServer', proxyArgs + ' -Dtangosol.coherence.member=' + serverName, false, unicastPort)
    port = port + 1
    unicastPort = unicastPort + 2
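As a plain-Python illustration of how these loops assign ports (the starting values below are assumptions for illustration; in the real script they come from setup.properties):

```python
# Plain-Python sketch of the port allocation performed by the two loops
# above. Starting values are assumptions for illustration only.
startPort = 7010
cacheUnicastListenPort = 8880
proxyUnicastListenPort = 8890

servers = []
port = startPort
unicastPort = cacheUnicastListenPort
for id in range(1, 3):                      # two storage servers
    servers.append(('storage_%d' % id, port, unicastPort))
    port = port + 1                         # next listen port
    unicastPort = unicastPort + 2           # unicast ports spaced two apart
unicastPort = proxyUnicastListenPort        # proxies get their own unicast range
for id in range(1, 3):                      # two proxy servers
    servers.append(('proxy_%d' % id, port, unicastPort))
    port = port + 1
    unicastPort = unicastPort + 2
```

Each server gets a unique listen port, and the unicast ports are spaced two apart because a Coherence node's unicast listener in this release uses a pair of ports.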

  # Save all domain configuration changes
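  # Hedged sketch of the elided step - save and activate the changes:
  save()
  activate(block='true')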
Finally, we shut down the Node Manager and Admin Server, as the creation of our cluster is complete.

  # Stop Admin Server and Node Manager
  printInStyle('Use Security: ' + useSecurity)
  nmConnect(adminUser, adminPassword, host, '5556', domainName, domainLoc, nmType)
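  # Hedged sketch of the elided steps - kill the Admin Server via the
  # Node Manager, then stop the Node Manager itself:
  nmKill(adminServerName)
  stopNodeManager()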


The WebLogic domain that's just been created is called coh_domain and can be found in the MW_HOME/user_projects/domains directory. If you want to remove it and recreate your domain with different settings, just shut it down, delete the coh_domain directory and remove the domain mapping from MW_HOME/domain-registry.xml.

Starting the Coherence Cluster

To start the cluster run the start-up script startCoherenceCluster.py in the same way as the cluster creation script. This script starts the Node Manager and Admin Server and then uses the Node Manager to start all the Managed Servers.

  # Starts a Coherence Cluster using Managed Coherence Servers

  # Setup environment

  # Start Node Manager
  startNodeManager(debug = 'false', verbose = 'true', NodeManagerHome = nmDir, ListenPort = '5556', SecureListener = useSecurity, NativeVersionEnabled = useSecurity, ListenAddress = adminServerListenHost, QuitEnabled = 'true')
  printInStyle('Started node manager')

  # Start Admin Server
  startServer(adminServerName, domainName , connUri, adminUser, adminPassword, domainLoc, jvmArgs='-XX:MaxPermSize=128m, -Xmx512m, -XX:+UseConcMarkSweepGC')
  printInStyle('Started the Admin Server')

  # Connect to Admin Server
  connect(adminUser, adminPassword, connUri)

  # Start Servers
  start(cacheClusterName, 'Cluster')
  start(proxyClusterName, 'Cluster')

  # Deploy GAR
  deploy(appName, appName + '.gar', cacheClusterName, block = 'true')
  deploy(appName, appName + '.gar', proxyClusterName, block = 'true')

  # Disconnect from Admin Server
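  # Hedged sketch of the elided step:
  disconnect()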


To stop the cluster just run the script stopCoherenceCluster.py.

Testing your Coherence Cluster

As part of the cluster startup, a simple Coherence application was deployed as a GAR file. A GAR file is just a JAR file with a .gar extension, containing all the classes and configuration files the Coherence application needs. Here is our simple Coherence application;

Directory structure:


Note: Any classes that your Coherence application uses need to be in the base directory of the GAR, and libraries in a lib directory. Here we have neither.

And here is a sample coherence-application.xml file:
<?xml version="1.0"?>
<coherence-application xmlns="http://xmlns.oracle.com/coherence/coherence-application">
  <cache-configuration-ref override-property="cache-config/ProxyExample">META-INF/coherence-cache-config.xml</cache-configuration-ref>
</coherence-application>
There are a number of ways to deploy a Coherence GAR application. You can use the Admin Console, use the Maven plugin to add it as a step in your build and deployment process, or use a WLST script. Here we've just used a WLST script.

To test your installation you can also run a simple external (extend) client to put and get entries in a "test" cache managed by the Coherence application ExampleGAR1. To run the client, execute the script extend-client.sh/cmd and issue the command "put 1 one" at the prompt.

Managing the Coherence Cluster

To see and manage the cluster you have just created and started, open a browser window and go to the WebLogic Admin Console at http://localhost:7001/console. Enter weblogic/welcome1 at the login screen, then navigate to the Environment -> Servers screen, where you will see all the Managed Coherence Servers in your cluster.

Weblogic Admin Console

Monitoring the Coherence Cluster

Finally, you can monitor your cluster using the JVisualVM plugin for Coherence: run the script jvvm.sh/cmd, install the Coherence plugin (in MW_HOME/coherence/plugins/jvisualvm) and create a JMX connection using the URL echoed by the script (service:jmx:iiop://<admin hostname>:7001/jndi/weblogic.management.mbeanservers.domainruntime) and the credentials used above to connect to the WebLogic Admin Server (weblogic/welcome1).

Java visual VM plugin for Coherence

We have covered a lot of ground here, but I hope you can see the power of WLST, how it can simplify managing Coherence, and the benefits of using Managed Coherence Servers. If you would like to try these scripts out for yourself, you can download them from here.
