Thursday Jul 19, 2012

Dynamic Authorised Hosts

A simple way to help secure a Coherence cluster is to configure the authorised hosts that can be part of the cluster. If a Coherence application tries to join a cluster and it is not running on a server in the authorised hosts list it will be rejected. To set up authorised hosts, the cluster host names or IP addresses can be explicitly added to the authorised hosts section of the tangosol-coherence-override.xml file, as shown below:

  <authorized-hosts>
    <host-address id="1">192.168.56.101</host-address>
    <host-address id="2">192.168.56.1</host-address>
  </authorized-hosts>

But what happens when your cluster needs to grow and you need to add another server that is not in the authorised hosts list, or your cluster topology needs to change to include a hostname or IP address that is not in the list?

Sometimes a full cluster restart with a different configuration is an option, but in other cases it is not. Fortunately there are a couple of solutions to this problem apart from a complete cluster restart.

The first is to perform a rolling restart (shutting down each node in turn, changing its configuration and restarting it). But this operation will impact cluster performance, as cache data will need to be recovered and re-balanced. Another option is to specify a range of authorised hosts rather than explicitly name them. This approach balances the need to secure your cluster with the need to change the configuration at some point in the future. A range of hosts that can join a cluster can be specified as follows:

  <authorized-hosts>
    <host-range>
      <from-address>192.168.56.101</from-address>
      <to-address>192.168.56.190</to-address>
    </host-range>
  </authorized-hosts>

A third option is to use a Coherence filter to determine which hosts are authorised to join the cluster. This allows for a more dynamic configuration, as a custom filter can access a separate file with a list of hosts, an LDAP server or a database. By using a filter it's possible to specify exactly which hosts can run cluster members while at the same time allowing this list to be changed dynamically.

A filter to determine authorised hosts can be added to a cluster configuration (tangosol-coherence-override.xml file) as shown below:

  <authorized-hosts>
    <host-filter>
      <!-- Custom Filter to perform authorised host check -->
      <class-name>com.oracle.coherence.test.AuthorizedHostsFilter</class-name>
      <init-params>
        <!-- URL for authorised host list -->
        <init-param>
          <param-type>String</param-type>
          <param-value>file:/Users/Dave/workspace/CoherenceAuthorizedHostsTest/hosts.txt</param-value>
        </init-param>
        <!-- Time in ms between re-reading authorised hosts -->
        <init-param>
          <param-type>Int</param-type>
          <param-value>10000</param-value>
        </init-param>
      </init-params>
    </host-filter>
  </authorized-hosts>
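
The first init-param above points at a plain text file containing the authorised hosts. The filter shown later simply reads one IP address per line, so a minimal hosts.txt (the addresses below are only illustrative) could look like this:

  192.168.56.101
  192.168.56.102
  192.168.56.110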

And an example custom filter that periodically checks a list of hosts at a URL could look like this:

/**
 * File: AuthorizedHostsFilter.java
 * 
 * Copyright (c) 2012. All Rights Reserved. Oracle Corporation.
 * 
 * Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
 * 
 * This software is the confidential and proprietary information of Oracle
 * Corporation. You shall not disclose such confidential and proprietary
 * information and shall use it only in accordance with the terms of the license
 * agreement you entered into with Oracle Corporation.
 * 
 * Oracle Corporation makes no representations or warranties about the
 * suitability of the software, either express or implied, including but not
 * limited to the implied warranties of merchantability, fitness for a
 * particular purpose, or non-infringement. Oracle Corporation shall not be
 * liable for any damages suffered by licensee as a result of using, modifying
 * or distributing this software or its derivatives.
 * 
 * This notice may not be removed or altered.
 */
package com.oracle.coherence.test;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Timer;
import java.util.TimerTask;

import com.tangosol.net.CacheFactory;
import com.tangosol.util.Filter;

/**
 * Simple filter to check if a host is in an authorised hosts list.
 * Note: this implementation checks the IP address of a host not the hostname
 * 
 * @author Dave Felcey
 */
public class AuthorizedHostsFilter implements Filter {
  /**
   * List of authorised hosts. This list is synchronised in case an update and a check are being
   * performed at the same time
   */
  private List hosts = Collections.synchronizedList(new ArrayList());
  /**
   * URL where authorised host list is located
   */
  private String hostsFileUrl;
  /**
   * Timer used to re-read authorised hosts
   */
  private Timer timer = new Timer();
  
  /**
   * Constructor for AuthorizedHostsFilter
   * @param hostsFileUrl the URL where authorised hosts list is located
   * @param reLoadInterval interval in ms at which authorised hosts list is re-read
   */
  public AuthorizedHostsFilter(String hostsFileUrl, int reLoadInterval) {
    this.hostsFileUrl = hostsFileUrl;
    
    // Load values
    load();
    
    // Schedule periodic reload
    timer.scheduleAtFixedRate(new TimerTask() {
        public void run() {
          CacheFactory.log("About to refresh host list");
          load();
        }
    }, reLoadInterval, reLoadInterval);
  }

  /**
   * Loads authorised host list
   */
  private void load() {
    try {
      CacheFactory.log("Current dir: " + System.getProperty("user.dir"), CacheFactory.LOG_DEBUG);
      CacheFactory.log("Loading hosts file from URL: " + hostsFileUrl, CacheFactory.LOG_DEBUG);
      URL url = new URL(hostsFileUrl);
      BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()));

      // Read the new host list into a temporary list so the current list
      // remains usable while the file is being read
      List newHosts = new ArrayList();
      String inputLine;
      while ((inputLine = in.readLine()) != null) {
          CacheFactory.log("Host IP address: " + inputLine, CacheFactory.LOG_DEBUG);
          newHosts.add(inputLine);
      }
      in.close();

      // Replace the current list contents so hosts removed from the file are
      // no longer authorised and duplicates do not accumulate on each reload
      synchronized (hosts) {
        hosts.clear();
        hosts.addAll(newHosts);
      }
    } catch (Exception e) {
      e.printStackTrace();
    }
  }
  
  @Override
  public boolean evaluate(Object host) {
    String h = host.toString();   
    h = h.substring(h.indexOf('/') + 1);
    
    CacheFactory.log("Validating host IP address: " + host + "(" + h + "), " + hosts.contains(h), CacheFactory.LOG_DEBUG);
    return hosts.contains(h);
  }
}
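
A quick way to sanity-check the filter outside a cluster is to construct it directly and pass it an InetAddress, since the filter parses the address out of the value's toString() form (hostname/ip-address). This is just a sketch - the file URL and addresses are assumptions:

import java.net.InetAddress;

public class HostsFilterTest {
  public static void main(String[] args) throws Exception {
    // Hypothetical hosts file location - point this at a real file of IP addresses
    AuthorizedHostsFilter filter =
        new AuthorizedHostsFilter("file:/tmp/hosts.txt", 10000);

    // InetAddress.toString() has the form "hostname/ip-address",
    // which is what the filter's evaluate method expects
    InetAddress host = InetAddress.getByName("192.168.56.101");
    System.out.println("Authorised: " + filter.evaluate(host));
  }
}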

The dynamic lookup in the example above could be secured further by requiring some kind of credentials for the lookup, or instead of polling the resource some kind of change notification could be used to tell the filter when to re-read the authorised hosts. However, in many cases the above approach should be adequate. A full example of the above filter is available from here.

Friday Jun 01, 2012

Throttling Cache Events

The real-time eventing feature in Coherence is great for relaying state changes to other systems or to users. However, sometimes not all changes need to or can be sent to consumers. For instance:

  • Rapid changes cannot be consumed or interpreted as fast as they are being sent. A user looking at changing stock prices may only be able to interpret and react to one change per second.
  • A client may be using a low-bandwidth connection, so rapidly sending events will only result in them being queued and delayed.
  • A large number of clients may need to be notified of state changes, and sending 100 events p/s to 1000 clients cannot be supported with the available hardware, but 10 events p/s to 1000 clients can. Note this example assumes that many of the state changes are to the same value.

One simple approach to throttling Coherence cache events is to use a cache store to capture changes to one cache (the data cache) and periodically insert those changes into another cache (the events cache). Consumers interested in state changes to entries in the first cache register an interest (event listener) against the second, events cache. By using the cache store write-behind feature, rapid updates to the same cache entry are coalesced, so that updates are merged and written to the events cache at the configured interval. The time interval at which changes are written to the events cache can easily be configured using the write-behind delay time in the cache configuration, as shown below.

  <caching-schemes>
    <distributed-scheme>
      <scheme-name>CustomDistributedCacheScheme</scheme-name>
      <service-name>CustomDistributedCacheService</service-name>
      <thread-count>5</thread-count>
      <backing-map-scheme>
        <read-write-backing-map-scheme>
          <scheme-name>CustomRWBackingMapScheme</scheme-name>
          <internal-cache-scheme>
            <local-scheme />
          </internal-cache-scheme>
          <cachestore-scheme>
            <class-scheme>
              <scheme-name>CustomCacheStoreScheme</scheme-name>
              <class-name>com.oracle.coherence.test.CustomCacheStore</class-name>
              <init-params>
                <init-param>
                  <param-type>java.lang.String</param-type>
                  <param-value>{cache-name}</param-value>
                </init-param>
                <init-param>
                  <param-type>java.lang.String</param-type>
                  <!-- The name of the cache to write events to -->
                  <param-value>cqc-test</param-value>
                </init-param>
              </init-params>
            </class-scheme>
          </cachestore-scheme>
          <write-delay>1s</write-delay>
          <write-batch-factor>1</write-batch-factor>
        </read-write-backing-map-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
  </caching-schemes>
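
On the consumer side, clients simply register a listener against the events cache (cqc-test in the configuration above) to receive the coalesced updates. A minimal sketch, assuming the event cache name from the configuration:

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.AbstractMapListener;
import com.tangosol.util.MapEvent;

public class ThrottledEventConsumer {
  public static void main(String[] args) {
    NamedCache events = CacheFactory.getCache("cqc-test");

    // At most one (coalesced) event per entry is received per write-behind interval
    events.addMapListener(new AbstractMapListener() {
      public void entryInserted(MapEvent evt) {
        System.out.println("New value: " + evt.getKey() + "=" + evt.getNewValue());
      }

      public void entryUpdated(MapEvent evt) {
        System.out.println("Updated value: " + evt.getKey() + "=" + evt.getNewValue());
      }
    });
  }
}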

The cache store implementation to perform this throttling is trivial and only involves overriding the basic cache store functions.

package com.oracle.coherence.test;

import java.util.Collection;
import java.util.Map;

import com.tangosol.net.CacheFactory;
import com.tangosol.net.cache.CacheStore;

public class CustomCacheStore implements CacheStore {
  private String publishingCacheName;
  private String sourceCacheName;

  public CustomCacheStore(String sourceCacheName, String publishingCacheName) {
    this.publishingCacheName = publishingCacheName;
    this.sourceCacheName = sourceCacheName;
  }

  @Override
  public Object load(Object key) {
    return null;
  }

  @Override
  public Map loadAll(Collection keyCollection) {
    return null;
  }

  @Override
  public void erase(Object key) {
    if (!sourceCacheName.equals(publishingCacheName)) {
      CacheFactory.getCache(publishingCacheName).remove(key);
      CacheFactory.log("Erasing entry: " + key, CacheFactory.LOG_DEBUG);
    }
  }

  @Override
  public void eraseAll(Collection keyCollection) {
    if (!sourceCacheName.equals(publishingCacheName)) {
      for (Object key : keyCollection) {
        CacheFactory.getCache(publishingCacheName).remove(key);
        CacheFactory.log("Erasing collection entry: " + key,
            CacheFactory.LOG_DEBUG);
      }
    }
  }

  @Override
  public void store(Object key, Object value) {
    if (!sourceCacheName.equals(publishingCacheName)) {
      CacheFactory.getCache(publishingCacheName).put(key, value);
      CacheFactory.log("Storing entry (key=value): " + key + "=" + value,
          CacheFactory.LOG_DEBUG);
    }
  }

  @Override
  public void storeAll(Map entryMap) {
    if (!sourceCacheName.equals(publishingCacheName)) {
      CacheFactory.getCache(publishingCacheName).putAll(entryMap);
      CacheFactory.log("Storing entries: " + entryMap,
          CacheFactory.LOG_DEBUG);
    }
  }
}

As you can see, each cache store operation on the data cache results in a similar operation on the event cache. This is a very simple pattern which has a lot of additional possibilities, but it also has a few drawbacks you should be aware of:

  • This event throttling implementation will use additional memory, as a duplicate copy of the entries held in the data cache needs to be held in the events cache too - or two extra copies if the events cache has backups.
  • A data cache may already use a cache store, so a "multiplexing cache store pattern" must also be used to send changes to both the existing and the throttling cache store - see the sketch below.
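
A multiplexing cache store is not included in the example, but the idea is simple: wrap the existing cache store and the throttling cache store behind a single CacheStore that forwards each operation to both. A minimal sketch, assuming the two delegates are constructed elsewhere:

package com.oracle.coherence.test;

import java.util.Collection;
import java.util.Map;

import com.tangosol.net.cache.CacheStore;

/**
 * Forwards every cache store operation to two delegates, e.g. an existing
 * database cache store and the throttling CustomCacheStore above.
 */
public class MultiplexingCacheStore implements CacheStore {
  private final CacheStore first;
  private final CacheStore second;

  public MultiplexingCacheStore(CacheStore first, CacheStore second) {
    this.first = first;
    this.second = second;
  }

  @Override
  public Object load(Object key) {
    // Reads are served by the primary (existing) cache store only
    return first.load(key);
  }

  @Override
  public Map loadAll(Collection keys) {
    return first.loadAll(keys);
  }

  @Override
  public void store(Object key, Object value) {
    first.store(key, value);
    second.store(key, value);
  }

  @Override
  public void storeAll(Map entries) {
    first.storeAll(entries);
    second.storeAll(entries);
  }

  @Override
  public void erase(Object key) {
    first.erase(key);
    second.erase(key);
  }

  @Override
  public void eraseAll(Collection keys) {
    first.eraseAll(keys);
    second.eraseAll(keys);
  }
}
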
If you would like to try out this throttling example you can download it here. I hope it's useful and let me know if you spot any further optimizations.

Thursday Feb 16, 2012

Coherence Clustering Principles

Overview 

A Coherence environment consists of a number of components. Below I'll describe how they relate to each other and what the terms mean. But just to give you a flavor, here they are displayed as a hierarchy.

Coherence Cluster

Distributed cache services with the same name will cluster together to manage their cache data, so you can have multiple clustered services across Coherence nodes. This article will explain how these components work together and how applications interact with them.

What do we mean by a Coherence Cluster?

It’s a set of configuration parameters that control the operational and run-time settings for clustering, communication, and data management services. These are defined in the Coherence operational override file and include such things as:
  • Multi-cast or unicast addresses for locating cluster members
  • Cluster identity information
  • Management settings
  • Networking parameters
  • Security information
  • Logging information
  • Service configuration parameters

These settings are used by Coherence services to communicate with other nodes, to determine cluster membership, for logging and for other operational parameters, and are similar to the Domain configuration used by Weblogic Server. They also apply to the entire Coherence cluster node, which usually means the whole JVM. Although a cluster node will usually equate to a JVM, it is possible to have more than one node per JVM by loading the Coherence libraries multiple times in different isolated class loaders, e.g. a "child first" class loader. However, this is usually only done within the context of an application server or a test framework.

The coherence.jar file contains a default operational override file – tangosol-coherence-override.xml – that will be used if another is not detected in the CLASSPATH before the coherence.jar library is loaded. In actual fact there are 3 versions of this file:
  • tangosol-coherence-override-eval.xml
  • tangosol-coherence-override-dev.xml
  • tangosol-coherence-override-prod.xml

Which one is selected will depend on the mode that Coherence is running in (the default is developer mode) and can be set using the system property -Dtangosol.coherence.mode=prod.

Note: It's important to ensure that production mode is selected for production usage, as in production mode certain communication timeouts etc. will be extended so that Coherence will wait longer for services to recover – amongst other things.

What is a Coherence Service?

A Coherence service is a thread (and sometimes pool of worker threads) that has a specific function. This can be:
  • Connectivity Services
    • Clustering Service – manage cluster membership communications. There is exactly one of these services per JVM (or within a class-loader). This service tracks which other nodes are in the cluster, node failures etc.
    • Proxy Services – manage external connections into the cluster from “extend” clients
  • Processing Services
    • Invocation Service – executes processing requests/tasks on the node in the cluster it is running on.
  • Data Services
    • Distributed Cache Service – manages cache data for caches created from the scheme it is defined in
    • Replicated Cache Service – provides synchronous replication cache services, where all managed cache data exists on every cluster node

This document will focus on Distributed Cache Services that manage distributed caches defined from a distributed scheme definition.

When a scheme definition (as shown below) is parsed, Coherence instantiates a service thread with the name specified. This service thread will manage the data from caches created using the scheme definition. So how does this all happen then?

If an application using Coherence calls:

NamedCache tradeCache = CacheFactory.getCache("trade-cache");

A couple of things happen:

  • When the Coherence classes are loaded they will by default search the CLASSPATH for the coherence-cache-config.xml file – which is actually the name specified in the default operational override file. The first instance that is found will be used. However, if one is specified using the system property -Dtangosol.coherence.cacheconfig=<cache config file> it will use that cache configuration file instead. A cache configuration can also be explicitly loaded from the CLASSPATH, as follows:

// Load an application X cache configuration file
ClassLoader loader = getClass().getClassLoader();
// Note: it's a URI that is specified for the location
// of the cache configuration file
ConfigurableCacheFactory factory =
    CacheFactory.getCacheFactoryBuilder().getConfigurableCacheFactory("applicationx-cache-config", loader);

NamedCache localCacheInstance = factory.ensureCache("trade-cache", loader);

  • When the cache configuration file is parsed, distributed cache service threads are started for all the cache "schemes" that are defined, if the autostart parameter is set to true (by default it's false).

<caching-schemes>
  <distributed-scheme>
    <scheme-name>DefaultDistributedCacheScheme</scheme-name>
    <service-name>DefaultDistributedCacheService</service-name>
    <autostart>true</autostart>
  </distributed-scheme>
</caching-schemes>

Note: For data services the autostart flag is only observed for distributed caches, so a replicated cache service would automatically be started.
  • Each cache service started is given a name - or one is created if none is specified. This service thread then attempts to join with other services that have the same name in the Coherence cluster. If none are found, i.e. it's the first to start, it will become the "senior member" of the service. To illustrate this, take a look at a sample log statement when Coherence starts a service.
2012-02-02 11:00:04.666/16.462 Oracle Coherence GE 3.7.1.0 <D5> (thread=DistributedCache:DefaultDistributedCacheService, member=1): Service DefaultDistributedCacheService joined the cluster with senior service member 1

In this case a distributed cache service called DefaultDistributedCacheService has started up on Member 1 of the cluster (the first JVM). As it’s the first service with this name to start it becomes the senior member – which means it has a couple of extra responsibilities, like sending out heartbeats etc.
  • Once the cache services have been started, Coherence will try to match the cache name that has been passed, in this case "trade-cache", with the appropriate cache scheme (and service) that will manage this cache. It uses the cache scheme mappings part of the cache configuration file to do this, and wildcard matching (if necessary) to identify the right cache scheme.

Note: Regular expression parsing is not used and wild cards cannot be used at the start of the cache name.

<caching-scheme-mapping>
  <cache-mapping>
    <cache-name>trade-*</cache-name>
    <scheme-name>DefaultDistributedCacheScheme</scheme-name>
  </cache-mapping>
</caching-scheme-mapping>
  • Once the correct cache scheme has been matched, a reference to an existing cache managed by the cache service for this scheme will be returned, or a new cache created using the parameters of the cache scheme.

Cache services that use the same cluster name should scope their names to prevent name clashes. This is so that multiple applications sharing the same cluster don't inadvertently use the same service name - and unintentionally share data.

Each application that wishes to cache data in isolation from others can load its own cache configuration file (as shown above) and specify a unique scope name. This will then be used as a prefix to cluster service names to prevent service name collision. Below is an example scope definition in a cache configuration file:

<?xml version="1.0"?>
<cache-config>
  <scope-name>com.oracle.custom</scope-name>
  ...
</cache-config>
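
To make the effect of scoping concrete, here is a small sketch of two applications each loading its own scoped cache configuration (the configuration file names below are made up for illustration). Because the scope names differ, the identically named caches end up on separately named services and are isolated from each other:

import com.tangosol.net.CacheFactory;
import com.tangosol.net.CacheFactoryBuilder;
import com.tangosol.net.NamedCache;

public class ScopeExample {
  public static void main(String[] args) {
    ClassLoader loader = ScopeExample.class.getClassLoader();
    CacheFactoryBuilder builder = CacheFactory.getCacheFactoryBuilder();

    // Each cache configuration file declares a different <scope-name>
    NamedCache cacheA = builder
        .getConfigurableCacheFactory("appa-cache-config.xml", loader)
        .ensureCache("trade-cache", loader);
    NamedCache cacheB = builder
        .getConfigurableCacheFactory("appb-cache-config.xml", loader)
        .ensureCache("trade-cache", loader);

    // Same cache name, but different scoped services, so the data is isolated
    cacheA.put("key", "A");
    System.out.println(cacheB.get("key")); // null
  }
}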

And in JConsole you can see the scope prefix being used in conjunction with service names – circled in red. 

JConsole

Note: It's not used as a prefix to cache names.

By scoping service names multiple services can co-exist in the same Coherence cluster without interacting, thus allowing individual applications to create and manage their own caches in isolation.

Below we have 3 nodes (JVMs), each running 2 partitioned cache services. Each cache service manages 2 caches, and the data (key/value pairs) stored in these caches is shared evenly between the nodes.

Coherence Nodes

Key: S=Service, C=Cache

If you want to find out more information about caches, how they relate to maps and how they are partitioned across cache services please see this (http://blackbeanbag.net/wp/2010/07/01/partitions-backing-maps-and-services-oh-my/) excellent blog article covering these topics. 

Monday Feb 13, 2012

Oracle Coherence SIG Meeting on Thursday 1st March at the Oracle City Office in London

If you are interested in finding out more about what customers are doing with Coherence or how it can be used, then why not register for the upcoming London Coherence Special Interest Group (SIG) meeting. It's in the Oracle City office on Thursday 1st March and is FREE. There will be talks from customers, partners and our engineers - amongst others - covering a wide range of topics.

Thursday Oct 20, 2011

Oracle Coherence SIG Meeting on 4th November at the Oracle City Office in London

If you are interested in finding out more about what customers are doing with Coherence or how it can be used, then why not register for the upcoming London Coherence Special Interest Group (SIG) meeting. It's in the Oracle City office on 4th November and is FREE. There will be talks from customers, partners and our engineers - amongst others - covering a wide range of topics.

Tuesday Aug 09, 2011

Using Weblogic Server ActiveCache for Coherence

As web applications are architected or re-architected, caching is often added to reduce the data access bottlenecks that hinder scalability and performance. Weblogic Server (WLS) now has a range of features, collectively called "ActiveCache", that provide Coherence cache integration in a Java Servlet context, a JPA context (TopLink Grid) and an HTTP session context (Coherence*Web). This post will concentrate on the first of these integration points and show how a J2EE web application can be packaged, deployed and managed in WLS to seamlessly take advantage of a data caching tier.

Deployment Architecture

So how do all the parts work together? With WLS 10.3.4+ you can now use its NodeManager (a process monitoring and management service) to manage components other than WLS, like Coherence. There are also additional administration pages in the WLS Administration Console (http://localhost:7001/console) to configure Coherence clusters (via the Environment > Coherence Clusters menu):

And Coherence cluster nodes (via the Environment > Coherence Servers menu):

WLS Admin Console Coherence Nodes

These allow a cluster to be defined and all the cluster nodes to be started and stopped from the WLS Admin console.

In this example the Servlet and all the Coherence cache nodes are all part of the same Coherence cluster, but the Servlet does not hold any data, i.e. its storage attribute is disabled. This means that when it is deployed or un-deployed no cache data needs to be moved between the cluster nodes - because it will not hold any data. This topology is shown below:

In-cluster topology

Note: the diagram above shows how HTTP session data can be cached in Coherence but the topology is identical for this use case, where web application data is being cached.

Setup

These instructions make a few assumptions, namely that:

To set up the environment to deploy the sample application the following steps are required:

  1. Configure and start up the WLS NodeManager that will start and stop the Coherence cluster nodes.

    This step may be obvious for readers who are familiar with WLS, but just for completeness these are the steps I needed to do on a Mac.

    First I set the environment parameters used to start up WLS:


    export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/1.6/Home
    # Dir where WLS installed to
    export MW_HOME=/Users/Dave/apps/wls1035_dev/
    export USER_MEM_ARGS="-Xmx1024m -XX:PermSize=1024m"


    Then I set the NodeManager parameters in the nodemanager.properties file ($MW_HOME/wlserver/common/nodemanager/nodemanager.properties) so that it did not try to use the native libraries (this step is platform specific) and so that SSL was not used - to make the setup simpler:

    NativeVersionEnabled=false
    PropertiesVersion=10.3
    ListenAddress=localhost
    SecureListener=false
    DomainRegistrationEnabled=true


    I also added the WLS domain to which my application will be deployed to the nodemanager.domains properties file ($MW_HOME/wlserver/common/nodemanager/nodemanager.domains):

    base_domain=/Users/Dave/apps/mywls/user_projects/domains/base_domain

    Finally I started the NodeManager for my laptop (note that a NodeManager is associated with a machine and not a domain):

    cd $MW_HOME/wlserver/server/bin
    ./startNodeManager.sh
  2. Configure a new Machine definition using the WLS Admin console

    Next you need to define a Machine definition that will enable a NodeManager to be used to start the Coherence cluster. As was mentioned above, a NodeManager is associated with a machine not a domain, and "A machine is the logical representation of the computer that hosts one or more Weblogic Server instances (servers)."

    To configure a Machine navigate to Machines screen in the WLS Admin console via the Environment > Machines menu and click on the New button and specify:

    Screen 1
    Name: MyMachine - the name of your machine
    Machine OS: Other - I did not specify one here as there are no native libraries for Mac OSX

    WLS create machine - screen 1

    Screen 2
    Type: Plain - so as to avoid additional security configuration

    Note: I left all other parameters to the defaults.

    WLS create machine - screen 2

    As the NodeManager has been started up using the default parameters it should be found without specifying any additional setup. The monitoring page for the machine should then show that the NodeManager is reachable.

    WLS Machine status
    If the status is not reachable then take a look at the output for the NodeManager if you have started it from a console, or at the NodeManager log file ($MW_HOME/wlserver/common/nodemanager/nodemanager.log).
  3. Configure and start up the Coherence cluster

    To start up the Coherence cluster that will be used to cache the web application data you need to create a Coherence cluster configuration and then define and start up the Coherence servers. To configure the Coherence cluster navigate to the Coherence cluster definition screen via the Environment > Coherence Clusters menu. The configuration parameters I used were:

    Name: MyCluster - this is the name of the cluster and will help prevent accidental clustering between different environments
    Use a Custom Cluster Configuration File: /Users/Dave/Documents/workspace/WLSCohWebAppEAR/EarContent/APP-INF/classes/tangosol-coherence-override.xml - this is the Coherence operational override file for the cluster, NOT the cache configuration file

    To configure the Coherence cluster servers navigate to the Coherence Servers screen via the Environment > Coherence Servers menu. The General configuration parameters I used were:

    Name: CacheServer1 - this can be any name
    Machine: MyMachine - select the name of the Machine definition you have just created
    Cluster: MyCluster - select the name of the cluster you have just created
    Unicast Listen Port: 9999 - the default is 8888 but it's worth selecting something different from that of other clusters to prevent your servers accidentally joining another cluster

    All other default general configuration parameters were accepted.

    The Server Start configuration parameters entered were:


    Java Home: /Library/Java/Home - the base dir of your Java installation
    Java Vendor: Apple
    BEA Home: /Users/Dave/apps/wls1035_dev/wlserver - the installation dir for WLS
    Root Directory: /Users/Dave/apps/mywls/user_projects/domains/base_domain - the domain base dir
    Class Path: /Users/Dave/apps/wls1035_dev/modules/features/weblogic.server.modules.coherence.server_10.3.4.1.jar:/Users/Dave/coherence/3.7/coherence-java/coherence/lib/coherence.jar - note the order of these Jar files seems to be important and you should specify the correct path separator, in this case a ':' char
    Arguments: -Xms1024m -Xmx1024m -Dtangosol.coherence.cacheconfig=/Users/Dave/Documents/workspace/WLSCohWebAppEAR/EarContent/APP-INF/classes/coherence-cache-config.xml -Dtangosol.coherence.distributed.localstorage=true -Dtangosol.coherence.role=CacheServer - the arguments to the JVM that will run the Coherence cache server. Note the -Dtangosol.coherence.distributed.localstorage=true parameter: in the Coherence override file the default storage setting has been set to false, so storage-enabled nodes have to enable it explicitly. This is because it's not possible to override the storage-enabled default setting using system properties just for the TestServlet, other than setting this property in the WLS startup script.

    Obviously you will need to modify these paths to reflect those in your environment.

    To create additional Coherence Servers you can just clone this one on the Coherence Servers admin page and change the name of the new Coherence Server.


    The Coherence cluster is started from the Coherence Servers page, which you can navigate to by selecting the Environment > Coherence Servers menu. On this page select the Control tab, select all the Coherence Servers that you have just defined and click on the Start button.

    Start Coherence Servers
  4. Deploy the Coherence and ActiveCache shared libraries

    The Coherence and ActiveCache Jar files can be deployed in a number of ways: added to the CLASSPATH of WLS, deployed as a shared library, or included as part of the web application. In this case the Jar files have been deployed as a shared library, which provides both usage isolation (as only applications that import the libraries will have them in scope) and minimal resource overhead (as only one copy of the classes will be loaded). Shared libraries can be deployed through the WLS Admin Console via the Environment > Deployments menu, as shown below (accepting the default settings should be fine to get started):

    Deploy WLS Shared Library

  5. Package and deploy the web application that will utilize a Coherence cache

    This can be done as either a WAR or an EAR file. I chose an EAR file here. You can do this either by dropping the archive into the autodeploy directory of WLS, through the Admin console, using the Weblogic Scripting Tool (WLST) or through an IDE like Eclipse (that uses WLST). I chose the latter for convenience. An easy way to do so is through the wizards provided by the Oracle Enterprise Pack for Eclipse (OEPE) plugin - as shown below:

    Deploy EAR to WLS from Eclipse

    To do this you can just import the example EAR web application project (and its dependent projects), update the library paths for Coherence etc. and deploy to the target WLS.

Testing of the sample web application

To access the TestServlet sample web application go to the URL http://<your hostname>:7001/WLSCohWebApp/TestServlet, for instance http://192.168.56.1:7001/WLSCohWebApp/TestServlet. It outputs the Coherence cluster members and also puts an entry in a cache called "MyCache", whose key is the HTTP session id and whose value is the last modified value of the HTTP session. Its output is shown below:

Web app output

This output tells you how many clients have an active session and how many cluster members are in the cluster. All this cache data is visible across all invocations of this Servlet, so application data that doesn't change frequently, like reference data or values in drop-down lists, is an ideal candidate for caching this way.

Description of the sample web application

The ActiveCache functionality allows cache references to be configured as resources. In the TestServlet the @Resource annotation is used to dynamically inject a reference to the NamedCache "MyCache", as shown below:

mycache resource ref

A Coherence cache reference (NamedCache) resource can also be looked up using a JNDI lookup.
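
Since the screenshots may not reproduce the code, here is a rough sketch of both approaches. The cache name MyCache comes from the description above; the exact resource wiring (the mappedName value, the resource-ref in web.xml and the resulting JNDI name) is an assumption and may need adjusting for your deployment:

import javax.annotation.Resource;
import javax.naming.InitialContext;
import javax.servlet.http.HttpServlet;

import com.tangosol.net.NamedCache;

public class TestServlet extends HttpServlet {

  // Dependency injection of the cache reference (assumed mapping)
  @Resource(mappedName = "MyCache")
  private NamedCache myCache;

  private NamedCache lookupCache() throws Exception {
    // Alternative: JNDI lookup - the name depends on the resource-ref
    // declared in web.xml, so treat this as illustrative only
    return (NamedCache) new InitialContext().lookup("java:comp/env/MyCache");
  }
}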

The shared libraries that the Servlet references are specified in the weblogic-application.xml file:

Shared libs config

The Coherence cluster can be monitored through JMX. To see the metrics for the cluster, open JConsole on the machine you are testing on and select one of the weblogic.server.nodemanager... processes (a Coherence management server could also be started to provide remote access). If you expand the node tree you will see the following metrics:

JConsole view

For explicit startup and shutdown operations a Servlet listener class has been registered that will shut the web application down as a cluster node if the application is un-deployed.
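
The listener itself ships with the sample download; a minimal sketch of the idea (the class name is made up, and CacheFactory.shutdown() is one way to leave the cluster cleanly) might look like this:

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

import com.tangosol.net.CacheFactory;

// Hypothetical listener - registered in web.xml via a <listener> element
public class CoherenceShutdownListener implements ServletContextListener {

  public void contextInitialized(ServletContextEvent event) {
    // Joining the cluster happens lazily on first cache access, so nothing to do here
  }

  public void contextDestroyed(ServletContextEvent event) {
    // Leave the Coherence cluster when the web application is un-deployed
    CacheFactory.shutdown();
  }
}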

I hope this post has been useful and gives you an overview of how the WLS ActiveCache functionality can be used in a web application to provide fast access to frequently used data. Although the cache is used as a "side cache" here - where the data is added to the cache explicitly - it could easily be configured to load the required data into the cache on-demand from a database or some other data source. Other options not explored here, but worth considering, are scripting the cluster setup and startup using WLST.

The sample application can be downloaded from here.
