
Recent Posts

Coherence

Spinning-up a Coherence Cluster with Weblogic Scripting (WLST)

The WebLogic scripting and management features available with Coherence 12c Managed Servers make it easy to create Coherence clusters and manage applications. Using the WebLogic Scripting Tool (WLST), the whole lifecycle of Managed Coherence Servers can be controlled, from creating and starting a Coherence cluster to deploying Coherence applications. WLST scripts are written in Jython and can manipulate WebLogic JMX MBeans to manage WebLogic and Coherence. The flexibility and power they provide make it easy to create, configure and start up a complete Coherence environment in just a few minutes. This post will outline how to do just that, using some sample WLST scripts.

Installing Coherence

So let's get started. If you haven't already done so, you need to install the Java JDK 1.8 and the zipped distribution of WebLogic, which also contains Coherence. You can find these here:

- Java JDK 1.8 (JDK 7 is OK too)
- WebLogic for Developers Zip Install (which includes Coherence)

For the JDK installation just follow the instructions. To keep things really simple for the Coherence installation we will be using the zip installer for WebLogic (and Coherence). This avoids the need to have Administrator rights on Windows etc. The directory you unzip Coherence into will be referred to as the MW_HOME.

Note: Use a zip utility like 7-Zip rather than the Windows zip tool, as the path for some files is too long for Windows to handle.

1. Download the WebLogic zip installation to the directory where you want to install WebLogic and Coherence, and unzip it.
2. Update the env.sh/cmd test script to reflect your installation parameters (the Java Home directory if not using Windows, and the MW_HOME directory where you have unzipped WebLogic to, e.g. MW_HOME=<directory created by unzipping WLS>).
3. Change into the directory created by unzipping the installation and run the script configure.sh/cmd with the -silent option from a console, to do a silent installation of WebLogic and Coherence.
Note: Before running configure.sh/cmd you will need to set your MW_HOME and JAVA_HOME environment variables first. On Windows, for instance: SET JAVA_HOME="c:\Program Files\Java\jdk1.8.0_24". The MW_HOME can be set by running env.sh/cmd. All this information is in the README for the zip distribution here.

Creating the Coherence Cluster

Once you have done this, we can begin creating our cluster and deploying a very simple Coherence application. Here we are only going to set up the Coherence cluster on a local machine, but it is easy to expand this across multiple servers. The cluster also includes a selection of server types and components.

OK, that's enough theory, let's get started with setting up the Coherence cluster. There are a number of ways to do this. We could:

- Use the configuration wizard (MW_HOME/wlserver/common/bin/config.sh) to take you through the process.
- Create a basic WebLogic domain (a configuration environment) using the configuration wizard and use the WebLogic Admin Console to create the Coherence cluster.
- Or use WLST, which is what we are going to do here.

As this is an introduction to using WLST scripting, we'll just create a basic cluster on one machine and leave the process of replicating this to multiple machines to another post. To ensure the scripts run smoothly on platforms without native support for encryption (including a Mac), we'll disable this feature if necessary and use HTTP. On Windows and Linux this isn't necessary. A WLST script can be run using the Java WLST scripting application bundled with WebLogic, so no additional tools are required.
To run a WLST script you need to do the following. First set up your environment, for instance (on a Mac):

  # Set up some environment variables to make it easier to call other scripts
  export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.7.0_51.jdk/Contents/Home
  export MW_HOME=/Users/Dave/apps/Oracle/Middleware1213/Oracle_Home
  # Set up other WebLogic environment variables used by WLST
  source $MW_HOME/wlserver/server/bin/setWLSEnv.sh

Then call the WLST scripting tool, passing your script as a parameter:

  $JAVA_HOME/bin/java weblogic.WLST createManagedCohServers.py

To make this easier the environment settings are contained in a script env.sh/cmd (which you will need to adjust to reflect your environment), and the WLST tool can be called from another script, runWLST.sh/cmd.

First we are going to run createCoherenceCluster.py, to create a Coherence cluster. It is simplified to make it easy to read, but additional error checking etc. could be added to make it more robust. It does the following.

It sets up the environment for creating the Coherence cluster:

  # Setup environment
  execfile('setEnv.py')

It does this by calling another script that loads some parameters from a property file and creates variables that will be used in the installation. It also declares a couple of simple functions. Maintaining settings in a properties file (setup.properties) allows parameters like the number of storage nodes to be easily changed and the cluster re-created.

Then it creates a WebLogic domain. A domain is a configuration environment for a number of other WebLogic components. WebLogic domains are created from templates - here we use a default template as a starting point, to which we will add the other components. To start with we configure the Admin Server and security settings.
  # Read base template
  readTemplate(wlsTemplate)
  # Set Admin Server parameters
  cd('Servers/' + adminServerName)
  set('Name', adminServerName)
  set('ListenAddress', adminServerListenHost)
  set('ListenPort', int(adminServerListenPort))
  setOption('OverwriteDomain', overwriteDomain)
  printInStyle('Configured Admin Server')
  # Configure security for the domain
  cd('/')
  cd(applicationDomainPassPath)
  cmo.setPassword(adminPassword)
  printInStyle('Created password')
  # Create domain and write to file system
  setOption('OverwriteDomain', 'true')
  writeDomain(domainLoc)
  closeTemplate()
  printInStyle('Created domain')

Then it starts a Node Manager to remotely start up and shut down Coherence/WebLogic instances.

Note: A Node Manager enables WebLogic server instances (or Managed Servers) to be controlled remotely. A Node Manager is not tied to a domain but to a machine/host.

  # Start Node Manager
  startNodeManager(debug = 'false', verbose = 'true', NodeManagerHome = nmDir, ListenPort = '5556', SecureListener = useSecurity, NativeVersionEnabled = useSecurity, ListenAddress = host, QuitEnabled = 'true')
  printInStyle('Started node manager')

Next it starts the Admin Server for the domain. This provides a single point for controlling and administering the domain. It only needs to be started for administration tasks (but here it will also be the Coherence management node too).

  # Start Admin Server
  startServer(adminServerName, domainName, connUri, adminUser, adminPassword, domainLoc, jvmArgs='-XX:MaxPermSize=128m, -Xmx512m, -XX:+UseConcMarkSweepGC')
  printInStyle('Started the Admin Server')

Now that the Administration Server and Node Manager are running, the other WebLogic and Coherence resources can be created. First we associate the new domain with the Node Manager.
  # Create Managed Servers and Coherence cluster
  # Connect to Admin Server
  connect(adminUser, adminPassword, connUri)
  # Begin editing session
  edit()
  startEdit()
  # Tell Node Manager about the new domain we have created
  printInStyle('Enrolling this domain with node manager')
  nmEnroll(domainLoc, nmDir)
  cd('/')

Then it creates a machine resource to associate the WebLogic and Coherence resources with a Node Manager:

  # Create a machine for everything to be managed by
  create(machineName, 'Machine')
  machine = cd('/Machines/' + machineName)
  cd('NodeManager/' + machineName)
  set('ListenAddress', host)
  set('NMType', nmType)
  printInStyle('Created machine')
  cd('/')

Next it creates a Coherence cluster. A Coherence cluster in a WebLogic environment, like other WebLogic resources, is defined by a number of JMX MBeans, primarily the CoherenceClusterSystemResource MBean:

  # Create the Coherence cluster
  cohSR = create(cohClusterName, 'CoherenceClusterSystemResource')
  cohBean = cohSR.getCoherenceClusterResource()
  cohCluster = cohBean.getCoherenceClusterParams()
  cohCluster.setTransport('tmb')

It then creates two WebLogic clusters, one for storage nodes and one for proxy nodes. These are not the same as a Coherence cluster - they just make it easier to manage Coherence nodes as a group, as we'll see in a minute:

  # Create a WebLogic cluster for storage members
  clu1 = create(cacheClusterName, 'Cluster')
  clu1.setClusterMessagingMode('unicast')
  clu1.setCoherenceClusterSystemResource(cohSR)
  cohSR.addTarget(clu1)
  # Create a WebLogic cluster for proxy servers
  clu2 = create(proxyClusterName, 'Cluster')
  clu2.setClusterMessagingMode('unicast')
  clu2.setCoherenceClusterSystemResource(cohSR)
  cohSR.addTarget(clu2)
  cohTier = clu2.getCoherenceTier()
  cohTier.setLocalStorageEnabled(false)
  cd('/')

Then it adds the Admin Server to the Coherence cluster as a storage-disabled node and makes it the management node (BTW, 'cmo' is the Current Managed Object).
  # Add Admin Server to cluster - for management purposes
  cd('Servers/' + adminServerName)
  cmo.setCoherenceClusterSystemResource(cohSR)
  cohSR.addTarget(cmo)
  cmo.getCoherenceMemberConfig().setSiteName(siteName)
  cmo.getCoherenceMemberConfig().setRackName(rackName)
  cmo.getCoherenceMemberConfig().setRoleName('Management')
  cmo.getCoherenceMemberConfig().setLocalStorageEnabled(false)
  cmo.getCoherenceMemberConfig().setUnicastListenPort(int(adminUnicastListenPort))
  cmo.getCoherenceMemberConfig().setManagementProxy(true)
  cd('/')

Finally the script creates four Coherence cluster nodes, or WebLogic Managed Servers (depending on the settings in the setup.properties file). A Jython function, createServer(), in the setEnv.py script is called here to simplify the setup. These are WebLogic instances that will host Coherence applications and are equivalent to traditional Coherence nodes:

  # Create storage enabled MCS
  port = int(startPort)
  unicastPort = int(cacheUnicastListenPort)
  for id in range(startId, int(numStorageServers) + 1):
    serverName = storageServerName + '_' + machineName + '_' + str(id)
    createServer(serverName, port, clu1, 'StorageServer', storageArgs + ' -Dtangosol.coherence.member=' + serverName, true, unicastPort)
    port = port + 1
    unicastPort = unicastPort + 2
  # Create storage disabled MCS for proxy servers
  unicastPort = int(proxyUnicastListenPort)
  for id in range(startId, int(numProxyServers) + 1):
    serverName = proxyServerName + '_' + machineName + '_' + str(id)
    createServer(serverName, port, clu2, 'ProxyServer', proxyArgs + ' -Dtangosol.coherence.member=' + serverName, false, unicastPort)
    port = port + 1
    unicastPort = unicastPort + 2
  # Save all domain configuration changes
  save()
  activate()

Finally we shut down the Node Manager and Admin Server, as the creation of our cluster is complete.
  # Stop Admin Server and Node Manager
  printInStyle('Use Security: ' + useSecurity)
  nmConnect(adminUser, adminPassword, host, '5556', domainName, domainLoc, nmType)
  stopNodeManager()
  shutdown(adminServerName)
  printInStyle('Finished')

The WebLogic domain that's just been created is called coh_domain and can be found in the MW_HOME/user_projects/domains directory. If you want to remove it and recreate your domain with different settings, just shut it down, delete the coh_domain directory and remove the domain mapping from MW_HOME/domain-registry.xml.

Starting the Coherence Cluster

To start the cluster, run the start-up script startCoherenceCluster.py in the same way as the cluster creation script. This script starts the Node Manager and Admin Server and then uses the Node Manager to start all the Managed Servers.

  # Starts a Coherence Cluster using Managed Coherence Servers
  # Setup environment
  execfile('setEnv.py')
  # Start Node Manager
  startNodeManager(debug = 'false', verbose = 'true', NodeManagerHome = nmDir, ListenPort = '5556', SecureListener = useSecurity, NativeVersionEnabled = useSecurity, ListenAddress = adminServerListenHost, QuitEnabled = 'true')
  printInStyle('Started node manager')
  # Start Admin Server
  startServer(adminServerName, domainName, connUri, adminUser, adminPassword, domainLoc, jvmArgs='-XX:MaxPermSize=128m, -Xmx512m, -XX:+UseConcMarkSweepGC')
  printInStyle('Started the Admin Server')
  # Connect to Admin Server
  connect(adminUser, adminPassword, connUri)
  # Start Servers
  start(cacheClusterName, 'Cluster')
  start(proxyClusterName, 'Cluster')
  # Deploy GAR
  deploy(appName, appName + '.gar', cacheClusterName, block = 'true')
  deploy(appName, appName + '.gar', proxyClusterName, block = 'true')
  startApplication(appName)
  # Disconnect from Admin Server
  disconnect()
  printInStyle('Finished')

To stop the cluster, just run the script stopCoherenceCluster.py.
Testing your Coherence Cluster

As part of the cluster startup, a simple Coherence application was deployed as a GAR file. A GAR file is just a JAR file with a .gar extension and contains all the classes and configuration files the Coherence application needs. Here is our simple Coherence application's directory structure:

  META-INF/coherence-application.xml
  META-INF/coherence-cache-config.xml
  META-INF/pof-config.xml

Note: Any classes that your Coherence application uses need to be in the base directory of the GAR, and libraries in a lib directory. Here we have neither.

And here is a sample coherence-application.xml file:

  <?xml version="1.0"?>
  <coherence-application xmlns="http://xmlns.oracle.com/coherence/coherence-application">
    <cache-configuration-ref override-property="cache-config/ProxyExample">META-INF/coherence-cache-config.xml</cache-configuration-ref>
    <pof-configuration-ref>META-INF/pof-config.xml</pof-configuration-ref>
  </coherence-application>

There are a number of ways to deploy a Coherence GAR application. You can use the Admin Console, use the Maven plugin to add it as a step in your deployment process, or use a WLST script. Here we've just used a WLST script.

To test your installation you can also run a simple external (extend) client to put and get entries in a "test" cache managed by the Coherence application ExampleGAR1. To run the client, execute the script extend-client.sh/cmd and issue the command "put 1 one" at the prompt.

Managing the Coherence Cluster

To see and manage the cluster you have just created and started, open a browser window and go to the WebLogic Admin Console at http://localhost:7001/console. Enter weblogic/welcome1 at the login screen. If you then navigate to the Environment->Servers screen you will see all the Managed Coherence Servers in your cluster.
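Since a GAR is just a zip archive with a .gar extension, packaging one is easy to sketch. The following self-contained Python example (using the standard zipfile module, with placeholder XML content rather than real configuration) builds an archive with the directory structure described above:

```python
# Minimal sketch: package a GAR, which is just a JAR/zip archive with a
# .gar extension, holding the application's META-INF configuration files.
# The XML content here is a placeholder, not a complete configuration.
import zipfile

def build_gar(gar_path, config_files):
    """Write each (archive name, content) pair into a zip with a .gar name."""
    with zipfile.ZipFile(gar_path, "w") as gar:
        for name, content in config_files.items():
            gar.writestr(name, content)

configs = {
    "META-INF/coherence-application.xml": "<coherence-application/>",
    "META-INF/coherence-cache-config.xml": "<cache-config/>",
    "META-INF/pof-config.xml": "<pof-config/>",
}
build_gar("ExampleGAR1.gar", configs)

with zipfile.ZipFile("ExampleGAR1.gar") as gar:
    print(sorted(gar.namelist()))
```

Any classes your application needs would be added to the archive root in the same way, and libraries under lib/.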
Monitoring the Coherence Cluster

Finally, you can monitor your cluster using the JVisualVM plugin for Coherence by running the script jvvm.sh/cmd, installing the Coherence plugin (in MW_HOME/coherence/plugins/jvisualvm) and creating a JMX connection using the URL echoed by the script (service:jmx:iiop://<admin hostname>:7001/jndi/weblogic.management.mbeanservers.domainruntime) and the credentials used above to connect to the WebLogic Admin Server (weblogic/welcome1).

We have covered a lot of ground here, but I hope you can see the power of WLST, how it can simplify managing Coherence, and the benefits of using Managed Coherence Servers. If you would like to try these scripts out for yourself, you can download them from here.
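As a footnote, the setEnv.py idea of keeping cluster parameters in setup.properties rather than hard-coding them is easy to sketch in plain Python. The property names below are hypothetical stand-ins, not the actual keys used by the sample scripts:

```python
# Sketch of the setEnv.py idea: load key=value pairs from a properties
# file so cluster parameters (e.g. the number of storage servers) can be
# changed without editing the script. Property names are hypothetical.
def load_properties(path):
    props = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#"):
                key, _, value = line.partition("=")
                props[key.strip()] = value.strip()
    return props

with open("setup.properties", "w") as f:
    f.write("# cluster sizing\nnumStorageServers=4\nnumProxyServers=2\n")
print(load_properties("setup.properties")["numStorageServers"])  # '4'
```

Changing a value and re-running the creation script is then all it takes to rebuild the cluster with a different shape.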


Coherence

One Client Two Clusters

Sometimes it's desirable to have a client connect to multiple clusters, either because the data is dispersed or, for instance, because the clusters are in different locations for high availability. For an extend or external client, which connects to a Coherence cluster via a proxy service over TCP, this is very straightforward. This post outlines how to do this and includes a simple example that you can try yourself.

Unless your client is co-located in the same environment as the different clusters, an extend configuration is preferable. This is because the performance of an extend client will not impact a cluster, when it connects and disconnects no data is recovered or re-balanced, and the membership of the cluster will not change.

To configure an extend client to connect to multiple clusters, you just need to define a separate cache configuration file for each cluster, where each configuration file contains a mapping from a cache name to that cluster's remote address. This allows you to have the same cache name for multiple clusters. The service name for each remote cluster needs to be unique, though the remote scheme name can be the same. Obviously the remote address will be different for each cluster too.

Each cache configuration file is loaded by a separate ConfigurableCacheFactory - so each cluster has its own factory. This factory is then used to instantiate a NamedCache for that cluster, which is completely separate from the NamedCaches for the other clusters. If you are using POF, then the client and each cluster will also need to include the same serialisation information, but that's it - it's pretty simple really.
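As a sketch of what the per-cluster configuration might look like, an extend client's cache configuration for one cluster could map a cache name to a remote scheme as below. The addresses, ports and names here are hypothetical; a second file for the other cluster would be identical apart from the service name and remote address:

```xml
<!-- Hypothetical extend-client configuration for cluster A; a second file
     for cluster B would use a different <service-name> and address. -->
<cache-config xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config">
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>test</cache-name>
      <scheme-name>remote-scheme</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <remote-cache-scheme>
      <scheme-name>remote-scheme</scheme-name>
      <!-- Must be unique per cluster -->
      <service-name>ExtendTcpCacheServiceClusterA</service-name>
      <initiator-config>
        <tcp-initiator>
          <remote-addresses>
            <socket-address>
              <address>clusterA.example.com</address>
              <port>9099</port>
            </socket-address>
          </remote-addresses>
        </tcp-initiator>
      </initiator-config>
    </remote-cache-scheme>
  </caching-schemes>
</cache-config>
```

Loading each such file with its own ConfigurableCacheFactory gives the client one independent NamedCache per cluster, even when both caches share the same name.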
Finally, although we have just discussed connecting to and interacting with two separate clusters, there could be more than two, and the client need not be based on Java either - it could be a .NET or C++ client too. I hope this post has been useful and provides you with some additional options when architecting solutions using Coherence. If you want to try this out, you can find a simple one-client, two-cluster Java example here.


Coherence

Dynamic Authorised Hosts

A simple way to help secure a Coherence cluster is to configure the authorised hosts that can be part of the cluster. If a Coherence application tries to join a cluster and is not running on a server in the authorised hosts list, it will be rejected. To set up authorised hosts, the cluster host names or IP addresses can be explicitly added to the authorised hosts section of the tangosol-coherence-override.xml file, as shown below:

  <authorized-hosts>
    <host-address id="1">192.168.56.101</host-address>
    <host-address id="2">192.168.56.1</host-address>
  </authorized-hosts>

But what happens when your cluster needs to grow and you need to add another server that is not in the authorised hosts list, or your cluster topology needs to change in a way that involves a hostname or IP address not in the list? Sometimes a full cluster restart with a different configuration is an option, but in other cases it is not. Fortunately there are a couple of solutions to this problem, apart from a complete cluster restart.

The first is to perform a rolling restart (shutting down each node in turn, changing its configuration and restarting it). But this operation will impact cluster performance, as cache data will need to be recovered and re-balanced.

Another option is to specify a range of authorised hosts rather than explicitly name them. This approach balances the need to secure your cluster with the need to change the configuration at some point in the future. A range of hosts that can join a cluster can be specified as follows:

  <authorized-hosts>
    <host-range>
      <from-address>192.168.56.101</from-address>
      <to-address>192.168.56.190</to-address>
    </host-range>
  </authorized-hosts>

A third option is to use a Coherence filter to determine which hosts are authorised to join the cluster. This allows for a more dynamic configuration, as a custom filter can access a separate file with a list of hosts, an LDAP server or a database.
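As an aside, the semantics of the host-range check are simple: an address is authorised if it falls between the from-address and to-address, inclusive. A small self-contained Python sketch (illustration only, not Coherence code) of that check:

```python
# Standalone sketch (not Coherence code): check whether an IPv4 address
# falls inside an authorised host range, inclusive of both endpoints.
from ipaddress import IPv4Address

def in_host_range(candidate, from_address, to_address):
    """Return True if candidate lies within [from_address, to_address]."""
    return IPv4Address(from_address) <= IPv4Address(candidate) <= IPv4Address(to_address)

print(in_host_range("192.168.56.150", "192.168.56.101", "192.168.56.190"))  # True
print(in_host_range("192.168.57.10", "192.168.56.101", "192.168.56.190"))   # False
```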
By using a filter, it's possible to specify exactly which hosts can run cluster members, while at the same time allowing this list to be changed dynamically. A filter to determine authorised hosts can be added to a cluster configuration (the tangosol-coherence-override.xml file) as shown below:

  <authorized-hosts>
    <host-filter>
      <!-- Custom Filter to perform authorised host check -->
      <class-name>com.oracle.coherence.test.AuthroizedHostsFilter</class-name>
      <init-params>
        <!-- URL for authorised host list -->
        <init-param>
          <param-type>String</param-type>
          <param-value>file:/Users/Dave/workspace/CoherenceAuthorizedHostsTest/hosts.txt</param-value>
        </init-param>
        <!-- Time in ms between re-reading authorised hosts -->
        <init-param>
          <param-type>Int</param-type>
          <param-value>10000</param-value>
        </init-param>
      </init-params>
    </host-filter>
  </authorized-hosts>

And an example custom filter that periodically checks a list of hosts at a URL could look like this:

  /**
   * File: AuthorizedHostFilter.java
   *
   * Copyright (c) 2012. All Rights Reserved. Oracle Corporation.
   *
   * Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
   *
   * This software is the confidential and proprietary information of Oracle
   * Corporation. You shall not disclose such confidential and proprietary
   * information and shall use it only in accordance with the terms of the license
   * agreement you entered into with Oracle Corporation.
   *
   * Oracle Corporation makes no representations or warranties about the
   * suitability of the software, either express or implied, including but not
   * limited to the implied warranties of merchantability, fitness for a
   * particular purpose, or non-infringement. Oracle Corporation shall not be
   * liable for any damages suffered by licensee as a result of using, modifying
   * or distributing this software or its derivatives.
   *
   * This notice may not be removed or altered.
   */
  package com.oracle.coherence.test;

  import java.io.BufferedReader;
  import java.io.InputStreamReader;
  import java.net.URL;
  import java.util.ArrayList;
  import java.util.Collections;
  import java.util.List;
  import java.util.Timer;
  import java.util.TimerTask;

  import com.tangosol.net.CacheFactory;
  import com.tangosol.util.Filter;

  /**
   * Simple filter to check if a host is in an authorised host list.
   * Note: this implementation checks the IP address of a host, not the hostname.
   *
   * @author Dave Felcey
   */
  public class AuthroizedHostsFilter implements Filter {
    /**
     * List of authorised hosts. The list is synchronised in case an update
     * and a check are being performed at the same time.
     */
    private List hosts = Collections.synchronizedList(new ArrayList());

    /**
     * URL where the authorised host list is located.
     */
    private String hostsFileUrl;

    /**
     * Timer used to re-read the authorised hosts.
     */
    private Timer timer = new Timer();

    /**
     * Constructor for AuthroizedHostsFilter.
     * @param hostsFileUrl the URL where the authorised hosts list is located
     * @param reLoadInterval interval in ms at which the authorised hosts list is re-read
     */
    public AuthroizedHostsFilter(String hostsFileUrl, int reLoadInterval) {
      this.hostsFileUrl = hostsFileUrl;
      // Load values
      load();
      // Schedule periodic reload
      timer.scheduleAtFixedRate(new TimerTask() {
        public void run() {
          CacheFactory.log("About to refresh host list");
          load();
        }
      }, reLoadInterval, reLoadInterval);
    }

    /**
     * Loads the authorised host list.
     */
    private void load() {
      try {
        CacheFactory.log("Current dir: " + System.getProperty("user.dir"), CacheFactory.LOG_DEBUG);
        CacheFactory.log("Loading hosts file from URL: " + hostsFileUrl, CacheFactory.LOG_DEBUG);
        URL url = new URL(hostsFileUrl);
        BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()));
        List newHosts = new ArrayList();
        String inputLine;
        while ((inputLine = in.readLine()) != null) {
          CacheFactory.log("Host IP address: " + inputLine, CacheFactory.LOG_DEBUG);
          newHosts.add(inputLine);
        }
        in.close();
        // Replace the previous list, so hosts removed from the file
        // are no longer authorised after a refresh
        hosts.clear();
        hosts.addAll(newHosts);
      }
      catch (Exception e) {
        e.printStackTrace();
      }
    }

    @Override
    public boolean evaluate(Object host) {
      String h = host.toString();
      h = h.substring(h.indexOf('/') + 1);
      CacheFactory.log("Validating host IP address: " + host + " (" + h + "), " + hosts.contains(h), CacheFactory.LOG_DEBUG);
      return hosts.contains(h);
    }
  }

The dynamic lookup in the example above could be secured further by requiring some kind of credentials for the lookup, or, instead of polling the resource, some kind of change notification could be used to tell the filter when to re-read the authorised hosts. However, in many cases the above approach should be adequate. A full example of the above filter is available from here.
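To make the moving parts of this pattern concrete, here is a self-contained Python sketch (an illustration of the idea, not Coherence code) of the filter's core logic: the authorised list is re-read from a file and membership is checked by IP address. A real filter would trigger the reload from a timer, as the Java example does.

```python
# Self-contained sketch (not Coherence code) of the dynamic filter's core
# logic: the authorised list is re-read from a file and membership is
# checked by IP address.
from pathlib import Path

class AuthorizedHosts:
    def __init__(self, hosts_file):
        self.hosts_file = hosts_file
        self.hosts = set()
        self.reload()

    def reload(self):
        # Replace (rather than append to) the set, so hosts removed
        # from the file lose access on the next refresh
        self.hosts = set(Path(self.hosts_file).read_text().split())

    def evaluate(self, ip_address):
        return ip_address in self.hosts

# Simulate an edit to hosts.txt followed by a refresh
Path("hosts.txt").write_text("192.168.56.101\n192.168.56.1\n")
f = AuthorizedHosts("hosts.txt")
print(f.evaluate("192.168.56.101"))  # True
Path("hosts.txt").write_text("192.168.56.101\n")
f.reload()
print(f.evaluate("192.168.56.1"))    # False - removed from the list
```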


Coherence

Throttling Cache Events

The real-time eventing feature in Coherence is great for relaying state changes to other systems or to users. However, sometimes not all changes need to, or can, be sent to consumers. For instance:

- Rapid changes may not be consumable or interpretable as fast as they are being sent. A user looking at changing stock prices may only be able to interpret and react to one change per second.
- A client may be using a low-bandwidth connection, so rapidly sending events will only result in them being queued and delayed.
- A large number of clients may need to be notified of state changes, and sending 100 events per second to 1000 clients cannot be supported with the available hardware, but 10 events per second to 1000 clients can. Note this example assumes that many of the state changes are to the same value.

One simple approach to throttling Coherence cache events is to use a cache store to capture changes to one cache (the data cache) and insert those changes periodically into another cache (the events cache). Consumers interested in state changes to entries in the first cache register an interest (an event listener) against the second, events, cache. By using the cache store write-behind feature, rapid updates to the same cache entry are coalesced, so that updates are merged and written to the events cache at the configured interval. The time interval at which changes are written to the events cache can easily be configured using the write-behind delay time in the cache configuration, as shown below.
  <caching-schemes>
    <distributed-scheme>
      <scheme-name>CustomDistributedCacheScheme</scheme-name>
      <service-name>CustomDistributedCacheService</service-name>
      <thread-count>5</thread-count>
      <backing-map-scheme>
        <read-write-backing-map-scheme>
          <scheme-name>CustomRWBackingMapScheme</scheme-name>
          <internal-cache-scheme>
            <local-scheme />
          </internal-cache-scheme>
          <cachestore-scheme>
            <class-scheme>
              <scheme-name>CustomCacheStoreScheme</scheme-name>
              <class-name>com.oracle.coherence.test.CustomCacheStore</class-name>
              <init-params>
                <init-param>
                  <param-type>java.lang.String</param-type>
                  <param-value>{cache-name}</param-value>
                </init-param>
                <init-param>
                  <param-type>java.lang.String</param-type>
                  <!-- The name of the cache to write events to -->
                  <param-value>cqc-test</param-value>
                </init-param>
              </init-params>
            </class-scheme>
          </cachestore-scheme>
          <write-delay>1s</write-delay>
          <write-batch-factor>1</write-batch-factor>
        </read-write-backing-map-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
  </caching-schemes>

The cache store implementation to perform this throttling is trivial and only involves overriding the basic cache store functions.
  public class CustomCacheStore implements CacheStore {
    private String publishingCacheName;
    private String sourceCacheName;

    public CustomCacheStore(String sourceCacheName, String publishingCacheName) {
      this.publishingCacheName = publishingCacheName;
      this.sourceCacheName = sourceCacheName;
    }

    @Override
    public Object load(Object key) {
      return null;
    }

    @Override
    public Map loadAll(Collection keyCollection) {
      return null;
    }

    @Override
    public void erase(Object key) {
      if (!sourceCacheName.equals(publishingCacheName)) {
        CacheFactory.getCache(publishingCacheName).remove(key);
        CacheFactory.log("Erasing entry: " + key, CacheFactory.LOG_DEBUG);
      }
    }

    @Override
    public void eraseAll(Collection keyCollection) {
      if (!sourceCacheName.equals(publishingCacheName)) {
        for (Object key : keyCollection) {
          CacheFactory.getCache(publishingCacheName).remove(key);
          CacheFactory.log("Erasing collection entry: " + key, CacheFactory.LOG_DEBUG);
        }
      }
    }

    @Override
    public void store(Object key, Object value) {
      if (!sourceCacheName.equals(publishingCacheName)) {
        CacheFactory.getCache(publishingCacheName).put(key, value);
        CacheFactory.log("Storing entry (key=value): " + key + "=" + value, CacheFactory.LOG_DEBUG);
      }
    }

    @Override
    public void storeAll(Map entryMap) {
      if (!sourceCacheName.equals(publishingCacheName)) {
        CacheFactory.getCache(publishingCacheName).putAll(entryMap);
        CacheFactory.log("Storing entries: " + entryMap, CacheFactory.LOG_DEBUG);
      }
    }
  }

As you can see, each cache store operation on the data cache results in a similar operation on the events cache. This is a very simple pattern with a lot of additional possibilities, but it also has a few drawbacks you should be aware of:

- This event throttling implementation will use additional memory, as a duplicate copy of the entries held in the data cache needs to be held in the events cache too (or two copies if the events cache has backups).
- A data cache may already use a cache store, in which case a "multiplexing cache store pattern" must also be used to send changes to both the existing and the throttling cache store.
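The coalescing behaviour of write-behind, which is the heart of this throttling pattern, can be illustrated with a few lines of self-contained Python (a sketch of the idea, not Coherence code): rapid updates to the same key are buffered, and only the latest value per key is published when the delay expires.

```python
# Self-contained sketch of write-behind coalescing (not Coherence code):
# updates are buffered per key, and a periodic flush publishes only the
# latest value for each key to the events cache.
class ThrottlingBuffer:
    def __init__(self):
        self.pending = {}       # key -> latest value since the last flush
        self.events_cache = {}  # stand-in for the events cache

    def store(self, key, value):
        # Rapid updates to the same key overwrite each other here,
        # so intermediate values are never published.
        self.pending[key] = value

    def flush(self):
        # Called once per write-delay interval; one event per dirty key.
        published = len(self.pending)
        self.events_cache.update(self.pending)
        self.pending.clear()
        return published

buf = ThrottlingBuffer()
for price in (100, 101, 102, 103):  # four rapid updates to one key
    buf.store("ORCL", price)
print(buf.flush())                  # 1 -> a single coalesced event
print(buf.events_cache["ORCL"])     # 103 -> only the latest value
```

In Coherence the buffering and periodic flush are done for you by the read-write backing map, and the interval is the write-delay setting.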
If you would like to try out this throttling example, you can download it here. I hope it's useful, and let me know if you spot any further optimisations.


Coherence

Testing the Coherence Simple Partition Assignment Strategy

The Simple Partition Assignment Strategy introduced with Coherence 3.7.1 allows Coherence to re-balance cache data partitions for primary and backup data using a centralised partitioning strategy. Previously (and currently the default), each cluster member determined autonomously (by itself) how cache data partitions were distributed, for instance by taking the total number of partitions and dividing it by the number of cluster members. A centralised strategy allows the complete topology of the cluster to be taken into account.

Like the default autonomous strategy, the simple partition assignment strategy tries to distribute cache data partitions fairly, but as far as possible it also tries to ensure "data safety" by placing primary and backup copies of data on different sites, racks and machines. So if there are multiple sites, then primary and backup data will be put on different sites. If a cluster spans multiple racks, then primary and backup partitions will be put on different racks, so a complete rack failure will not result in data loss. Lastly, if all the machines are on the same rack, then primary and backup partitions will be placed on different machines.

It should be noted that in order for the simple partition assignment strategy to determine where cache data partitions should be located, each member in the cluster should specify its full identity, i.e. the name of the machine it is on, its rack and its site, using member identity parameters. These are usually set as system properties (as shown below) but can also be placed in the cluster override file.

  -Dtangosol.coherence.cluster=ProdAppCluster
  -Dtangosol.coherence.site=MainDataCentre
  -Dtangosol.coherence.rack=Rack07
  -Dtangosol.coherence.machine=Blade11

Normally, testing that the simple partition assignment strategy works would involve quite a bit of setup.
However, a small test framework called "littlegrid" (written by Jon Hall) enables the whole test to be run in a single JVM and written as a simple JUnit test. Below is the cache configuration file that introduces the new simple assignment strategy class to the distributed service CustomDistributedCacheService:

```xml
<?xml version="1.0"?>
<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
  xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config
    http://xmlns.oracle.com/coherence/coherence-cache-config/1.0/coherence-cache-config.xsd">
  <defaults>
    <serializer>pof</serializer>
  </defaults>
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>*</cache-name>
      <scheme-name>CustomDistributedCacheScheme</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <distributed-scheme>
      <scheme-name>CustomDistributedCacheScheme</scheme-name>
      <service-name>CustomDistributedCacheService</service-name>
      <thread-count>5</thread-count>
      <partition-assignment-strategy>
        <instance>
          <class-name>com.tangosol.net.partition.SimpleAssignmentStrategy</class-name>
        </instance>
      </partition-assignment-strategy>
      <partition-listener>
        <class-name>com.oracle.coherence.test.PartitionLostListener</class-name>
      </partition-listener>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
  </caching-schemes>
</cache-config>
```

Now here is the JUnit test setup using the "littlegrid" test framework to simulate a rack failure. Three racks, each containing 2 machines with 2 nodes per machine, are created, along with a management node and (obviously) a storage-disabled test client. The last 2 nodes are in a separate rack - the "default rack".
The test involves the following steps:

- Some data is added to a test cache so that data is held by each partition
- A random rack (from the 3 created) is selected and all the nodes in that rack are shut down at the same time
- After a short pause, the JMX MBean "Partition Lost" - shown below - is checked on all remaining storage members to ensure that no partitions were lost

The last step is possible because a Partition Event Listener is registered with the partitioned cache service (CustomDistributedCacheService) in the cache configuration file. This Partition Event Listener also exposes itself as a JMX MBean, making it possible to check for this kind of event. The code for the test is shown below:

```java
package com.oracle.coherence.test;

import static org.hamcrest.CoreMatchers.is;
import static org.junit.Assert.assertThat;

import java.lang.management.ManagementFactory;
import java.util.HashMap;
import java.util.Map;
import java.util.Random;
import java.util.Set;
import java.util.concurrent.TimeUnit;

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import org.littlegrid.ClusterMemberGroup;
import org.littlegrid.ClusterMemberGroupUtils;

import com.tangosol.net.CacheFactory;
import com.tangosol.net.Member;
import com.tangosol.net.NamedCache;
import com.tangosol.net.PartitionedService;

/**
 * Coherence simple assignment strategy tests.
 *
 * @author Dave Felcey
 */
public class TestCase {
  private ClusterMemberGroup memberGroup;
  private NamedCache cache = null;
  private int[][] racks = null;

  @Before
  public void setUp() {
    // Create member group now, so later code simply merges into this group
    memberGroup = ClusterMemberGroupUtils.newBuilder()
        .buildAndConfigureForNoClient();

    final int numberOfRacks = 3;
    final int numberOfMachines = 2;
    final int numberOfStorageEnabledMembersPerMachine = 2;
    final int expectedClusterSize = (numberOfRacks * numberOfMachines
        * numberOfStorageEnabledMembersPerMachine);

    racks = new int[numberOfRacks][numberOfMachines
        * numberOfStorageEnabledMembersPerMachine];

    // Start up the storage enabled members on different racks and machines
    for (int rack = 0; rack < numberOfRacks; rack++) {
      for (int machine = 0; machine < numberOfMachines; machine++) {
        // Build using the identity parameters
        memberGroup.merge(ClusterMemberGroupUtils
            .newBuilder()
            .setFastStartJoinTimeoutMilliseconds(100)
            .setSiteName("site1")
            .setMachineName("r-" + rack + "-m-" + machine)
            .setRackName("r-" + rack)
            .setStorageEnabledCount(numberOfStorageEnabledMembersPerMachine)
            .buildAndConfigureForNoClient());
      }

      // Save member id's for rack
      System.arraycopy(memberGroup.getStartedMemberIds(),
          rack * numberOfMachines * numberOfStorageEnabledMembersPerMachine,
          racks[rack], 0,
          numberOfMachines * numberOfStorageEnabledMembersPerMachine);
    }

    // Create management and client members with default rack and machine
    // identities
    memberGroup.merge(ClusterMemberGroupUtils.newBuilder()
        .setJmxMonitorCount(1).setLogLevel(9)
        .buildAndConfigureForStorageDisabledClient());

    assertThat(
        "Cluster size check - includes storage disabled client and JMX monitor",
        CacheFactory.ensureCluster().getMemberSet().size(),
        is(expectedClusterSize + 2));
    assertThat(
        "Member group check size is as expected - includes JMX monitor, but not storage disabled client",
        memberGroup.getStartedMemberIds().length,
        is(expectedClusterSize + 1));

    cache = CacheFactory.getCache("test");
  }

  /**
   * Demonstrate SimpleAssignmentStrategy.
   */
  @Test
  public void testSimpleAssignmentStrategy() throws Exception {
    final Map<Integer, String> entries = new HashMap<Integer, String>();
    final int totalEntries = 1000;

    // Load test data
    for (int i = 0; i < totalEntries; i++) {
      entries.put(i, "entry " + i);
    }
    cache.putAll(entries);
    assertThat(cache.size(), is(totalEntries));

    // Kill rack - if partition lost then will exit
    Random random = new Random();
    int rack = Math.abs(random.nextInt() % racks.length);
    System.out.println("Stopping rack: " + rack);
    memberGroup.stopMember(racks[rack]);

    System.out.println("Pausing to allow data to be recovered");
    TimeUnit.SECONDS.sleep(memberGroup
        .getSuggestedSleepAfterStopDuration() * racks[rack].length * 10);

    assertThat("Partition lost", getPartitionLostEventCount(), is(0));
    assertThat("Cache size", cache.size(), is(totalEntries));
  }

  @After
  public void tearDown() {
    // Quick stop all - members *don't* leave the cluster politely - done for
    // this test so it shuts down quicker
    memberGroup.stopAll();
    ClusterMemberGroupUtils
        .shutdownCacheFactoryThenClusterMemberGroups(memberGroup);
  }

  /**
   * Get the number of partitions lost.
   *
   * @return partitions lost
   */
  private int getPartitionLostEventCount() throws Exception {
    // Create an MBeanServerConnection
    final MBeanServerConnection connection = ManagementFactory
        .getPlatformMBeanServer();

    @SuppressWarnings("unchecked")
    final Set<Member> members = ((PartitionedService) cache
        .getCacheService()).getOwnershipEnabledMembers();
    final String serviceName = cache.getCacheService()
        .getInfo().getServiceName();

    int lostPartitions = 0;

    // Get any partition lost event information from cluster members
    for (Member member : members) {
      String path = "Coherence:type=PartitionListener,name=PartitionLostCount,service="
          + serviceName + ",id=PartitionLost,nodeId=" + member.getId();
      lostPartitions += (Integer) connection.getAttribute(
          new ObjectName(path), "PartitionLostCount");
    }

    return lostPartitions;
  }
}
```

By extending the pause time after the rack has been stopped you can view the PartitionLost MBean information in JConsole, as shown below:

The PartitionLostListener (based on an example by Andrew Wilson) looks like this:

```java
package com.oracle.coherence.test;

import java.util.concurrent.atomic.AtomicInteger;

import com.tangosol.net.CacheFactory;
import com.tangosol.net.management.Registry;
import com.tangosol.net.partition.PartitionEvent;
import com.tangosol.net.partition.PartitionListener;

/**
 * Partition lost listener.
 */
public class PartitionLostListener implements PartitionListener,
    PartitionLostListenerMBean {
  private final AtomicInteger lostCount = new AtomicInteger();
  private String serviceName;

  public PartitionLostListener() {
  }

  @Override
  public synchronized void onPartitionEvent(PartitionEvent partitionEvent) {
    // Initialize if necessary
    if (serviceName == null) {
      serviceName = partitionEvent.getService().getInfo().getServiceName();

      System.out.println("Registering JMX");
      Registry reg = CacheFactory.getCluster().getManagement();
      reg.register(
          reg.ensureGlobalName("Coherence:type=PartitionListener,name=PartitionLostCount,service="
              + serviceName + ",id=PartitionLost"), this);
      System.out.println("Registered JMX");
    }

    // Handle the event
    if (partitionEvent.getId() == PartitionEvent.PARTITION_LOST) {
      System.out.println("Partition lost: " + partitionEvent);
      lostCount.addAndGet(partitionEvent.getPartitionSet().cardinality());
    }
  }

  // Returns any partitions lost and resets
  public synchronized Integer getPartitionLostCount() {
    int temp = lostCount.get();
    lostCount.set(0);
    return temp;
  }
}
```

If you would like to try this for yourself you can download the complete example from here and find the "littlegrid" test tool here. Please note that to properly test the Simple Partition Assignment Strategy you should get the latest release of Coherence, which is available from Oracle Support and at the time of writing is 3.7.1.4. Unfortunately, patch releases are not available from OTN. Another tip for using the simple partition assignment strategy with large clusters is to use the distribution quorum feature during cluster startup. It stops partition re-balancing from taking place until a cluster membership threshold has been reached, eliminating unnecessary network traffic and processing. A good starting point for an n-rack cluster would be a threshold of (n - 1) * (number of machines per rack) - 1. The restore and read quorum features can also be used to mitigate the effects of a "split brain" scenario by making data read-only and enabling a cluster to re-form should this occur.
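The threshold rule of thumb above can be sketched as a one-line calculation. Note that the grouping (n - 1) * machinesPerRack - 1 is my reading of the rule; treat it as a starting point to tune, not a hard formula.

```java
// Hedged sketch of the suggested distribution quorum threshold for an
// n-rack cluster: (n - 1) * machinesPerRack - 1 machines, i.e. hold off
// re-balancing until roughly all but one rack's worth of machines,
// less one, have joined the cluster.
public class DistributionQuorumSketch {
    static int suggestedThreshold(int racks, int machinesPerRack) {
        return (racks - 1) * machinesPerRack - 1;
    }

    public static void main(String[] args) {
        // e.g. 3 racks with 4 machines each -> threshold of 7 machines
        System.out.println(suggestedThreshold(3, 4));
    }
}
```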


Coherence

Coherence Clustering Principles

Overview

A Coherence environment consists of a number of components. Below I'll describe how they relate to each other and what the terms mean. But just to give you a flavor, here they are displayed as a hierarchy. Distributed cache services with the same name will cluster together to manage their cache data, so you can have multiple clustered services across Coherence nodes. This article will explain how these components work together and how applications interact with them.

What do we mean by a Coherence Cluster? It's a set of configuration parameters that control the operational and run-time settings for clustering, communication, and data management services. These are defined in the Coherence operational override file and include such things as:

- Multicast or unicast addresses for locating cluster members
- Cluster identity information
- Management settings
- Networking parameters
- Security information
- Logging information
- Service configuration parameters

These settings are used by Coherence services to communicate with other nodes, to determine cluster membership, for logging and for other operational parameters, and are similar to the Domain configuration used by WebLogic Server. They also apply to the entire Coherence cluster node, which usually means the whole JVM. Although a cluster node will usually equate to a JVM, it is possible to have more than one node per JVM by loading the Coherence libraries multiple times in different isolated class loaders, e.g. a "child first" class loader. However, this is usually only done within the context of an application server or a test framework. The coherence.jar file contains a default operational override file - tangosol-coherence-override.xml - that will be used if another is not detected on the CLASSPATH before the coherence.jar library is loaded.
In actual fact there are three versions of this file:

- tangosol-coherence-override-eval.xml
- tangosol-coherence-override-dev.xml
- tangosol-coherence-override-prod.xml

Which one is selected depends on the mode Coherence is running in (the default is developer mode) and can be set using the system property -Dtangosol.coherence.mode=prod. Note: It's important to ensure that production mode is selected for production usage, as in production mode certain communication timeouts are extended so that Coherence will wait longer for services to recover - amongst other things.

What is a Coherence Service? A Coherence service is a thread (and sometimes a pool of worker threads) that has a specific function. This can be:

- Connectivity Services
  - Clustering Service - manages cluster membership communications. There is exactly one of these services per JVM (or within a class loader). This service tracks which other nodes are in the cluster, node failures, etc.
  - Proxy Services - manage external connections into the cluster from "extend" clients
- Processing Services
  - Invocation Service - executes processing requests/tasks on the node in the cluster it is running on
- Data Services
  - Distributed Cache Service - manages cache data for caches created from the scheme it is defined in
  - Replicated Cache Service - provides synchronous replication cache services, where all managed cache data exists on every cluster node

This document will focus on Distributed Cache Services that manage distributed caches defined from a distributed scheme definition. When a scheme definition (as shown below) is parsed, Coherence instantiates a service thread with the name specified. This service thread will manage the data from caches created using the scheme definition. So how does this all happen then?
If an application using Coherence calls:

```java
NamedCache tradeCache = CacheFactory.getCache("trade-cache");
```

a couple of things happen:

When the Coherence classes are loaded they will, by default, search the CLASSPATH for the coherence-cache-config.xml file - which is the name specified in the default operational override file. The first instance that is found will be used. However, if one is specified using the system property -Dtangosol.coherence.cacheconfig=<cache config file>, that cache configuration file will be used instead. A cache configuration can also be explicitly loaded from the CLASSPATH, as follows:

```java
// Load application X's cache configuration file
ClassLoader loader = getClass().getClassLoader();

// Note: it's a URI that is specified for the location
// of the cache configuration file
ConfigurableCacheFactory factory = CacheFactory.getCacheFactoryBuilder()
    .getConfigurableCacheFactory("applicationx-cache-config", loader);
NamedCache localCacheInstance = factory.ensureCache("trade-cache", loader);
```

When the cache configuration file is parsed, distributed cache service threads are started for all the cache "schemes" that are defined, if the autostart parameter is set to true (by default it is false).

```xml
<caching-schemes>
  <distributed-scheme>
    <scheme-name>DefaultDistributedCacheScheme</scheme-name>
    <service-name>DefaultDistributedCacheService</service-name>
    <autostart>true</autostart>
  </distributed-scheme>
</caching-schemes>
```

Note: For data services the autostart flag is only observed for distributed caches, so a replicated cache service would automatically be started. Each cache service started is given a name - or one is created if none is specified. This service thread then attempts to join with other services that have the same name in the Coherence cluster. If none are found, i.e. it's the first to start, it will become the "senior member" of the service. To illustrate this, take a look at a sample log statement when Coherence starts a service.
```
2012-02-02 11:00:04.666/16.462 Oracle Coherence GE 3.7.1.0 <D5> (thread=DistributedCache:DefaultDistributedCacheService, member=1): Service DefaultDistributedCacheService joined the cluster with senior service member 1
```

In this case a distributed cache service called DefaultDistributedCacheService has started on member 1 of the cluster (the first JVM). As it's the first service with this name to start, it becomes the senior member - which means it has a couple of extra responsibilities, like sending out heartbeats. Once the cache services have been started, Coherence will try to match the cache name that has been passed, in this case "trade-cache", with the appropriate cache scheme (and service) that will manage this cache. It uses the cache scheme mappings section of the cache configuration file to do this, with wildcard matching (if necessary) to identify the right cache scheme. Note: Regular expression parsing is not used and wildcards cannot be used at the start of the cache name.

```xml
<caching-scheme-mapping>
  <cache-mapping>
    <cache-name>trade-*</cache-name>
    <scheme-name>DefaultDistributedCacheScheme</scheme-name>
  </cache-mapping>
</caching-scheme-mapping>
```

Once the correct cache scheme has been matched, a reference to an existing cache managed by the cache service for this scheme will be returned, or a new cache created using the parameters of the cache scheme. Cache services that use the same cluster name should scope their names to prevent name clashes. This is so that multiple applications sharing the same cluster don't inadvertently use the same service name - and unintentionally share data. Each application that wishes to cache data in isolation from others can load its own cache configuration file (as shown above) and specify a unique scope name. This will then be used as a prefix to cluster service names to prevent service name collisions.
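The cache-name matching rules described above (prefix-only wildcards, no regular expressions) can be illustrated with a short sketch. This is not Coherence's actual implementation, just a hypothetical model of the behaviour:

```java
// Illustrative sketch of cache-name to cache-scheme matching: a mapping
// like "trade-*" matches by prefix, "*" matches everything, plain names
// must match exactly, and regular expressions are not supported.
public class CacheMappingSketch {
    static boolean matches(String pattern, String cacheName) {
        if (pattern.equals("*")) {
            return true; // catch-all mapping
        }
        if (pattern.endsWith("*")) {
            // trailing wildcard: prefix match on the part before '*'
            return cacheName.startsWith(pattern.substring(0, pattern.length() - 1));
        }
        return pattern.equals(cacheName); // exact match otherwise
    }

    public static void main(String[] args) {
        System.out.println(matches("trade-*", "trade-cache")); // true
        System.out.println(matches("trade-*", "order-cache")); // false
    }
}
```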
Below is an example scope definition in a cache configuration file:

```xml
<?xml version="1.0"?>
<cache-config>
  <scope-name>com.oracle.custom</scope-name>
  ...
</cache-config>
```

And in JConsole you can see the scope prefix being used in conjunction with service names - circled in red. Note: It is not used as a prefix to cache names. By scoping service names, multiple services can co-exist in the same Coherence cluster without interacting, allowing individual applications to create and manage their own caches in isolation. Below we have 3 nodes (JVMs), each running 2 partitioned cache services. Each cache service manages 2 caches, and the data (key/value pairs) stored in these caches is shared evenly between the nodes. Key: S=Service, C=Cache. If you want to find out more about caches, how they relate to maps and how they are partitioned across cache services, see this excellent blog article (http://blackbeanbag.net/wp/2010/07/01/partitions-backing-maps-and-services-oh-my/) covering these topics.


Coherence

Using Weblogic Server ActiveCache for Coherence

As web applications are architected or re-architected, caching is often added to reduce the data access bottlenecks that hinder scalability and performance. WebLogic Server (WLS) now has a range of features, collectively called "ActiveCache", that provide Coherence cache integration in a Java Servlet context, a JPA context (TopLink Grid) and an HTTP session context (Coherence*Web). This post will concentrate on the first of these integration points and show how a J2EE web application can be packaged, deployed and managed in WLS to seamlessly take advantage of a data caching tier.

Deployment Architecture

So how do all the parts work together? With WLS 10.3.4+ you can now use its NodeManager (a process monitoring and management service) to manage components other than WLS, like Coherence. There are also additional administration pages in the WLS Administration Console (http://localhost:7001/console) to configure Coherence clusters (via the Environment > Coherence Clusters menu) and Coherence cluster nodes (via the Environment > Coherence Servers menu). These allow a cluster to be defined and all the cluster nodes to be started and stopped from the WLS Admin console. In this example the Servlet and all the Coherence cache nodes are part of the same Coherence cluster, but the Servlet does not hold any data, i.e. its storage attribute is disabled. This means that when it is deployed or un-deployed no cache data needs to be moved between the cluster nodes - because it will not hold any data. This topology is shown below. Note: the diagram above shows how HTTP session data can be cached in Coherence, but the topology is identical for this use case, where web application data is being cached.
Setup

These instructions make a few assumptions, namely that:

- WLS 10.3.5 has been installed
- Java JDK 1.6+ is installed
- A domain has been defined - in my case a domain called base_domain
- Coherence 3.7+ for Java has been installed - this just requires unzipping it
- The Eclipse Indigo IDE (3.7.0) is installed - optional

To set up the environment to deploy the sample application the following steps are required.

Configure and start the WLS NodeManager that will start and stop the Coherence cluster nodes. This step may be obvious for readers who are familiar with WLS, but just for completeness these are the steps I needed to do on a Mac. First I set the environment parameters used to start up WLS:

```shell
export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/1.6/Home
# Dir where WLS installed to
export MW_HOME=/Users/Dave/apps/wls1035_dev/
export USER_MEM_ARGS="-Xmx1024m -XX:PermSize=1024m"
```

Then I set the NodeManager parameters in the nodemanager.properties file ($MW_HOME/wlserver/common/nodemanager/nodemanager.properties) so that it did not try to use the native libraries (this step is platform specific) and so that SSL was not used - to make the setup simpler:

```
NativeVersionEnabled=false
PropertiesVersion=10.3
ListenAddress=localhost
SecureListener=false
DomainRegistrationEnabled=true
```

I also added the WLS domain that my application will be deployed to to the nodemanager.domain properties file ($MW_HOME/wlserver/common/nodemanager/nodemanager.domain):

```
base_domain=/Users/Dave/apps/mywls/user_projects/domains/base_domain
```

Finally I started the NodeManager for my laptop (note that a NodeManager is associated with a machine, not a domain):

```shell
cd $MW_HOME/wlserver/server/bin
./startNodeManager.sh
```

Configure a new Machine definition using the WLS Admin console. Next you need to define a Machine definition that will enable a NodeManager to be used to start the Coherence cluster.
As was mentioned above, a NodeManager is associated with a server, not a domain, and "A machine is the logical representation of the computer that hosts one or more WebLogic Server instances (servers)." To configure a Machine, navigate to the Machines screen in the WLS Admin console via the Environment > Machines menu, click on the New button and specify:

Screen 1
- Name: MyMachine - the name of your machine
- Machine OS: Other - I did not specify one here as there are no native libraries for Mac OS X

Screen 2
- Type: Plain - so as to avoid additional security configuration

Note: I left all the other parameters at their defaults. As the NodeManager was started with the default parameters, it should be found without any additional setup, and the monitoring page for the machine should then show that the NodeManager is reachable. If the status is not reachable, take a look at the output of the NodeManager if you started it from a console, or at the NodeManager log file ($MW_HOME/wlserver/common/nodemanager/nodemanager.log).

Configure and start the Coherence cluster. To start the Coherence cluster that will be used to cache the web application data, you need to create a Coherence cluster configuration and then define and start the Coherence servers. To configure the Coherence cluster, navigate to the Coherence cluster definition screen via the Environment > Coherence Clusters menu. The configuration parameters I used were:

- Name: MyCluster - this is the name of the cluster and will help prevent accidental clustering between different environments
- Use a Custom Cluster Configuration File: /Users/Dave/Documents/workspace/WLSCohWebAppEAR/EarContent/APP-INF/classes/tangosol-coherence-override.xml - this is the Coherence operational override file for the cluster, NOT the cache configuration file

To configure the Coherence cluster servers, navigate to the Coherence servers screen via the Environment > Coherence Servers menu.
The General configuration parameters I used were:

- Name: CacheServer1 - this can be any name
- Machine: MyMachine - select the name of the Machine definition you have just created
- Cluster: MyCluster - select the name of the cluster you have just created
- Unicast Listen Port: 9999 - the default is 8888, but it is worth selecting something different from that of other clusters to prevent your servers accidentally joining another cluster

All other default General configuration parameters were accepted. The Server Start configuration parameters entered were:

- Java Home: /Library/Java/Home - the base dir of your Java installation
- Java Vendor: Apple
- BEA Home: /Users/Dave/apps/wls1035_dev/wlserver - the installation dir for WLS
- Root Directory: /Users/Dave/apps/mywls/user_projects/domains/base_domain - the domain base dir
- Class Path: /Users/Dave/apps/wls1035_dev/modules/features/weblogic.server.modules.coherence.server_10.3.4.1.jar:/Users/Dave/coherence/3.7/coherence-java/coherence/lib/coherence.jar - note that the order of these Jar files seems to be important and you should specify the correct path separator, in this case a ':' char
- Arguments: -Xms1024m -Xmx1024m -Dtangosol.coherence.cacheconfig=/Users/Dave/Documents/workspace/WLSCohWebAppEAR/EarContent/APP-INF/classes/coherence-cache-config.xml -Dtangosol.coherence.distributed.localstorage=true -Dtangosol.coherence.role=CacheServer - the arguments to the JVM that will run the Coherence cache server

Note the last parameter: as the default storage setting has been set to false in the Coherence override file, storage has to be explicitly enabled for storage-enabled nodes.
This is because it is not possible to override the storage-enabled default setting using system properties just for the TestServlet, other than by setting the property in the WLS startup script. Obviously you will need to modify these paths to reflect those in your environment. To create additional Coherence Servers you can just clone this one on the Coherence Servers admin page and change the name of the new Coherence Server. The Coherence cluster is started from the Coherence Servers page, which you can navigate to by selecting the Environment > Coherence Servers menu. On this page select the Control tab, select all the Coherence Servers that you have just defined and click on the Start button.

Deploy the Coherence and ActiveCache shared libraries. The Coherence and ActiveCache Jar files can be deployed in a number of ways: added to the CLASSPATH of WLS, deployed as a shared library, or packaged as part of the web application. In this case the Jar files have been deployed as a shared library, which provides both usage isolation (as only applications that import the libraries will have them in scope) and a minimal resource overhead (as only one copy of the classes will be loaded). Shared libraries can be deployed through the WLS Admin Console via the Environment > Deployments menu, as shown below (accepting the default settings should be fine to get started).

Package and deploy the web application that will utilize a Coherence cache. This can be done as either a WAR or an EAR file; I chose an EAR file here. You can deploy it either by dropping the archive into the autodeploy directory of WLS, through the Admin console, using the WebLogic Scripting Tool (WLST), or through an IDE like Eclipse (that uses WLST). I chose the latter for convenience.
An easy way to do so is through the wizards provided by the Oracle Enterprise Pack for Eclipse (OEPE) plugin - as shown below. To do this you can just import the example EAR web application project (and its dependent projects), update the library paths for Coherence etc. and deploy to the target WLS.

Testing the sample web application

To access the TestServlet sample web application go to the URL http://<your hostname>:7001/WLSCohWebApp/TestServlet, for instance http://192.168.56.1:7001/WLSCohWebApp/TestServlet. It outputs the Coherence cluster members and also puts an entry in a cache called "MyCache", whose key is the HTTP session id and whose value is the last modified value of the HTTP session. Its output is shown below. This output tells you how many clients have an active session and how many cluster members are in the cluster. All this cache data is visible across all invocations of this Servlet, so application data like reference data or the values in drop-down lists, which don't change frequently, are ideal candidates for caching.

Description of the sample web application

The ActiveCache functionality allows cache references to be configured as resources. In the TestServlet the Resource annotation is used to dynamically inject a reference to the NamedCache "MyCache", as shown below. A Coherence cache reference (NamedCache) resource can also be looked up using a JNDI lookup. The shared libraries that the Servlet references are specified in the weblogic-application.xml file. The Coherence cluster can be monitored through JMX. To see the metrics for the cluster, open JConsole on the machine you are testing on and select one of the weblogic.server.nodemanager... processes (a Coherence management server could also be started to provide remote access).
If you expand the node tree you will see the following metrics. For explicit startup and shutdown operations, a Servlet listener class has been registered that will shut down the web application as a cluster node if the application is un-deployed. I hope this post has been useful and gives you an overview of how the WLS ActiveCache functionality can be used in a web application to provide fast access to frequently used data. Although the cache is used as a "side cache" here - where the data is added to the cache explicitly - it could easily be configured to load the required data into the cache on demand from a database or some other data source. Another option not explored here, but worth considering, is scripting the cluster setup and startup using WLST. The sample application can be downloaded from here.

Other useful resources

- Deep-dive presentation on ActiveCache
- James Bayer's blog entry on setting up WLS NodeManager
- Pas Apicella's blog entry on starting Coherence clusters using the WLS Administration Console


Coherence Rolling Upgrades

Oracle Coherence Rolling Upgrades

Oracle Coherence provides a highly available, resilient in-memory cache that gives fast access to frequently used data. It is increasingly becoming a key component of systems that need to be online 24x7, with little or no opportunity to make changes offline. This document outlines a number of techniques that can be used to upgrade a Coherence cluster online, without the need to stop the consumers of its data.

How can Coherence be upgraded online? Typically this is done in a rolling upgrade, where each node or JVM in a cluster is stopped, its configuration or CLASSPATH changed, and then it is re-started. By cycling through all the nodes or JVMs in the cluster they can all be migrated to a new version of the Coherence binaries, a new configuration or a new version of custom classes. Although Coherence has been designed to handle nodes or JVMs dynamically leaving and re-joining a cluster, the internal data recovery and re-balancing that Coherence performs can take a few seconds. Coherence can usually detect the failure of a node or JVM within a second, but recovering and re-balancing the data can take a few seconds - depending on the amount of data held, the speed of the cluster's network and how busy the cluster is. So a rolling re-start must be managed.

Every Coherence distributed service exposes a JMX attribute called StatusHA. It indicates the status of the data managed by that service, i.e. the data in the caches managed by the service. It can have three values:

- MACHINE-SAFE - data in caches managed by the service has been copied to more than one machine, meaning no data will be lost if a machine fails
- NODE-SAFE - multiple copies of data exist on more than one node or JVM
- ENDANGERED - some of the data the service manages is only held on one node or JVM
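A hedged sketch of how these StatusHA values gate a rolling re-start: before stopping the next node, check that no distributed service reports ENDANGERED. The method below is an invented helper operating on plain strings; in practice the values would be read from each service's JMX MBean.

```java
import java.util.Arrays;
import java.util.List;

// Sketch of a rolling-restart safety gate driven by the StatusHA values
// of all distributed services (values assumed fetched via JMX).
public class RollingRestartGate {

    // Returns true only if no service reports ENDANGERED, i.e. every
    // piece of data has at least one backup copy somewhere.
    static boolean safeToStopNextNode(List<String> statusHaValues) {
        for (String status : statusHaValues) {
            if ("ENDANGERED".equals(status)) {
                return false; // some data has only one copy: wait and re-check
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(safeToStopNextNode(
                Arrays.asList("MACHINE-SAFE", "NODE-SAFE"))); // safe
        System.out.println(safeToStopNextNode(
                Arrays.asList("NODE-SAFE", "ENDANGERED")));   // not safe
    }
}
```

In a real script this check would be polled in a loop after each node re-start, only proceeding once every service is at least NODE-SAFE again.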
In a rolling re-start of a cluster, the next node cannot be re-started while the StatusHA of any distributed service is ENDANGERED, or data may be lost if the node being re-started holds the only copy of a piece of un-recovered data.

Some changes can also be made dynamically, with no node re-start, for instance through JMX. However, such changes are not persisted and so will not survive a node re-start. The following settings can be changed dynamically through JMX (as of release 3.5.2):

Cluster Node Settings

- BufferPublishSize (Integer): The buffer size of the unicast datagram socket used by the Publisher, measured in the number of packets. Changing this value at runtime is an inherently unsafe operation that will pause all network communications and may result in the termination of all cluster services.
- BufferReceiveSize (Integer): The buffer size of the unicast datagram socket used by the Receiver, measured in the number of packets. Changing this value at runtime is an inherently unsafe operation that will pause all network communications and may result in the termination of all cluster services.
- BurstCount (Integer): The maximum number of packets to send without pausing. Anything less than one (e.g. zero) means no limit.
- BurstDelay (Integer): The number of milliseconds to pause between bursts. Anything less than one (e.g. zero) is treated as one millisecond.
- LoggingFormat (String): Specifies how messages will be formatted before being passed to the log destination.
- LoggingLevel (Integer): Specifies which logged messages will be output to the log destination. Valid values are non-negative integers, or -1 to disable all logger output.
- LoggingLimit (Integer): The maximum number of characters that the logger daemon will process from the message queue before discarding all remaining messages in the queue. Valid values are integers in the range [0...]. Zero implies no limit.
- MulticastThreshold (Integer): The percentage (0 to 100) of the servers in the cluster that a packet will be sent to, above which the packet will be multicast and below which it will be unicast.
- ResendDelay (Integer): The minimum number of milliseconds that a packet will remain queued in the Publisher's re-send queue before it is re-sent to the recipient(s) if the packet has not been acknowledged. Setting this value too low can flood the network with unnecessary repetitions; setting it too high can increase overall latency by delaying the re-sends of dropped packets. A change to this value may also need to be accompanied by a change to SendAckDelay.
- SendAckDelay (Integer): The minimum number of milliseconds between the queueing of an Ack packet and the sending of the same. This value should be no more than half of the ResendDelay value.
- TrafficJamCount (Integer): The maximum total number of packets in the send and re-send queues that forces the Publisher to pause client threads. Zero means no limit.
- TrafficJamDelay (Integer): The number of milliseconds to pause client threads when a traffic jam condition has been reached. Anything less than one (e.g. zero) is treated as one millisecond.

Point-to-point Settings

- ViewedMemberId (Integer): The Id of the member being viewed.

Service Settings

- RequestTimeoutMillis (Long): The default timeout value in milliseconds for requests that can be timed-out (e.g. implement the com.tangosol.net.PriorityTask interface) but do not explicitly specify the request timeout value.
- TaskHungThresholdMillis (Long): The amount of time in milliseconds that a task can execute before it is considered hung. Note that a posted task that has not yet started is never considered hung.
- TaskTimeoutMillis (Long): The default timeout value in milliseconds for tasks that can be timed-out (e.g. implement the com.tangosol.net.PriorityTask interface) but do not explicitly specify the task execution timeout value.
- ThreadCount (Integer): The number of threads in the service thread pool.

Cache Settings

- BatchFactor (Double): Used to calculate the "soft-ripe" time for write-behind queue entries. A queue entry is considered "ripe" for a write operation if it has been in the write-behind queue for no less than the QueueDelay interval. The "soft-ripe" time is the point in time prior to the actual "ripe" time after which an entry will be included in a batched asynchronous write operation to the CacheStore (along with all other "ripe" and "soft-ripe" entries). This attribute is only applicable if asynchronous writes are enabled (i.e. the value of the QueueDelay attribute is greater than zero) and the CacheStore implements the storeAll() method. The value is expressed as a percentage of the QueueDelay interval. Valid values are doubles in the interval [0.0, 1.0].
- ExpiryDelay (Integer): The time-to-live for cache entries in milliseconds. A value of zero indicates that automatic expiry is disabled. Changing this attribute will not affect the already-scheduled expiry of existing entries.
- FlushDelay (Integer): The number of milliseconds between cache flushes. A value of zero indicates that the cache will never flush.
- HighUnits (Integer): The limit of the cache size measured in units. The cache will prune itself automatically once it reaches its maximum unit level. This is often referred to as the "high water mark" of the cache.
- LowUnits (Integer): The number of units to which the cache will shrink when it prunes. This is often referred to as the "low water mark" of the cache.
- QueueDelay (Integer): The number of seconds that an entry added to a write-behind queue will sit in the queue before being stored via a CacheStore. Applicable only for the WRITE-BEHIND persistence type.
- RefreshFactor (Double): Used to calculate the "soft-expiration" time for cache entries. Soft-expiration is the point in time prior to the actual expiration after which any access request for an entry will schedule an asynchronous load request for the entry. This attribute is only applicable for a ReadWriteBackingMap which has an internal LocalCache with scheduled automatic expiration. The value is expressed as a percentage of the internal LocalCache expiration interval. Valid values are doubles in the interval [0.0, 1.0]. If zero, refresh-ahead scheduling will be disabled.
- RequeueThreshold (Integer): The maximum size of the write-behind queue for which failed CacheStore write operations are requeued. If zero, write-behind requeueing will be disabled. Applicable only for the WRITE-BEHIND persistence type.

Management Settings

- ExpiryDelay (Long): The number of milliseconds that the MBeanServer will keep a remote model snapshot before refreshing.
- RefreshPolicy (String): The policy used to determine the behaviour when refreshing remote models. Valid values are: refresh-ahead, refresh-behind, refresh-expired, refresh-onquery. Invalid values will convert to refresh-expired.

Other changes may need a parallel cluster to be available, so that clients can be failed over to it and failed back when the original cluster has been completely shut down and re-started. The process for upgrading an application that uses Coherence starts at the cache store (usually a database) level and then moves through the dependent application tiers, cluster level and client level, until everything has been upgraded. This document outlines what types of changes can be made to Coherence online and how they can be made.

What types of online changes can be made to a Coherence cluster?

The Coherence Binaries/JAR Files

Minor upgrades to the Coherence binaries can be done online in a rolling fashion.
This also applies to patches to the Coherence binaries, so a patch release upgrade from release 3.6.1.2 to 3.6.1.6 (Major.Minor.Pack.Patch) can be done online. For major and pack release upgrades a complete cluster shutdown will be required, because a major or pack release will likely not be compatible at the TCMP protocol boundary (there are plans to document these compatibility boundaries).

To perform a major upgrade of the Coherence binaries, e.g. from 3.4 to 3.5, a parallel and identical cluster must be available. Usually this will be a Disaster Recovery (DR) site, or another cluster set up in an active-active configuration using the Push Replication feature so that the data on each is synchronised. For Extend clients the procedure will first require clients using the primary site to be re-directed to the DR or other active site. Any data that has not yet been replicated can be pulled through from the primary site if necessary. Once all the active clients have been re-directed and changes on the primary site fully replicated, the primary site can be shut down, the major release of the binaries placed in the CLASSPATH and the cluster re-started. Finally, it can be re-synchronised with the other active-active site and clients re-directed back. For data clients, i.e. storage-disabled TCMP clients, it is not possible to re-direct them to another cluster as described above.

Extend Clients

Forward compatibility runs from client to server, i.e. from Extend client to the cluster. For example, a 3.4 client can connect to a 3.5 or 3.6 proxy, but a 3.6 client cannot connect to a 3.4 proxy. In addition, this requires the use of POF throughout, since POF supports backward-compatible serialization changes. Note that during an online upgrade new clients must not access a Coherence cluster until all the nodes have been upgraded first.

Coherence Configuration Changes

Cache configuration changes can be made either in the XML configuration file or through JMX.
JMX enables a limited set of changes to be made online, such as changing the logging level (see above for the full list of online changes that can be made). These are made at a node (JVM) level. Management tools like Oracle Enterprise Manager enable JMX changes to be propagated to all nodes in a cluster, but they are not persisted. Where JMX does not expose a configuration setting as a modifiable attribute, the change must be made in the cache or override configuration file and the cluster re-started in a rolling fashion. Note that not all changes can be made in a rolling fashion, for instance changes to the cluster ports and addresses.

Portable Object Format (POF) changes must be synchronised with any CLASSPATH changes, on both the cluster and the clients using it. Changes to the POF configuration can be made in a rolling fashion, but the classes listed must be added to the CLASSPATH at the same time. Furthermore, the whole cluster must be upgraded before clients use any new classes against it. Changes must also be both backward and forward compatible. For instance, a POF entry can be removed if it is for a class that represents a parameter to an entry processor which will no longer be used by any client; if older clients are still active, though, it must be retained.

Database Schema Changes

Many Coherence clusters use a database to persist cache data. For the most part database changes must be "additive", i.e. a new table column may be added, but a column cannot be dropped while it still contains data. Some databases provide sophisticated features that enable more dynamic changes to be made online, but they will not be discussed here, nor will upgrading alternative cache store technologies. Database changes must be made before any cache data is modified, so that any dependencies new cache classes have on the underlying database are in place when they are used.
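To make the JMX route described above concrete, here is a self-contained sketch of setting a per-node attribute with the JDK's javax.management API. The Coherence:type=Node ObjectName mirrors the pattern Coherence uses for its node MBeans, but the MBean registered here is a local stand-in so the snippet runs without a cluster; against a real cluster you would obtain an MBeanServerConnection to the management node instead:

```java
import java.lang.management.ManagementFactory;
import javax.management.Attribute;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxLoggingLevelDemo {
    // Stand-in MBean exposing a writable LoggingLevel attribute, so no
    // running Coherence cluster is needed to exercise the setAttribute call.
    public interface NodeStubMBean {
        int getLoggingLevel();
        void setLoggingLevel(int level);
    }

    public static class NodeStub implements NodeStubMBean {
        private volatile int loggingLevel = 3;
        public int getLoggingLevel() { return loggingLevel; }
        public void setLoggingLevel(int level) { loggingLevel = level; }
    }

    public static int setLoggingLevel(int newLevel) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // Coherence registers one Node MBean per cluster member, keyed by nodeId.
        ObjectName name = new ObjectName("Coherence:type=Node,nodeId=1");
        if (!server.isRegistered(name)) {
            server.registerMBean(new NodeStub(), name);
        }
        // The actual online change: set the attribute on the node MBean.
        server.setAttribute(name, new Attribute("LoggingLevel", newLevel));
        return (Integer) server.getAttribute(name, "LoggingLevel");
    }

    public static void main(String[] args) throws Exception {
        System.out.println("LoggingLevel now: " + setLoggingLevel(9));
    }
}
```

As the text notes, a change made this way applies to that node only and is lost when the node restarts.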
Changing Cache Data

Here we focus on how cache data classes can be changed online, rather than code that implements custom eviction policies, entry processors etc. Those types of classes usually have no dependency on a database and can simply be added to the CLASSPATH and POF configuration (if necessary) and the cluster re-started in a rolling fashion.

To enable cache classes to be upgraded online they must be able to hold different versions of the same data. The Coherence Evolvable interface facilitates this. The Evolvable interface is implemented by classes that require forward and backward compatibility of their serialized form. Although evolvable classes support evolution through the addition of data, changes that involve the removal or modification of data cannot be safely accomplished as long as a previous version of the class exists that relies on that data. When an Evolvable object is de-serialized, it retains any unknown data that was added in newer versions of the class, along with the version identifier for that data format. When the Evolvable object is subsequently serialized, it includes both that version identifier and the unknown future data.

For instance, in the rolling upgrade example, version 1 of the class DummyObject holds an int and a String, while version 2 adds a new field, newStr. In the readExternal() method there is a check of the object's data version, because if a version 1 stream of DummyObject were written to the dummy cache and then read by a version 2 client there would be no newStr field value, as the object data was created by a version 1 client rather than a version 2 client.

Changing the Environment

Changing a Coherence environment, like modifying the system properties for a JVM or even changing the physical resources of each server, can also be performed in a rolling fashion.
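The original DummyObject listings do not survive here, so the following self-contained sketch illustrates the idea behind the Evolvable pattern instead: a reader that tolerates a newer serialized form by retaining bytes it does not understand and writing them back out. It uses plain java.io rather than the real com.tangosol.io.Evolvable/POF machinery (which exposes this through getImplVersion(), getDataVersion() and getFutureData()); the class below is an illustration, not the example project's code:

```java
import java.io.*;

// Version-tolerant (de)serialization in the spirit of Coherence's Evolvable:
// unknown trailing data from a newer version is retained and re-emitted.
public class EvolvableSketch {

    public static class DummyObject {
        static final int IMPL_VERSION = 1;   // the version this class understands
        int dataVersion = IMPL_VERSION;      // the version of the data last read
        byte[] futureData = new byte[0];     // bytes written by a newer version

        int value;
        String str = "";

        byte[] write() throws IOException {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bos);
            // Emit the highest version seen, the known fields, then any
            // retained future data, so a newer reader loses nothing.
            out.writeInt(Math.max(IMPL_VERSION, dataVersion));
            out.writeInt(value);
            out.writeUTF(str);
            out.write(futureData);
            out.flush();
            return bos.toByteArray();
        }

        void read(byte[] bytes) throws IOException {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(bytes));
            dataVersion = in.readInt();
            value = in.readInt();
            str = in.readUTF();
            futureData = in.readAllBytes(); // keep anything newer, verbatim
        }
    }
}
```

A version 2 writer would append newStr after str; when a version 1 node reads and re-writes the object, those trailing bytes survive in futureData, so a version 2 reader still sees the newStr value.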
If cluster resources are modified, ensure that they are still sufficient and balanced after the change.

Rolling Upgrade Script

Below is an example script, in pseudocode, that performs a rolling upgrade of a Coherence cluster to introduce a new version of a cache client while an existing version of the client is still running:

Upgrade database schema
Update the Coherence nodes' CLASSPATH
For each server
    For each JVM
        While JMX value of StatusHA is not MACHINE-SAFE for all nodes, for all services
            Pause for 5s
        End while
        Stop / start JVM
    End for
End for
Introduce new version of client

Potential Issues

Online upgrades of a Coherence cluster are not always necessary. Many systems using Coherence have maintenance windows where changes can be made offline, or are not online 24x7. Where this is the case and the cache data is persisted in an alternative data store, e.g. a database, an offline upgrade will be much simpler. When considering an online upgrade of Coherence you should:

- Script the upgrade process and thoroughly test it beforehand.
- Ensure that the overhead of a rolling upgrade on your cluster is acceptable. When a cluster contains a lot of data you are effectively going to move all the data around – twice! This can significantly degrade cluster performance if the cluster is heavily loaded with normal activity.

Rolling Upgrade Example

To accompany this document there is an example application with scripts to upgrade it online. The application consists of a simple class, DummyObject, that is used to hold cache values, containing an int and a String attribute. The example has been tested against Coherence 3.5.3 for Java and a MySQL database. The scripts are simple bash scripts that can be run standalone on a laptop. The example is packaged as an Eclipse Galileo project, but the classes have been pre-compiled so the Eclipse IDE is not essential. The sequence for running the example is:

- Download the necessary drivers for MySQL setup.
- Change the paths, IP addresses and ports in the scripts/env.sh file to reflect those of your system.
- Use the script create.sql to create the base tables in MySQL.
- Start the cache servers: change to the scripts directory and run the ./start.sh script. This will start 3 cache servers and 1 JMX node in the background.
- Open another terminal and, in the scripts directory, run ./client1.sh to start version 1 of the client. This will connect to the cluster, put some objects in a cache called dummy and then continually update the string attribute to “dummy 1”.
- Update the database schema by running the alter-table.sql script in the sql directory, to add an additional column to the Dummy table.
- Copy the new DummyObject (version 2) JAR file – which adds a new field (newString) to the DummyObject class – over the old one (version 1).
- In a new terminal, go to the scripts directory and run ./restart.sh to perform a rolling cluster re-start. It will return, and its output is written to the file logs/rserver.log.
- Open another terminal and, in the scripts directory, run the ./client2.sh script. This will run a version 2 client that updates version 2 of the DummyObject values in the dummy cache. It will update the string attribute to “Dummy 2” and also print out the value of the new string attribute – which will always be “”.

Summary

Many applications using Coherence have maintenance windows or are not required to run continuously, 24x7. However, some are. For these, the dynamic features of Coherence can be used to make online changes using rolling upgrades and side-by-side clusters. Even where downtime is available, these features allow critical changes to be made when needed. These techniques complement the existing high availability features of Coherence and make it a truly unique data caching platform.

Examples

There are 2 example Eclipse projects to support this document, which allow you to try out a simple rolling upgrade.
The examples can be downloaded here.



A Simple Coherence C++ Client

Coherence provides native support for C++ clients. Not through JNI or some other C++ wrapper, but pure native C++ integration. The C++ client for Coherence has been written from scratch to support memory management, concurrency and type conversion - across multiple platforms (Windows, Linux, OS X and Solaris). The C++ client for Coherence also supports virtually every feature of the other clients (.NET and Java). So, for instance, you can:

- Perform Continuous Query for C++ clients
- Query the cache for C++ clients
- Perform Remote Invocation Service requests from C++ clients
- Deliver events for changes as they occur to C++ clients

This post will show you a simple C++ client - targeted at the Windows platform - but in fact the only Windows-specific resources are the scripts used to build and demonstrate the example; the code could easily be run on the other supported platforms. The example demonstrates how a native C++ client can put objects in a cache - in this case Trade objects - and have an attribute aggregated. There is no Java involved, and the C++ data model classes do not include any Coherence-specific code, so you can hopefully see how easily you could take your own native C++ classes and put objects created from them into Coherence.

Here is the definition of the Trade class (Trade.h):

#ifndef TRADE_H
#define TRADE_H

#include <ostream>
#include <string>

// Constants for POF offsets
#define TRADE_SYMBOL   0
#define TRADE_PRICE    1
#define TRADE_ID       2
#define TRADE_QUANTITY 3

/**
* The Trade class encapsulates common information for a Trade.
*
* This serves as an example data object which does not have direct knowledge
* of Coherence but can be stored in the data grid.
*/
class Trade
{
// ----- data members ----------------------------------------------------
private:
    /**
    * The trade symbol.
    */
    std::string symbol;

    /**
    * The trade price.
    */
    double price;

    /**
    * The trade id.
    */
    int id;

    /**
    * The trade quantity.
    */
    int quantity;

// ----- constructors ----------------------------------------------------
public:
    /**
    * Create a new Trade object.
    *
    * @param symbol   the trade symbol
    * @param price    the trade price
    * @param id       the trade id
    * @param quantity the trade quantity
    */
    Trade(const std::string& symbol, const double price, const int id,
          const int quantity);

    /**
    * Copy constructor.
    */
    Trade(const Trade& that);

protected:
    /**
    * Default constructor.
    */
    Trade();

// ----- accessors -------------------------------------------------------
public:
    /**
    * Determine the symbol for this Trade object.
    *
    * @return the Trade symbol
    */
    std::string getSymbol() const;

    /**
    * Configure the symbol for a Trade.
    *
    * @param symbol the Trade's symbol
    */
    void setSymbol(const std::string& symbol);

    /**
    * Determine the price of a Trade.
    *
    * @return the price of a Trade
    */
    double getPrice() const;

    /**
    * Configure the price of a Trade.
    *
    * @param price the price of a Trade
    */
    void setPrice(const double price);

    /**
    * Determine the id of a Trade.
    *
    * @return the id of a Trade
    */
    int getId() const;

    /**
    * Configure the id of a Trade.
    *
    * @param id the id of a Trade
    */
    void setId(const int id);

    /**
    * Determine the quantity of a Trade.
    *
    * @return the quantity of a Trade
    */
    int getQuantity() const;

    /**
    * Configure the quantity of a Trade.
    *
    * @param quantity the quantity of a Trade
    */
    void setQuantity(const int quantity);
};

// ----- free functions ------------------------------------------------------

/**
* Output this Trade to the stream.
*
* @param out the stream to output to
*
* @return the stream
*/
std::ostream& operator<<(std::ostream& out, const Trade& trade);

/**
* Perform an equality test on two Trade objects.
*
* @param tradeA the first Trade
* @param tradeB the second Trade
*
* @return true if the objects are equal
*/
bool operator==(const Trade& tradeA, const Trade& tradeB);

/**
* Return the hash for the Trade.
*
* @param trade the Trade to hash
*
* @return the hash for the Trade
*/
size_t hash_value(const Trade& trade);

#endif // TRADE_H

And here is the implementation (Trade.cpp):

#include "Trade.h"

// ----- constructors --------------------------------------------------------

Trade::Trade(const std::string& s, const double p, const int i, const int q)
{
    setSymbol(s);
    setId(i);
    setPrice(p);
    setQuantity(q);
}

Trade::Trade(const Trade& that)
{
    setSymbol(that.getSymbol());
    setId(that.getId());
    setPrice(that.getPrice());
    setQuantity(that.getQuantity());
}

Trade::Trade()
{
}

// ----- accessors -----------------------------------------------------------

std::string Trade::getSymbol() const
{
    return symbol;
}

void Trade::setSymbol(const std::string& s)
{
    symbol = s;
}

int Trade::getId() const
{
    return id;
}

void Trade::setId(const int i)
{
    id = i;
}

double Trade::getPrice() const
{
    return price;
}

void Trade::setPrice(const double p)
{
    price = p;
}

int Trade::getQuantity() const
{
    return quantity;
}

void Trade::setQuantity(const int q)
{
    quantity = q;
}

// ----- free functions ------------------------------------------------------

std::ostream& operator<<(std::ostream& out, const Trade& trade)
{
    out << "Trade("
        << "Symbol=" << trade.getSymbol()
        << ", Price=" << trade.getPrice()
        << ", Id=" << trade.getId()
        << ", Quantity=" << trade.getQuantity()
        << ')';
    return out;
}

bool operator==(const Trade& tradeA, const Trade& tradeB)
{
    return tradeA.getSymbol() == tradeB.getSymbol() &&
           tradeA.getPrice() == tradeB.getPrice() &&
           tradeA.getId() == tradeB.getId() &&
           tradeA.getQuantity() == tradeB.getQuantity();
}

size_t hash_value(const Trade& trade)
{
    return size_t(&trade); // identity hash (note: not suitable for cache keys)
}

As you can see, no Coherence code. All the serialization code is implemented in a separate file (TradeSerializer.cpp) and linked with the class, to serialize using the Portable Object Format (POF) configuration file. The Coherence C++ client library locates its configuration files using environment variables.

Finally, here is the client console application (cppexample.cpp):

// cppexample.cpp : Defines the entry point for the console application.

#include "Trade.h"

#include "coherence/lang.ns"
#include "coherence/net/CacheFactory.hpp"
#include "coherence/net/NamedCache.hpp"
#include "coherence/util/HashSet.hpp"
#include "coherence/util/aggregator/Integer64Sum.hpp"
#include "coherence/util/filter/EqualsFilter.hpp"
#include "coherence/util/Filter.hpp"
#include "coherence/util/Hashtable.hpp"
#include "coherence/util/ValueExtractor.hpp"
#include "coherence/util/extractor/PofExtractor.hpp"

#include <iostream>
#include <sstream>
#include <ostream>
#include <string>
#include <stdlib.h>

using namespace coherence::lang;

using coherence::net::CacheFactory;
using coherence::net::NamedCache;
using coherence::util::aggregator::Integer64Sum;
using coherence::util::ValueExtractor;
using coherence::util::extractor::PofExtractor;
using coherence::util::filter::EqualsFilter;
using coherence::util::Filter;
using coherence::util::Hashtable;

#define NUM_TRADES 20

int main(int argc, char** argv)
{
    // Create/get cache handle
    std::cout << "Getting cache..." << std::endl;
    NamedCache::Handle hCache = CacheFactory::getCache("test");
    std::cout << " OK" << std::endl;

    // Put some trades in the cache
    std::string symbols[] = { "ORCL", "MSFT", "IBM", "SAP" };
    Map::Handle hMap = Hashtable::create();
    for (int i = 0; i < NUM_TRADES; i++)
    {
        Trade t1 = Trade(symbols[rand() % 4], rand() % 100, i, rand() % 1000);
        hMap->put(Integer32::create(i), Managed<Trade>::create(t1));
    }
    hCache->putAll(hMap);
    std::cout << "Stored: " << NUM_TRADES << " trades" << std::endl;

    // Get objects back from cache
    std::cout << "Getting objects back from cache..." << std::endl;
    int sum = 0;
    for (int i = 0; i < NUM_TRADES; i++)
    {
        Integer32::Handle hViewKey = Integer32::create(i);
        Managed<Trade>::View vTrade =
            cast<Managed<Trade>::View>(hCache->get(hViewKey));
        std::cout << " The value for " << hViewKey << " is " << vTrade << std::endl;
        if (vTrade->getSymbol() == "ORCL")
        {
            sum += vTrade->getQuantity();
        }
    }
    std::cout << "Total ORCL trades is: " << sum << std::endl;

    // Perform aggregation. Get total number of "ORCL" trades and print results
    ValueExtractor::View vQuantityExtractor =
        PofExtractor::create(typeid(int32_t), TRADE_QUANTITY);
    ValueExtractor::View vSymbolExtractor =
        PofExtractor::create(typeid(void), TRADE_SYMBOL);
    String::View vSymbol = "ORCL";
    Filter::View vFilter = EqualsFilter::create(vSymbolExtractor, vSymbol);
    Integer64::View vSum = cast<Integer64::View>(
        hCache->aggregate(vFilter, Integer64Sum::create(vQuantityExtractor)));
    std::cout << "Total number of ORCL trades aggregated is: " << vSum << std::endl;

    hCache->release();
    std::cout << "Released local resources" << std::endl;

    return 0;
}

I hope this example has given you a feel for how simple a C++ Coherence client can be.
If you would like to try this out for yourself you can download the full example here.

Additional Note

The Coherence C++ shared library requires that the MSVC runtime libraries corresponding to the Visual C++ compiler used to build it are installed on the machine where the library is used. For 3.5 and earlier that means you need the Microsoft Visual C++ 2005 SP1 Redistributable Package (x86 or x64). For 3.6 you need the 2005 package if you are using the Coherence C++ library built with Visual C++ 2005, or the 2010 package if you are using the library built with Visual C++ 2010. Additionally, if you use a different compiler to build the executable, for example Visual C++ 2008, you will also need to install the runtime libraries for that compiler.


Caching PHP HTTP Sessions in Coherence

Coherence already provides HTTP session caching support for ASP.NET and J2EE applications without any application changes. Coherence for .NET includes a custom SessionStateStoreProvider implementation that uses a Coherence cache to store session state. This makes Coherence for .NET the best solution for any large ASP.NET application running within a web farm. The HTTP session state of J2EE web applications is managed using Coherence*Web, an HTTP session management module dedicated to managing session state in clustered environments. Built on top of Oracle Coherence, it has a wide range of features and supports a wide range of J2EE containers. With WebLogic Suite, Coherence*Web is built in; for other containers a Coherence utility modifies the necessary settings in the web application's web.xml file to transfer management of the HTTP session state from the container to Coherence.

With PHP there is no "out-of-the-box" Coherence support for HTTP session state management – until now. The rest of this article outlines how, using a standard PHP extension mechanism, Coherence can be used as a clustered, reliable HTTP session handler for PHP. Like ASP.NET and J2EE application containers, PHP makes it quite easy to plug in your own HTTP session handler, and lots of different flavours exist already, from in-memory stores using tools like memcached to file and database persistent session mechanisms using products like MySQL. Coherence is an ideal persistence store for HTTP sessions because it is:

- Very fast, enabling dynamic web pages to be displayed quickly
- Resilient and fault-tolerant, ensuring that process or server failures do not impact users – logging them off etc.
- Scalable, to support an ever increasing number of online users/customers
- Simple to integrate and manage, so the overhead of introducing Coherence does not burden either web development or operational staff

So can Coherence be used with PHP?
Yes: the native Coherence C++ client can be used to create a PHP extension, which is then wrapped using PHP's session_set_save_handler to set up some user-defined session storage functions. (The original post included a diagram illustrating the relationship between the PHP runtime, the Coherence C++ extension and the Coherence cache.)

To include the PHP custom session handlers in your PHP page, add the following to the top of the page:

<?php
require 'coherence_session.php';
$s = new CoherencePHPSessionHandler();
. . .
?>

This enables a new session to be started using the custom session save handler. The custom session save handler looks like this:

<?php
class CoherencePHPSessionHandler
{
    private static $debug = False;

    /**
     * A reference to a Coherence named cache
     * @var resource
     */
    private $cache;

    /**
     * Session lifetime
     * @var resource
     */
    private $lifeTime;

    private static function Log($msg)
    {
        if(CoherencePHPSessionHandler::$debug)
        {
            echo "<br>DEBUG PHP: " . $msg . "\n";
        }
    }

    function __construct()
    {
        // get session-lifetime
        $this->lifeTime = get_cfg_var("session.gc_maxlifetime");
        CoherencePHPSessionHandler::Log("Lifetime: " . $this->lifeTime);
        $this->cache = new Cache("dist-php-sessions");
        if ($this->cache == null)
        {
            return false;
        }
        CoherencePHPSessionHandler::Log("Got cache handle");
        session_set_save_handler(array(&$this, 'open'),
                                 array(&$this, 'close'),
                                 array(&$this, 'read'),
                                 array(&$this, 'write'),
                                 array(&$this, 'destroy'),
                                 array(&$this, 'gc'));
        register_shutdown_function('session_write_close');
        session_start();
        CoherencePHPSessionHandler::Log("Started session");
        return true;
    }

    /**
     * Open the session
     * @return bool
     */
    function open($savePath, $sessName)
    {
        // get session-lifetime
        $this->lifeTime = get_cfg_var("session.gc_maxlifetime");
        CoherencePHPSessionHandler::Log("Opening session");
        return true;
    }

    /**
     * Close the session
     * @return bool
     */
    public function close()
    {
        // ToDo: need to release Coherence resources
        return true;
    }

    /**
     * Read the session
     * @param int session id
     * @return string string of the session
     */
    public function read($id)
    {
        $result = $this->cache->get($id);
        if($result != null)
        {
            CoherencePHPSessionHandler::Log("Read session, id: " . $id . ", value: " . $result);
            return $result;
        }
        return '';
    }

    /**
     * Write the session
     * @param int session id
     * @param string data of the session
     */
    public function write($id, $data)
    {
        CoherencePHPSessionHandler::Log("Session value: " . serialize($data));
        $this->cache->put($id, $data, $this->lifeTime);
        CoherencePHPSessionHandler::Log("Written session, id: " . $id . ", value: " . $data);
        return true;
    }

    /**
     * Destroy the session
     * @param int session id
     * @return bool
     */
    public function destroy($id)
    {
        $this->cache->remove($id);
        return true;
    }

    /**
     * Garbage Collector
     * @param int life time (sec.)
     * @return bool
     * @see session.gc_divisor      100
     * @see session.gc_maxlifetime 1440
     * @see session.gc_probability    1
     * @usage execution rate 1/100
     *        (session.gc_probability/session.gc_divisor)
     */
    public function gc($max)
    {
        return true;
    }
}
?>

As for the Coherence PHP extension itself, there is too much to show here, though in total it does not amount to a lot of code. The C++ class that wraps the Coherence client API is also very simple; its member functions are shown below:

#include "cache.h"

extern "C" {
#include <stdio.h>
#include <stdlib.h>
#include "php.h"
}

static char * copyString(const char *str)
{
    char *s;
    if(str != NULL && (s = (char *)emalloc(sizeof(char) * (strlen(str) + 1))) == NULL)
    {
        return NULL;
    }
    else
    {
        if(strncpy(s, str, strlen(str) + 1) == NULL)
        {
            efree(s);
            return NULL;
        }
        return s;
    }
}

Cache::Cache(const char *name)
{
    // Get handle to a Named Cache
    String::View vsCacheName = String::create(name);
    hCache = CacheFactory::getCache(vsCacheName);
}

const char *Cache::getCacheName()
{
    String::View name = cast<String::View>(hCache->getCacheName());
    return copyString(name->getCString());
}

char* Cache::put(const char *key, const char *value, long ttl)
{
    if(key == NULL || value == NULL)
    {
        throw std::invalid_argument("Key or value for put() NULL");
    }
    String::View vKey = String::create(key);
    String::View vValue = String::create(value);
    String::View vOldValue = cast<String::View>(hCache->put(vKey, vValue, ttl));
    return (vOldValue == NULL ? NULL : copyString(vOldValue->getCString()));
}

char* Cache::remove(const char *key)
{
    String::View vKey = String::create(key);
    String::View vOldValue = cast<String::View>(hCache->remove(vKey));
    return (vOldValue == NULL ? NULL : copyString(vOldValue->getCString()));
}

char* Cache::get(const char *key)
{
    String::View vKey = String::create(key);
    String::View vOldValue = cast<String::View>(hCache->get(vKey));
    return (vOldValue == NULL ? NULL : copyString(vOldValue->getCString()));
}

int Cache::size()
{
    return (int)hCache->size();
}

The Coherence PHP extension can also be used simply to cache PHP values, as well as PHP session information. The above example was tested on Oracle Enterprise Linux 4, but can easily be compiled on Windows. You can download the whole example from here to try it out for yourself; instructions are included which should hopefully be easy to follow. A couple of outstanding issues remain with the Coherence PHP extension which I have not been able to resolve: although simple PHP types as well as objects and arrays can be cached – both as keys and values – I could not get objects with private or protected variables to serialize. If anyone more experienced in the internals of PHP can help here it would be much appreciated, as I drew a blank.


Look, no Java!

With release 3.5 of Oracle Coherence it is now possible to query, aggregate and modify serialized POF (Portable Object Format) values stored in a cache natively, that is, without writing a Java representation of the object. So .NET developers can just write C# (or VB.NET etc.) and C++ developers C++. Note: with release 3.7.1 there is now no requirement to create corresponding Java classes if you choose to use data affinity, and annotations can be used with Java, .NET and C++ clients instead of explicitly adding POF serialisation code. See the end of this article for further details. This is achieved with the introduction of POF extractors and POF updaters, which fetch and update native POF objects without de-serializing the value to Java. As well as allowing .NET and C++ developers to just work in the language they are most comfortable and productive in, this new feature also has a number of other benefits:

- It dramatically improves performance, as no de-serialization needs to take place, so no new objects need to be created – or garbage collected.
- Less memory is required as a result.
- The development and deployment process is simpler, as no corresponding Java classes need to be created, managed and deployed.

However, there are some occasions where you do still need to create complementary Java objects to match your .NET or C++ objects. These are:

- When you want to use Key Association, as Coherence will always de-serialize keys to determine whether they implement KeyAssociation.
- If you use a Cache Store – Coherence passes the de-serialized version of the key and value to the cache store to write to the back end, so that ORM (Object Relational Mapping) tools like Hibernate and EclipseLink (JPA) have access to the Java objects.
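The Key Association point can be pictured with a small, self-contained sketch. Everything below (OrderKey, the toy 257-partition router) is a hypothetical illustration rather than the Coherence API, although Coherence's real interface is also called KeyAssociation with a getAssociatedKey() method. The point is that the cluster must turn the binary key back into an object to perform the instanceof check and call getAssociatedKey(), which is why a Java class for the key is still needed server-side:

```java
// Toy stand-in for Coherence's KeyAssociation contract (names simplified,
// not the real API): keys that share an associated key are routed to the
// same partition, so related entries are co-located.
interface KeyAssociation {
    Object getAssociatedKey();
}

class OrderKey implements KeyAssociation {
    final String orderId;
    final String customerId;

    OrderKey(String orderId, String customerId) {
        this.orderId = orderId;
        this.customerId = customerId;
    }

    // Orders for the same customer share an associated key, so they land
    // in the same partition and can be processed together
    public Object getAssociatedKey() {
        return customerId;
    }
}

public class KeyAssociationDemo {
    static final int PARTITION_COUNT = 257;

    // The partition is derived from the associated key when one is provided;
    // making this check requires a deserialized key object
    static int partitionFor(Object key) {
        Object routingKey = (key instanceof KeyAssociation)
                ? ((KeyAssociation) key).getAssociatedKey()
                : key;
        return Math.abs(routingKey.hashCode() % PARTITION_COUNT);
    }

    public static void main(String[] args) {
        int p1 = partitionFor(new OrderKey("ORDER-1", "CUST-42"));
        int p2 = partitionFor(new OrderKey("ORDER-2", "CUST-42"));
        System.out.println(p1 == p2); // same customer, so same partition
    }
}
```
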
To update a value in a cache from C# you would use code similar to that shown below:

// Parameters: key, (extractor, new value)
cache.Invoke(0, new UpdaterProcessor(new PofUpdater(BetSerializer.ODDS), 1));

Here a cache value identified by the key ‘0’ is being updated using an UpdaterProcessor. The ValueUpdater being used is a C# PofUpdater which will access the attribute at offset BetSerializer.ODDS in the serialized POF value. Values held in a cache are held in a serialized format. When the POF serialization mechanism is used the values of an object are written and read from the POF stream in the same order. So here a constant (BetSerializer.ODDS) is being used to provide a more readable representation for an offset like 3, i.e. the 3rd object value to be written and read from the POF stream.

For reading values from a cache in C# a similar approach is used, as shown below:

// Query cache for all entries whose event name is 'FA Cup'
EqualsFilter equalsFilter = new EqualsFilter(new PofExtractor(BetSerializer.EVENT_NAME), "FA Cup");
Object[] results = cache.GetValues(equalsFilter);
Console.WriteLine("Filter results:");
for (int i = 0; i < results.Length; i++)
{
    Console.WriteLine("Value = " + results[i]);
}

Here all cache values that have an event name of “FA Cup” will be returned by the EqualsFilter. The C# PofExtractor is being used to extract the attribute at the offset BetSerializer.EVENT_NAME, which will be a number, like 4, indicating it was the 4th value to be written and read from the POF stream when the object it was part of was serialized.

This is a great new feature of Coherence and very easy to use. If you would like to see the full example the above extracts were taken from, you can download it from here. Happy coding - in C# and C++ ;).

Release 3.7.1 Update

There is now no longer any need to provide Java classes for classes used as keys in many circumstances.
Prior to 3.7.1, if you had a custom key class (for example a Name class that might have multiple strings like first and last name), then you needed to provide a corresponding Java class for that key class. That is no longer the case. With Coherence 3.7.1, keys are not deserialized by the cluster – they don't even need to be deserialized for key association, as key association checks are now done at the Extend client (in this case .NET). This is covered in the Coherence documentation in Oracle Coherence Release Notes, Release 3.7.1 under 1 Technical Changes and Enhancements -> Coherence*Extend Framework Enhancements and Fixes (here).

POF Annotations have also been added in release 3.7.1. With POF Annotations, you can annotate classes which need to be POF serializable and no longer need to write serialization methods or separate serialization classes. POF Annotations for .NET classes are covered in the Oracle Coherence Client Guide, Release 3.7.1 under 18.9 Using POF Annotations to Serialize Objects (here).
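To make the offset idea concrete, here is a small, runnable sketch in which a List stands in for the serialized POF stream. The class and index names (PofIndexDemo, EVENT_NAME, ODDS) are hypothetical, echoing the BetSerializer constants above; real POF is a binary format accessed through PofExtractor and PofUpdater, but the principle is the same: because attributes are always written in a fixed order, a single attribute can be fetched or overwritten by index without rebuilding the whole object.

```java
import java.util.ArrayList;
import java.util.List;

// Illustration only: a toy stand-in for a POF stream, showing why a fixed
// write order lets an "extractor" fetch one attribute by index without
// deserializing the whole object.
public class PofIndexDemo {
    // Hypothetical attribute indexes, written in this order by the serializer
    static final int EVENT_NAME = 0;
    static final int ODDS = 1;

    // "Serialize": write attributes in their agreed order
    static List<Object> serialize(String eventName, double odds) {
        List<Object> stream = new ArrayList<>();
        stream.add(eventName); // index 0
        stream.add(odds);      // index 1
        return stream;
    }

    // "PofExtractor": read a single attribute by index
    static Object extract(List<Object> stream, int index) {
        return stream.get(index);
    }

    // "PofUpdater": overwrite a single attribute in place
    static void update(List<Object> stream, int index, Object value) {
        stream.set(index, value);
    }

    public static void main(String[] args) {
        List<Object> bet = serialize("FA Cup", 4.5);
        System.out.println(extract(bet, EVENT_NAME)); // FA Cup
        update(bet, ODDS, 5.0);
        System.out.println(extract(bet, ODDS));       // 5.0
    }
}
```
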


Pushing real-time data changes from an Oracle database into Coherence

A common requirement when using Coherence is for the data in it to remain synchronised with the database. If all changes to the database flow through Coherence, or the cached data can be periodically refreshed (using the refresh-ahead mechanism), then Coherence will be aware of any database changes. However, if another application changes the database outside of Coherence, or the database changes need to be relayed to Coherence in real-time, then an alternative approach is required. To push database changes to Coherence in real-time when they are being made outside of Coherence, a queue needs to be used. Fortunately most databases support the queuing of transactional change information, either implicitly or explicitly. In this example architecture the explicit queuing of changes to an Oracle database, using Triggers and Oracle’s Advanced Queuing (AQ) technology, will be used to propagate them to a Coherence cache holding objects representing the same information.

Overview

Below is a diagram showing how the changes flow from the Oracle database to the Coherence cache. In the example the read/receive operation from the queue is performed as a transaction, so the message is only removed from the queue once the corresponding cache object has been updated. This ensures that if the Coherence JVM/node that the client runs on fails, the update message will not be lost. Although the example code focuses on cache updates it could easily be modified to accommodate inserts and deletes as well. To enable Advanced Queuing (AQ) to be used for propagating table changes, a number of database permissions first need to be granted.
The PL/SQL to do this is shown below:

-- Execute as sys/welcome1
GRANT SELECT_CATALOG_ROLE TO scott;
GRANT EXECUTE ON DBMS_APPLY_ADM TO scott;
GRANT EXECUTE ON DBMS_AQ TO scott;
GRANT EXECUTE ON DBMS_AQADM TO scott;
GRANT EXECUTE ON DBMS_CAPTURE_ADM TO scott;
GRANT EXECUTE ON DBMS_FLASHBACK TO scott;
GRANT EXECUTE ON DBMS_STREAMS_ADM TO scott;
EXECUTE dbms_aqadm.grant_system_privilege('ENQUEUE_ANY', 'scott', TRUE);
EXECUTE dbms_aqadm.grant_system_privilege('DEQUEUE_ANY', 'scott', TRUE);
GRANT aq_administrator_role TO scott;
GRANT EXECUTE ON dbms_lock TO scott;
GRANT EXECUTE ON sys.dbms_aqin TO scott;
GRANT EXECUTE ON sys.dbms_aqjms TO scott;
EXIT;

Note: These GRANT statements need to be executed as the database administrator (sys) or another database user who has privileges to grant them.

Then the AQ queue needs to be set up and the PL/SQL procedure and trigger created to put the database table changes in the appropriate queue. Below is the PL/SQL used to do this:

-- Execute as scott/tiger
-- Create queue
EXECUTE dbms_aqadm.stop_queue(queue_name => 'trade_queue');
EXECUTE dbms_aqadm.drop_queue(queue_name => 'trade_queue');
EXECUTE DBMS_AQADM.DROP_QUEUE_TABLE(queue_table => 'trade_queue_table');
EXECUTE dbms_aqadm.create_queue_table(queue_table => 'trade_queue_table', queue_payload_type => 'sys.aq$_jms_text_message', multiple_consumers => false);
EXECUTE dbms_aqadm.create_queue(queue_name => 'trade_queue', queue_table => 'trade_queue_table', queue_type => DBMS_AQADM.NORMAL_QUEUE, retention_time => 0, max_retries => 5, retry_delay => 60);
EXECUTE dbms_aqadm.start_queue(queue_name => 'trade_queue');

-- Create table
DROP TABLE Trade CASCADE CONSTRAINTS;
CREATE TABLE Trade (
    Id       NUMBER PRIMARY KEY,
    Symbol   VARCHAR2(5),
    Created  DATE,
    Quantity NUMBER,
    Amount   NUMBER(8,2)
);

-- Enable SERVEROUTPUT in SQL Command Line (SQL*Plus) to display output with
-- DBMS_OUTPUT.PUT_LINE, this enables SERVEROUTPUT for this SQL*Plus
session only SET SERVEROUTPUT ON CREATE OR REPLACE PROCEDURE testmessage(text_message VARCHAR2) AS   msg SYS.AQ$_JMS_TEXT_MESSAGE;   msg_hdr SYS.AQ$_JMS_HEADER;   msg_agent SYS.AQ$_AGENT;   msg_proparray SYS.AQ$_JMS_USERPROPARRAY;   msg_property SYS.AQ$_JMS_USERPROPERTY;   queue_options DBMS_AQ.ENQUEUE_OPTIONS_T;   msg_props DBMS_AQ.MESSAGE_PROPERTIES_T;   msg_id RAW(16);   dummy VARCHAR2(4000); BEGIN   msg_agent := SYS.AQ$_AGENT(' ', NULL, 0);   msg_proparray := SYS.AQ$_JMS_USERPROPARRAY()  ;   msg_proparray.EXTEND(1);   msg_property := SYS.AQ$_JMS_USERPROPERTY('JMS_OracleDeliveryMode', 100, '2', NULL, 27);   msg_proparray(1) := msg_property;   msg_hdr := SYS.AQ$_JMS_HEADER(msg_agent,NULL,'<USERNAME>',NULL,NULL,NULL,msg_proparray);   msg := SYS.AQ$_JMS_TEXT_MESSAGE(msg_hdr,NULL,NULL,NULL);   msg.text_vc  := text_message;   msg.text_len := LENGTH(msg.text_vc);   DBMS_AQ.ENQUEUE(queue_name => 'trade_queue' , enqueue_options => queue_options , message_properties => msg_props , payload => msg , msgid => msg_id); END; / CREATE OR REPLACE TRIGGER TradeAQTrigger AFTER INSERT OR UPDATE ON Trade FOR EACH row DECLARE xml_complete VARCHAR2(1000); BEGIN     xml_complete := '<?xml version="1.0" encoding="UTF-8" ?>' ||       '<TradeElement xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" ' ||       'xsi:schemaLocation="http://www.oracle.com/coherenceaq src/aq-object.xsd" ' ||       'xmlns="http://www.oracle.com/coherenceaq">' ||     '<Id>' || :new.ID || '</Id>' ||     '<Symbol>' || :new.SYMBOL || '</Symbol>' ||     '<Quantity>' || :new.QUANTITY || '</Quantity>' ||     '<Amount>' || :new.AMOUNT || '</Amount>' ||     '<Created>' || :new.CREATED || '</Created>' ||     '</TradeElement>';        testmessage(xml_complete); END; / SHOW ERRORS; The Java code that runs in the background Coherence thread – started by the BackingMapListener – that reads messages from the queue looks like this: . . . /** * <p>AQ client interface</p> */ public class AQClient {   .   .   .   
  /**
   * Send an acknowledgement message for a received message. This enables the
   * message to be kept by the sender if the client dies. An acknowledgement
   * should only be sent when the received message has been saved, e.g. in a
   * cache.
   *
   * @throws JMSException If an exception occurs sending an acknowledgement
   */
  public void acknowledgeMessage()
    throws JMSException
  {
    textMsg.acknowledge();
  }

  /**
   * Reads an XML text message from the queue and transforms it into a Trade
   * object using JAXB classes.
   *
   * @return a Trade object
   * @throws JMSException If an exception occurs receiving a message
   * @throws JAXBException If an exception occurs during JAXB processing
   */
  public Trade receiveMessage()
    throws JMSException, JAXBException
  {
    // Wait for a message to show up in the queue
    textMsg = (TextMessage) queueReceiver.receive();
    System.out.println("Message is: " + textMsg.getText());
    // Use JAXB to create object
    JAXBContext jc = JAXBContext.newInstance("com.oracle.demo.model");
    Unmarshaller u = jc.createUnmarshaller();
    JAXBElement<Trade> trade =
      (JAXBElement<Trade>) u.unmarshal(new StreamSource(new StringReader(textMsg.getText())));
    Trade t = trade.getValue();
    System.out.println("Trade created on: " + t.toString());
    return t;
  }
}

A full code example can be found here, including all the necessary PL/SQL scripts. All you need to try it out is Coherence, Oracle XE and Coherence .NET – if you want to see the changes propagated to a .NET Coherence client. If changes in the Coherence cache data also need to be written back to the database then something like a status field or flag would need to be added to the cached objects and table, to ensure that a circular loop isn’t created.
For instance, the following logic could enable bi-directional synchronization:

1. A propagate flag is added to the cached object (Trade) and database table (Trade) to signify whether changes should be propagated by the after update trigger from the database to Coherence.
2. Data changes propagated from Coherence to the database have the propagate flag set to FALSE.
3. A new before insert/update trigger is introduced to reset the propagate flag to TRUE when it is FALSE.
4. When the after insert/update trigger fires, the new value is only placed on the queue if the old value is not FALSE.

A modified example of the PL/SQL to add this functionality is in the setup-queue2.sql file, though you will need to add the additional column to the Trade table and attribute to the Trade object.

Summary

This approach is not the only mechanism for propagating changes from an Oracle database to Coherence. Using a BackingMapListener to start the background queue client thread has an added benefit: if the node it is running on fails, Coherence will re-start the queue client on another node as part of its recovery process, i.e. when the queue configuration entry is failed-over and the new primary is created it will cause a new background queue client thread to be started. Alternative approaches to the one above might be to use Oracle Streams or the Data Change Notification mechanism in Oracle 11g.
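The "update the cache, then acknowledge" ordering described above (a transacted receive from AQ) can be sketched without any JMS or Coherence dependencies. In this hypothetical stand-in a Deque plays the AQ queue and a HashMap plays the cache; the point is simply that the message is only removed from the queue once the cache write has succeeded, so a crash mid-processing leaves the message available for redelivery:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Illustration only: the acknowledge-after-save pattern. Names and types
// are stand-ins, not the JMS or Coherence APIs used in the real example.
public class AckAfterSaveDemo {
    private final Deque<String[]> queue = new ArrayDeque<>(); // [key, value]
    private final Map<String, String> cache = new HashMap<>();

    void enqueue(String key, String value) {
        queue.addLast(new String[] { key, value });
    }

    // Process one message: read without removing, save, then acknowledge
    boolean processOne() {
        String[] msg = queue.peekFirst();  // receive (not yet acknowledged)
        if (msg == null) {
            return false;                  // nothing to do
        }
        cache.put(msg[0], msg[1]);         // save into the "cache" first
        queue.pollFirst();                 // acknowledge: only now remove it
        return true;
    }

    String cached(String key) {
        return cache.get(key);
    }

    public static void main(String[] args) {
        AckAfterSaveDemo demo = new AckAfterSaveDemo();
        demo.enqueue("ORCL", "500 @ 12.50");
        demo.processOne();
        System.out.println(demo.cached("ORCL")); // 500 @ 12.50
    }
}
```

In the real example the same guarantee comes from receiving the AQ message in a transacted JMS session and committing only after the cache update.
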


Using Excel as a Real-Time client for Coherence – A Full Working Example

On my previous posting about using Excel as a Coherence client I talked about how this could be done but gave no example code. In this posting I will explain how to do this in more detail and provide the code to a working example that you can modify to use your own objects. In the example a .NET client loads Stock objects into a Coherence cache and then randomly updates the Stock prices. An Excel spreadsheet can then be opened that is a client for the Coherence cache, receiving changes to the Stock objects – the price changes. The prices can also be updated from the spreadsheet, so the communication is two-way.

Stock price updates are pushed down to the Excel spreadsheet using the standard RTD (Real Time Data) server mechanism. As for the cache updates from Excel, they are sent through a Coherence COM interface. VBA User Defined Functions (UDFs) provide the interface in Excel for specifying where event data goes or which cell updates get mapped onto cache objects. These are added to Excel via the Add-in option from the Tools menu. The UDFs also hide some of the complexity of the interfaces from users. Here are the 3 UDFs that provide the interface to Coherence.

For updating properties of Stock objects in the cache:

UpdateDoubleProperty("dist-stocks",E4,"ORCL","StockPrice")
UpdateStringProperty("dist-stocks",E4,"ORCL","StockPrice")

For receiving update events for Stock objects in the cache:

RealTimeData("dist-stocks","ORCL","StockPrice")

In the UDFs above “dist-stocks” is the name of the Coherence cache, “ORCL” is the key for the object you wish to update or receive change events from (this must be a string in the example) and “StockPrice” is the name of the property you want to update or get the latest value of. “E4” in UpdateDoubleProperty() is the cell to take the new property value from – when it changes.
If properties of objects in the cache that you wish to receive updates about or change are in nested objects, you can specify the target property via a “.”, or if the target is in an object nested in a List or array you can use the “[ ]” operator. So for instance if the cache contained a Portfolio object with nested Stock objects in an array, you could specify the target Stock price for one of the nested Stocks as “Stocks[2].StockPrice”. This would get the 3rd Stock in a property of the Portfolio object called “Stocks” and from that Stock the “StockPrice” property. At the moment the UDF functions only support String and Double properties – but they could easily be enhanced to support other property types. This property specification method can be nested as deeply as you like and provides a convenient way of specifying a target property as a string in Excel.

The easiest way to see how this all works in practice is to download the example and try it out – there is a readme.txt file in the example that explains how to set it up and try it out. Some of the things you will need to run the example are:

- Coherence for Java and .NET 3.4.2. It should work with more recent versions, though this is the release I have tested the example against.
- Java JDK 1.4.2 (or above)
- Microsoft Visual Studio 2008 and .NET 3.5. I tried to build the example using Visual Studio Express and via the command line but was unable to specify that the Coherence configuration files be embedded resources. Also someone has successfully back-ported the example to Visual Studio 2005 (though I don’t have the code).
- Excel 2003. It should work with later versions, though I am not sure when Excel started supporting the RTD mechanism.
- Office XP Primary Interop Assemblies (for Office 2003). This is needed for the Excel integration assemblies.

Have fun trying this out and if anyone has any additional thoughts, recommendations or feedback I’d be keen to hear them.
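The “Stocks[2].StockPrice” convention above can be illustrated with a small parsing sketch. This is not the example's actual implementation (which resolves the steps against the cached .NET objects), just a hypothetical Java illustration of how such a path string splits into navigation steps, where each step is either a property name or a list/array index:

```java
import java.util.ArrayList;
import java.util.List;

// Illustration only: split a property path like "Stocks[2].StockPrice" into
// the steps a resolver would walk ("Stocks", index 2, "StockPrice").
public class PropertyPathDemo {
    // Parse "Stocks[2].StockPrice" into ["Stocks", "2", "StockPrice"],
    // where a purely numeric step means "index into the previous property"
    static List<String> parse(String path) {
        List<String> steps = new ArrayList<>();
        for (String part : path.split("\\.")) {
            int bracket = part.indexOf('[');
            if (bracket < 0) {
                steps.add(part);                                  // plain property
            } else {
                steps.add(part.substring(0, bracket));            // property name
                steps.add(part.substring(bracket + 1,
                                         part.indexOf(']')));     // index step
            }
        }
        return steps;
    }

    public static void main(String[] args) {
        System.out.println(parse("Stocks[2].StockPrice")); // [Stocks, 2, StockPrice]
    }
}
```
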


Using Coherence as a Cache from a J2EE Web Application

Although Coherence is often used to manage Servlet session state for J2EE applications – without changing the application code – it can also be used by J2EE Web applications to access data in a simple cache. This article discusses how to do this and how you should package your Web application so that it is a completely self-contained Coherence client. Although the principles should work for all J2EE application servers, you will not be surprised to learn that I have only tested this on WebLogic Server (10.3).

The Web application outlined below is a Coherence*Extend client, which means that it connects over a TCP/IP connection to the Coherence cluster (which uses UDP for inter-node communications) holding the cache data. A Proxy service running either on one or more of the cache nodes (or in a separate JVM) is used to pass requests and responses between the client and the Coherence cluster – in a similar fashion to the way an Oracle Database listener works. Note, the client could also be a Coherence Data Client, which means that the Web client would actually be part of the Coherence cluster. In this case the client would need to be ‘storage disabled’ through a system property or through the tangosol-coherence-override.xml configuration file. This means that although it will be part of the cluster it will not actually hold any data, so when the client is deployed or un-deployed there will be no cluster overhead re-organising cache data.

Accessing the configuration files

You really have a choice here: put them in the packaged Web application or reference them externally through the CLASSPATH or via a system property. Here I am going to package them with the Web application to make it completely self-contained.
The code to do this is shown below:

  /**
   * <p>Reference to ConfigurableCacheFactory that can be used to
   * create a cache.</p>
   */
  private ConfigurableCacheFactory factory = null;

  /**
   * @inheritDoc
   */
  public void init(ServletConfig config)
    throws ServletException
  {
    super.init(config);
    // Create DefaultConfigurableCacheFactory using cache configuration file
    // in classes dir
    factory =
        new DefaultConfigurableCacheFactory("/config/cache-config.xml",
                                            getClass().getClassLoader());
  }

Here a DefaultConfigurableCacheFactory is used to read the configuration file used to set up the Coherence client parameters. All the configuration files that the Coherence client needs will be stored in the /WEB-INF/classes/config directory of the Web application. These are:

- cache-config.xml, indicating where the Coherence cluster can be contacted
- pof-config.xml, indicating the mapping between client and server objects. As both the client and server are Java only one object needs to be defined.
- tangosol-coherence-override.xml, encapsulating many of the Coherence settings that are often specified as system properties.

The cache is accessed in the doGet(..) method of the Servlet as shown below:

  /**
   * @inheritDoc
   */
  public void doGet(HttpServletRequest request,
                    HttpServletResponse response)
    throws ServletException, IOException
  {
    // Create response
    response.setContentType(CONTENT_TYPE);
    PrintWriter out = response.getWriter();
    out.println("<html>");
    out.println("<head><title>CoherenceServlet</title></head>");
    out.println("<body>");
    // Get Coherence cache
    NamedCache cache =
      factory.ensureCache("stock-cache", getClass().getClassLoader());
    // Get Stock object from cache
    Stock s = (Stock) cache.get("ORCL");
    double price = 0;
    if (s != null)
    {
      price = s.getPrice();
    }
    // Output stock price in response
    out.println("<p>The latest Oracle Stock Price is: <b>" + price +
                "</b></p>");
    out.println("</body></html>");
    out.close();
  }

The other source file that needs to be deployed with the Servlet is the Stock object that is being cached – a separate Java application is used in this scenario to add a Stock object to the “stock-cache” and then continually update it. This means that if the content displayed by the Servlet is refreshed in a browser a new stock price will appear.

Packaging it all up

To package it all up you naturally wrap the Servlet and Stock classes – along with the Coherence configuration files and supporting web configuration files – in a WAR file. This WAR file then needs to be wrapped in an EAR file with the coherence.jar file and a reference to this JAR in the MANIFEST.MF file as follows:

Class-Path: lib/coherence.jar

Note in the Eclipse project the coherence.jar file is in the base dir of the EAR file.
The complete structure of the EAR file is shown below. When the application is deployed and run you should see something like this, which should change if you keep refreshing.

Summary

That’s it. You should now be able to create a cache, put some data into it and then access it from the above Servlet. The full source code for this example can be found here. There is a JDeveloper 11g project and an Eclipse Europa project. There are some batch files to run-up a cache server and a separate client to insert and update Stock data, but unfortunately they are only in the JDeveloper project – at the moment. See the readme.txt file for details of how to set things up.


Using Excel as a Real-Time client for Coherence

Often Coherence is thought of as a Java data caching product, with the server side component being where the data is kept. But the client can not only be a Java application, it can also be a native .NET or C++ application. The native C++ application can also run on Windows, Linux or Solaris. So getting back to .NET. Since the .NET client is just a native C# or VB.NET application (no Java) you can do some nice things, like use the Real Time Data (RTD) Server API to integrate it with Excel, turning Excel into a real-time Coherence client. If you want to see a Flash demonstration then click here to view it in action. The screen shot of the demonstration shows how a constantly updating graph (shown below) of stock price information can be displayed in Excel. The data which is constantly changing, in this case stock prices, is held in a Coherence cache. A separate client application is changing the stock prices, and these changes in the data cache are then being pushed to the .NET Coherence client (RTD Server) and on to the Excel spreadsheet. The RTD Server is a C# .NET client packaged as a .NET Assembly/DLL and then registered as a COM object. The example in the demonstration is pretty flexible but the next version will be enhanced to provide a greater range of filtering. Now the example being shown in the demonstration is a first cut (developed with help from Aleksandar Seovic). An improved version will be published very soon with a fuller explanation about how it works, along with the code itself.



Coherence - Up and running in 15 minutes!

What is Coherence and why would you use it? Oracle Coherence stores data in-memory so access to it is very fast. Therefore, you would commonly use Coherence to provide fast access to data. Typically this is necessary in applications where performance and latency are key.

How does it work?

Oracle Coherence is a small Java program that stores data in lookup tables, key/value pairs, or a Map of Java objects in memory (or a cache in Coherence terminology). "Well I can do that myself", I can hear you say, "why do I need Coherence?". Well, if you do it yourself and create a lookup table in your own Java program, what happens if the Java program crashes? All your data in-memory is lost. You could persist the data to disk or a database as soon as it changes, but then that would degrade performance. Another problem is what happens when you need to store more data than the Java program has access to (i.e. in its heap)? Well, you could increase the amount of memory the Java program has available, but if you are running on a 32-bit Operating System (OS) this will be limited to 2 GB, and even if you are on a 64-bit OS this can cause problems. Java programs which use large amounts of memory can suffer from unpredictable pauses, anything upwards from 1-2 seconds depending on the amount of memory used. This happens when the Java Virtual Machine (JVM) tidies up all the un-used data (or performs a full Garbage Collection (GC)). For applications that need to provide a consistent level of service, e.g. a customer facing web application, this is often unacceptable.

Oracle Coherence addresses both of these problems. When lots of Coherence Java programs (or cache servers) are started on one or more connected computers they 'glue' themselves together (to form a cluster) using multi-cast (or a specified list of IP addresses). The lookup table, or cache, can then be spread or partitioned across all the Coherence Java programs (nodes) in the cluster.
More nodes can be started to increase the size of the cache: you just have to start more nodes and they will automatically join the cluster; no re-start is required. Since a cache can be built from lots of nodes with a relatively small memory footprint, processing outages when the JVM is performing memory housekeeping chores (GCs) are also greatly reduced, as you can run lots of JVMs with a small memory footprint and the effect of any particular JVM GC is diluted across the whole cluster. To address the problem of losing data if a node, or a server running a number of nodes, crashes, Coherence can keep backup copies of data on different nodes and servers. All changes to the primary copy of a value are synchronously made to the backup, to make sure they are always the same, and if either is lost a new primary or backup is created from the remaining value. Furthermore, Coherence will automatically re-organise the partitioned data in a cache if a failure occurs, to ensure that it is re-balanced across the cluster, and any in-flight client transactions or queries will still complete successfully. So simplistically Coherence is just an in-memory lookup table. The clever thing that Coherence does is enable the lookup table to grow and be resilient.

How do I use it?

Well, that's enough background information. To use Coherence you only need the Java runtime (1.4+). You can download the Oracle JRockit Java runtime and Coherence from the Oracle Technology Network:

- Coherence can be downloaded from here. To install Coherence you just unzip it - that's it!
- The Oracle JRockit JVM can be downloaded from here (if you do not already have Java installed).

In addition to being able to put data in a lookup table and look it up, Coherence also supports queries, aggregations, transactions, events and other features. Here we will just concentrate on putting data in a cache and looking it up - to keep things quick and simple. Coherence works in a sort of client-server mode.
I say sort of because a client will make a direct request to the node in the cluster which has the data it wants, rather than going through a central server node or lookup service. This also means that Coherence has no 'single point of failure' or 'single point of bottleneck'. Every node in a cluster is effectively equal and is just responsible for its own data.

So what does a Coherence client look like? Well, the object we are going to cache looks like this:

package com.oracle.coherence.demo;

import java.io.Serializable;
import java.util.Date;

/**
 * Order class representing an order for shares
 */
public class Order
  implements Serializable
{
  private String symbol;
  private int quantity;
  private double amount;
  private Date timeStamp;

  /**
   * Default constructor
   */
  public Order()
  {
  }

  /**
   * Constructor
   * @param symbol the Order symbol, e.g. ORCL
   * @param quantity the number of shares
   * @param amount the amount of the shares to buy at
   * @param timeStamp a time stamp
   */
  public Order(String symbol, int quantity, double amount, Date timeStamp)
  {
    this.symbol = symbol;
    this.quantity = quantity;
    this.amount = amount;
    this.timeStamp = timeStamp;
  }

  /**
   * Gets the symbol
   * @return the symbol
   */
  public String getSymbol()
  {
    return symbol;
  }

  /**
   * Sets the new symbol
   * @param symbol the new symbol value
   */
  public void setSymbol(String symbol)
  {
    this.symbol = symbol;
  }

  /**
   * Gets the quantity
   * @return the quantity
   */
  public int getQuantity()
  {
    return quantity;
  }

  /**
   * Sets the new quantity
   * @param quantity the new quantity
   */
  public void setQuantity(int quantity)
  {
    this.quantity = quantity;
  }

  /**
   * Gets the amount
   * @return the amount
   */
  public double getAmount()
  {
    return amount;
  }

  /**
   * Sets the amount
   * @param amount the new amount
   */
  public void setAmount(double amount)
  {
    this.amount = amount;
  }

  /**
   * Gets the time stamp
   * @return the time stamp
   */
  public Date getTimeStamp()
  {
    return timeStamp;
  }

  /**
   * Sets the new time stamp
   * @param timeStamp the new time stamp
   */
  public void setTimeStamp(Date timeStamp)
  {
    this.timeStamp = timeStamp;
  }
}

The Coherence client application looks like this:

package com.oracle.coherence.demo;

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

import java.util.Date;

/**
 * Simple Coherence cache client example
 */
public class SimpleCoherenceClient
{
  /**
   * Main method
   * @param args not used
   */
  public static void main(String[] args)
  {
    // Get a handle/reference to a cache by name - which does not have to exist
    NamedCache namedCache = CacheFactory.getCache("test");

    // Create an object value to put in the cache
    Order value = new Order("ORCL", 400, 19.4, new Date());

    // Create a key for the object
    Integer key = new Integer(0);

    // Put an item into the cache
    namedCache.put(key, value);

    // Retrieve the object based upon its key
    Order order = (Order) namedCache.get(key);

    System.out.println("------------------------------------------------------");
    System.out.println("Retrieved order: " + "symbol=" +
                       order.getSymbol() + ", amount=" +
                       order.getAmount() + ", quantity=" +
                       order.getQuantity() + ", timestamp=" +
                       order.getTimeStamp());
    System.out.println("------------------------------------------------------");

    // Shutdown client connection to cache
    CacheFactory.shutdown();
  }
}

As you can see, the client code to add and retrieve an object from a Coherence cache is trivial. The above example can be downloaded from here, with all the files ready to run it. Look at the readme.txt file for instructions.
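To try the example end to end, you might launch a storage-enabled cache server and then the client along these lines. The jar location, cache configuration file name and classpath layout below are assumptions for a typical installation - substitute your own paths:

```shell
# Start a storage-enabled cache server (DefaultCacheServer ships with Coherence).
# The coherence.jar path and configuration file name are assumptions -
# use the locations from your own installation.
java -cp /opt/coherence/lib/coherence.jar \
     -Dtangosol.coherence.cacheconfig=my-cache-config.xml \
     com.tangosol.net.DefaultCacheServer

# In another console, run the client shown above against the same configuration.
java -cp /opt/coherence/lib/coherence.jar:classes \
     -Dtangosol.coherence.cacheconfig=my-cache-config.xml \
     com.oracle.coherence.demo.SimpleCoherenceClient
```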
When you run a Coherence client, all you need to do is include the Coherence JAR files in the CLASSPATH and either include the XML configuration file in the CLASSPATH or indicate its location through a system property. That's it. Through multi-cast the client will locate the cluster on the default port. Using a specific IP address etc. for clustering is simply a matter of overriding the default configuration file settings.

So how fast is Coherence? That depends on how much data you need to access, what your network is like, how fast your hardware is, etc., but it is usually run on commodity hardware (blades and GBit Ethernet). For more information on performance and scalability, see the results of our internal tests here.

How much data can Coherence store? Some Coherence customers have caches with hundreds of GB of data and hundreds of nodes. Others only cache a few GB of data.

Can I save cache entries to a database? Yes. Coherence provides the facility to persist cache data to any database (using JPA, Hibernate or TopLink), synchronously or asynchronously. It's also very easy to plug in persistence to everything from a file system to a mainframe.

I use .NET or C++, not Java. Can I still use Coherence? Coherence also comes with native .NET and C++ client libraries. When using these technologies, .NET or C++ objects are created to map onto Java objects, and Coherence transforms the .NET or C++ objects to and from the Java objects when the clients communicate with the Java cache servers.

There is obviously a lot more to Coherence than has been covered here, but hopefully this has given you a flavour of how to use it. For more information about the other features mentioned, examples, white papers etc., go to the Coherence home page on the Oracle Technology Network.
