Tuesday Aug 03, 2010

Coherence Rolling Upgrades

Oracle Coherence Rolling Upgrades

Oracle Coherence provides a highly available, resilient in-memory cache that gives fast access to frequently used data. It is increasingly becoming a key component of systems that need to be online 24x7 with little or no opportunity to make changes offline. This document outlines a number of techniques that can be used to upgrade a Coherence cluster online, without the need to stop the consumers of its data.

How can Coherence be upgraded online?

Typically this is done in a rolling upgrade, where each node or JVM in a cluster is stopped, its configuration or CLASSPATH changed, and then it is re-started. By cycling through all of the nodes or JVMs in the cluster, they can all be migrated to a new version of the Coherence binaries, a new configuration or a new version of custom classes.

Although Coherence has been designed to handle nodes or JVMs dynamically leaving and re-joining a cluster, the internal data recovery and re-balancing that Coherence performs takes time. Coherence can usually detect the failure of a node or JVM within a second, but recovering and re-balancing the data can take a few seconds – depending on the amount of data it held, the speed of the cluster's network and how busy the cluster is.

So a rolling re-start must be managed. Every Coherence distributed service exposes a JMX attribute called StatusHA. It indicates the status of the data managed by that service, i.e. the data in the caches the service manages. It can have three values: MACHINE-SAFE, when the data in the caches managed by the service has been copied to more than one machine, so that no data will be lost if a machine fails; NODE-SAFE, when multiple copies of the data exist on more than one node or JVM; and ENDANGERED, when some of the data it manages is held on only one node or JVM.

In a rolling re-start of a cluster, the next node must not be re-started while the StatusHA of any distributed service is ENDANGERED, or data may be lost if that node or JVM contains the only copy of a piece of un-recovered data.
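The StatusHA check can be automated against the cluster's JMX management node. Below is a minimal sketch using the standard Java JMX API, assuming a connection to a Coherence management node has already been established; "Coherence:type=Service,*" is the usual name pattern for the service MBeans that expose StatusHA, and in practice non-partitioned services, which do not report a meaningful StatusHA, may need to be filtered out.

```java
import java.util.Set;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

// Sketch: returns false while any Coherence distributed service reports
// an ENDANGERED StatusHA, i.e. while it is unsafe to stop the next node.
class SafeToRestart {
    static boolean safeToRestart(MBeanServerConnection mbs) throws Exception {
        // Pattern for the clustered service MBeans registered by Coherence
        Set<ObjectName> services =
            mbs.queryNames(new ObjectName("Coherence:type=Service,*"), null);
        for (ObjectName service : services) {
            Object status = mbs.getAttribute(service, "StatusHA");
            if ("ENDANGERED".equals(status)) {
                return false; // some data currently has only one copy
            }
        }
        return true;
    }
}
```

A rolling re-start driver would poll this method, sleeping between checks, before stopping each JVM.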

Some changes can also be made dynamically, with no node or JVM re-start, for instance through JMX. However, it should be noted that these changes are not persisted and so will not survive node re-starts. The following changes can be made dynamically through JMX (as of release 3.5.2):

Cluster Node Settings

  • BufferPublishSize – The buffer size of the unicast datagram socket used by the Publisher, measured in the number of packets. Changing this value at runtime is an inherently unsafe operation that will pause all network communications and may result in the termination of all cluster services.

  • BufferReceiveSize – The buffer size of the unicast datagram socket used by the Receiver, measured in the number of packets. Changing this value at runtime is an inherently unsafe operation that will pause all network communications and may result in the termination of all cluster services.

  • BurstCount – The maximum number of packets to send without pausing. Anything less than one (e.g. zero) means no limit.

  • BurstDelay – The number of milliseconds to pause between bursts. Anything less than one (e.g. zero) is treated as one millisecond.

  • LoggingFormat – Specifies how messages will be formatted before being passed to the log destination.

  • LoggingLevel – Specifies which logged messages will be output to the log destination. Valid values are non-negative integers, or -1 to disable all logger output.

  • LoggingLimit – The maximum number of characters that the logger daemon will process from the message queue before discarding all remaining messages in the queue. Valid values are integers in the range [0...]. Zero implies no limit.

  • MulticastThreshold – The percentage (0 to 100) of the servers in the cluster that a packet will be sent to, above which the packet will be multicasted and below which it will be unicasted.

  • ResendDelay – The minimum number of milliseconds that a packet will remain queued in the Publisher's re-send queue before it is resent to the recipient(s) if the packet has not been acknowledged. Setting this value too low can flood the network with unnecessary repetitions. Setting the value too high can increase the overall latency by delaying the re-sends of dropped packets. Additionally, a change to this value may need to be accompanied by a change to the SendAckDelay value.

  • SendAckDelay – The minimum number of milliseconds between the queueing of an Ack packet and the sending of the same. This value should not be more than half of the ResendDelay value.

  • TrafficJamCount – The maximum total number of packets in the send and resend queues that forces the publisher to pause client threads. Zero means no limit.

  • TrafficJamDelay – The number of milliseconds to pause client threads when a traffic jam condition has been reached. Anything less than one (e.g. zero) is treated as one millisecond.

Point-to-point Settings

  • ViewedMemberId – The Id of the member being viewed.

Service Settings

  • RequestTimeoutMillis – The default timeout value in milliseconds for requests that can be timed-out (e.g. implement the com.tangosol.net.PriorityTask interface), but do not explicitly specify the request timeout value.

  • TaskHungThresholdMillis – The amount of time in milliseconds that a task can execute before it is considered hung. Note that a posted task that has not yet started is never considered hung.

  • TaskTimeoutMillis – The default timeout value in milliseconds for tasks that can be timed-out (e.g. implement the com.tangosol.net.PriorityTask interface), but do not explicitly specify the task execution timeout value.

  • ThreadCount – The number of threads in the service thread pool.

Cache Settings

  • BatchFactor – Used to calculate the 'soft-ripe' time for write-behind queue entries. A queue entry is considered to be 'ripe' for a write operation if it has been in the write-behind queue for no less than the QueueDelay interval. The 'soft-ripe' time is the point in time prior to the actual 'ripe' time after which an entry will be included in a batched asynchronous write operation to the CacheStore (along with all other 'ripe' and 'soft-ripe' entries). This attribute is only applicable if asynchronous writes are enabled (i.e. the value of the QueueDelay attribute is greater than zero) and the CacheStore implements the storeAll() method. The value is expressed as a percentage of the QueueDelay interval. Valid values are doubles in the interval [0.0, 1.0].

  • ExpiryDelay – The time-to-live for cache entries in milliseconds. A value of zero indicates that automatic expiry is disabled. Changing this attribute will not affect the already-scheduled expiry of existing entries.

  • FlushDelay – The number of milliseconds between cache flushes. A value of zero indicates that the cache will never flush.

  • HighUnits – The limit of the cache size measured in units. The cache will prune itself automatically once it reaches its maximum unit level. This is often referred to as the 'high water mark' of the cache.

  • LowUnits – The number of units to which the cache will shrink when it prunes. This is often referred to as the 'low water mark' of the cache.

  • QueueDelay – The number of seconds that an entry added to a write-behind queue will sit in the queue before being stored via a CacheStore. Applicable only for the WRITE-BEHIND persistence type.

  • RefreshFactor – Used to calculate the 'soft-expiration' time for cache entries. Soft-expiration is the point in time prior to the actual expiration after which any access request for an entry will schedule an asynchronous load request for the entry. This attribute is only applicable for a ReadWriteBackingMap which has an internal LocalCache with scheduled automatic expiration. The value is expressed as a percentage of the internal LocalCache expiration interval. Valid values are doubles in the interval [0.0, 1.0]. If zero, refresh-ahead scheduling will be disabled.

  • RequeueThreshold – The maximum size of the write-behind queue for which failed CacheStore write operations are requeued. If zero, write-behind requeueing is disabled. Applicable only for the WRITE-BEHIND persistence type.

Management Settings

  • ExpiryDelay – The number of milliseconds that the MBeanServer will keep a remote model snapshot before refreshing.

  • RefreshPolicy – The policy used to determine the behavior when refreshing remote models. Valid values are: refresh-ahead, refresh-behind, refresh-expired, refresh-onquery. Invalid values will convert to 'refresh-expired'.

Other changes may need a parallel cluster to be available, so that clients can be failed over to it and failed back once the original cluster has been completely shut down and re-started.

The process for upgrading an application that uses Coherence starts at the cache store (usually a database) level and then moves through the dependent tiers, the cluster and then its clients, until everything has been upgraded.

This document will outline what types of changes can be made to Coherence online and how they can be made.

What types of online changes can be made to a Coherence cluster?

The Coherence binaries/JAR files

Minor upgrades to the Coherence binaries can be done online in a rolling fashion. This also applies to patches to the Coherence binaries, so a patch release upgrade (a change to the Patch component of a Major.Minor.Pack:Patch version) can be done online. For major and pack release upgrades a complete cluster shutdown will be required. This is because a major or pack release will likely not be compatible at the TCMP protocol boundary (there are plans to document these compatibility boundaries).

To perform a major upgrade to the Coherence binaries, e.g. from 3.4 to 3.5, a parallel and identical cluster must be available. Usually this will be a Disaster Recovery (DR) site or another cluster setup in an active-active configuration using the Push Replication feature so that the data on each is synchronised.

For Extend clients the procedure will first require clients using the primary site to be re-directed to the DR or other active site. Any data that has not yet been replicated can be pulled through from the primary site if necessary. Once all the active clients have been re-directed and the changes on the primary site fully replicated, the primary site can be shut down, the major release of the binaries placed on the CLASSPATH and the cluster re-started. Finally, it can be re-synchronized with the other active-active site and the clients re-directed back.

For data clients, i.e. storage-disabled TCMP clients, it is not possible to re-direct them to another cluster in this way.

Extend clients

Forward compatibility is from client to server, i.e. from Extend client to cluster. For example, a 3.4 client can connect to a 3.5 or 3.6 proxy, but a 3.6 client cannot connect to a 3.4 proxy. In addition, this requires the use of POF throughout, since POF supports backwardly compatible serialization changes.

It should be noted that during an online upgrade new clients must not access a Coherence cluster until all of its nodes or JVMs have been upgraded.

Coherence Configuration Changes

Cache configuration changes can be made either in the XML configuration file or through JMX. JMX enables a limited set of changes to be made online, such as changing the logging level (see above for the full list of online changes that can be made). These are made at a node or JVM level. Management tools, like Oracle Enterprise Manager, can enable JMX changes to be propagated to all nodes in a cluster, but they are not persisted. Where JMX does not expose a configuration setting as an attribute that can be modified, the change must be made to the cache or override configuration file and the cluster re-started in a rolling fashion. It should be noted that not all changes can be made in a rolling fashion, for instance changes to the cluster ports and addresses.
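As an illustration, the logging level could be raised on every node through JMX. This is a sketch only, assuming a connection to the cluster's JMX management node; "Coherence:type=Node,*" is the standard pattern for the per-node MBeans, and LoggingLevel is one of the dynamically modifiable attributes listed earlier. As noted, the change is not persisted and will not survive a re-start.

```java
import java.util.Set;
import javax.management.Attribute;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

// Sketch: set the LoggingLevel attribute on every cluster node MBean.
// Returns the number of nodes that were changed.
class ClusterLoggingLevel {
    static int set(MBeanServerConnection mbs, int level) throws Exception {
        Set<ObjectName> nodes =
            mbs.queryNames(new ObjectName("Coherence:type=Node,*"), null);
        for (ObjectName node : nodes) {
            mbs.setAttribute(node, new Attribute("LoggingLevel", level));
        }
        return nodes.size();
    }
}
```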

Portable Object Format (POF) changes must be synchronised with any CLASSPATH changes on both the cluster and the clients using it. Changes to the POF configuration can be made in a rolling fashion, but the classes listed must be added to the CLASSPATH at the same time. Furthermore, the whole cluster must be upgraded before clients using any new classes access it. Changes must also be both backwardly and forwardly compatible. So, for instance, a POF entry can be removed if it is for a class that represents a parameter to an entry processor that will no longer be used by any client; if older clients are still active, though, it must be retained.
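For example, introducing a new cache class means adding a <user-type> entry like the following to the POF configuration file of every node and client. The type-id (1001) and class name below are purely illustrative; whatever id is chosen must be identical everywhere, and ids below 1000 are reserved for Coherence itself.

```xml
<!-- Added to the <user-type-list> of pof-config.xml (illustrative values) -->
<user-type>
  <type-id>1001</type-id>
  <class-name>com.example.DummyObject</class-name>
</user-type>
```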

Database Schema Changes

Many Coherence clusters use a database to persist cache data. For the most part database changes must be “additive”, i.e. a new table column may be added, but one cannot be dropped while it still contains data. Some databases do provide sophisticated features to enable more dynamic changes to be made online, but they will not be discussed here, and nor will upgrading alternative cache store technologies.

Database changes must be made before any cache data is modified, so that any dependencies new cache classes have on the underlying database are in place when they are used.

Changing Cache Data

Here we are going to focus on how cache data classes can be changed online, rather than code that implements custom eviction policies, entry processors etc. These types of classes usually have no dependency on a database, so the changes can simply be made to the CLASSPATH and POF configuration (if necessary) and the cluster re-started in a rolling fashion.

To enable cache classes to be upgraded online they must be able to contain different versions of the same data. The Coherence Evolvable interface facilitates this. The Evolvable interface is implemented by classes that require forward and backward compatibility of their serialized form. Although evolvable classes can support evolution through the addition of data, changes that involve the removal or modification of data cannot be safely accomplished as long as a previous version of the class that relies on that data still exists.

When an Evolvable object is de-serialized, it retains any unknown data that has been added to newer versions of the class, and the version identifier for that data format. When the Evolvable object is subsequently serialized, it includes both that version identifier and the unknown future data.
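That behaviour can be sketched in plain Java, without the Coherence API. The stream layout below, a leading version number followed by the fields and then any unknown trailing data, is an illustrative simplification of what POF and Evolvable manage for you: a version 1 class reads data written by a version 2 class, keeps the bytes it does not understand, and writes them back out unchanged.

```java
import java.io.*;

// Sketch of the Evolvable idea: a version 1 class that round-trips
// fields added by a later version without understanding them.
class DummyV1 {
    static final int IMPL_VERSION = 1; // version this class understands
    int intVal;
    String str;
    int dataVersion = IMPL_VERSION;    // version of the stream last read
    byte[] futureData = new byte[0];   // unknown fields from newer versions

    void read(DataInputStream in) throws IOException {
        dataVersion = in.readInt();
        intVal = in.readInt();
        str = in.readUTF();
        // retain, rather than discard, anything written by a newer version
        ByteArrayOutputStream rest = new ByteArrayOutputStream();
        int b;
        while ((b = in.read()) != -1) {
            rest.write(b);
        }
        futureData = rest.toByteArray();
    }

    void write(DataOutputStream out) throws IOException {
        out.writeInt(Math.max(dataVersion, IMPL_VERSION));
        out.writeInt(intVal);
        out.writeUTF(str);
        out.write(futureData); // pass the unknown data through untouched
    }
}
```

Because the future data survives the round trip, a version 1 node can re-serialize an entry created by a version 2 client without losing the newer fields.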

For instance, in the rolling upgrade example version 1 of the class DummyObject holds an int and a String attribute, and implements the Evolvable interface.

Version 2 of DummyObject adds a new field, newStr.

In the readExternal() method there is a check to see which version of the object is being read. This is needed because if a version 1 stream of DummyObject were written to the dummy cache and then read by a version 2 client, there would be no newStr field value, as the object data was created by a version 1, not a version 2, client.
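That version check can be sketched with plain Java streams in place of the Coherence PofReader (an assumption made here for brevity): the version 2 reader only attempts to read newStr when the stream itself was written by a version 2 client, and otherwise falls back to a default.

```java
import java.io.*;

// Sketch of the version 2 readExternal() logic: cope with version 1
// streams, which do not contain the newStr field.
class DummyV2 {
    int intVal;
    String str;
    String newStr;

    void read(DataInputStream in) throws IOException {
        int dataVersion = in.readInt(); // version of the writing client
        intVal = in.readInt();
        str = in.readUTF();
        if (dataVersion >= 2) {
            newStr = in.readUTF();      // only present in version 2 streams
        } else {
            newStr = "";                // default for version 1 data
        }
    }
}
```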

Changing the Environment

Changes to a Coherence environment, like modifying the system properties of a JVM or even changing the physical resources of each server, can also be performed in a rolling fashion. If cluster resources are modified then ensure that they are still sufficient and balanced after the change.

Rolling Upgrade Script

Below is an example script outline for performing a rolling upgrade of a Coherence cluster to introduce a new version of a cache client while an existing version of the client is still running.

    Upgrade database schema
    Update Coherence nodes' CLASSPATH
    For each server
        For each JVM
            While the JMX StatusHA value of any service on any node is not MACHINE_SAFE
                Pause for 5s
            End while
            Stop / Start JVM
        End for
    End for
    Introduce new version of client

Potential Issues

Online upgrades of a Coherence cluster are not always necessary. Many systems using Coherence have maintenance windows where changes can be made offline, or are not online 24x7. Where this is the case and the cache data is persisted in an alternative data store, e.g. a database, an offline upgrade will be much simpler.

When considering an online upgrade of Coherence you should:

  • Script the upgrade process and thoroughly test it beforehand.

  • Ensure that the overhead of a rolling upgrade on your cluster is acceptable. When a cluster contains a lot of data you are effectively going to move all of the data around, twice. This can significantly degrade cluster performance if the cluster is heavily loaded with normal activity.

Rolling Upgrade Example

To accompany this document there is an example application with scripts to upgrade it online. The application consists of a simple class, DummyObject, that is used to hold cache values, containing an int and a String attribute.

The example has been tested against Coherence 3.5.3 for Java and a MySQL database. The scripts are simple bash scripts that can be run standalone on a laptop. The example is packaged as an Eclipse Galileo project, but the classes have been pre-compiled, so the Eclipse IDE is not essential.

The sequence for running the example is:

  • Download the necessary drivers for MySQL

  • Setup. Change the paths, IP addresses and ports in the scripts/env.sh file to reflect those of your system.

  • Use the script create.sql to create the base tables in MySQL

  • Start the cache servers. Change to the scripts directory and run the ./start.sh script. This will start 3 cache servers and 1 JMX node in the background.

  • Open another terminal and in the scripts directory run ./client1.sh to start version 1 of the client. This will connect to the cluster, put some objects in a cache called dummy and then continually update the string attribute to “dummy 1”.

  • Update the database schema by running the alter-table.sql script in the sql directory to add an additional column to the Dummy table.

  • Copy the new DummyObject (version 2) JAR file over the old one (version 1) – that adds a new field (newString) to the DummyObject class.

  • In a new terminal go to the scripts directory and run the script ./restart.sh to perform a rolling cluster re-start. It will return immediately, and its output is written to the file logs/rserver.log.

  • Open another terminal and in the scripts directory run the ./client2.sh script. This will run a version 2 client that updates version 2 of the DummyObject values in the dummy cache. It will update the string attribute to “Dummy 2” and also print out the value of the new string attribute – which will always be “”.


Many applications using Coherence have maintenance windows or are not required to run continuously, 24x7. However, some are. For these, the dynamic features of Coherence can be used to make online changes using rolling upgrades and side-by-side clusters. Even where downtime windows are available, these features allow critical changes to be made when needed. These techniques complement the existing high availability features of Coherence and make it a truly unique data caching platform.


There are 2 example Eclipse projects to support this document, which allow you to try out a simple rolling upgrade. The examples can be downloaded here.

Monday Nov 23, 2009

Caching PHP HTTP Sessions in Coherence


Thursday Oct 15, 2009

Look, no Java!


With release 3.5 of Oracle Coherence it is now possible to query, aggregate and modify serialized POF (Portable Object Format) values stored in a cache natively, that is, without writing a Java representation of the object. So .NET and C++ developers can just write C# (or another .NET language) and C++.

Note: With release 3.7.1 there is now no requirement to create corresponding Java classes even if you choose to use data affinity, and annotations can be used with Java, .NET and C++ clients instead of explicitly adding POF serialisation code. See the end of this article for further details.

This is achieved with the introduction of POF extractors and POF updaters, that fetch and update native POF objects without de-serializing the value to Java. As well as allowing .NET and C++ developers to just work in the language they are most comfortable and productive in, this new feature also has a number of other benefits:

  • It dramatically improves performance, as no de-serialization needs to take place, so no new objects need to be created – or garbage collected.
  • Less memory is required as a result.
  • The development and deployment process is simpler, as no corresponding Java classes need to be created, managed and deployed.

However, there are some occasions where you do still need to create complementary Java objects to match your .NET or C++ objects. These are:

  • When you want to use Key Association, as Coherence will always de-serialize keys to determine whether they implement KeyAssociation.
  • If you use a cache store – Coherence passes the de-serialized version of the key and value to the cache store to write to the back end, so that ORM (Object Relational Mapping) tools, like Hibernate and EclipseLink (JPA), have access to the Java objects.

To update a value in a cache from C# you would use code similar to that shown below:

    // Parameters: key, (extractor, new value)

    cache.Invoke(0, new UpdaterProcessor(new PofUpdater(BetSerializer.ODDS), 1)); 



Here the cache value identified by the key ‘0’ is being updated using an UpdaterProcessor. The value updater being used is the C# PofUpdater, which will access the attribute at offset BetSerializer.ODDS in the serialized POF value. Values held in a cache are held in a serialized format. When the POF serialization mechanism is used the values of an object are written to and read from the POF stream in the same order. So here a constant (BetSerializer.ODDS) is being used to provide a more readable representation for an offset like 3, i.e. the 3rd object value to be written and read from the POF stream.



For reading values from a cache in C# a similar approach is used, as shown below:



    // Query cache for all entries whose event name is 'FA Cup'

    EqualsFilter equalsFilter =
        new EqualsFilter(new PofExtractor(BetSerializer.EVENT_NAME), "FA Cup");

    Object[] results = cache.GetValues(equalsFilter);
    Console.WriteLine("Filter results:");

    for (int i = 0; i < results.Length; i++)
    {
        Console.WriteLine("Value = " + results[i]);
    }




Here all cache values that have an event name of “FA Cup” will be returned by the EqualsFilter. The C# PofExtractor is being used to extract the attribute at the offset BetSerializer.EVENT_NAME, which will be a number, like 4, indicating it was the 4th value to be written and read from the POF stream when the object it is part of was serialized.



This is a great new feature of Coherence and very easy to use. If you would like to see the full example the above extracts were taken from, you can download it from here. Happy coding - in C# and C++ ;).

Release 3.7.1 Update

There is now no longer any need to provide Java classes for classes used as keys in many circumstances. Prior to 3.7.1, if you had a custom key class (for example a Name class that might have multiple strings, like first and last name), then you needed to provide a corresponding Java class for that key class. That is no longer the case. With Coherence 3.7.1, keys are not deserialized by the cluster; they don't even need to be deserialized for key association. As of 3.7.1, key association checks are done at the Extend client (in this case .NET). This is covered in the Coherence documentation in Oracle Coherence Release Notes, Release 3.7.1, under 1 Technical Changes and Enhancements -> Coherence*Extend Framework Enhancements and Fixes (here).

POF Annotations have also been added in release 3.7.1. With POF Annotations, you can annotate classes that need to be POF serializable and no longer need to write serialization methods or separate serialization classes. POF Annotations for .NET classes are covered in the Oracle Coherence Client Guide, Release 3.7.1, under 18.9 Using POF Annotations to Serialize Objects (here).


Friday May 08, 2009

Using Coherence as a Cache from a J2EE Web Application


Friday Mar 13, 2009

Using Excel as a Real-Time client for Coherence


Views and ideas about Oracle Coherence and other software

