Friday Jan 31, 2014

Handling Large Objects (e.g., Audio/Video files) in Oracle NoSQL DB

Recently, I worked on a project that required using Oracle NoSQL Database to store large objects like audio and video files, and to make those files rapidly accessible through a web application. When I started developing the file retrieval from the KVStore, I was surprised by how poorly documented this scenario is, especially when you are dealing with LOBs. So I decided to document examples of how I solved it.

Writing LOB Values into KVStore

So you have a file stored in the file system and you would like to put it into the KVStore for later retrieval? The simplest way to accomplish this is to open an InputStream over the file and write its contents into the KVStore like this:

package com.oracle.bigdata.nosql;

import java.io.IOException;
import java.io.InputStream;
import java.util.List;
import java.util.concurrent.TimeUnit;

import oracle.kv.Durability;
import oracle.kv.KVStore;
import oracle.kv.Key;

public class HowToStoreFilesFromFileSystem {

	// The reference below should be injected
	// by some runtime IoC mechanism or initialized
	// through constructor or setter methods...
	
	private KVStore store;

	public void storeValue(List<String> majorKeys, List<String> minorKeys,
			InputStream inputStream) {

		Key key = null;

		try {

			key = Key.createKey(majorKeys, minorKeys);

			// Writes the stream as a LOB; the write is skipped if the key already exists
			store.putLOBIfAbsent(key, inputStream, Durability.COMMIT_NO_SYNC,
					10, TimeUnit.SECONDS);

		} catch (IOException ex) {

			throw new RuntimeException(
					"Error trying to store files into NoSQL Database", ex);

		}

	}

}
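As a usage illustration, here is a minimal sketch of how this method could be invoked with a file from the file system. The file path and the key components are hypothetical, and the KVStore reference inside HowToStoreFilesFromFileSystem is assumed to have been initialized elsewhere:

package com.oracle.bigdata.nosql;

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Arrays;

public class StoreFileExample {

	public static void main(String[] args) throws IOException {

		// Assumes the KVStore reference was injected or set beforehand
		HowToStoreFilesFromFileSystem storage = new HowToStoreFilesFromFileSystem();

		// Open a stream over the file and hand it to storeValue()
		try (InputStream inputStream = new FileInputStream("/tmp/sample-video.mp4")) {
			storage.storeValue(
					Arrays.asList("media", "videos"),  // major key components
					Arrays.asList("sample-video.lob"), // minor key components (LOB keys typically end with ".lob")
					inputStream);
		}

	}

}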

Reading LOB Values from the KVStore

Once you are ready to retrieve LOB values from the KVStore, perhaps to rebuild the file in the file system, you can proceed this way:

package com.oracle.bigdata.nosql;

import java.io.BufferedInputStream;
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.List;
import java.util.concurrent.TimeUnit;

import oracle.kv.Consistency;
import oracle.kv.KVStore;
import oracle.kv.Key;
import oracle.kv.lob.InputStreamVersion;

public class HowToLoadFilesFromKVStore {

	// The reference below should be injected
	// by some runtime IoC mechanism or initialized
	// through constructor or setter methods...

	private KVStore store;

	public File buildFileFromKVStore(List<String> majorKeys,
			List<String> minorKeys, String fileName) {

		Key key = null;
		InputStreamVersion isVersion = null;

		int _byte = 0;
		File file = null;
		InputStream inputStream = null;
		BufferedInputStream bis = null;
		ByteArrayOutputStream baos = null;
		FileOutputStream fos = null;

		try {

			key = Key.createKey(majorKeys, minorKeys);
			isVersion = store.getLOB(key, Consistency.NONE_REQUIRED, 10,
					TimeUnit.SECONDS);

			if (isVersion != null) {

				inputStream = isVersion.getInputStream();
				bis = new BufferedInputStream(inputStream);
				baos = new ByteArrayOutputStream();

				while ((_byte = bis.read()) != -1) {
					baos.write(_byte);
				}

				file = new File(fileName);
				fos = new FileOutputStream(file);
				fos.write(baos.toByteArray());
				fos.flush();

			}

		} catch (Exception ex) {

			throw new RuntimeException(
					"Error trying to get files from NoSQL Database", ex);

		} finally {

			// Closing the buffered stream also releases the underlying LOB stream
			if (bis != null) {

				try {
					bis.close();
				} catch (IOException ioe) {
					ioe.printStackTrace();
				}

			}

			if (fos != null) {

				try {
					fos.close();
				} catch (IOException ioe) {
					ioe.printStackTrace();
				}

			}

		}

		return file;

	}

}

In the example above, the file is completely rebuilt in the file system so it can be accessed by any application. Note that once you have the BufferedInputStream in place, you don't actually need to rebuild the file in the file system at all. You could instead hand those bytes to a mechanism that handles the stream in memory, taking full advantage of on-demand data delivery through the LOB streaming API.
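For instance, here is a minimal sketch of streaming the LOB directly to some OutputStream (such as a servlet response). The key handling mirrors the example above; the target OutputStream and the buffer size are assumptions, and the KVStore reference is again expected to be initialized elsewhere:

package com.oracle.bigdata.nosql;

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.List;
import java.util.concurrent.TimeUnit;

import oracle.kv.Consistency;
import oracle.kv.KVStore;
import oracle.kv.Key;
import oracle.kv.lob.InputStreamVersion;

public class HowToStreamFilesFromKVStore {

	// Should be injected or initialized elsewhere, as in the previous examples
	private KVStore store;

	public void streamValue(List<String> majorKeys, List<String> minorKeys,
			OutputStream target) {

		try {

			Key key = Key.createKey(majorKeys, minorKeys);
			InputStreamVersion isVersion = store.getLOB(key,
					Consistency.NONE_REQUIRED, 10, TimeUnit.SECONDS);

			if (isVersion != null) {

				// Copy the LOB in chunks instead of materializing it in memory
				InputStream inputStream = isVersion.getInputStream();
				byte[] buffer = new byte[8192];
				int bytesRead = 0;

				while ((bytesRead = inputStream.read(buffer)) != -1) {
					target.write(buffer, 0, bytesRead);
				}

				target.flush();
				inputStream.close();

			}

		} catch (IOException ex) {
			throw new RuntimeException(
					"Error trying to stream files from NoSQL Database", ex);
		}

	}

}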

Saturday Jan 18, 2014

Capturing Business Events for Future Playback through Oracle Event Processing Platform

At the heart of OEP application development is an understanding of the anatomy of the business events: their structure, how frequently they happen, their relationship with other types of events and, of course, their volume. Business events play a critical role in event-driven applications because they are the input for the problem that you are trying to solve. Without business events, an event-driven application would be like a car engine without fuel.

When you design the EPN (Event Processing Network) of an OEP application, you are actually expressing how the business events will be received, processed and perhaps transformed into some kind of output. Like any other technology, the behavior of this output depends heavily on which data you used as input. For one specific volume the EPN may work, for another it may not. With one specific ordering the EPN may work, with another it may not. It will only work when the right set of events is being used.

It is very common to deploy the first version of your OEP application and, after a while, have users start complaining about undesired results. This happens because no matter how many times you tested your application, you will never get close to the mass of events present in the customer environment. The ordering, volume, size, encoding and frequency of the events found there are always different from the mass of events you used during functional testing. If this happens to you, it is time to think about a way to record those events and bring them back to the development environment to figure out what is wrong.

This article explores the record and playback feature of Oracle Event Processing Platform. Through the examples shown here, you will be able to record and replay a mass of business events to simulate behavior that you may have forgotten to account for in your EPN. This feature is also pretty handy for prototyping scenarios: instead of bringing an event simulator with you, you can just replay a recorded set of events and demonstrate your OEP application.

Configuring the OEP Default Provider

For the record and playback features to work, OEP relies on a repository in which events will be stored. To make the developer's work easier, OEP ships with a default provider for this repository based on Berkeley DB. Inside every OEP domain there is an instance of Berkeley DB (aka "BDB") that can be adjusted through the domain's configuration file. In most cases you don't need to change anything before you start using the record and playback features, but there is one catch you should be aware of.

BDB stores each recorded event as an entry in this repository. Each entry has a default size of 1KB (1000 bytes), pre-defined at the moment you create the domain. The size of each entry matters for performance and for how BDB will grow its storage, so if you know the average size of your events, it is a good idea to declare the entry size in the domain configuration file. For instance, suppose that we are dealing with a mass of events with an average size of 32KB per event. You can adjust BDB by editing the domain configuration file located in <DOMAIN_HOME>/<SERVER_NAME>/config/config.xml:
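A minimal sketch of the relevant fragment looks like this. Only the bdb-config section and its cache-size property matter here; the surrounding structure of your config.xml may differ, so adapt the fragment to what you find in your domain:

	<bdb-config>
		<!-- cache-size is expressed in bytes; 32768 bytes = 32KB,
		     matching the average event size of this example -->
		<cache-size>32768</cache-size>
	</bdb-config>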

As you can see in the example above, you need to adjust the cache-size property of the bdb-config section. This property is measured in bytes, and its value represents the amount of memory that BDB needs to allocate for each entry.

Recording the Events

In order to record events, you need to access the OEP Visualizer Console of your domain. Log in to the console and navigate to the EPN of the OEP application whose events you want to record.

In the EPN of your OEP application you can right-click any of the elements, no matter whether it is an adapter, channel, processor or cache. It makes the most sense, however, to record events from the adapter's perspective: adapters are the genesis of your EPN, and all the events that flow through it come from them. Right-click the input adapter of your EPN and choose the "Record Event" option:

This opens the record tab of the adapter. At the bottom of the page, in a section named "Change Recording Schedule", you will find a button named "Add". Click on it.

Adding a new record configuration scheme enables the UI so you can fill in some fields. Enter the name of this recording session in the "DataSet Name" field; this value will be used to construct the physical BDB repository. You are also required to set the "Selected Event Type List" field, where you describe which incoming events you expect to record. After this you can click the "Save" button.

We are all set to start recording. Double-check that events are being sent to your OEP application and click the "Start" button. This starts a recording session, and all events received by the adapter will be recorded. At any point in time you can click the "Stop" button to end the recording session.

Playing Back the Events

To start the playback of any recorded session, go back to the EPN screen and right-click the channel that will receive the events. This brings you to the playback tab of the selected channel. Just like for the recording session, you need to add a playback configuration scheme by clicking the "Add" button.

The group of fields to fill in is almost the same. Pay attention that in the "DataSet Name" field you must enter exactly the name you used in the recording session: since you can have multiple recording sessions under different names, at playback time you need to tell OEP which session you would like to play back. Likewise, you need to say which events you would like to play back by filling in the "Selected Event Type List" field.

Optionally, you can set some additional parameters related to the behavior of the playback. In the "Change Playback Schedule Parameters" section you will find fields to define which interval of time you want to play back, useful when you recorded hours or days of event streaming and would like to reproduce just one piece of it. You can also define the speed at which events will be injected into the channel, handy for testing how your EPN behaves under high-velocity conditions. Finally, you can decide whether the playback repeats forever: since the recording session is finite, you can play it back as many times as you want, or simply set the "Repeat" field to true to automatically restart from the beginning when the recorded stream ends.

When you are done, save the playback configuration scheme and click the "Start" button to start the playback session according to your settings. A "Playing..." message will flash on the screen during the playback session. At any time you can stop the playback by clicking the "Stop" button.

All the configuration you made so far is stored in the OEP domain along with your application, so don't be afraid of restarting the OEP server and losing your work: all the settings will still be there when the server comes back to life. This holds as long as you don't undeploy your application from the OEP domain; if you undeploy it, all the settings are lost. If you want to make those settings permanent, you can define them inside your application itself. Access this link to learn how to configure a component to record events, and this link to learn how to configure a component to play back events.
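As a rough illustration of that last point, the component configuration packaged with the application could carry a recording definition similar to the sketch below. The element names follow the OEP component configuration schema as I recall it, so double-check them against the documentation linked above; the adapter name, event type and dataset name here are hypothetical:

	<jms-adapter>
		<name>inputAdapter</name>
		<event-type>MyEvent</event-type>
		<!-- Recording settings shipped with the application itself -->
		<record-parameters>
			<dataset-name>myRecordingSession</dataset-name>
			<event-type-list>
				<event-type>MyEvent</event-type>
			</event-type-list>
		</record-parameters>
	</jms-adapter>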

Wednesday Jan 08, 2014

Service Enablement of CORBA Applications through Oracle Service Bus

A quick look at the definition of an ESB tells us that one of its main responsibilities is, among other things, enabling existing systems to provide fresh new services, using the same or perhaps different protocols and/or contracts. This is where the ESB makes magical things happen: virtualizing existing services, making them available to the outside world, and hiding the details of the underlying system from the applications that will consume the service.

With this statement in mind, it is reasonable to expect that a good ESB must be able to handle the different types of systems and technologies found in legacy environments, no matter whether a given system was built last year, five years ago or even in the last decade. Those systems represent the assets of an organization in terms of its business building blocks, so there is a huge chance that they carry a substantial number of business services that could be leveraged by an SOA initiative.

CORBA is a very powerful distributed technology that was quite popular in the 90's and early 2000s. Many industries that demand a robust infrastructure to handle their business transactions, critical by nature and extremely sensitive in terms of performance and reliability, relied on CORBA as their implementation technology. It is pretty common to find communications companies such as internet providers, mobile operators and pre-paid services chains that built their foundation (also known as engineering services) on top of CORBA systems.

This article shows how to service enable CORBA systems through OSB, the ESB implementation from Oracle that is part of the SOA Suite. Through the steps shown here, you will be able to leverage existing CORBA systems and expose their business logic (defined as CORBA objects) over different transports, protocols and contracts, making the reuse of that business logic both possible and viable. This article will not cover any vendor-specific ORB, to keep the techniques reproducible in different contexts.

The Interface Definition Language

The definition of any CORBA object is written in a neutral description language called IDL, short for Interface Definition Language. For this example, I will assume that OSB will service enable a piece of functionality that sends SMS messages, currently implemented as an object of a CORBA system. The IDL of this object is described below:

module corbaEnablementExample
{

    struct Message
    {

        string content;
        string phoneId;

    };

    interface SMSGateway
    {

        oneway void send(in Message message);

    };

};

As you can see, this is a very simple object that accepts a message as its main parameter; the message has attributes representing the content to be sent as an SMS message and the mobile phone number that will receive it.
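The Java listings in the next sections import classes such as SMSGateway, SMSGatewayHelper and Message from the corbaEnablementExample module; these are the language bindings generated from the IDL above. Since this article deliberately avoids vendor-specific ORBs, one way to generate them, assuming the IDL is saved as SMSGateway.idl and you use the compiler bundled with the JDK, would be:

idlj -fall SMSGateway.idl

The -fall flag emits both client-side and server-side bindings, including the POA skeleton used by the servant in the next section.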

The CORBA Server Application

For the purposes of this article, it does not matter in which programming language the server part of the CORBA application is implemented. What really matters is in which ORB the CORBA server application will register its implementation. To illustrate the example, let's suppose that this CORBA object is implemented in Java.
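The server application relies on a servant class called SMSGatewayImpl. A minimal sketch of that servant, assuming the SMSGatewayPOA skeleton generated from the IDL, could look like this:

package corbaEnablementApplication;

import corbaEnablementExample.Message;
import corbaEnablementExample.SMSGatewayPOA;

public class SMSGatewayImpl extends SMSGatewayPOA {

	@Override
	public void send(Message message) {
		// A real system would hand the message over to an actual SMS gateway;
		// printing it here lets us observe the invocations in the server console
		System.out.println("Sending SMS to " + message.phoneId + ": " + message.content);
	}

}

The server application that registers this servant in the ORB is listed below.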

package corbaEnablementApplication;

import org.omg.CORBA.ORB;
import org.omg.CORBA.Object;
import org.omg.CosNaming.NameComponent;
import org.omg.CosNaming.NamingContextExt;
import org.omg.CosNaming.NamingContextExtHelper;
import org.omg.PortableServer.POA;
import org.omg.PortableServer.POAHelper;

import corbaEnablementExample.SMSGateway;
import corbaEnablementExample.SMSGatewayHelper;

public class ServerApplication {

	public static void main(String[] args) {
		
		ORB orb = null;
		Object tmp = null;
		POA rootPOA = null;
		SMSGatewayImpl impl = null;
		Object ref, objRef = null;
		SMSGateway href = null;
		NamingContextExt ncRef = null;
		NameComponent path[] = null;
		
		try {
			
			System.setProperty("org.omg.CORBA.ORBInitialHost", "soa.suite.machine");
			System.setProperty("org.omg.CORBA.ORBInitialPort", "8001");
			orb = ORB.init(args, null);
			
			tmp = orb.resolve_initial_references("RootPOA");
			rootPOA = POAHelper.narrow(tmp);
			rootPOA.the_POAManager().activate();
			
			impl = new SMSGatewayImpl();
			ref = rootPOA.servant_to_reference(impl);
			href = SMSGatewayHelper.narrow(ref);
			
			objRef = orb.resolve_initial_references("NameService");
			ncRef = NamingContextExtHelper.narrow(objRef);
			
			path = ncRef.to_name("sms-gateway");
			ncRef.rebind(path, href);
			
			System.out.println("----------------------------------------------------");
			System.out.println("  CORBA Server is Running and waiting for Requests  ");
			System.out.println("----------------------------------------------------");
			
			orb.run();
			
		} catch (Exception ex) {
			ex.printStackTrace();
		}
		
	}

}

The code listing above shows a CORBA server application that connects to an ORB available on TCP/IP port 8001. After retrieving the POA from the ORB, it gets access to the naming service that will be used to register the object implementation. Finally, the application binds the object implementation under the name "sms-gateway", the name by which the CORBA object will be known to the outside world. In order to test this CORBA server application, start an ORB on port 8001 and execute the program in a JVM. If you don't have any commercial ORB available, you can use the ORB that comes with the JDK. Just go to the /bin folder of your JDK and type:

orbd -ORBInitialHost soa.suite.machine -ORBInitialPort 8001

To check whether this remote object is working properly, you need to write a CORBA client application. Here is an example of a CORBA client written against the same IDL interface used by the server:

package corbaEnablementApplication;

import java.util.Random;
import java.util.UUID;

import org.omg.CORBA.ORB;
import org.omg.CORBA.Object;
import org.omg.CosNaming.NamingContextExt;
import org.omg.CosNaming.NamingContextExtHelper;

import corbaEnablementExample.Message;
import corbaEnablementExample.SMSGateway;
import corbaEnablementExample.SMSGatewayHelper;

public class ClientApplication {
	
	private static final Random random = new Random();

	public static void main(String[] args) {
		
		ORB orb = null;
		Object objRef = null;
		NamingContextExt ncRef = null;
		SMSGateway smsGateway = null;
		
		try {
			
			System.setProperty("org.omg.CORBA.ORBInitialHost", "soa.suite.machine");
			System.setProperty("org.omg.CORBA.ORBInitialPort", "8001");
			orb = ORB.init(args, null);
			
			objRef = orb.resolve_initial_references("NameService");
			ncRef = NamingContextExtHelper.narrow(objRef);
			
			smsGateway = SMSGatewayHelper.narrow(ncRef.resolve_str("sms-gateway"));
			Message message = createNewRandomMessage();
			smsGateway.send(message);
			
		} catch (Exception ex) {
			ex.printStackTrace();
		}
		
	}

	private static Message createNewRandomMessage() {
		String content = UUID.randomUUID().toString();
		String phoneId = String.valueOf(random.nextLong());
		Message message = new Message(content, phoneId);
		return message;
	}

}

The Business Services Layer

For OSB to get access to the remote object, it is necessary to create a mechanism that can translate the IIOP protocol (the protocol used in pure CORBA systems) into a protocol that OSB understands, which could be RMI/IIOP or plain RMI. The best way to accomplish that is to implement the wrapper pattern: write an EJB 3.0 service that encapsulates the CORBA remote object and delegates its service calls to it. The interface for this EJB 3.0 service could be as simple as this:

package com.oracle.fmw.soa.osb.corba;

public interface SMSGateway {
	
	public void sendMessage(String content, String phoneId);

}

The implementation of this EJB 3.0 service performs a job similar to the CORBA client application described previously, but it is quite different in how it connects to an ORB:

package com.oracle.fmw.soa.osb.corba;

import javax.annotation.PostConstruct;
import javax.annotation.Resource;
import javax.ejb.Remote;
import javax.ejb.Stateless;

import org.omg.CORBA.ORB;
import org.omg.CORBA.Object;
import org.omg.CosNaming.NamingContextExt;
import org.omg.CosNaming.NamingContextExtHelper;

import corbaEnablementExample.Message;
import corbaEnablementExample.SMSGatewayHelper;

@Remote(value = SMSGateway.class)
@Stateless(name = "SMSGateway", mappedName = "SMSGateway")
public class SMSGatewayImpl implements SMSGateway {
	
	private corbaEnablementExample.SMSGateway smsGateway;
	
	@Resource(name = "namingService")
	private String namingService;
	
	@Resource(name = "objectName")
	private String objectName;
	
	@PostConstruct
	@SuppressWarnings("unused")
	private void retrieveStub() {
		
		ORB orb = null;
		Object objRef = null;
		NamingContextExt ncRef = null;
		
		try {
			
			orb = ORB.init();
			objRef = orb.resolve_initial_references(namingService);
			ncRef = NamingContextExtHelper.narrow(objRef);
			smsGateway = SMSGatewayHelper.narrow(ncRef.resolve_str(objectName));
			
		} catch (Exception ex) {
			
			throw new RuntimeException("EJB wrapper failed in the retrieval of the CORBA stub.", ex);
			
		}
		
	}

	@Override
	public void sendMessage(String content, String phoneId) {
		
		smsGateway.send(new Message(content, phoneId));
			
	}

}

The code is very similar to the CORBA client application shown before, with one important difference: it carries no information about which ORB to connect to. In this case, the EJB will reside in a WebLogic JVM, and each WebLogic JVM ships with an ORB implementation out-of-the-box. So when you write EJB objects that wrap CORBA remote objects, you don't need to worry about which ORB to use: WebLogic already is an ORB.
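Note that the naming service name and the object name used by the bean are injected through the @Resource annotations. A minimal sketch of how those environment entries could be declared in the bean's ejb-jar.xml is shown below; the values mirror the names used earlier in this article and should be adapted to your deployment:

	<env-entry>
		<env-entry-name>namingService</env-entry-name>
		<env-entry-type>java.lang.String</env-entry-type>
		<env-entry-value>NameService</env-entry-value>
	</env-entry>
	<env-entry>
		<env-entry-name>objectName</env-entry-name>
		<env-entry-type>java.lang.String</env-entry-type>
		<env-entry-value>sms-gateway</env-entry-value>
	</env-entry>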


Note: as explained before, this article does not go into the details of any commercial ORB, for the sake of clarity. Keep in mind, though, that the steps shown here for stub retrieval can be quite different if you are using another ORB. Borland VisiBroker, for instance, has its own way to access the ORB through a service called "Smart Agent", which dynamically finds other objects in the network. IONA Orbix has yet another way to connect to an ORB, through the domain configuration location of the Orbix network.


Create a WebLogic domain, start one or more WebLogic managed servers, and run the CORBA server application again. Remember that the CORBA server application should now point to the WebLogic port, since the ORB should now be the one available in the WebLogic subsystem. If you check the JNDI tree of the WebLogic JVM, you should see something like this:

This means that the remote CORBA object was properly registered in the CosNaming service available in the WebLogic ORB. Package the EJB 3.0 implementation into a JAR or an EAR and deploy it in the same WebLogic JVM where the CORBA remote object was registered. Now we have everything in place to start the development of the OSB project. For the purposes of this article, I will assume that the EJB 3.0 object is available under the following JNDI name: "SMSGateway#com.oracle.fmw.soa.osb.corba.SMSGateway".

The OSB Project

On the OSB side, all you have to do is create a business service that points to one or more endpoints of the EJB 3.0 running in one or more servers of the WebLogic domain. In order to accomplish that, you need to teach OSB how to communicate with this foreign WebLogic domain. This is done by creating a JNDI provider in the OSB configuration scheme:


OSB also needs access to the EJB 3.0 interfaces (and any other helper classes) to instantiate client proxies, so you need to package all the EJB 3.0 artifacts (except, of course, the enterprise bean implementation) and deploy them in your OSB project:

Now we have everything in place. It is time to create the business service that will point to the EJB 3.0 wrapper. Create a business service and set its service type to "Transport Typed":

Configure the business service protocol as "EJB" and set its endpoint URI to the prefix "ejb:" followed by the name of the JNDI provider and the JNDI name of the EJB 3.0:
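For instance, assuming the JNDI provider created earlier was named corbaDomainJndiProvider (a hypothetical name) and using the JNDI name assumed previously, the endpoint URI would look something like this:

ejb:corbaDomainJndiProvider:SMSGateway#com.oracle.fmw.soa.osb.corba.SMSGateway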

Finally, you need to configure the client interface of the EJB 3.0 endpoint in the business service configuration page. Check the "EJB 3.0" checkbox and choose from the drop-down list which interface will be used for message communication.

Finish the creation of the business service and save the changes. You can now test your business service using the testing tool available on OSB:

After making a request to the business service using the OSB testing tool, you can check the CORBA server application log to see the results of the invocation. Here is an example:

With the business service in place, you can easily create one or more proxy services that access the remote CORBA object with minimal effort. From the OSB perspective, it is all about routing messages to the business service that you created; the fact that this business service wraps a CORBA remote object becomes irrelevant.

Whatever your use case, you now have the CORBA remote object available in OSB for virtually anything. You can expose it directly over one of the available transports, forward messages to it in the middle of your pipeline, use it as an enrichment mechanism through service callouts, or simply make the business service one of the choices of a dynamic routing. If you choose to expose this business service over a new protocol, you can play with SOAP, REST, HTTP, JMS, Email, Tuxedo, File and FTP with zero coding; OSB takes care of the protocol translation during message exchanges.

You can download the project artifacts created in this article here.

Monday Oct 28, 2013

Oracle Coherence, Split-Brain and Recovery Protocols In Detail

This article provides a high level conceptual overview of Split-Brain scenarios in distributed systems. It will focus on a specific example of cluster communication failure and recovery in Oracle Coherence. This includes a discussion on the witness protocol (used to remove failed cluster members) and the panic protocol (used to resolve Split-Brain scenarios).

Note that the removal of cluster members does not necessarily indicate a Split-Brain condition. Oracle Coherence does not (and cannot) detect a Split-Brain as it occurs, the condition is only detected when cluster members that previously lost contact with each other regain contact.

Cluster Topology and Configuration

To ground the discussion, let's assume a cluster topology and configuration. In this example we have a six-member cluster, consisting of one JVM on each physical machine. The member IDs are as follows:

Member ID   IP Address
1           10.149.155.76
2           10.149.155.77
3           10.149.155.236
4           10.149.155.75
5           10.149.155.79
6           10.149.155.78

Members 1, 2, and 3 are connected to a switch, and members 4, 5, and 6 are connected to a second switch. There is a link between the two switches, which provides network connectivity between all of the machines.


Member 1 is the first member to join this cluster, thus making it the senior member. Member 6 is the last member to join this cluster. Here is a log snippet from Member 6 showing the complete member set:

2010-02-26 15:27:57.390/3.062 Oracle Coherence GE 12.1.2.0.0 <Info> (thread=main, member=6): Started DefaultCacheServer...

SafeCluster: Name=cluster:0xDDEB

Group{Address=224.3.5.3, Port=35465, TTL=4}

MasterMemberSet
  (
  ThisMember=Member(Id=6, Timestamp=2010-02-26 15:27:58.635, Address=10.149.155.78:8088, MachineId=1102, Location=process:228, Role=CoherenceServer)
  OldestMember=Member(Id=1, Timestamp=2010-02-26 15:27:06.931, Address=10.149.155.76:8088, MachineId=1100, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:511, Role=CoherenceServer)
  ActualMemberSet=MemberSet(Size=6, BitSetCount=2
    Member(Id=1, Timestamp=2010-02-26 15:27:06.931, Address=10.149.155.76:8088, MachineId=1100, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:511, Role=CoherenceServer)
    Member(Id=2, Timestamp=2010-02-26 15:27:17.847, Address=10.149.155.77:8088, MachineId=1101, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:296, Role=CoherenceServer)
    Member(Id=3, Timestamp=2010-02-26 15:27:24.892, Address=10.149.155.236:8088, MachineId=1260, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:32459, Role=CoherenceServer)
    Member(Id=4, Timestamp=2010-02-26 15:27:39.574, Address=10.149.155.75:8088, MachineId=1099, Location=process:800, Role=CoherenceServer)
    Member(Id=5, Timestamp=2010-02-26 15:27:49.095, Address=10.149.155.79:8088, MachineId=1103, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:3229, Role=CoherenceServer)
    Member(Id=6, Timestamp=2010-02-26 15:27:58.635, Address=10.149.155.78:8088, MachineId=1102, Location=process:228, Role=CoherenceServer)
    )
  RecycleMillis=120000
  RecycleSet=MemberSet(Size=0, BitSetCount=0
    )
  )

At approximately 15:30, the connection between the two switches is severed:


Thirty seconds later (the default packet timeout in development mode) the logs indicate communication failures across the cluster. In this example, the communication failure was caused by a network failure. In a production setting, this type of communication failure can have many root causes, including (but not limited to) network failures, excessive GC, high CPU utilization, swapping/virtual memory, and exceeding maximum network bandwidth. In addition, this type of failure is not necessarily indicative of a split brain. Any communication failure will be logged in this fashion. Member 2 logs a communication failure with Member 5:

2010-02-26 15:30:32.638/196.928 Oracle Coherence GE 12.1.2.0.0 <Warning> (thread=PacketPublisher, member=2): Timeout while delivering a packet; requesting the departure confirmation for Member(Id=5, Timestamp=2010-02-26 15:27:49.095, Address=10.149.155.79:8088, MachineId=1103, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:3229, Role=CoherenceServer)
by MemberSet(Size=2, BitSetCount=2
  Member(Id=1, Timestamp=2010-02-26 15:27:06.931, Address=10.149.155.76:8088, MachineId=1100, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:511, Role=CoherenceServer)
  Member(Id=4, Timestamp=2010-02-26 15:27:39.574, Address=10.149.155.75:8088, MachineId=1099, Location=process:800, Role=CoherenceServer)
  )

The Coherence clustering protocol (TCMP) is a reliable transport mechanism built on UDP. In order for the protocol to be reliable, it requires an acknowledgement (ACK) for each packet delivered. If a packet fails to be acknowledged within the configured timeout period, the Coherence cluster member will log a packet timeout (as seen in the log message above). When this occurs, the cluster member will consult with other members to determine who is at fault for the communication failure. If the witness members agree that the suspect member is at fault, the suspect is removed from the cluster. If the witnesses unanimously disagree, the accuser is removed. This process is known as the witness protocol.

Since Member 2 cannot communicate with Member 5, it selects two witnesses (Members 1 and 4) to determine if the communication issue is with Member 5 or with itself (Member 2). However, Member 4 is on the switch that is no longer accessible by Members 1, 2 and 3; thus a packet timeout for Member 4 is recorded as well:

2010-02-26 15:30:35.648/199.938 Oracle Coherence GE 12.1.2.0.0 <Warning> (thread=PacketPublisher, member=2): Timeout while delivering a packet; requesting the departure confirmation for Member(Id=4, Timestamp=2010-02-26 15:27:39.574, Address=10.149.155.75:8088, MachineId=1099, Location=process:800, Role=CoherenceServer)
by MemberSet(Size=2, BitSetCount=2
  Member(Id=1, Timestamp=2010-02-26 15:27:06.931, Address=10.149.155.76:8088, MachineId=1100, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:511, Role=CoherenceServer)
  Member(Id=6, Timestamp=2010-02-26 15:27:58.635, Address=10.149.155.78:8088, MachineId=1102, Location=process:228, Role=CoherenceServer)
  )

Member 1 has the ability to confirm the departure of Member 4; however, Member 6 cannot, as it is also inaccessible. At the same time, Member 3 sends a request to remove Member 6, which is followed by a report from Member 3 indicating that Member 6 has departed the cluster:

2010-02-26 15:30:35.706/199.996 Oracle Coherence GE 12.1.2.0.0 <D5> (thread=Cluster, member=2): MemberLeft request for Member 6 received from Member(Id=3, Timestamp=2010-02-26 15:27:24.892, Address=10.149.155.236:8088, MachineId=1260, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:32459, Role=CoherenceServer)
2010-02-26 15:30:35.709/199.999 Oracle Coherence GE 12.1.2.0.0 <D5> (thread=Cluster, member=2): MemberLeft notification for Member 6 received from Member(Id=3, Timestamp=2010-02-26 15:27:24.892, Address=10.149.155.236:8088, MachineId=1260, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:32459, Role=CoherenceServer)

The log for Member 3 determines how Member 6 departed the cluster:

2010-02-26 15:30:35.161/191.694 Oracle Coherence GE 12.1.2.0.0 <Warning> (thread=PacketPublisher, member=3): Timeout while delivering a packet; requesting the departure confirmation for Member(Id=6, Timestamp=2010-02-26 15:27:58.635, Address=10.149.155.78:8088, MachineId=1102, Location=process:228, Role=CoherenceServer)
by MemberSet(Size=2, BitSetCount=2
  Member(Id=1, Timestamp=2010-02-26 15:27:06.931, Address=10.149.155.76:8088, MachineId=1100, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:511, Role=CoherenceServer)
  Member(Id=2, Timestamp=2010-02-26 15:27:17.847, Address=10.149.155.77:8088, MachineId=1101, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:296, Role=CoherenceServer)
  )
2010-02-26 15:30:35.165/191.698 Oracle Coherence GE 12.1.2.0.0 <Info> (thread=Cluster, member=3): Member departure confirmed by MemberSet(Size=2, BitSetCount=2
  Member(Id=1, Timestamp=2010-02-26 15:27:06.931, Address=10.149.155.76:8088, MachineId=1100, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:511, Role=CoherenceServer)
  Member(Id=2, Timestamp=2010-02-26 15:27:17.847, Address=10.149.155.77:8088, MachineId=1101, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:296, Role=CoherenceServer)
  ); removing Member(Id=6, Timestamp=2010-02-26 15:27:58.635, Address=10.149.155.78:8088, MachineId=1102, Location=process:228, Role=CoherenceServer)

In this case, Member 3 happened to select two witnesses that it still had connectivity with (Members 1 and 2) thus resulting in a simple decision to remove Member 6.


Given the departure of Member 6, Member 2 is left with a single witness to confirm the departure of Member 4:

2010-02-26 15:30:35.713/200.003 Oracle Coherence GE 12.1.2.0.0 <Info> (thread=Cluster, member=2): Member departure confirmed by MemberSet(Size=1, BitSetCount=2
  Member(Id=1, Timestamp=2010-02-26 15:27:06.931, Address=10.149.155.76:8088, MachineId=1100, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:511, Role=CoherenceServer)
  ); removing Member(Id=4, Timestamp=2010-02-26 15:27:39.574, Address=10.149.155.75:8088, MachineId=1099, Location=process:800, Role=CoherenceServer)

In the meantime, Member 4 logs a missing heartbeat from the senior member. This message is also logged on Members 5 and 6.

2010-02-26 15:30:07.906/150.453 Oracle Coherence GE 12.1.2.0.0 <Info> (thread=PacketListenerN, member=4): Scheduled senior member heartbeat is overdue; rejoining multicast group.

Next, Member 4 logs a TcpRing failure with Member 2, thus resulting in the termination of Member 2:

2010-02-26 15:30:21.421/163.968 Oracle Coherence GE 12.1.2.0.0 <D4> (thread=Cluster, member=4): TcpRing: Number of socket exceptions exceeded maximum; last was "java.net.SocketTimeoutException: connect timed out"; removing the member: 2

For quick process termination detection, Oracle Coherence utilizes a feature called TcpRing, which is a sparse collection of TCP/IP-based connections between different members in the cluster. Each member in the cluster is connected to at least one other member, which (if at all possible) is running on a different physical box. This connection is not used for any data transfer; only heartbeat communications are sent, once a second per link. If a certain number of exceptions are thrown while trying to re-establish a connection, the member throwing the exceptions is removed from the cluster.

Member 5 logs a packet timeout with Member 3 and cites witnesses Members 4 and 6:

2010-02-26 15:30:29.791/165.037 Oracle Coherence GE 12.1.2.0.0 <Warning> (thread=PacketPublisher, member=5): Timeout while delivering a packet; requesting the departure confirmation for Member(Id=3, Timestamp=2010-02-26 15:27:24.892, Address=10.149.155.236:8088, MachineId=1260, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:32459, Role=CoherenceServer)
by MemberSet(Size=2, BitSetCount=2
  Member(Id=4, Timestamp=2010-02-26 15:27:39.574, Address=10.149.155.75:8088, MachineId=1099, Location=process:800, Role=CoherenceServer)
  Member(Id=6, Timestamp=2010-02-26 15:27:58.635, Address=10.149.155.78:8088, MachineId=1102, Location=process:228, Role=CoherenceServer)
  )
2010-02-26 15:30:29.798/165.044 Oracle Coherence GE 12.1.2.0.0 <Info> (thread=Cluster, member=5): Member departure confirmed by MemberSet(Size=2, BitSetCount=2
  Member(Id=4, Timestamp=2010-02-26 15:27:39.574, Address=10.149.155.75:8088, MachineId=1099, Location=process:800, Role=CoherenceServer)
  Member(Id=6, Timestamp=2010-02-26 15:27:58.635, Address=10.149.155.78:8088, MachineId=1102, Location=process:228, Role=CoherenceServer)
  ); removing Member(Id=3, Timestamp=2010-02-26 15:27:24.892, Address=10.149.155.236:8088, MachineId=1260, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:32459, Role=CoherenceServer)

Eventually we are left with two distinct clusters consisting of Members 1, 2, 3 and Members 4, 5, 6, respectively. In the latter cluster, Member 4 is promoted to senior member.


The connection between the two switches is restored at 15:33. Upon the restoration of the connection, the cluster members immediately receive cluster heartbeats from the two senior members. In the case of Members 1, 2, and 3, the following is logged:

2010-02-26 15:33:14.970/369.066 Oracle Coherence GE 12.1.2.0.0 <Warning> (thread=Cluster, member=1): The member formerly known as Member(Id=4, Timestamp=2010-02-26 15:30:35.341, Address=10.149.155.75:8088, MachineId=1099, Location=process:800, Role=CoherenceServer) has been forcefully evicted from the cluster, but continues to emit a cluster heartbeat; henceforth, the member will be shunned and its messages will be ignored.

Likewise for Members 4, 5, and 6:

2010-02-26 15:33:14.343/336.890 Oracle Coherence GE 12.1.2.0.0 <Warning> (thread=Cluster, member=4): The member formerly known as Member(Id=1, Timestamp=2010-02-26 15:30:31.64, Address=10.149.155.76:8088, MachineId=1100, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:511, Role=CoherenceServer) has been forcefully evicted from the cluster, but continues to emit a cluster heartbeat; henceforth, the member will be shunned and its messages will be ignored.

This message indicates that a senior heartbeat is being received from members that were previously removed from the cluster, in other words, something that should not be possible. For this reason, the recipients of these messages will initially ignore them. After several iterations of these messages, the existence of multiple clusters is acknowledged, thus triggering the panic protocol to reconcile this situation. When the presence of more than one cluster (i.e. Split-Brain) is detected by a Coherence member, the panic protocol is invoked in order to resolve the conflicting clusters and consolidate into a single cluster. The protocol consists of the removal of smaller clusters until there is one cluster remaining. In the case of equal size clusters, the one with the older Senior Member will survive. Member 1, being the oldest member, initiates the protocol:

2010-02-26 15:33:45.970/400.066 Oracle Coherence GE 12.1.2.0.0 <Warning> (thread=Cluster, member=1): An existence of a cluster island with senior Member(Id=4, Timestamp=2010-02-26 15:27:39.574, Address=10.149.155.75:8088, MachineId=1099, Location=process:800, Role=CoherenceServer) containing 3 nodes have been detected. Since this Member(Id=1, Timestamp=2010-02-26 15:27:06.931, Address=10.149.155.76:8088, MachineId=1100, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:511, Role=CoherenceServer) is the senior of an older cluster island, the panic protocol is being activated to stop the other island's senior and all junior nodes that belong to it.

Member 3 receives the panic:

2010-02-26 15:33:45.803/382.336 Oracle Coherence GE 12.1.2.0.0 <Error> (thread=Cluster, member=3): Received panic from senior Member(Id=1, Timestamp=2010-02-26 15:27:06.931, Address=10.149.155.76:8088, MachineId=1100, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:511, Role=CoherenceServer) caused by Member(Id=4, Timestamp=2010-02-26 15:27:39.574, Address=10.149.155.75:8088, MachineId=1099, Location=process:800, Role=CoherenceServer)

Member 4, the senior member of the younger cluster, receives the kill message from Member 3:

2010-02-26 15:33:44.921/367.468 Oracle Coherence GE 12.1.2.0.0 <Error> (thread=Cluster, member=4): Received a Kill message from a valid Member(Id=3, Timestamp=2010-02-26 15:27:24.892, Address=10.149.155.236:8088, MachineId=1260, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:32459, Role=CoherenceServer); stopping cluster service.

In turn, Member 4 requests the departure of its junior members 5 and 6:

2010-02-26 15:33:44.921/367.468 Oracle Coherence GE 12.1.2.0.0 <Error> (thread=Cluster, member=4): Received a Kill message from a valid Member(Id=3, Timestamp=2010-02-26 15:27:24.892, Address=10.149.155.236:8088, MachineId=1260, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:32459, Role=CoherenceServer); stopping cluster service.

2010-02-26 15:33:43.343/349.015 Oracle Coherence GE 12.1.2.0.0 <Error> (thread=Cluster, member=6): Received a Kill message from a valid Member(Id=4, Timestamp=2010-02-26 15:27:39.574, Address=10.149.155.75:8088, MachineId=1099, Location=process:800, Role=CoherenceServer); stopping cluster service.

Once Members 4, 5, and 6 restart, they rejoin the original cluster with senior member 1. The log below is from Member 4. Note that it receives a different member id when it rejoins the cluster.

2010-02-26 15:33:44.921/367.468 Oracle Coherence GE 12.1.2.0.0 <Error> (thread=Cluster, member=4): Received a Kill message from a valid Member(Id=3, Timestamp=2010-02-26 15:27:24.892, Address=10.149.155.236:8088, MachineId=1260, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:32459, Role=CoherenceServer); stopping cluster service.
2010-02-26 15:33:46.921/369.468 Oracle Coherence GE 12.1.2.0.0 <D5> (thread=Cluster, member=4): Service Cluster left the cluster
2010-02-26 15:33:47.046/369.593 Oracle Coherence GE 12.1.2.0.0 <D5> (thread=Invocation:InvocationService, member=4): Service InvocationService left the cluster
2010-02-26 15:33:47.046/369.593 Oracle Coherence GE 12.1.2.0.0 <D5> (thread=OptimisticCache, member=4): Service OptimisticCache left the cluster
2010-02-26 15:33:47.046/369.593 Oracle Coherence GE 12.1.2.0.0 <D5> (thread=ReplicatedCache, member=4): Service ReplicatedCache left the cluster
2010-02-26 15:33:47.046/369.593 Oracle Coherence GE 12.1.2.0.0 <D5> (thread=DistributedCache, member=4): Service DistributedCache left the cluster
2010-02-26 15:33:47.046/369.593 Oracle Coherence GE 12.1.2.0.0 <D5> (thread=Invocation:Management, member=4): Service Management left the cluster
2010-02-26 15:33:47.046/369.593 Oracle Coherence GE 12.1.2.0.0 <D5> (thread=Cluster, member=4): Member 6 left service Management with senior member 5
2010-02-26 15:33:47.046/369.593 Oracle Coherence GE 12.1.2.0.0 <D5> (thread=Cluster, member=4): Member 6 left service DistributedCache with senior member 5
2010-02-26 15:33:47.046/369.593 Oracle Coherence GE 12.1.2.0.0 <D5> (thread=Cluster, member=4): Member 6 left service ReplicatedCache with senior member 5
2010-02-26 15:33:47.046/369.593 Oracle Coherence GE 12.1.2.0.0 <D5> (thread=Cluster, member=4): Member 6 left service OptimisticCache with senior member 5
2010-02-26 15:33:47.046/369.593 Oracle Coherence GE 12.1.2.0.0 <D5> (thread=Cluster, member=4): Member 6 left service InvocationService with senior member 5
2010-02-26 15:33:47.046/369.593 Oracle Coherence GE 12.1.2.0.0 <D5> (thread=Cluster, member=4): Member(Id=6, Timestamp=2010-02-26 15:33:47.046, Address=10.149.155.78:8088, MachineId=1102, Location=process:228, Role=CoherenceServer) left Cluster with senior member 4
2010-02-26 15:33:49.218/371.765 Oracle Coherence GE 12.1.2.0.0 <Info> (thread=main, member=n/a): Restarting cluster
2010-02-26 15:33:49.421/371.968 Oracle Coherence GE 12.1.2.0.0 <D5> (thread=Cluster, member=n/a): Service Cluster joined the cluster with senior service member n/a
2010-02-26 15:33:49.625/372.172 Oracle Coherence GE 12.1.2.0.0 <Info> (thread=Cluster, member=n/a): This Member(Id=5, Timestamp=2010-02-26 15:33:50.499, Address=10.149.155.75:8088, MachineId=1099, Location=process:800, Role=CoherenceServer, Edition=Grid Edition, Mode=Development, CpuCount=2, SocketCount=1) joined cluster "cluster:0xDDEB" with senior Member(Id=1, Timestamp=2010-02-26 15:27:06.931, Address=10.149.155.76:8088, MachineId=1100, Location=site:usdhcp.oraclecorp.com,machine:dhcp-burlington6-4fl-east-10-149,process:511, Role=CoherenceServer, Edition=Grid Edition, Mode=Development, CpuCount=2, SocketCount=2)

Cool, isn't it?

Saturday Jul 13, 2013

Upgrading to Coherence for C++ 12.1.2 and The "Ambiguous Compilation Error" in Types Derived From AbstractAggregator

This weekend I started migrating some old C++ applications built on top of the Coherence for C++ 3.7.1 API to its newest, refreshed version, released a few days ago. Version 12.1.2 introduces a lot of cool changes and features in the Coherence product, not to mention improvements in different areas like installation, WebLogic integration, TCMP, Exabus, REST and Coherence*Extend. If you are interested in those changes and improvements, check out the documentation at the following link, and please join us for the live virtual launch event on July 31st. Registration here.

After a couple of hours migrating my projects from the old version to the new one, I was surprised by the following compiler message when trying to build the code:

call of overloaded 'Float64Sum(const coherence::lang::TypedHandle<coherence::util::extractor::ReflectionExtractor>&)' is ambiguous 

If you are seeing the same compiler message... don't worry, there is a quick and clean solution. This compiler error happens because in version 12.1.2 of the Coherence for C++ API, an overloaded version of the create() factory method was introduced in the types derived from the coherence::util::aggregator::AbstractAggregator class.

Up to version 3.7.1, the only way to instantiate an AbstractAggregator object was to pass an instance of coherence::util::ValueExtractor to its factory method. Now you also have the option to pass a coherence::lang::String::View containing the name of the attribute to be aggregated; behind the scenes, the Coherence for C++ API will fabricate a coherence::util::ValueExtractor for you.

In my case, I've changed my code from this:

Float64Sum::Handle scoreAggregator = Float64Sum::create(ReflectionExtractor::create("getScore")); 

To this:

Float64Sum::Handle scoreAggregator = Float64Sum::create("getScore"); 

And my project compiled completely again, both with the GNU G++ compiler on Linux and with the MS Visual Studio C++ compiler.

Hope this post saves you some hours of debugging :-)

Tuesday Mar 19, 2013

Creating Scalable Fast Data Applications using Oracle Event Processing Platform (Setting Up an Active-Active Oracle CEP Domain)

This article will explore some technical aspects that should be considered if you are involved in serious implementations of Oracle CEP, the technical foundation of Oracle's strategy for Fast Data called Oracle Event Processing Platform. It is expected that you have some basic knowledge of Oracle CEP, JMS and some knowledge of programming in Java.

Fast Data and the Concern with Scalability

There is no such thing as an application not meant to grow. Every application, even the simplest one, should expect some growth across the months or years it stays up and running. Growth is a consequence of many things: organizational growth, application maturity that in turn gives users more confidence to use it, a marketing campaign that worked and brought many more clients than expected, more front-ends enabling people to interact with your application through other types of devices, exponential generation of data from social networks that your application listens to, a market opportunity that demands more of your software, or perhaps just the natural growth of the installed user base.

Whatever the source of growth, your application needs to be ready to scale up, and this is true no matter which architectural style you are considering as your strategy. Of course, some architectural styles suggest moderate growth, such as the client-server style or maybe the monolithic style. But take, for instance, the SOA ("Service-Oriented Architecture") style. The basic concept behind this architectural style is the reuse of services, which are the building blocks of the functional architecture, representing the business knowledge (standards, culture, procedures, routines) of an organization in the form of reusable functions. The more reuse grows, the more scalable your SOA foundation must be. You virtually can't predict the level of reuse of your services, but the key point is that you should design your services to really scale up.

Another great example of an architectural style that needs to be designed to scale up is EDA ("Event-Driven Architecture"), which basically deals with processing heterogeneous events coming from different sources, with different message formats and, more importantly, with event channels that can generate events at different throughputs, frequencies and volumes. In the SOA architectural style this can happen too, of course, but the scariest thing about EDA is that you don't necessarily deal with fixed message schemas, nor with well-known message contracts. Prior knowledge of message contracts and schemas gives you the ability to predict the message size that the hardware infrastructure must deal with, an important requirement when you are sizing an infrastructure based on reasonable levels of reuse, as in the case of SOA.

As mentioned earlier, in the EDA architectural style you cannot predict the message schema or contract of your events. It can be virtually any message format, containing structured and/or unstructured content. A good event-driven solution must be able to deal with this kind of situation, which makes the task of sizing an ideal hardware infrastructure really tough. Sizing hardware for an event-driven solution is a combination of science and imagination. There are a lot of things to consider, a lot of scenarios to evaluate, a lot of hardware and/or software failures to predict, and a huge number of situations that could stop your application from running due to hardware resource limitations, even in the first five hours in production. Believe me, it is really tough.

Designing an event-driven solution that is ready to scale up demands more than the regular architect role we find nowadays. It requires deep knowledge of the problem domain, of distributed systems, of server systems (and perhaps engineered systems), of enterprise integration patterns, and of the software stacks used to build the solution, regardless of whether they are proprietary, open-source or a combination of both.

Why do you need to worry about scalability? Because ten years ago the market demanded event-driven solutions prepared to handle hundreds of events per second. Today we build event-driven solutions that must handle thousands of events per second. Fast Data, one of the new buzzwords of IT, demands event-driven solutions that can handle millions of events per second. My advice for any architect responsible for an event-driven solution is to treat scalability as a major goal, just like the problem domain to be solved.

Business Scenario for Scalability Study on Oracle CEP

Let's start our study of scalability in EDA with a business scenario. Imagine that you are designing an EDA solution that combines technologies like CEP, BAM and JMS to deliver near real-time business monitoring of KPIs ("Key Performance Indicators") for a financial services company. All the information needed to compute the KPIs comes from a JMS channel that must be listened to by a specialized adapter; those JMS messages are the business events. Inside every business event there is information about a payment transaction, with data such as the total amount paid, the credit card brand, and so on. The idea is to process those payment transactions as they happen, in order to generate valuable KPIs that can be monitored through a near real-time monitoring solution like BAM. For simplicity, let's consider only one KPI and concentrate on the CEP layer responsible for KPI compilation and aggregation. The example KPI will be the total count of payment transactions per second.

In order to compute this KPI, the CEP layer must execute the count aggregation function over the stream of events, considering only the events of the last second. This means that the KPI will be compiled and aggregated every second (a 1000-millisecond time window) and the output will also be generated every second. The EPN ("Event Processing Network") of this business scenario could be as simple as this:


Reading this EPN is not that complicated. You must basically read the flow from the left to the right. The basic idea behind this EPN is: listen the business events from an JMS adapter, put those events sequentially based on their temporal order into a event channel, compute the KPI based on the stream of events using an processor, send the generated output event (the KPI itself) to an new event channel and finally, present the KPI into the server output console using a custom adapter.

The event model of this EPN is composed of two simple event types. The first event type is the payment transaction, which acts in the EPN as the event source. It contains three fields: a dateTime field that tells you the exact moment the payment transaction occurred, an amount field that holds the amount paid for a product and/or service, and a brand field that tells you which credit card type was used in the payment transaction. The second event type is the transactions-per-second KPI, which acts in this EPN as the complex event. Its only field is totalCountTPS, which represents the computed value of the KPI. The following UML class diagram summarizes this event model.
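In code, the two event types can be sketched as simple Java beans; the field types below are just one reasonable choice and are assumptions of this sketch:

package com.oracle.cep.examples.ha;

// Event source: one instance per payment transaction received from JMS
public class PaymentTransaction {

	private long dateTime;  // moment the payment transaction occurred
	private double amount;  // amount paid for the product and/or service
	private String brand;   // credit card brand used in the transaction

	public long getDateTime() { return dateTime; }
	public void setDateTime(long dateTime) { this.dateTime = dateTime; }

	public double getAmount() { return amount; }
	public void setAmount(double amount) { this.amount = amount; }

	public String getBrand() { return brand; }
	public void setBrand(String brand) { this.brand = brand; }

}

// Complex event: the KPI computed by the processor every second
class TransactionsPerSecondKPI {

	private long totalCountTPS;  // computed value of the KPI

	public long getTotalCountTPS() { return totalCountTPS; }
	public void setTotalCountTPS(long totalCountTPS) { this.totalCountTPS = totalCountTPS; }

}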


All the payment transactions are received through a built-in JMS adapter, named paymentTransactionsJMSAdapter in the EPN. This adapter is configured to listen to a JMS destination through a dedicated connection factory. The listing below is the configuration file for this JMS adapter.

<?xml version="1.0" encoding="UTF-8"?>
<wlevs:config xmlns:wlevs="http://www.bea.com/ns/wlevs/config/application">

	<jms-adapter>
		<name>paymentTransactionsJMSAdapter</name>
		<event-type>PaymentTransaction</event-type>
		<jndi-provider-url>t3://localhost:7001</jndi-provider-url>
		<connection-jndi-name>jmsConnectionFactory</connection-jndi-name>
		<destination-jndi-name>distributedQueue</destination-jndi-name>
		<concurrent-consumers>24</concurrent-consumers>
	</jms-adapter>
	
</wlevs:config>

The processor that computes the KPI is also very simple. It basically counts the events coming from the event channel, keeping only the events that belong to one whole second. It also filters out events that do not have meaningful values in the amount and brand fields, to prevent the KPI from being computed over dirty events. The following CQL ("Continuous Query Language") statement is used to compute the KPI:

SELECT COUNT(dateTime) AS totalCountTPS
FROM paymentTransactionsChannel [RANGE 1 SECOND SLIDE 1 SECOND]
WHERE ammount > 0 AND brand IS NOT NULL

Finally, let's take a look at the last part of the EPN, which is the custom adapter created to print the results of the TransactionsPerSecondKPI event type to the server output console. As I mentioned earlier, for simplicity I will not show how this event would be monitored in BAM. Instead, I have created a custom adapter, using the Oracle CEP adapters API, that prints to the server output console the content of the totalCountTPS field of the TransactionsPerSecondKPI event type. The listing below is the Java implementation of this custom console adapter.

package com.oracle.cep.examples.ha;

import com.bea.wlevs.ede.api.EventProperty;
import com.bea.wlevs.ede.api.EventRejectedException;
import com.bea.wlevs.ede.api.EventType;
import com.bea.wlevs.ede.api.EventTypeRepository;
import com.bea.wlevs.ede.api.StreamSink;
import com.bea.wlevs.util.Service;

public class ConsoleAdapter implements StreamSink {

	private EventTypeRepository eventTypeRepository;

	@Override
	public void onInsertEvent(Object event) throws EventRejectedException {
		EventType eventType = eventTypeRepository.getEventType("TransactionsPerSecondKPI");
		EventProperty eventProperty = eventType.getProperty("totalCountTPS");
		System.out.println("   ---> [Console] Total Count of TPS: " + eventProperty.getValue(event));
	}

	@Service
	public void setEventTypeRepository(EventTypeRepository eventTypeRepository) {
		this.eventTypeRepository = eventTypeRepository;
	}

}

As you can see in the code, this custom adapter doesn't do anything special. It just accesses the totalCountTPS field of the TransactionsPerSecondKPI event type and prints its value to the server output console. The idea here is simply to monitor the computation of the KPIs in near real-time, in order to observe the behavior of Oracle CEP when it is running in a single JVM and when it is running clustered across multiple JVMs.

Creating a Testing Environment to Simulate the Payment Transactions

Now that we are all set regarding the business scenario, we can start the tests. You need to create a simple Oracle CEP domain to test this application. You will also need a JMS provider to host the JMS destination. I would recommend Oracle WebLogic as the JMS provider, but feel free to use any JMS provider compliant with JMS 1.1. Set up your JMS provider, create a queue to be used in the tests, and configure your Oracle CEP JMS adapter to listen to this queue. Later in this article I will discuss the differences between using queues and topics, but for now let's just focus on the functional testing.

Implement the following Java program in your development environment. The program is just an example of how to send JMS messages to a queue, and it assumes that you are connecting to a WebLogic JMS domain. If you prefer to use another JMS provider, adapt this program to connect to your host system accordingly.

package com.oracle.cep.examples.ha;

import java.util.Hashtable;
import java.util.Random;

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MapMessage;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;

public class EventsGenerator {

	public static void main(String[] args) {
		
		final String[] brands = {"Visa", "Mastercard",
				"Dinners", "American Express"};
		final Random random = new Random();
		
		InitialContext jndi = null;
		Hashtable<String, String> params = null;
		ConnectionFactory jmsConnectionFactory = null;
		Queue distributedQueue = null;
		
		Connection jmsConn = null;
		Session session = null;
		MessageProducer msgProd = null;
		MapMessage message = null;
		
		try {
			
			params = new Hashtable<String, String>();
			params.put(InitialContext.PROVIDER_URL, "t3://localhost:7001");
			params.put(InitialContext.INITIAL_CONTEXT_FACTORY,
					"weblogic.jndi.WLInitialContextFactory");
			jndi = new InitialContext(params);
			jmsConnectionFactory = (ConnectionFactory) jndi.lookup("jmsConnectionFactory");
			distributedQueue = (Queue) jndi.lookup("distributedQueue");
			
			jmsConn = jmsConnectionFactory.createConnection();
			session = jmsConn.createSession(false, Session.AUTO_ACKNOWLEDGE);
			msgProd = session.createProducer(distributedQueue);
			
			long startTime, endTime = 0;
			long elapsedTime = 0;
			
			for (;;) {
				startTime = System.currentTimeMillis();
				for (int i = 0; i < 10; i++) {
					message = session.createMapMessage();
					message.setLong("dateTime", System.currentTimeMillis());
					message.setDouble("ammount", random.nextInt(1000));
					message.setString("brand", brands[random.nextInt(brands.length)]);
					msgProd.send(message);
				}
				endTime = System.currentTimeMillis();
				elapsedTime = endTime - startTime;
				if (elapsedTime < 1000) {
					Thread.sleep(1000 - elapsedTime);
				}
				System.out.println("Tick...");
			}
			
		} catch (Exception ex) {
			ex.printStackTrace();
		} finally {
			
			try {
				if (msgProd != null) { msgProd.close(); }
				if (session != null) { session.close(); }
				if (jmsConn != null) { jmsConn.close(); }
			} catch (Exception ex) {
				ex.printStackTrace();
			}
			
		}
		
	}

}

What this program does is continuously send ten messages per iteration to the specified JMS queue. As you can see in the code, after sending the messages it pauses for the remainder of one second, taking into account the elapsed time spent sending the messages to the JMS queue. This program never ends, unless of course the user terminates the JVM.

Starting the Functional Tests with a Single Oracle CEP JVM

Make sure that your whole development environment is up and running, including your JMS provider and a fully operational Oracle CEP domain. Deploy the Oracle CEP application into this domain and run the client JMS application. A recording of this functional test is published on YouTube, so you can check out the results of this test.

In the following sections, we will explore two different approaches for applying scalability. These approaches will be applied to the near real-time business monitoring of KPIs scenario, transforming it into a more scalable solution that can handle a growing number of events just by adding more Oracle CEP JVMs across the same and/or multiple server systems.

The Simplest Scalability Approach Ever: Using JMS Queues

Whether you are designing an application that has to deal with asynchronous calls or you are simply worried about message delivery guarantees, using JMS queues is always a good choice. Consumer applications connected to a JMS queue compete for the next available message, in a FIFO style. This means that each consumer enters a race with the other consumers, each one trying to grab as many messages as possible. From the consumer application's perspective this can be a problem, since there is no guarantee that a specific message will be consumed by a specific consumer; but from the JMS queue's perspective, this is a really powerful scalability technique, because each consumer application works at full speed to provide maximum message-consumption throughput.

Imagine, for instance, a consumer application connected to a JMS queue, running on a server system with 24 CPU cores: two chips with 6 cores each, using Intel Hyper-Threading technology. If we measure that this consumer application, running on this type of hardware, gives us an average throughput of 3,000 TPS ("Transactions per Second"), we can reasonably assume that adding another copy of the same consumer application, running on another server system with the same hardware configuration, will give us an average throughput of 6,000 TPS. This is what we call horizontal scalability: you increase the throughput of a software-based application by adding more server systems, each one dedicating its hardware resources to a specific goal, which in this case is consuming messages from a JMS queue as fast as possible.

This scalability technique delivers another important advantage: when you increase the total count of server systems running the consumer applications, you reduce the number of messages that each server system has to handle. This seems a little contradictory, doesn't it, since in theory it should be "a good thing" for each server system to handle as many messages as possible. Well, it is a good thing, but like anything else in software architecture, there are trade-offs. Consider, for instance, each consumer application running on the same hardware configuration. In this scenario, it is reasonable to expect that each server system will handle roughly the same number of messages on average, because each server system is putting its hardware resources to work equally and, due to the nature of JMS queues (FIFO-style consumption), the total number of messages in the queue will be divided by the number of server systems available.

The problem here is the memory footprint of each JVM running the consumer application. If you are consuming messages from the JMS queue with fewer server systems, the number of messages that each server system has to handle will be higher. Since each received message is allocated in the heap memory of the JVM, the total size of the heap will increase. Did you know that a javax.jms.MapMessage with a payload of 256 bytes allocates more than 400 bytes in the heap space? Imagine all those messages being received by your consumer application at the same time: you could reach the maximum size of your heap in a matter of seconds. And the problem with reaching the maximum heap size of a JVM is the inevitable execution of the garbage collector. When the JVM detects that the heap memory is full (or almost full, depending on the algorithm used) or too fragmented, it engages the garbage collector threads to reclaim the allocated memory and/or rearrange the heap layout. Depending on the state of the heap memory, the JVM may use almost all the CPU cores of the server system to accomplish this task, and that's when your consumer application starts to run slower than usual, presenting performance issues and becoming a system bottleneck.

Let's consider the usage of JMS queues in our business scenario of near real-time monitoring of KPIs. Each Oracle CEP JVM would be connected to a JMS queue that holds the payment transaction messages, and would act as a consumer application through its JMS adapter. Considering that each Oracle CEP JVM (or a group of them) would be running on a separate server system, we can assume that increasing the number of server systems will increase the average throughput of message consumption from the JMS queue, and also that each Oracle CEP JVM will handle a reasonable number of messages in its heap. Enough theory; let's see in practice how this scalability approach can be applied to our business scenario.

The first thing to do is transform your Oracle CEP domain into a multi-server, clustered domain. You can find a comprehensive set of information in the product documentation to help you do this, but I will highlight the main steps here.

In the root directory of your Oracle CEP domain, you will find a sub-directory called "defaultserver". As the name suggests, this is your default server, created automatically during domain creation. For development and staging environments, this server is more than enough, but if you want to expand your domain, you will need to change that. Rename the default server directory from "defaultserver" to "server1". After that, make two copies of this directory and name the newly created directories "server2" and "server3", respectively.
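
If you are doing this from a Linux shell, for example, the renaming and copying boils down to something like the commands below, where <DOMAIN_HOME> stands for the root directory of your Oracle CEP domain:

     cd <DOMAIN_HOME>
     mv defaultserver server1
     cp -r server1 server2
     cp -r server1 server3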

Now you have to set up some aspects of the cluster. Open the configuration file of server1 in a text editor; it can be found at <DOMAIN_HOME>/server1/config/config.xml. With the configuration file open, you will have to change the "netio" and "cluster" entries. Change the port value of the "NetIO" component to "9001", and the port value of the "sslNetIo" component to "9011".
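
Just as a reference, the two entries in server1's config.xml should end up looking similar to the sketch below after the change; keep any other child elements of these entries (such as the SSL configuration bean reference) exactly as they were, since the element layout can vary slightly between Oracle CEP versions:

<netio>
    <name>NetIO</name>
    <port>9001</port>
</netio>

<netio>
    <name>sslNetIo</name>
    <port>9011</port>
</netio>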

In the cluster tag you have to define how server1 will behave together with the other members. Edit the cluster tag according to the listing below:

<cluster>
    <server-name>server1</server-name>
    <multicast-address>239.255.0.1</multicast-address>
    <groups>clusteredServers</groups>
    <enabled>true</enabled>
</cluster>

That's it. This is all that has to be done to transform a simple server into a cluster-aware member. Repeat the same steps for server2 and server3. Just keep in mind that if you plan to execute those servers on the same server system, you will need to define different ports for each server. Also remember that the "server-name" tag must match the server name you gave, which is the name of the server's directory.

One of the advantages of using JMS queues as your scalability approach is that you don't need to change anything in your Oracle CEP application. Just deploy the same application to all the new servers and start the tests again. Start server1 and server2 and run the client JMS application. You will see in the servers' output that each server generates the totalCountTPS KPI based on the number of events it receives from the JMS queue, which is the total number of events (ten events per second) divided by the number of servers, in this case two. This results in five events per server on average. If you start server3 while events are being processed, you will see that the number of events each server handles decreases, which is evidence that the scalability is really working, because the servers partitioned the event load between them.

A recording of this second test is also available on YouTube. Watch the video below to see how using JMS queues affects the scalability of your Oracle CEP applications.

So far, it seems that using JMS queues as your scalability approach is the right thing to do, right? In fact, for most demanding scenarios this approach should be enough. But you should be aware of one catch: in-flight events can be lost during failures. By "in-flight" I mean all those events that have been received by the JMS adapter and for which an acknowledgement has already been sent back to the JMS provider. At that moment, the message is no longer in the JMS queue and, more importantly, no other Oracle CEP JVM is aware of that event. This means that there is no recovery of events during failures with this approach. If you cannot tolerate missing events, using JMS queues for scalability is not the most reliable approach. But if your scenario can tolerate missing events, don't think twice: choose this approach, since it is simpler and does not imply any changes to the Oracle CEP application.

Reliability Really Matters: Complicating Things Just a Little Bit with JMS Topics

OK, so you realized that you cannot tolerate missing events, and you need to improve the reliability of your Oracle CEP application as much as possible. The road to achieve this is quite a bit longer than just using JMS queues. There are some challenges you need to overcome in order to reach this level of reliability, where no missing events can be tolerated, without of course putting scalability aside. The challenges you will need to overcome are:

  • Provide guarantees that all of the JVMs are aware of all events
  • Equally distribute the load of events across all of the JVMs
  • Ensure that even in-flight events are completely synchronized
  • Provide backups for every cluster member to ensure HA

The good news is that all those challenges can be easily overcome, and Oracle CEP provides native support for all of them. The bad news is that, compared to the previous approach, the final solution looks more complicated both in terms of design and in terms of product configuration. Let's learn how to apply this scalability approach (with the highest level of reliability available) to our near real-time business monitoring of KPIs scenario.

The situation has now changed. You need to make the solution available in two different data centers, working in an active-active scheme and with high availability assured in case of failure of the primary servers. No event can be missed, since the operations staff of the financial services company must rely on the KPIs to take important decisions.

The two data centers are located in the state of California, but are geographically separated: the first one is based in Redwood and the other in Palo Alto. The data centers are connected through a high-bandwidth 10 GB/s fiber optic link.


There are some changes that need to take place both in the Oracle CEP application and in the Oracle CEP domain. Let's start with the Oracle CEP application changes. First, you need to modify the EPN assembly descriptor file to include an instance of the following Oracle CEP component: com.oracle.cep.cluster.hagroups.ActiveActiveGroupBean. This special component dynamically manages the group subscriptions used to partition the incoming events between the cluster members.

Secondly, you need to synchronize all the events with all members that belong to the cluster, including in-flight events. This can be done explicitly using the special HA adapters that Oracle CEP makes available. Change your EPN flow to include an instance of an HA inbound adapter just after your input adapter. This adapter needs to know which property of your event type carries the information about its age; in this case, you configure the "timeProperty" property of the HA adapter. This is necessary to guarantee that, even in case of failure of one cluster member, the ordering of the events won't be affected. You will also need to create an instance of an HA correlating adapter, just before your output adapter. After these changes, your EPN assembly descriptor should include the following components:

<wlevs:adapter id="haInboundAdapter" provider="ha-inbound">
    <wlevs:listener ref="paymentTransactionsChannel" />
    <wlevs:instance-property name="timeProperty" value="dateTime" />
</wlevs:adapter>

<wlevs:adapter id="haCorrelatingAdapter" provider="ha-correlating">
    <wlevs:listener ref="consoleAdapter" />
    <wlevs:instance-property name="failOverDelay" value="2000" /> 
</wlevs:adapter>
	
<bean id="clusterAdapter"
      class="com.oracle.cep.cluster.hagroups.ActiveActiveGroupBean" />

Now comes the most important change of this approach, which is switching the JMS destination from a queue to a topic. This change has to be made both at your JMS provider (because you need to explicitly create a topic endpoint) and in your JMS adapter configuration file. Using JMS topics guarantees that all the JVMs are aware of all events. You will also need to define an event partitioning criterion in your JMS adapter configuration file. Since topic consumers act as subscribers rather than plain consumers, every consumer application receives a copy of every event sent to the topic endpoint. This means that automatic partitioning of the events won't happen, but it needs to happen in order to provide real scalability.

Thankfully, JMS provides a way to apply criteria during message consumption: selectors. Selectors give you the possibility to apply the Content-Based Router EAI pattern based on existing header/property values. Change the JMS adapter configuration file to include selector criteria for the message consumption:

<?xml version="1.0" encoding="UTF-8"?>
<wlevs:config xmlns:wlevs="http://www.bea.com/ns/wlevs/config/application">

	<jms-adapter>
		<name>paymentTransactionsJMSAdapter</name>
		<event-type>PaymentTransaction</event-type>
		<jndi-provider-url>t3://localhost:7001</jndi-provider-url>
		<connection-jndi-name>jmsConnectionFactory</connection-jndi-name>
		<destination-jndi-name>distributedTopic</destination-jndi-name>
		<message-selector>${CONDITION}</message-selector>
		<bindings>
			<group-binding group-id="ActiveActiveGroupBean_Redwood">
				<param id="CONDITION">site = 'rw'</param>
			</group-binding>
			<group-binding group-id="ActiveActiveGroupBean_PaloAlto">
				<param id="CONDITION">site = 'pa'</param>
			</group-binding>
		</bindings>
	</jms-adapter>
	
</wlevs:config>

Let's understand the changes. The group binding entries tell the JMS adapter which events to listen to. The first group binding says that messages containing "rw" in the site property should be consumed only by the Redwood site; the second says that messages containing "pa" in the site property should be consumed only by the Palo Alto site. Each group binding is associated with a group id, which determines which servers will receive those events. For instance, in the case of the first group binding, only servers associated with the group "ActiveActiveGroupBean_Redwood" will receive events that belong to the Redwood site. This configuration ensures that the event load will be equally distributed across all the JVMs. Now let's start the configuration of the Oracle CEP domain.

In the root directory of your Oracle CEP domain, create four copies of one of your current servers, naming them "rw1", "rw2", "pa1" and "pa2", respectively. Since the solution must be available in two different data centers, one in Redwood and another in Palo Alto, these four servers will sustain this topology. The rw1 and rw2 servers will be the Redwood servers, rw1 being the primary and rw2 its backup. The pa1 and pa2 servers will be the Palo Alto servers, pa1 being the primary and pa2 its backup. The idea is to provide active-active load balancing between servers rw1 and pa1, each one having its backup on its own site. Those backups will ensure high availability for every cluster member. The final topology should be something similar to this:

You need to change each server's configuration file in order to make this topology work. Servers rw1 and rw2 should be part of the "ActiveActiveGroupBean_Redwood" group, and servers pa1 and pa2 should be part of the "ActiveActiveGroupBean_PaloAlto" group. Apply the configuration below to each server's configuration file in the Oracle CEP domain:
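
As a sketch, the cluster section of the rw1 server could look like the listing below; it simply extends the cluster tag shown earlier, with the group name matching the group id used in the JMS adapter bindings (the multicast address is reused from the previous example and may differ in your environment). Server rw2 uses the same group with its own server-name, while pa1 and pa2 use the "ActiveActiveGroupBean_PaloAlto" group.

<cluster>
    <server-name>rw1</server-name>
    <multicast-address>239.255.0.1</multicast-address>
    <groups>ActiveActiveGroupBean_Redwood</groups>
    <enabled>true</enabled>
</cluster>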

Don't forget to change the ports of the "NetIO" and "sslNetIo" components if you plan to execute all those servers on the same server system. In terms of the Oracle CEP domain, this is all the configuration necessary, so we are all set regarding infrastructure. Before starting the tests, we need to adapt the client JMS application to do two things: first, send the messages to a topic instead of a queue; second, set the "site" property on each message, to satisfy the message selection criteria defined in the JMS adapter configuration file. Implement the client JMS application according to the listing below.

package com.oracle.cep.examples.ha;

import java.util.Hashtable;
import java.util.Random;

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MapMessage;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.Topic;
import javax.naming.InitialContext;

public class EventsGenerator {
	
	private static final int REDWOOD_SITE = 1;
	private static final int PALOALTO_SITE = 2;
	private static final String REDWOOD = "rw";
	private static final String PALOALTO = "pa";

	public static void main(String[] args) {
		
		final String[] brands = {"Visa", "Mastercard",
				"Dinners", "American Express"};
		final Random random = new Random();
		
		InitialContext jndi = null;
		Hashtable<String, String> params = null;
		ConnectionFactory jmsConnectionFactory = null;
		Topic distributedTopic = null;
		Connection jmsConn = null;
		Session session = null;
		MessageProducer msgProd = null;
		MapMessage message = null;
		int siteId = REDWOOD_SITE;
		
		try {
			
			params = new Hashtable<String, String>();
			params.put(InitialContext.PROVIDER_URL, "t3://localhost:7001");
			params.put(InitialContext.INITIAL_CONTEXT_FACTORY,
					"weblogic.jndi.WLInitialContextFactory");
			jndi = new InitialContext(params);
			jmsConnectionFactory = (ConnectionFactory) jndi.lookup("jmsConnectionFactory");
			distributedTopic = (Topic) jndi.lookup("distributedTopic");
			
			jmsConn = jmsConnectionFactory.createConnection();
			session = jmsConn.createSession(false, Session.AUTO_ACKNOWLEDGE);
			msgProd = session.createProducer(distributedTopic);
			
			long startTime, endTime = 0;
			long elapsedTime = 0;
			
			for (;;) {
				startTime = System.currentTimeMillis();
				for (int i = 0; i < 10; i++) {
					message = session.createMapMessage();
					switch (siteId) {
						case REDWOOD_SITE:
							message.setStringProperty("site", REDWOOD);
							siteId = PALOALTO_SITE;
							break;
						case PALOALTO_SITE:
							message.setStringProperty("site", PALOALTO);
							siteId = REDWOOD_SITE;
							break;
						default:
							break;
					}
					message.setLong("dateTime", System.currentTimeMillis());
					message.setDouble("ammount", random.nextInt(1000));
					message.setString("brand", brands[random.nextInt(brands.length)]);
					msgProd.send(message);
				}
				endTime = System.currentTimeMillis();
				elapsedTime = endTime - startTime;
				if (elapsedTime < 1000) {
					Thread.sleep(1000 - elapsedTime);
				}
				System.out.println("Tick...");
			}
			
		} catch (Exception ex) {
			ex.printStackTrace();
		} finally {
			
			try {
				if (msgProd != null) { msgProd.close(); }
				if (session != null) { session.close(); }
				if (jmsConn != null) { jmsConn.close(); }
			} catch (Exception ex) {
				ex.printStackTrace();
			}
			
		}
		
	}

}

This version of the client JMS application didn't change much compared to its original, queue-based version, apart from the usage of the message property called site. This message property is used by the selectors as the criterion to load-balance the incoming events across the servers. In the real world, this type of message enrichment is commonly applied by an architectural component that sits in front of the Oracle CEP JVMs, which could be an ESB, an application server or a corporate load balancer. Regardless of which mechanism you intend to use, in terms of architecture it is the responsibility of that mechanism to provide this message enrichment, another EAI pattern that must be part of your final solution.

Now we can start the tests. Deploy this new version of the Oracle CEP application to the servers (rw1, rw2, pa1 and pa2) and run the client JMS application again. A recording of this third and last test is also available on YouTube. Watch the video below to see how using JMS topics affects the scalability of your Oracle CEP applications.

Conclusion

Today more than ever, solution architects and developers should be aware of the techniques that can be applied in their solutions to provide real scalability. Trends like Fast Data are among the biggest motivators for that. This article discussed the importance of scalability, especially in event-driven solutions. Through a didactic business scenario, it showed how to apply scalability to Oracle CEP applications, exploring the pros and cons of two different approaches. Finally, it showed in detail how to implement both approaches in the Oracle CEP example application.

Sunday Jul 08, 2012

The Developers Conference 2012: Presentation about CEP & BAM

This year I had the pleasure of once again being one of the speakers at TDC ("The Developers Conference"). I have spoken at this event for three years now. This year, the main theme of the SOA track was EDA ("Event-Driven Architecture"), and I decided to deliver a comprehensive presentation about one of my favorite personal subjects: real-time processing using Complex Event Processing. The theme of the presentation was "Business Intelligence in Real-time using CEP & BAM", and I would like to share here the presentation that I delivered. The material is in Portuguese, since it was a Brazilian event that took place in São Paulo.

Since my presentation has a lot of videos, I decided to share the material as a YouTube video, so you can pause, rewind and replay it as many times as you want. I strongly recommend that, before you start watching, you change the video quality settings to 1080p High Definition.

Saturday Jun 23, 2012

Calculating the Size (in Bytes and MB) of a Oracle Coherence Cache

The concept and usage of data grids are becoming very popular these days, since this type of technology is evolving very fast, with some cool leading products like Oracle Coherence. Every once in a while, developers need a programmatic way to calculate the total size of a specific cache residing in the data grid. In this post, I will show how to accomplish this using the Oracle Coherence API. This example has been tested with the 3.6, 3.7 and 3.7.1 versions of Oracle Coherence.

To start the development of this example, you need to create a POJO ("Plain Old Java Object") that represents a data structure that will hold user data. This data structure will also create what I call internal "fat", which should considerably increase the size of each instance in the heap memory. Create a Java class named "Person" as shown in the listing below.

package com.oracle.coherence.domain;

import java.io.Serializable;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Random;

@SuppressWarnings("serial")
public class Person implements Serializable {
	
	private String firstName;
	private String lastName;
	private List<Object> fat;
	private String email;
	
	public Person() {
		generateFat();
	}
	
	public Person(String firstName, String lastName,
			String email) {
		setFirstName(firstName);
		setLastName(lastName);
		setEmail(email);
		generateFat();
	}
	
	private void generateFat() {
		fat = new ArrayList<Object>();
		Random random = new Random();
		for (int i = 0; i < random.nextInt(18000); i++) {
			HashMap<Long, Double> internalFat = new HashMap<Long, Double>();
			for (int j = 0; j < random.nextInt(10000); j++) {
				internalFat.put(random.nextLong(), random.nextDouble());
			}
			fat.add(internalFat);
		}
	}
	
	public String getFirstName() {
		return firstName;
	}

	public void setFirstName(String firstName) {
		this.firstName = firstName;
	}

	public String getLastName() {
		return lastName;
	}

	public void setLastName(String lastName) {
		this.lastName = lastName;
	}

	public String getEmail() {
		return email;
	}

	public void setEmail(String email) {
		this.email = email;
	}

}

Now let's create a Java program that will start a data grid node in Coherence and create a cache named "People", which will hold Person instances with sequential integer keys. Each person created in this program triggers the execution of a custom constructor in the Person class that instantiates the internal fat (the random amount of data generated to increase the size of the object) for each person. Create a Java class named "CreatePeopleCacheAndPopulateWithData" as shown in the listing below.

package com.oracle.coherence.demo;

import com.oracle.coherence.domain.Person;
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class CreatePeopleCacheAndPopulateWithData {

	public static void main(String[] args) {
		
		// Asks Coherence for a new cache named "People"...
		NamedCache people = CacheFactory.getCache("People");
		
		// Creates three people that will be put into the data grid. Each person
		// generates an internal fat that should increase its size in terms of bytes...
		Person pessoa1 = new Person("Ricardo", "Ferreira", "ricardo.ferreira@example.com");
		Person pessoa2 = new Person("Vitor", "Ferreira", "vitor.ferreira@example.com");
		Person pessoa3 = new Person("Vivian", "Ferreira", "vivian.ferreira@example.com");
		
		// Inserts three people into the data grid...
		people.put(1, pessoa1);
		people.put(2, pessoa2);
		people.put(3, pessoa3);
		
		// Waits for 5 minutes until the user runs the Java program
		// that calculates the total size of the people cache...
		try {
			System.out.println("---> Waiting for 5 minutes for the cache size calculation...");
			Thread.sleep(300000);
		} catch (InterruptedException ie) {
			ie.printStackTrace();
		}
		
	}

}

Finally, let's create a Java program that, using the Coherence API and JMX, calculates the total size of each cache that the data grid is currently managing. The approach used in this example is to retrieve every cache the data grid currently manages, but if you are interested in one specific cache, the same approach can be used: you just need to filter which cache to look at. Create a Java class named "CalculateTheSizeOfPeopleCache" as shown in the listing below.

package com.oracle.coherence.demo;

import java.text.DecimalFormat;
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;

import javax.management.MBeanServer;
import javax.management.MBeanServerFactory;
import javax.management.ObjectName;

import com.tangosol.net.CacheFactory;

public class CalculateTheSizeOfPeopleCache {
	
	@SuppressWarnings({ "unchecked", "rawtypes" })
	private void run() throws Exception {
		
        // Enable JMX support in this Coherence data grid session...
        System.setProperty("tangosol.coherence.management", "all");

        // Create a sample cache just to access the data grid...
        CacheFactory.getCache(MBeanServerFactory.class.getName());

        // Gets the JMX server from the Coherence data grid...
        MBeanServer jmxServer = getJMXServer();

        // Creates an internal data structure that will maintain
        // the statistics from each cache in the data grid...
        Map cacheList = new TreeMap();
        Set jmxObjectList = jmxServer.queryNames(new ObjectName("Coherence:type=Cache,*"), null);
        for (Object jmxObject : jmxObjectList) {
            ObjectName jmxObjectName = (ObjectName) jmxObject;
            String cacheName = jmxObjectName.getKeyProperty("name");
            if (cacheName.equals(MBeanServerFactory.class.getName())) {
            	continue;
            } else {
            	cacheList.put(cacheName, new Statistics(cacheName));
            }
        }
        
        // Updates the internal data structure with statistic data
        // retrieved from caches inside the in-memory data grid...
        Set<String> cacheNames = cacheList.keySet();
        for (String cacheName : cacheNames) {
            Set resultSet = jmxServer.queryNames(
            	new ObjectName("Coherence:type=Cache,name=" + cacheName + ",*"), null);
            for (Object resultSetRef : resultSet) {
                ObjectName objectName = (ObjectName) resultSetRef;
                if (objectName.getKeyProperty("tier").equals("back")) {
                    int unit = (Integer) jmxServer.getAttribute(objectName, "Units");
                    int size = (Integer) jmxServer.getAttribute(objectName, "Size");
                    Statistics statistics = (Statistics) cacheList.get(cacheName);
                    statistics.incrementUnit(unit);
                    statistics.incrementSize(size);
                    cacheList.put(cacheName, statistics);
                }
            }
        }
        
        // Finally... print the objects from the internal data
        // structure that represents the statistics from caches...
        cacheNames = cacheList.keySet();
        for (String cacheName : cacheNames) {
            Statistics estatisticas = (Statistics) cacheList.get(cacheName);
            System.out.println(estatisticas);
        }
        
    }

    public MBeanServer getJMXServer() {
        MBeanServer jmxServer = null;
        for (Object jmxServerRef : MBeanServerFactory.findMBeanServer(null)) {
            jmxServer = (MBeanServer) jmxServerRef;
            if (jmxServer.getDefaultDomain().equals(DEFAULT_DOMAIN) || DEFAULT_DOMAIN.length() == 0) {
                break;
            }
            jmxServer = null;
        }
        if (jmxServer == null) {
            jmxServer = MBeanServerFactory.createMBeanServer(DEFAULT_DOMAIN);
        }
        return jmxServer;
    }
	
    private class Statistics {
		
        private long unit;
        private long size;
        private String cacheName;
		
	public Statistics(String cacheName) {
            this.cacheName = cacheName;
        }

        public void incrementUnit(long unit) {
            this.unit += unit;
        }

        public void incrementSize(long size) {
            this.size += size;
        }

        public long getUnit() {
            return unit;
        }

        public long getSize() {
            return size;
        }

        public double getUnitInMB() {
            return unit / (1024.0 * 1024.0);
        }

        public double getAverageSize() {
            return size == 0 ? 0 : unit / size;
        }

        public String toString() {
            StringBuffer sb = new StringBuffer();
            sb.append("\nCache Statistics of '").append(cacheName).append("':\n");
            sb.append("   - Total Entries of Cache -----> " + getSize()).append("\n");
            sb.append("   - Used Memory (Bytes) --------> " + getUnit()).append("\n");
            sb.append("   - Used Memory (MB) -----------> " + FORMAT.format(getUnitInMB())).append("\n");
            sb.append("   - Object Average Size --------> " + FORMAT.format(getAverageSize())).append("\n");
            return sb.toString();
        }

    }
	
    public static void main(String[] args) throws Exception {
	new CalculateTheSizeOfPeopleCache().run();
    }
	
    public static final DecimalFormat FORMAT = new DecimalFormat("###.###");
    public static final String DEFAULT_DOMAIN = "";
    public static final String DOMAIN_NAME = "Coherence";

}

I've commented the overall example, so I don't think you will have trouble understanding it. Basically, we are dealing with JMX. The first thing to do is enable JMX support for the Coherence client application (i.e., a JVM that only retrieves values from the data grid rather than acting as a storage member of the cluster). This can be done very easily using the "tangosol.coherence.management" system property at runtime; consult the Coherence JMX documentation to understand the possible values that can be applied. The program creates an in-memory data structure that holds instances of a custom class called "Statistics".

This class represents the information we are interested in, which in this case is the size of the caches in bytes and in MB. An instance of this class is created for each cache currently managed by the data grid. Using JMX-specific methods, we retrieve the information relevant to calculating the total size of the caches. To test this example, you should execute the CreatePeopleCacheAndPopulateWithData program first and then the CalculateTheSizeOfPeopleCache program.
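
For reference, a sketch of how the two programs could be launched from the command line is shown below; the classpath entries are illustrative and must point to your compiled classes and to the coherence.jar library (on Windows, use ";" as the classpath separator):

     # Terminal 1: creates and populates the People cache, then keeps the JVM alive for 5 minutes
     java -cp classes:coherence.jar com.oracle.coherence.demo.CreatePeopleCacheAndPopulateWithData

     # Terminal 2: joins the same data grid and prints the cache statistics
     java -cp classes:coherence.jar com.oracle.coherence.demo.CalculateTheSizeOfPeopleCache

The results in the console should be something like this: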

2012-06-23 13:29:31.188/4.970 Oracle Coherence 3.6.0.4 <Info> (thread=Main Thread, member=n/a): Loaded operational configuration from "jar:file:/E:/Oracle/Middleware/oepe_11gR1PS4/workspace/calcular-tamanho-cache-coherence/lib/coherence.jar!/tangosol-coherence.xml"
2012-06-23 13:29:31.219/5.001 Oracle Coherence 3.6.0.4 <Info> (thread=Main Thread, member=n/a): Loaded operational overrides from "jar:file:/E:/Oracle/Middleware/oepe_11gR1PS4/workspace/calcular-tamanho-cache-coherence/lib/coherence.jar!/tangosol-coherence-override-dev.xml"
2012-06-23 13:29:31.219/5.001 Oracle Coherence 3.6.0.4 <D5> (thread=Main Thread, member=n/a): Optional configuration override "/tangosol-coherence-override.xml" is not specified
2012-06-23 13:29:31.266/5.048 Oracle Coherence 3.6.0.4 <D5> (thread=Main Thread, member=n/a): Optional configuration override "/custom-mbeans.xml" is not specified

Oracle Coherence Version 3.6.0.4 Build 19111
 Grid Edition: Development mode
Copyright (c) 2000, 2010, Oracle and/or its affiliates. All rights reserved.

2012-06-23 13:29:33.156/6.938 Oracle Coherence GE 3.6.0.4 <Info> (thread=Main Thread, member=n/a): Loaded Reporter configuration from "jar:file:/E:/Oracle/Middleware/oepe_11gR1PS4/workspace/calcular-tamanho-cache-coherence/lib/coherence.jar!/reports/report-group.xml"
2012-06-23 13:29:33.500/7.282 Oracle Coherence GE 3.6.0.4 <Info> (thread=Main Thread, member=n/a): Loaded cache configuration from "jar:file:/E:/Oracle/Middleware/oepe_11gR1PS4/workspace/calcular-tamanho-cache-coherence/lib/coherence.jar!/coherence-cache-config.xml"
2012-06-23 13:29:35.391/9.173 Oracle Coherence GE 3.6.0.4 <D4> (thread=Main Thread, member=n/a): TCMP bound to /192.168.177.133:8090 using SystemSocketProvider
2012-06-23 13:29:37.062/10.844 Oracle Coherence GE 3.6.0.4 <Info> (thread=Cluster, member=n/a): This Member(Id=2, Timestamp=2012-06-23 13:29:36.899, Address=192.168.177.133:8090, MachineId=55685, Location=process:244, Role=Oracle, Edition=Grid Edition, Mode=Development, CpuCount=2, SocketCount=2) joined cluster "cluster:0xC4DB" with senior Member(Id=1, Timestamp=2012-06-23 13:29:14.031, Address=192.168.177.133:8088, MachineId=55685, Location=process:1128, Role=CreatePeopleCacheAndPopulateWith, Edition=Grid Edition, Mode=Development, CpuCount=2, SocketCount=2)
2012-06-23 13:29:37.172/10.954 Oracle Coherence GE 3.6.0.4 <D5> (thread=Cluster, member=n/a): Member 1 joined Service Cluster with senior member 1
2012-06-23 13:29:37.188/10.970 Oracle Coherence GE 3.6.0.4 <D5> (thread=Cluster, member=n/a): Member 1 joined Service Management with senior member 1
2012-06-23 13:29:37.188/10.970 Oracle Coherence GE 3.6.0.4 <D5> (thread=Cluster, member=n/a): Member 1 joined Service DistributedCache with senior member 1
2012-06-23 13:29:37.188/10.970 Oracle Coherence GE 3.6.0.4 <Info> (thread=Main Thread, member=n/a): Started cluster Name=cluster:0xC4DB

Group{Address=224.3.6.0, Port=36000, TTL=4}

MasterMemberSet
  (
  ThisMember=Member(Id=2, Timestamp=2012-06-23 13:29:36.899, Address=192.168.177.133:8090, MachineId=55685, Location=process:244, Role=Oracle)
  OldestMember=Member(Id=1, Timestamp=2012-06-23 13:29:14.031, Address=192.168.177.133:8088, MachineId=55685, Location=process:1128, Role=CreatePeopleCacheAndPopulateWith)
  ActualMemberSet=MemberSet(Size=2, BitSetCount=2
    Member(Id=1, Timestamp=2012-06-23 13:29:14.031, Address=192.168.177.133:8088, MachineId=55685, Location=process:1128, Role=CreatePeopleCacheAndPopulateWith)
    Member(Id=2, Timestamp=2012-06-23 13:29:36.899, Address=192.168.177.133:8090, MachineId=55685, Location=process:244, Role=Oracle)
    )
  RecycleMillis=1200000
  RecycleSet=MemberSet(Size=0, BitSetCount=0
    )
  )

TcpRing{Connections=[1]}
IpMonitor{AddressListSize=0}

2012-06-23 13:29:37.891/11.673 Oracle Coherence GE 3.6.0.4 <D5> (thread=Invocation:Management, member=2): Service Management joined the cluster with senior service member 1
2012-06-23 13:29:39.203/12.985 Oracle Coherence GE 3.6.0.4 <D5> (thread=DistributedCache, member=2): Service DistributedCache joined the cluster with senior service member 1
2012-06-23 13:29:39.297/13.079 Oracle Coherence GE 3.6.0.4 <D4> (thread=DistributedCache, member=2): Asking member 1 for 128 primary partitions

Cache Statistics of 'People':
   - Total Entries of Cache -----> 3
   - Used Memory (Bytes) --------> 883920
   - Used Memory (MB) -----------> 0.843
   - Object Average Size --------> 294640

I hope this post saves you some time when calculating the total size of a Coherence cache becomes a requirement for your highly scalable system using data grids. See you!

Wednesday Apr 18, 2012

Oracle Technical Workshop: WebLogic Suite 12c (March 28, São Paulo)

On March 28, 2012, I presented, at the Pullman hotel in São Paulo, a whole-day workshop about the technical innovations of Oracle WebLogic 12c, plus the related middleware stack that Oracle has created around it. This workshop, for those who could attend in person, was very informative and productive since, without exception, every presentation was demonstrated in practice with the audience.

I would like to share with you the slides I used in this workshop. The slides are written in Portuguese, since it was a Brazilian workshop for the folks in Brazil. Please enjoy!

Wednesday Feb 22, 2012

Oracle Coherence: First Steps Using Clusters and Basic API Usage

When we talk about distributed data grids, elastic caching platforms and in-memory caching technologies, Oracle Coherence is the first option that comes to mind. This happens because Oracle Coherence is the oldest and most mature implementation of data grids, with success stories across the world; it is the data grid implementation with the largest number of use cases worldwide. Since its acquisition by Oracle in 2007, the product has been enhanced with powerful enterprise features to keep its "best in the world" position against its competitors.


This article will help you take your first steps with Oracle Coherence. I have prepared a sequence of three videos that will guide you through the process of creating a data grid cluster and managing data using both the Java API and CohQL ("Coherence Query Language"), and finally testing the reliability and failover features of the product.

Oracle allows you to download and use any of its products for free if you are interested in learning or testing the technology. Unlike other vendors that first put you in contact with a sales representative or simply don't make their software available for download, Oracle encourages you to use the technology so you gain confidence with it. You can download Oracle Coherence at this link. If you don't have an OTN ("Oracle Technology Network") credential, you will be asked to create one.

If you have a powerful computer and fast internet bandwidth, change the video quality settings to 1080p HD ("High Definition"). It will considerably improve the quality of your viewing.

Tuesday Oct 11, 2011

Getting Started with Oracle Tuxedo: Creating a COBOL-based Application

This article will show how COBOL developers can create, with minimum effort, robust distributed and service-oriented applications using the facilities offered by Oracle Tuxedo. Through a step-by-step example, you will learn how to configure and set up an Oracle Tuxedo development environment, how to implement COBOL code that enables client and server interactions, and how to deploy and test the application on Oracle Tuxedo. For this article, it is expected that you have basic knowledge of the Linux operating system and, if you are not a COBOL developer, basic knowledge of programming.

What is exactly the Oracle Tuxedo?

Simply put, Oracle Tuxedo is an application server for non-Java developers. This means that developers using programming languages like C/C++, COBOL, Python and Ruby can implement distributed applications using enterprise features like messaging, clustering, security, load balancing, scalability and thread management provided by a middleware implementation. The main difference is that the platform itself is not based on Java, and the programming language used to develop the applications is not restricted to Java.

Historically speaking, the concept of an application server has been used in distributed architectures to promote loose coupling between client and server applications (which is why it is commonly called middleware) and the reuse of common features that, in the old days, developers had to write every time they needed to create a distributed application: features like messaging, clustering, security, load balancing, scalability and thread management, most of them critical non-functional requirements. Instead of creating those features every time, you can delegate them to a middleware that hosts them and reuse them across different applications, since they offer the same behavior for every application. Another key thing about application servers is that they introduce a programming model that forces developers to focus only on business logic instead of basic infrastructure.

With this programming model in mind, you can write distributed applications without worrying about the fact that they are distributed, meaning that you don't have to worry about how client applications invoke services from server applications, how messages are serialized and deserialized between remote computers, and how cross-cutting concerns (aspects) are applied at every interaction. Oracle Tuxedo has been very popular in the market as an application server, and it has more than 25 years of maturity and evolution.

In fact, the first lines of code of Tuxedo were written in 1983 at the AT&T laboratories. Years later, Novell acquired Unix System Laboratories, the AT&T division responsible for Tuxedo development. In an exclusive formal agreement, BEA Systems started the development of Tuxedo on non-NetWare platforms and became the principal maintainer and distributor of the Tuxedo technology. Since the acquisition by Oracle in 2008, Tuxedo has been renamed Oracle Tuxedo and is now part of the Oracle Fusion Middleware stack, being heavily optimized year after year.

COBOL? Why not C/C++ or Java?

You are probably wondering why this article focuses on the COBOL programming language instead of more popular and equally powerful programming languages like C/C++ or Java. COBOL is a structured programming language widely used at many organizations worldwide, due to its popularity in the 80's and 90's and the high adoption of mainframes. In fact, most banking organizations today run their critical transactions on mainframes, and those transactions are written in COBOL. Even on x86 architectures we find many applications written in this programming language, so it is only fair to COBOL developers to dedicate this article to them.

In the C/C++ world, the concept and usage of application servers is pretty common. In the Java world it is almost a rule of development, so it is natural for Java developers (actually, Java EE developers) to use application servers. Unfortunately, this scenario does not apply to COBOL developers. Having said that, Oracle Tuxedo was designed from the ground up to handle C/C++ implementations, so if you are a C/C++ developer, you will not find it difficult to work with Oracle Tuxedo. It goes without saying that, if you are a Java developer, there are plenty of application server implementations available in the market today, like Oracle WebLogic and Oracle GlassFish, so there is no need to use Oracle Tuxedo as your application server.

Setting Up an Oracle Tuxedo Development Environment for COBOL

Let's start the development of a simple distributed application using Oracle Tuxedo and the COBOL programming language. For this article, I have used a Linux operating system (Fedora 14) as the platform. If you intend to use another operating system, check whether both Oracle Tuxedo and the COBOL compiler support it. The first thing to configure is a proper COBOL compiler. Oracle Tuxedo does not distribute any compiler, not even for C/C++; you need to use a certified compiler in order to develop with Oracle Tuxedo. Fortunately, there are many COBOL compilers available today. Oracle certifies two compilers in particular: the Micro Focus COBOL compiler and the COBOL-IT compiler suite. The main difference between these two implementations is that the Micro Focus COBOL compiler is proprietary and requires you to pay licenses to use it, while COBOL-IT is free and open source; the company basically offers support and consultancy through subscriptions. For this article, we will use COBOL-IT in the development of the example.

You will need a C/C++ compiler too. During the compilation of COBOL source code files, Oracle Tuxedo translates the COBOL code to native C/C++ code and, after that, compiles it for the target platform as a native executable. This means that, implicitly, a C/C++ compilation occurs even though you used COBOL as the programming language. There are no restrictions about which C/C++ compiler to use. If you are a Linux user, the GNU ANSI C compiler will already be available on the platform; depending on your Linux distribution, some other packages may need to be installed too.

Download the COBOL-IT compiler suite by clicking here. The COBOL-IT compiler suite installation is pretty simple: just download the zipped files and unzip them into a folder of your preference. In my environment, I unzipped them into the /home/riferrei/cobol-it-std-64 folder. After unzipping the files, you need to define two environment variables:

      COBOLITDIR=/home/riferrei/cobol-it-std-64

      PATH=$PATH:$COBOLITDIR/bin

After defining these two environment variables, you are ready to use your COBOL-IT compiler suite. Let's create a simple "Hello World" COBOL application to verify that the COBOL-IT compiler is really working. Create a file named hello.cbl in your environment and write the following COBOL code in it:

       IDENTIFICATION DIVISION.
       PROGRAM-ID. 'hello'.
       ENVIRONMENT DIVISION.
       INPUT-OUTPUT SECTION.
       FILE-CONTROL.
       DATA DIVISION.
       FILE SECTION.

       WORKING-STORAGE SECTION.

       01  HELLOMSG    PIC X(80).

       LINKAGE SECTION.

       PROCEDURE DIVISION.

       MAIN SECTION.
            MOVE "Hello World using COBOL" TO HELLOMSG
            DISPLAY HELLOMSG
            ACCEPT  HELLOMSG
            EXIT PROGRAM.

To compile this source code and to generate a native executable program, you need to use the COBOL-IT compiler, using the following command:

     cobc -x hello.cbl

As you can see, the COBOL-IT compiler is invoked through the cobc command, available in the /bin directory of the COBOL-IT compiler suite installation. After typing this command, you should see a native executable program generated in the same directory as the hello.cbl file. Executing this program should produce, unsurprisingly, a console output with the following message: "Hello World using COBOL".

Now that your COBOL-IT compiler suite is up and running, it is time to move forward and start the configuration of Oracle Tuxedo. For this article, I have used the 11g R1 version of Oracle Tuxedo, which at the time this article was written was the latest version available. Oracle Tuxedo is freely available for learning and evaluation; you can download the installation software here. Install the software following the recommendations documented in the Oracle Tuxedo installation guide. I will not repeat those instructions here, since you can easily follow them from the official documentation.

The configuration of Oracle Tuxedo is very straightforward. In essence, you have to define a series of environment variables that change how Oracle Tuxedo will compile, execute and manage the COBOL applications. It is important to remember that the instructions given here apply exclusively to COBOL development with Oracle Tuxedo; if you want to use Oracle Tuxedo with the C/C++ programming language, there are other instructions available. I have summarized the list of environment variables that you need to define for the development of this article.

It is a good idea to put those environment variables in the system scope. If you are a Linux operating system user, it could be a good idea to define those environment variables in the .bash_profile configuration file of your home directory.
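
For example, the variables defined earlier for the COBOL-IT compiler, together with the Tuxedo-related ones, could be appended to ~/.bash_profile as shown below; the TUXDIR path is only an illustration and must match your actual Oracle Tuxedo installation directory:

      # ~/.bash_profile (paths are illustrative)
      export COBOLITDIR=/home/riferrei/cobol-it-std-64
      export TUXDIR=/home/riferrei/oracle/tuxedo11gR1
      export PATH=$PATH:$COBOLITDIR/bin:$TUXDIR/bin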

Creating a COBOL-based Application using Oracle Tuxedo

When you develop applications using Oracle Tuxedo, you are actually creating a distributed application. This means that you should create at least two applications: the client-tier, which will be the "presentation layer" for the end user, and the server-tier, which provides one or more service interfaces to be consumed by the client-tier. If you look closely, it is basically the same architecture that Java EE provides. The programming models available in Oracle Tuxedo are ATMI and CORBA. For this article, I will use ATMI, since it is the only programming model available for the COBOL programming language. But for more advanced applications, especially those based on the C/C++ programming language, I encourage you to use CORBA instead of ATMI. This approach frees your code from Oracle Tuxedo specific APIs, making your business logic genuinely portable to other CORBA implementations such as Progress Orbix, Micro Focus Visibroker, etc.

We will create a simple example of a distributed application in which the server-tier exposes a service that takes a string as a parameter and converts it to upper case. The client-tier will take a string from the command line and send it as a parameter to the server-tier for processing. The first thing to do is create a folder that will be our application directory. Create a folder named MY_TUX_APP. After this, you need to define two environment variables (a sketch of the corresponding shell commands appears right after the listing below):

      APPDIR=/home/riferrei/MY_TUX_APP

      TUXCONFIG=$APPDIR/tuxconfig
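
A minimal sketch of the corresponding shell commands, assuming the same home directory used throughout this article:

      mkdir /home/riferrei/MY_TUX_APP
      export APPDIR=/home/riferrei/MY_TUX_APP
      export TUXCONFIG=$APPDIR/tuxconfig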

These two environment variables are used in the deployment phase of Oracle Tuxedo development, which means they apply to each application that is deployed. The other environment variables are defined only once and reused across different deployments. Enter the $APPDIR directory. Let's start the development of the application with the server-tier layer. Create a file named serverApp.cbl and write the following COBOL code:

        IDENTIFICATION DIVISION.
        PROGRAM-ID. serverApp.
        AUTHOR. TUXEDO DEVELOPMENT.
        ENVIRONMENT DIVISION.
        CONFIGURATION SECTION.
        DATA DIVISION.
        WORKING-STORAGE SECTION.

        01  TPSVCRET-REC.
        COPY TPSVCRET.

        01  TPTYPE-REC.
        COPY TPTYPE.

        01 TPSTATUS-REC.
        COPY TPSTATUS.

        01  TPSVCDEF-REC.
        COPY TPSVCDEF.

        01  LOGMSG.
                05  FILLER        PIC X(10) VALUE  
                        "server :".
                05  LOGMSG-TEXT   PIC X(50).
        01  LOGMSG-LEN            PIC S9(9)  COMP-5.

        01 RECV-STRING            PIC X(100).
        01 SEND-STRING            PIC X(100).

        LINKAGE SECTION.

        PROCEDURE DIVISION.

       START-FUNDUPSR.
           MOVE LENGTH OF LOGMSG TO LOGMSG-LEN. 
           MOVE "Started" TO LOGMSG-TEXT.
           PERFORM DO-USERLOG. 

           MOVE LENGTH OF RECV-STRING TO LEN.
           CALL "TPSVCSTART" USING TPSVCDEF-REC 
                        TPTYPE-REC 
                        RECV-STRING
                        TPSTATUS-REC.      

           IF NOT TPOK
                MOVE "TPSVCSTART Failed" TO LOGMSG-TEXT
                    PERFORM DO-USERLOG 
                PERFORM EXIT-PROGRAM 
           END-IF.

           IF TPTRUNCATE 
                MOVE "Data was truncated" TO LOGMSG-TEXT
                    PERFORM DO-USERLOG 
                PERFORM EXIT-PROGRAM 
           END-IF.

           INSPECT RECV-STRING CONVERTING
           "abcdefghijklmnopqrstuvwxyz" TO
           "ABCDEFGHIJKLMNOPQRSTUVWXYZ".
           MOVE "Success" TO LOGMSG-TEXT.
           PERFORM DO-USERLOG.

           SET TPSUCCESS TO TRUE.
           COPY TPRETURN REPLACING 
                DATA-REC BY RECV-STRING.

       DO-USERLOG.
           CALL "USERLOG" USING LOGMSG 
                LOGMSG-LEN 
                TPSTATUS-REC.
       EXIT-PROGRAM.
           MOVE "Failed" TO LOGMSG-TEXT.
           PERFORM DO-USERLOG.
           SET TPFAIL TO TRUE.
           COPY TPRETURN REPLACING 
                DATA-REC BY RECV-STRING.

This COBOL application receives a string parameter and converts it to upper case. You can see in the code that every interaction or "phase" is sent to a logging mechanism called the user log. This is an interesting approach supported by Oracle Tuxedo that enables developers to see what is happening at runtime. Routines like TPSVCSTART and USERLOG are included in the source code dynamically and are part of the Oracle Tuxedo ATMI API. To compile this application and generate a native executable program, type the following command:

      buildserver -C -o serverApp -f serverApp.cbl -s serverApp

A native executable program named serverApp will be generated in the current directory. Let's understand what each parameter of the buildserver command does. The "-C" parameter tells Oracle Tuxedo that a COBOL compilation will be done; without this parameter, Oracle Tuxedo assumes that a C/C++ compilation will occur. The "-o" parameter sets the name of the native executable program. The "-f" parameter indicates which source code must be compiled. Finally, the "-s" parameter defines which service will be advertised when this server is up and running.
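
In our example the advertised service and the COBOL program that implements it share the same name, serverApp. If you wanted the service to be published under a different name, the buildserver reference documents a service:routine form of the "-s" option; a hedged sketch, assuming that syntax, could look like this (the name TOUPPER is purely illustrative and is not used anywhere else in this article):

      buildserver -C -o serverApp -f serverApp.cbl -s TOUPPER:serverApp

If you used such a mapping, the SERVICE-NAME set in the client and the *SERVICES section of the UBBCONFIG file shown later would have to use the new name as well.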

Now let's create the client-tier application. Staying in the same folder ($APPDIR), create a file named clientApp.cbl and write the following COBOL code in it:

        IDENTIFICATION DIVISION.
        PROGRAM-ID. clientApp.
        AUTHOR. TUXEDO DEVELOPMENT.
        ENVIRONMENT DIVISION.
        CONFIGURATION SECTION.
        SPECIAL-NAMES.
            SYSERR IS STANDARD-ERROR.

        DATA DIVISION.
        WORKING-STORAGE SECTION.

        01  PARM-CNT PIC 9(05).

        01  TPTYPE-REC. 
        COPY TPTYPE.

        01  TPSTATUS-REC. 
        COPY TPSTATUS.

        01  TPSVCDEF-REC. 
        COPY TPSVCDEF.

        01  TPINFDEF-REC VALUE LOW-VALUES.
        COPY TPINFDEF.

        01  LOGMSG.
            05  FILLER		PIC X(8) VALUE  "client:".
            05  LOGMSG-TEXT	PIC X(50).
        01  LOGMSG-LEN		PIC S9(9)  COMP-5.

        01  USER-DATA-REC	PIC X(75).
        01  SEND-STRING		PIC X(100) VALUE SPACES.
        01  RECV-STRING		PIC X(100) VALUE SPACES.

        PROCEDURE DIVISION.
        START-CSIMPCL.
          MOVE LENGTH OF LOGMSG TO LOGMSG-LEN. 
          ACCEPT PARM-CNT FROM ARGUMENT-NUMBER.
          IF PARM-CNT IS NOT EQUAL TO 1 THEN
              DISPLAY "Usage: clientApp String"
              STOP RUN
          END-IF.

          ACCEPT SEND-STRING FROM ARGUMENT-VALUE.
          DISPLAY "SEND-STRING:" SEND-STRING.
      
          MOVE "Started" TO LOGMSG-TEXT.
          PERFORM DO-USERLOG.
      
          PERFORM DO-TPINIT. 
          PERFORM DO-TPCALL. 
          DISPLAY "RECV-STRING:" RECV-STRING.
          PERFORM DO-TPTERM. 
          PERFORM EXIT-PROGRAM. 
      
        DO-TPINIT.
          MOVE SPACES TO USRNAME.
          MOVE SPACES TO CLTNAME.
          MOVE SPACES TO PASSWD.
          MOVE SPACES TO GRPNAME.
          MOVE ZERO TO DATALEN.
          SET TPU-DIP TO TRUE.

          CALL "TPINITIALIZE" USING TPINFDEF-REC 
                USER-DATA-REC 
                TPSTATUS-REC.      
      
          IF NOT TPOK
                MOVE "TPINITIALIZE Failed" TO LOGMSG-TEXT
                PERFORM DO-USERLOG
                PERFORM EXIT-PROGRAM
          END-IF.
      
        DO-TPCALL.
          MOVE 100 TO LEN.
          MOVE "STRING" TO REC-TYPE.
      
          MOVE "serverApp" TO SERVICE-NAME.
          SET TPBLOCK TO TRUE.
          SET TPNOTRAN TO TRUE.
          SET TPNOTIME TO TRUE.
          SET TPSIGRSTRT TO TRUE.
          SET TPCHANGE TO TRUE.
       
          CALL "TPCALL" USING TPSVCDEF-REC 
                TPTYPE-REC 
                SEND-STRING
                TPTYPE-REC 
                RECV-STRING
                TPSTATUS-REC. 
      
          IF NOT TPOK
                MOVE "TPCALL Failed" TO LOGMSG-TEXT
                PERFORM DO-USERLOG 
          END-IF.
      
        DO-TPTERM.
          CALL "TPTERM" USING TPSTATUS-REC.      
          IF  NOT TPOK
                MOVE "TPTERM Failed" TO LOGMSG-TEXT
                PERFORM DO-USERLOG
          END-IF.
      
        DO-USERLOG.
          CALL "USERLOG" USING LOGMSG 
                LOGMSG-LEN 
                TPSTATUS-REC.
      
        EXIT-PROGRAM.
          MOVE "Ended" TO LOGMSG-TEXT.
          PERFORM DO-USERLOG.
          STOP RUN.

Let's compile this client-tier application. To do this, just type the following command:

      buildclient -C -o clientApp -f clientApp.cbl

As you can see, the parameters used in this compilation are the same ones we used to compile the server-tier, except for the "-s" parameter, which is unnecessary for the client-tier. At this point, you should have two executable applications in the directory, one named clientApp and another named serverApp. To start the deployment phase of this application, it is necessary to create its configuration file, called the UBBCONFIG file. The UBBCONFIG file acts as the external configuration descriptor that defines the application. If you are familiar with Java EE development, think of this configuration file as a deployment descriptor. In the application directory, create a file named ubbConfig and edit it as shown in the listing below:

*RESOURCES
IPCKEY	123459

DOMAINID	ubbConfig
MASTER		simple
MAXACCESSERS	5
MAXSERVERS	5
MAXSERVICES	10
MODEL		SHM
LDBAL		N

*MACHINES
DEFAULT:

APPDIR="/home/riferrei/MY_TUX_APP"
TUXCONFIG="/home/riferrei/MY_TUX_APP/tuxconfig"
TUXDIR="/home/riferrei/oracle/tuxedo11gR1"

riferrei_linux_64	LMID=simple

*GROUPS
GROUP1
	LMID=simple	GRPNO=1	OPENINFO=NONE

*SERVERS
DEFAULT:
		CLOPT="-A"

serverApp	SRVGRP=GROUP1 SRVID=1

*SERVICES
serverApp

There is one section of this configuration file that you must change to get the example working: the *MACHINES section. Change the "APPDIR", "TUXCONFIG" and "TUXDIR" variables to reflect your file system layout and directory locations. You also need to change the hostname of the server. As you can see in the UBBCONFIG file, the machine "riferrei_linux_64" is mapped to the element "LMID=simple". Change this name to reflect the hostname of your machine. I ran into problems when I used hostnames with special characters like "-" or "."; if problems occur while loading the UBBCONFIG file, this could be the cause.
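
On Linux, a quick way to find the machine name to use in the *MACHINES section is shown below; as far as I know it must match the output of uname -n, although releases differ in how strict they are about case and punctuation, so take this as an assumption:

     uname -n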

Deploying and Testing the Application in Oracle Tuxedo

It is time to deploy and test the application. Every application in Oracle Tuxedo must have a binary version of its UBBCONFIG file. To generate this binary version, Oracle Tuxedo provides a very simple utility program named tmloadcf. From the application directory, type the following command:

     tmloadcf ubbConfig
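
tmloadcf may ask you to confirm the creation (or overwriting) of the TUXCONFIG file. If you prefer a non-interactive run, the -y flag answers those prompts automatically:

     tmloadcf -y ubbConfig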

A binary version of the UBBCONFIG file, named tuxconfig, will be generated in the current directory. Now we are all set. Let's deploy the application and begin the tests. To deploy the application and boot the server, type the following command:

     tmboot -y

You should see a few messages appear in the console. These messages basically report the successful startup of the admin and server processes, which means they are ready to accept client requests and process incoming messages. Now type the following command:

     ./clientApp "Oracle Tuxedo"

With this command, we execute the client-tier application and pass a string as a command-line parameter. After typing the command above, you should see the following output in the console:

     SEND-STRING:Oracle Tuxedo
     RECV-STRING:ORACLE TUXEDO

This means that our server-tier application received the message and passed it to the serverApp service. The service, in turn, transformed the string passed as a parameter to upper case and sent it back to the calling application. In the "middle" of these two applications, Oracle Tuxedo took care of the messaging and transaction execution. To shut down the services and the server application, type the following command:

     tmshutdown
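
Like tmloadcf, tmshutdown asks for confirmation before stopping the servers; if you want it to proceed without prompting, the -y flag does that (a small assumption about the defaults of your release):

     tmshutdown -y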

I have recorded a video that captures the entire application development, starting from the compilation process. If you feel that some step was not done correctly, watch the video below to follow, step by step, the sequence of commands needed to compile, deploy and test the application created in this article.


Conclusion

This article gave you an overview of Oracle Tuxedo and how it can be used to create distributed applications using the COBOL programming language. It showed how to set up a development environment that enables COBOL developers to build a simple but complete application on Oracle Tuxedo. I hope this article helps you and your team explore the features that only Oracle Tuxedo offers.

About

Ricardo Ferreira is just a regular person who lives in Brazil and is passionate about technology, movies and his whole family. He currently works at Oracle Corporation.
