Saturday Jan 18, 2014

Capturing Business Events for Future Playback through Oracle Event Processing Platform

At the heart of OEP application development is an understanding of the anatomy of your business events: their structure, how often they happen, their relationship with other types of events and, of course, their volume. Business events play a critical role in event-driven applications because they are the input for the problem you are trying to solve. Without business events, an event-driven application would be like a car engine without fuel.

When you design an EPN (Event Processing Network) for an OEP application, you are actually expressing how business events will be received, processed and perhaps transformed into some kind of output. Like any other technology, the behavior of this output depends heavily on the data you use as input. With one specific volume the EPN may work; with another volume, maybe not. With one specific ordering the EPN may work; with another ordering, it may not. It will only work when the right set of events is being used.

It is very common to deploy the first version of your OEP application and, after a while, have users start complaining about undesired results. This happens because no matter how many times you tested your application, you never got close to the mass of events present in the customer environment. The ordering, volume, size, encoding and frequency of the events found in the customer environment are always different from the mass of events you used during functional testing. If this happens to you, it is time to think of a way to record those events and bring them back to the development environment to figure out what is wrong.

This article explores the record and playback features of the Oracle Event Processing Platform. Through the examples shown here, you will be able to record and replay a mass of business events to simulate behavior that you may have failed to capture in your EPN. This feature is also pretty cool for prototyping scenarios: instead of bringing an event simulator with you, you can simply replay a recorded set of events and present your OEP application.

Configuring the OEP Default Provider

For the record and playback features to work, OEP needs to rely on a repository in which events will be stored. To make the developer's work easier, OEP ships out-of-the-box with a default provider for this repository based on Berkeley DB technology. Inside every OEP domain there is an instance of Berkeley DB (aka "BDB") that can be adjusted through the domain's configuration file. In most cases you don't need to change anything before you start using the record and playback features, but there is one catch you should be aware of.

BDB stores each recorded event as an entry in this repository. Each entry has a default size of 1KB (1000 bytes), pre-defined at the moment you create the domain. The size of each entry matters in terms of performance and how BDB will grow its storage. So if you know the average size of your events, it is a good idea to declare the size of each entry in the domain configuration file. For instance, suppose we are dealing with a mass of events averaging 32KB per event. You can adjust BDB by editing the domain configuration file located in <DOMAIN_HOME>/<SERVER_NAME>/config/config.xml:
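Below is a minimal sketch of the relevant piece of that file; the surrounding elements of config.xml are omitted and may vary by domain, and the value 32768 simply reflects our assumed 32KB average event size:

<bdb-config>
   <db-env-path>bdb</db-env-path>
   <cache-size>32768</cache-size>
</bdb-config>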

As you can see in the example above, you need to adjust the cache-size property of the bdb-config section. This property is measured in bytes, and its value represents the amount of memory that BDB needs to allocate for each entry.

Recording the Events

In order to record events, you need to access the OEP Visualizer Console of your domain. Log in to the console and navigate to the EPN of the OEP application whose events you want to record.

In the EPN of your OEP application, you can right-click any of the elements inside, no matter whether it is an adapter, a channel, a processor or a cache. But it makes the most sense to record events from the adapters' perspective: they are the genesis of your EPN, and all the events that flow through it come from the adapters. Right-click the input adapter of your EPN and choose the "Record Event" option:

This opens the record tab of the adapter. At the bottom of the page, in a section named "Change Recording Schedule", you will find a button named "Add". Click on it.

Adding a new record configuration scheme enables the UI for filling in some fields. Enter the name of this recording session in the "DataSet Name" field; this name will be used to construct the physical BDB repository. You are also required to set the "Selected Event Type List" field, where you describe which incoming events you expect to record. After this, you can click the "Save" button.

We are all set to start recording. Double-check that events are being sent to your OEP application and click the "Start" button. This starts a recording session, and all events received by the adapter will be recorded. At any point in time, you can click the "Stop" button to stop the recording session.

Playing Back the Events

To start the playback of any recorded session, go back to the EPN screen and right-click the channel that will receive the events. This brings you to the playback tab of the selected channel. Just like the recording session, you need to add a playback configuration scheme by clicking the "Add" button.

The group of fields to be filled in is almost the same. Pay attention to the "DataSet Name" field: you need to enter exactly the same name you used in the recording session. Since you can have multiple recording sessions under different names, at playback time you need to tell OEP which session you would like to play back. Likewise, you need to state which events you would like to play back by filling in the "Selected Event Type List" field.

Optionally, you can set some additional parameters related to the behavior of the playback. In the "Change Playback Schedule Parameters" section you will find fields to define which time interval you want to play back. This is useful when you have recorded hours or days of event streaming and would like to reproduce just one piece of it. You can also define the speed at which events are injected into the channel, useful for testing how your EPN behaves in a high-velocity situation. Finally, you can set whether the playback repeats forever. Since the recording session is finite, you can play it back as many times as you want, or simply set the "Repeat" field to true to automatically restart from the beginning when the recorded stream ends.

When you are done, save the playback configuration scheme and click the "Start" button to start the playback session according to your settings. A "Playing..." message will flash on the screen during the playback session. At any time, you can stop the playback by clicking the "Stop" button.

All the configuration you have made so far is stored in the OEP domain along with your application, so don't be shy about restarting the OEP server and potentially losing your work: all the settings will still be there when the server comes back to life. This holds true as long as you don't undeploy your application from the OEP domain; if you undeploy it, all the settings will be lost. If you want to make those settings permanent, you can define them in your application. Access this link to learn how to configure a component to record events, and this link to learn how to configure a component to play back events. A rough sketch of such a configuration follows below.
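For reference, here is what a permanent record/playback configuration could look like in the application's component configuration file. Treat this as illustrative only: the element set may vary between OEP versions, and names like inputAdapter, inputChannel, RecordedSession and CustomerEvent are placeholders, so check the documentation referenced above for the authoritative schema.

<adapter>
   <name>inputAdapter</name>
   <record-parameters>
      <dataset-name>RecordedSession</dataset-name>
      <event-type-list>
         <event-type>CustomerEvent</event-type>
      </event-type-list>
   </record-parameters>
</adapter>

<channel>
   <name>inputChannel</name>
   <playback-parameters>
      <dataset-name>RecordedSession</dataset-name>
      <event-type-list>
         <event-type>CustomerEvent</event-type>
      </event-type-list>
      <repeat>true</repeat>
   </playback-parameters>
</channel>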

Friday Aug 16, 2013

The Perfect Marriage: Oracle Business Rules & Coherence In-Memory Data Grid. Highly Scalable Business Rules with Extreme Low Latency

The idea of separating business rules from application logic is by far an old concept, but in the last ten years we have seen dozens of platforms and technologies created to enable this separation of concerns. One of those technologies is the BRMS, an acronym for Business Rules Management System. The basic idea of a BRMS is to be a repository of rules, governing those rules in such a way that they can be created, updated, tested and controlled through an external interface. Part of the BRMS's responsibility is also to provide an API (more than one when possible) that allows external applications to interact with it, sending data over the network that can trigger the execution of zero, one or multiple rules in the BRMS repository. This rule execution occurs outside of those external applications, minimizing their process memory footprint and generating much less CPU overhead, since the rule execution happens on a separate server/cluster. This architectural approach is very powerful, allowing:

  • Rules can be managed (created, updated) outside of the application code
  • Rules can be reused across different applications, no matter their technology
  • Less CPU overhead and smaller memory footprint in the applications
  • More control over rules, auditing of changes and enterprise log history
  • Integration with other IT artifacts like dictionaries, processes, services

With this context in place, we would all agree that using a BRMS would be mandatory in every IT architecture, given its power, were it not for the fact that BRMS technologies introduce a lot of overhead into the overall transaction latency. Between the external application that invokes the BRMS to execute rules and the BRMS platform itself, there is the network channel. This means we must deal with network I/O and its technical implications (serialization, instability, byte buffering) when we send/receive data to/from the BRMS. No matter whether the BRMS provides a SOAP API, a REST API or any other TCP/IP-based API, the overall transaction latency is compromised by the network overhead.

Another huge problem with BRMS platforms is scalability. When the BRMS platform is first introduced into an architecture, it handles an acceptable number of TPS (Transactions Per Second), which nowadays varies from 1K TPS to 5K TPS. But when other applications start using the same BRMS platform, or the number of transactions simply grows naturally, you can face scenarios where your BRMS platform must deal with 20K TPS or even 100K TPS. What happens when a huge number of objects is allocated in the heap space of the Java-based server? The memory footprint approaches its maximum size and the garbage collector starts to run, reclaiming unused memory and/or compacting the heap layout. No matter what job the garbage collector has to do, it will use all available processing power to finish that job as soon as possible, since the amount of garbage to handle will be huge. This is true for almost all BRMS platforms on the market, regardless of vendor. If the BRMS platform is Java-based, once the server JVMs exceed 16 GB of heap on average, they start to face huge performance problems due to garbage collection.

Unlike other architecture designs in which load is distributed across a cluster, BRMS platforms must handle the entire processing in a single server, due to a general BRMS concept known as the execution agenda and working memory. All the facts (the data sent as input) are maintained in this agenda in a single server, making the BRMS platform a pinned service that does its job in singleton fashion. In this situation, when you need to scale, you can introduce a series of identical servers behind a corporate load balancer that, instead of distributing load, divides entire transaction volumes across those servers. Because each server behind the load balancer handles its entire volume by itself, concurrency is limited by the number of processors available on each server's mainboard. If you need more compute power, due to this lack of concurrency, you are forced to buy a much bigger server. Those servers are huge and expensive, since they need enough processors to handle thousands of executions simultaneously and completely alone. Not a very smart approach when you are aiming to handle millions of TPS.

With this situation in mind, it is necessary to design an architecture that allows business rules execution to be distributed across different servers. To achieve this behavior, we need another software component that can share data (business entities, fact types, data transfer objects) across different processes, running on the same or different hardware boxes. And more important than that, a software component that keeps transaction latency short, eliminating the milliseconds introduced by network overhead. In other words, this software component must bring data to the one hardware layer that truly implies no I/O overhead: memory.

Recently, in order to deal with this problem and provide a customer with a scalable, high-performance way to use Oracle Business Rules, I designed a solution that solves both problems at once, without losing the separation of concerns provided by BRMS platforms. In-Memory Data Grid technologies like Oracle Coherence have the power to handle massive amounts of data (MB, GB or even TB) completely in-memory. Moreover, this kind of technology has been written from scratch to distribute data across a number of servers, so scalability is never a problem here. When you integrate a BRMS with In-Memory Data Grid technologies, you get the best of both worlds: scalability plus high performance, and also extreme low latency. And when I say extreme low latency, I mean sub-millisecond latency: something under 650 μs in my tests.

This article will show how to integrate Oracle Business Rules with Oracle Coherence. The steps shown here can be reproduced in a huge number of scenarios, making your investment in Oracle Fusion Middleware (Cloud Application Foundation and/or the SOA Suite stack) even more attractive.

The Business Scenario: Automatic Promotions for Bank Customers

Before we move to the implementation details, we need to understand the business scenario used for teaching purposes. We are going to simulate an automatic decision system that creates promotions for banking customers based on their profiles. The idea is to let the BRMS platform decide which promotions to offer, based on the customer profiles that applications send to it. This automatic promotion system should allow applications like internet banking sites, mobile applications or kiosk terminals to present promotions (up-selling/cross-selling) to their end customers.

Building the Solution Domain Model

Let's start the development of the example. The first thing to do is create the domain model, which means we need to design and implement the business entities that will drive the client-side application execution, as well as the business rules. The automatic promotion system will be composed of three entities: promotions, products and customers. A promotion is something that the bank offers to the customer, with contextual information about the business value of one or more products, derived from the customer profile. Here is the implementation of the promotion entity:

package com.acme.architecture.multichannel.domain;

import com.tangosol.io.pof.annotation.Portable;
import com.tangosol.io.pof.annotation.PortableProperty;

@Portable public class Promotion {
    
    @PortableProperty(0) private String id;
    @PortableProperty(1) private String description;
    
    public Promotion() {}
    
    public Promotion(String id, String description) {
        setId(id);
        setDescription(description);
    }

    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id;
    }

    public String getDescription() {
        return description;
    }

    public void setDescription(String description) {
        this.description = description;
    }
    
}

A product is something that the customer hires from the bank: some kind of service or item that makes the customer's account more valuable to the bank and more attractive to the customer, since it is a differentiator. Here is the implementation of the product entity:

package com.acme.architecture.multichannel.domain;

import com.tangosol.io.pof.annotation.Portable;
import com.tangosol.io.pof.annotation.PortableProperty;

@Portable public class Product {
    
    @PortableProperty(0) private int id;
    @PortableProperty(1) private String name;
    
    public Product() {}
    
    public Product(int id, String name) {
        setId(id);
        setName(name);
    }

    public int getId() {
        return id;
    }

    public void setId(int id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }
    
}

And finally, we need to design the customer entity, which represents the person or company that hires one or more products from the bank. Here is the implementation of the customer entity:

package com.acme.architecture.multichannel.domain;

import com.tangosol.io.pof.annotation.Portable;
import com.tangosol.io.pof.annotation.PortableProperty;

import java.util.ArrayList;
import java.util.Date;
import java.util.List;
import java.util.Set;

@Portable public class Customer {
    
    @PortableProperty(0) private String ssn;
    @PortableProperty(1) private String firstName;
    @PortableProperty(2) private String lastName;
    @PortableProperty(3) private Date birthDate;
    
    @PortableProperty(4) private String account;
    @PortableProperty(5) private String agency;
    @PortableProperty(6) private double balance;
    @PortableProperty(7) private char custType;
    
    @PortableProperty(8) private Set<Product> products;
    @PortableProperty(9) private List<Promotion> promotions;
    
    public Customer() {}
    
    public Customer(String ssn, String firstName, String lastName,
                    Date birthDate, String account, String agency,
                    double balance, char custType, Set<Product> products) {
        setSsn(ssn);
        setFirstName(firstName);
        setLastName(lastName);
        setBirthDate(birthDate);
        setAccount(account);
        setAgency(agency);
        setBalance(balance);
        setCustType(custType);
        setProducts(products);
    }
    
    public void addPromotion(String id, String description) {
        getPromotions().add(new Promotion(id, description));
    }

    public String getSsn() {
        return ssn;
    }

    public void setSsn(String ssn) {
        this.ssn = ssn;
    }

    public String getFirstName() {
        return firstName;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    public String getLastName() {
        return lastName;
    }

    public void setLastName(String lastName) {
        this.lastName = lastName;
    }

    public Date getBirthDate() {
        return birthDate;
    }

    public void setBirthDate(Date birthDate) {
        this.birthDate = birthDate;
    }

    public String getAccount() {
        return account;
    }

    public void setAccount(String account) {
        this.account = account;
    }

    public String getAgency() {
        return agency;
    }

    public void setAgency(String agency) {
        this.agency = agency;
    }

    public double getBalance() {
        return balance;
    }

    public void setBalance(double balance) {
        this.balance = balance;
    }

    public char getCustType() {
        return custType;
    }

    public void setCustType(char custType) {
        this.custType = custType;
    }

    public Set<Product> getProducts() {
        return products;
    }

    public void setProducts(Set<Product> products) {
        this.products = products;
    }

    public List<Promotion> getPromotions() {
        if (promotions == null) {
            promotions = new ArrayList<Promotion>();
        }
        return promotions;
    }

    public void setPromotions(List<Promotion> promotions) {
        this.promotions = promotions;
    }
    
} 

As you can see in the code, the customer entity has a relationship with the two other entities. Build this code and package those three entities into a JAR file. We can now move to the second part of the implementation, which is the creation of a SOA project that includes a business rules dictionary.
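If you prefer the plain JDK command-line tools for this step, the build and packaging could look roughly like this, run from the source root (the JAR name and the Coherence library path are illustrative):

javac -cp coherence.jar com/acme/architecture/multichannel/domain/*.java
jar cf domain-model.jar com/acme/architecture/multichannel/domain/*.class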

Creating the Business Rules Dictionary

Business rules in the Oracle Business Rules product are defined in an artifact called a dictionary. In order to create a dictionary, you must use the Oracle JDeveloper IDE plus the SOA extension for JDeveloper. I will assume here that you are familiar with those tools, so I will not go into too much detail about them. In JDeveloper, create a new SOA project, and after that create a business rules dictionary. With the dictionary in place, you must configure it to treat our domain model as fact types.


Now you can write down some business rules. Using the JDeveloper business rules editor, define the following rules as shown in the picture below.


For testing purposes, the variable "MinimumBalanceForCreditCard" is just a global variable of type java.lang.Double that contains a constant value. Finally, you are required to expose those business rules through a decision function. As you probably know, decision functions are constructs that make it easier for external applications to interact with Oracle Business Rules, minimizing the developer's effort to deal with the Oracle Business Rules API, besides providing a very nice contract-based access point. Create a decision function that receives a customer as input and returns the same customer as output. Don't forget to associate the ruleset with the decision function.
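To make the contract-based access point idea concrete, here is a minimal sketch of how a client could invoke such a decision function directly through the Oracle Business Rules Java API. These are the same SDK calls the interceptor shown later in this article uses, and the dictionary file name and the decision function name ("BankingDecisionFunction") are the ones assumed throughout this example:

// Classes come from oracle.rules.sdk2.decisionpoint and
// oracle.rules.sdk2.dictionary; exception handling omitted.
RuleDictionary dictionary = RuleDictionary.readDictionary(
    new FileReader("multiChannelArchitecture.rules"),
    new DecisionPointDictionaryFinder(null));

DecisionPoint decisionPoint = new DecisionPointBuilder()
    .with(dictionary)
    .with("BankingDecisionFunction")
    .build();

// Given a Customer instance to evaluate...
DecisionPointInstance instance = decisionPoint.getInstance();
List<Object> inputs = new ArrayList<Object>();
inputs.add(customer);
instance.setInputs(inputs);
List<Object> outputs = instance.invoke(); // fires the rules
Customer enriched = (Customer) outputs.get(0); // same customer, now with promotions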

Integrating Oracle Business Rules and Coherence through Interceptors

Now comes the most exciting part of the article: the integration between Oracle Business Rules and the Oracle Coherence In-Memory Data Grid. Starting with version 12.1.2 of Coherence, Oracle introduced a new API called Live Events. This API allows applications to listen to and consume events from Coherence, no matter what type of event is being generated. You can learn more about Coherence Live Events in this Youtube presentation.

Using both the Coherence and Oracle Business Rules main libraries, implement the following event interceptor in your favorite Java development environment:

package com.oracle.coherence.events;

import com.tangosol.net.events.EventInterceptor;
import com.tangosol.net.events.annotation.Interceptor;
import com.tangosol.net.events.partition.cache.EntryEvent;
import com.tangosol.util.BinaryEntry;

import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Set;

import oracle.rules.sdk2.decisionpoint.DecisionPoint;
import oracle.rules.sdk2.decisionpoint.DecisionPointBuilder;
import oracle.rules.sdk2.decisionpoint.DecisionPointDictionaryFinder;
import oracle.rules.sdk2.decisionpoint.DecisionPointInstance;
import oracle.rules.sdk2.dictionary.RuleDictionary;
import oracle.rules.sdk2.exception.SDKException;

/**
 * @author ricardo.s.ferreira@oracle.com
 */

@Interceptor
public class FSRulesInterceptor implements EventInterceptor<EntryEvent> {

    private List<EntryEvent.Type> types;
    private String dictionaryLocation;
    private String decisionFunctionName;
    private boolean dictionaryAutoUpdate;
    private long dictionaryTimestamp;

    private void parseEntryEventTypes(String entryEventTypes) {
        types = new ArrayList<EntryEvent.Type>();
        String[] listTypes = entryEventTypes.split(COMMA);
        for (String type : listTypes) {
            types.add(EntryEvent.Type.valueOf(type.trim()));
        }
    }

    private RuleDictionary loadRuleDictionary()
        throws FileNotFoundException, SDKException, IOException {
        File dictionaryFile = new File(dictionaryLocation);
        dictionaryTimestamp = dictionaryFile.lastModified();
        RuleDictionary ruleDictionary = RuleDictionary.readDictionary(
            new FileReader(dictionaryFile), new DecisionPointDictionaryFinder(null));
        return ruleDictionary;
    }

    private DecisionPoint createDecisionPoint() {
        DecisionPoint decisionPoint = null;
        try {
            decisionPoint =
                    new DecisionPointBuilder()
                    .with(loadRuleDictionary())
                    .with(decisionFunctionName).build();
        } catch (Exception ex) {
            throw new RuntimeException("Unable to create the DecisionPoint", ex);
        }
        return decisionPoint;
    }

    private void updateDecisionPointIfNecessary() {
        File dictionaryFile = new File(dictionaryLocation);
        if (dictionaryFile.lastModified() != dictionaryTimestamp) {
            decisionPoint.release();
            decisionPoint = createDecisionPoint();
        }
    }

    private boolean eventTypeIsAllowed(EntryEvent.Type entryEventType) {
        return types.contains(entryEventType);
    }

    public FSRulesInterceptor(String entryEventTypes,
                              String dictionaryLocation,
                              String decisionFunctionName) {
        parseEntryEventTypes(entryEventTypes);
        this.dictionaryLocation = dictionaryLocation;
        this.decisionFunctionName = decisionFunctionName;
        decisionPoint = createDecisionPoint();
    }

    public FSRulesInterceptor(String entryEventTypes,
                              String dictionaryLocation,
                              String decisionFunctionName,
                              boolean dictionaryAutoUpdate) {
        this(entryEventTypes, dictionaryLocation, decisionFunctionName);
        this.dictionaryAutoUpdate = dictionaryAutoUpdate;
    }

    public void onEvent(EntryEvent entryEvent) {

        BinaryEntry binaryEntry = null;
        Iterator<BinaryEntry> iter = null;
        Set<BinaryEntry> entrySet = null;
        DecisionPointInstance dPointInst = null;
        List<Object> inputs, outputs = null;

        try {

            if (eventTypeIsAllowed(entryEvent.getType())) {

                if (dictionaryAutoUpdate) {
                    updateDecisionPointIfNecessary();
                }

                inputs = new ArrayList<Object>();
                entrySet = entryEvent.getEntrySet();
                for (BinaryEntry binaryEntryInput : entrySet) {
                    inputs.add(binaryEntryInput.getValue());
                }

                dPointInst = decisionPoint.getInstance();
                dPointInst.setInputs(inputs);
                outputs = dPointInst.invoke();

                if (entryEvent.getType() == EntryEvent.Type.INSERTING ||
                    entryEvent.getType() == EntryEvent.Type.UPDATING) {
                    if (outputs != null && !outputs.isEmpty()) {
                        iter = entrySet.iterator();
                        for (Object output : outputs) {
                            if (iter.hasNext()) {
                                binaryEntry = iter.next();
                                binaryEntry.setValue(output);
                            }
                        }
                    }
                }
                
            }

        } catch (Exception ex) {
            ex.printStackTrace();
        }

    }
    
    private static final String COMMA = ",";
    private static DecisionPoint decisionPoint;

}

If you are familiar with the Oracle Business Rules Java API, you won't have any difficulty understanding this code. What it does is simply create a DecisionPoint object during the constructor phase and put it into a static variable, which allows the object to be shared across the entire JVM. Remember that the JVM in this context is a Coherence node, so each Coherence node will hold one DecisionPoint instance. In the onEvent() method there is the algorithm that checks which types of events the implementation should intercept, and also whether the DecisionPoint instance should be updated. This last check is based on the timestamp of the dictionary file.

After creating a DecisionPointInstance, the intercepted entries become the input variables for the business rules execution. The interceptor triggers the rules engine through the invoke() method, and afterwards it replaces the original intercepted entries with the result that came back from the business rules agenda, but only if one of the following event types occurred: INSERTING or UPDATING. This check is necessary for two reasons. First, those are the only event types that occur in the same thread as the cache transaction. Second, other event types like INSERTED or UPDATED happen in another thread, which means they are triggered asynchronously by Coherence.

Setting Up a Coherence Distributed Cache with the Business Rules Interceptor

Now we can start the configuration of the Coherence cache. Since we are using POF as the serialization strategy, we need to assemble a POF configuration file. Starting with version 12.1.2 of Coherence, there is a new tool called pof-config-gen that introspects JAR files, searching for classes annotated with @Portable. Create a POF configuration file with the following content:

<?xml version='1.0'?>

<pof-config
   xmlns:xsi='http://www.w3.org/2001/XMLSchema-instance'
   xmlns='http://xmlns.oracle.com/coherence/coherence-pof-config'
   xsi:schemaLocation='http://xmlns.oracle.com/coherence/coherence-pof-config coherence-pof-config.xsd'>
	
   <user-type-list>
      <include>coherence-pof-config.xml</include>
      <user-type>
         <type-id>1001</type-id>
         <class-name>com.acme.architecture.multichannel.domain.Promotion</class-name>
      </user-type>
      <user-type>
         <type-id>1002</type-id>
         <class-name>com.acme.architecture.multichannel.domain.Product</class-name>
      </user-type>
      <user-type>
         <type-id>1003</type-id>
         <class-name>com.acme.architecture.multichannel.domain.Customer</class-name>
      </user-type>
   </user-type-list>
	
</pof-config>

And as expected, we also need to create a Coherence cache configuration file. Create a file called coherence-cache-config.xml and fill it with the following content:

<?xml version="1.0"?>

<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config coherence-cache-config.xsd">

   <defaults>
      <serializer>pof</serializer>
   </defaults>

   <caching-scheme-mapping>
      <cache-mapping>
         <cache-name>customers</cache-name>
         <scheme-name>customersScheme</scheme-name>
      </cache-mapping>
   </caching-scheme-mapping>

   <caching-schemes>
	
      <distributed-scheme>
         <scheme-name>customersScheme</scheme-name>
         <service-name>DistribService</service-name>
         <backing-map-scheme>
            <local-scheme />
         </backing-map-scheme>
         <autostart>true</autostart>
         <interceptors>
            <interceptor>
               <name>rulesInterceptor</name>
               <instance>
                  <class-name>com.oracle.coherence.events.FSRulesInterceptor</class-name>
                  <init-params>
                     <init-param>
                        <param-type>java.lang.String</param-type>
                        <param-value>INSERTING, UPDATING</param-value>
                     </init-param>
                     <init-param>
                        <param-type>java.lang.String</param-type>
                        <param-value>C:\\multiChannelArchitecture.rules</param-value>
                     </init-param>
                     <init-param>
                        <param-type>java.lang.String</param-type>
                        <param-value>BankingDecisionFunction</param-value>
                     </init-param>
                     <init-param>
                        <param-type>java.lang.Boolean</param-type>
                        <param-value>true</param-value>
                     </init-param>
                  </init-params>
               </instance>
            </interceptor>
         </interceptors>
         <async-backup>true</async-backup>
      </distributed-scheme>
		
      <proxy-scheme>
         <scheme-name>customersProxy</scheme-name>
         <service-name>ProxyService</service-name>
         <acceptor-config>
            <tcp-acceptor>
               <local-address>
                  <address>cloud.app.foundation</address>
                  <port>5555</port>
               </local-address>
            </tcp-acceptor>
         </acceptor-config>
         <autostart>true</autostart>
      </proxy-scheme>

   </caching-schemes>
	
</cache-config>    

This cache configuration file is very straightforward. There are only three important things to consider here. First, we are using the new interceptor section to declare our interceptor and pass constructor arguments to it. Second, we used another Coherence 12.1.2 feature, asynchronous backup. This feature dramatically reduces the latency of a single transaction, since backups are written afterwards (in another thread), once the primary entry has been written. It is not a precondition for the interceptor to work, but in the context of a BRMS it is a great idea. Third, we also defined a proxy-scheme that exposes a TCP/IP endpoint, so we can use the Coherence*Extend feature later in this article to allow a C++ application to access the same cache.
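For reference, a Coherence*Extend client (such as the C++ application shown later in this article) could reach that proxy with a client-side cache configuration along these lines; the scheme and service names here are arbitrary, while the address and port come from the proxy-scheme above:

<?xml version="1.0"?>

<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config coherence-cache-config.xsd">

   <caching-scheme-mapping>
      <cache-mapping>
         <cache-name>customers</cache-name>
         <scheme-name>remoteCustomersScheme</scheme-name>
      </cache-mapping>
   </caching-scheme-mapping>

   <caching-schemes>
      <remote-cache-scheme>
         <scheme-name>remoteCustomersScheme</scheme-name>
         <service-name>ExtendTcpCacheService</service-name>
         <initiator-config>
            <tcp-initiator>
               <remote-addresses>
                  <socket-address>
                     <address>cloud.app.foundation</address>
                     <port>5555</port>
                  </socket-address>
               </remote-addresses>
            </tcp-initiator>
         </initiator-config>
      </remote-cache-scheme>
   </caching-schemes>

</cache-config>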

Testing the Scenario

Now that we have all the configuration in place, we can start the tests. Start a Coherence node JVM with the configuration files from the previous section. When Coherence starts, a DecisionPoint object pointing to the business rules dictionary will be created in-memory. Implement a Java program to test the behavior of the implementation, as in the listing below:

package com.acme.architecture.multichannel.test;

import com.acme.architecture.multichannel.domain.Customer;

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

import java.util.Date;

public class Application {
    
    public static void main(String[] args) {
        
        NamedCache customers = CacheFactory.getCache("customers");
        
        // Simulating a simple customer of type 'Person'...
        // Should trigger the rule 'Basic Products for First Customers'
        // Expected number of promotions: 02
        String ssn = "12345";
        Customer personCust = new Customer(ssn, "Ricardo", "Ferreira",
                                         new Date(1981, 10, 05), "245671",
                                         "3158", 98000, 'P', null);
        
        long startTime = System.currentTimeMillis();
        customers.put(ssn, personCust);
        personCust = (Customer) customers.get(ssn);
        long elapsedTime = System.currentTimeMillis() - startTime;
        
        System.out.println();
        System.out.println("   ---> Number of Promotions: " +
                           personCust.getPromotions().size());
        System.out.println("   ---> Elapsed Time: " + elapsedTime + " ms");
        
        // Simulating a simple customer of type 'Company'...
        // Should trigger the rule 'Corporate Credit Card for Enterprise Customers'
        // Expected number of promotions: 01
        ssn = "54321";
        Customer companyCust = new Customer(ssn, "Huge", "Company",
                                         new Date(1981, 10, 05), "235437",
                                         "7856", 8900000, 'C', null);
        
        startTime = System.currentTimeMillis();
        customers.put(ssn, companyCust);
        companyCust = (Customer) customers.get(ssn);
        elapsedTime = (System.currentTimeMillis() - startTime);
        
        System.out.println("   ---> Number of Promotions: " +
                           companyCust.getPromotions().size());
        System.out.println("   ---> Elapsed Time: " + elapsedTime + " ms");
        System.out.println();
        
    }
    
}
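To run this test client as a storage-disabled member of the cluster, a launch command along these lines could be used (class path and file locations are illustrative; the three system properties are standard Coherence settings):

java -cp app.jar;coherence.jar ^
     -Dtangosol.coherence.cacheconfig=coherence-cache-config.xml ^
     -Dtangosol.pof.config=pof-config.xml ^
     -Dtangosol.coherence.distributed.localstorage=false ^
     com.acme.architecture.multichannel.test.Application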

This Java application should be executed with the storage-enabled parameter set to false, as in the launch sketch above. Executing this code will give you output similar to this:

2013-08-15 23:37:53.058/1.378 Oracle Coherence 12.1.2.0.0 <Info> (thread=Main Thread, member=n/a): Loaded operational configuration from "jar:file:/C:/mw-home/coherence/lib/coherence.jar!/tangosol-coherence.xml"
2013-08-15 23:37:53.093/1.413 Oracle Coherence 12.1.2.0.0 <Info> (thread=Main Thread, member=n/a): Loaded operational overrides from "jar:file:/C:/mw-home/coherence/lib/coherence.jar!/tangosol-coherence-override-dev.xml"
2013-08-15 23:37:53.093/1.413 Oracle Coherence 12.1.2.0.0 <D5> (thread=Main Thread, member=n/a): Optional configuration override "/tangosol-coherence-override.xml" is not specified
2013-08-15 23:37:53.093/1.413 Oracle Coherence 12.1.2.0.0 <D5> (thread=Main Thread, member=n/a): Optional configuration override "cache-factory-config.xml" is not specified
2013-08-15 23:37:53.093/1.413 Oracle Coherence 12.1.2.0.0 <D5> (thread=Main Thread, member=n/a): Optional configuration override "cache-factory-builder-config.xml" is not specified
2013-08-15 23:37:53.093/1.413 Oracle Coherence 12.1.2.0.0 <D5> (thread=Main Thread, member=n/a): Optional configuration override "/custom-mbeans.xml" is not specified

Oracle Coherence Version 12.1.2.0.0 Build 44396
 Grid Edition: Development mode
Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

2013-08-15 23:37:53.380/1.700 Oracle Coherence GE 12.1.2.0.0 <Info> (thread=Main Thread, member=n/a): Loaded cache configuration from "file:/C:/poc-itau-brms/resources/coherence-cache-config.xml"
2013-08-15 23:37:55.615/3.935 Oracle Coherence GE 12.1.2.0.0 <Info> (thread=Main Thread, member=n/a): Created cache factory com.tangosol.net.ExtensibleConfigurableCacheFactory
2013-08-15 23:37:56.206/4.526 Oracle Coherence GE 12.1.2.0.0 <D4> (thread=Main Thread, member=n/a): TCMP bound to /10.0.3.15:8090 using SystemDatagramSocketProvider
2013-08-15 23:37:56.528/4.848 Oracle Coherence GE 12.1.2.0.0 <Info> (thread=Cluster, member=n/a): Failed to satisfy the variance: allowed=16, actual=54
2013-08-15 23:37:56.528/4.848 Oracle Coherence GE 12.1.2.0.0 <Info> (thread=Cluster, member=n/a): Increasing allowable variance to 20
2013-08-15 23:37:56.885/5.205 Oracle Coherence GE 12.1.2.0.0 <Info> (thread=Cluster, member=n/a): This Member(Id=2, Timestamp=2013-08-15 23:37:56.653, Address=10.0.3.15:8090, MachineId=2319, Location=site:,process:3484, Role=AcmeArchitectureApplication, Edition=Grid Edition, Mode=Development, CpuCount=3, SocketCount=3) joined cluster "cluster:0x50DB" with senior Member(Id=1, Timestamp=2013-08-15 21:16:58.769, Address=10.0.3.15:8088, MachineId=2319, Location=site:,process:3000, Role=CoherenceServer, Edition=Grid Edition, Mode=Development, CpuCount=3, SocketCount=3)
2013-08-15 23:37:57.189/5.509 Oracle Coherence GE 12.1.2.0.0 <Info> (thread=Main Thread, member=n/a): Started cluster Name=cluster:0x50DB

Group{Address=224.12.1.0, Port=12100, TTL=4}

MasterMemberSet(
  ThisMember=Member(Id=2, Timestamp=2013-08-15 23:37:56.653, Address=10.0.3.15:8090, MachineId=2319, Location=site:,process:3484, Role=AcmeArchitectureApplication)
  OldestMember=Member(Id=1, Timestamp=2013-08-15 21:16:58.769, Address=10.0.3.15:8088, MachineId=2319, Location=site:,process:3000, Role=CoherenceServer)
  ActualMemberSet=MemberSet(Size=2
    Member(Id=1, Timestamp=2013-08-15 21:16:58.769, Address=10.0.3.15:8088, MachineId=2319, Location=site:,process:3000, Role=CoherenceServer)
    Member(Id=2, Timestamp=2013-08-15 23:37:56.653, Address=10.0.3.15:8090, MachineId=2319, Location=site:,process:3484, Role=AcmeArchitectureApplication)
    )
  MemberId|ServiceVersion|ServiceJoined|MemberState
    1|12.1.2|2013-08-15 21:16:58.769|JOINED,
    2|12.1.2|2013-08-15 23:37:56.653|JOINED
  RecycleMillis=1200000
  RecycleSet=MemberSet(Size=0
    )
  )

TcpRing{Connections=[1]}
IpMonitor{Addresses=0}

2013-08-15 23:37:57.243/5.581 Oracle Coherence GE 12.1.2.0.0 <Info> (thread=Cluster, member=2): Loaded POF configuration from "file:/C:/poc-itau-brms/resources/pof-config.xml"
2013-08-15 23:37:57.261/5.581 Oracle Coherence GE 12.1.2.0.0 <Info> (thread=Cluster, member=2): Loaded included POF configuration from "jar:file:/C:/mw-home/coherence/lib/coherence.jar!/coherence-pof-config.xml"
2013-08-15 23:37:57.386/5.706 Oracle Coherence GE 12.1.2.0.0 <D5> (thread=Invocation:Management, member=2): Service Management joined the cluster with senior service member 1
2013-08-15 23:37:57.494/5.814 Oracle Coherence GE 12.1.2.0.0 <Info> (thread=Main Thread, member=2): Loaded Reporter configuration from "jar:file:/C:/mw-home/coherence/lib/coherence.jar!/reports/report-group.xml"
2013-08-15 23:37:58.012/6.332 Oracle Coherence GE 12.1.2.0.0 <D5> (thread=DistributedCache:DistribService, member=2): Service DistribService joined the cluster with senior service member 1

   ---> Number of Promotions: 2
   ---> Elapsed Time: 53 ms
   ---> Number of Promotions: 1
   ---> Elapsed Time: 0 ms

2013-08-15 23:37:58.137/6.457 Oracle Coherence GE 12.1.2.0.0 <D4> (thread=ShutdownHook, member=2): ShutdownHook: stopping cluster node

As you can see in the output, the number of promotions shown reveals that the business rules were really executed, since no promotions were provided when the customer objects were instantiated. The output also tells us another important thing: transaction latency. For the first cache entry we got 53 ms of overall latency, quite short if you consider what happened behind the scenes. But the second cache entry is much faster still, with 0 ms of latency. This means that the actual time needed to execute the entire transaction was below one millisecond, giving us a real sub-millisecond latency scenario, measured in microseconds.

Highly Scalable Business Rules

It is not so obvious when you first study this implementation, but another important aspect of this design is scalability. Since the cache type we used was the distributed one, also known as partitioned, the cache entries are equally distributed among all available Coherence nodes. If we use only one node, of course that one node will handle the entire dataset by itself. But if we use four nodes, each node will handle 25% of the dataset. This means that if we insert one million customer objects into the cache, each node will handle only 250K customers.

This type of data storage offers a huge benefit for Oracle Business Rules: true data load distribution. Remember when I said that each Coherence node holds one DecisionPoint instance? Since each node handles only a percentage of the entire dataset, it is reasonable to conclude that each node will fire rules only for the data that it manages. It works this way because Coherence interceptors are executed in the JVM where the data lives, not across the entire data grid, since this is not distributed processing. For instance, if customer "A" is primarily stored in "JVM 1" and a client application updates that customer's fields, business rules will be fired and executed only in "JVM 1"; the other JVMs will not execute any business rules. This means that CPU overhead can be balanced across the cluster of servers, allowing the In-Memory Data Grid to scale horizontally, using the overall compute power of all the servers available in the cluster.

API Transparency and Multiple Programming Language Support

With Oracle Business Rules encapsulated in Coherence through an interceptor, there is another great advantage to this design: API transparency. Developers don't need to write custom code to interact with Oracle Business Rules. In fact, they never need to know that business rules are being executed when objects are written to Coherence. Since everything happens behind the scenes, this approach frees developers from extra complexity, allowing them to work in a purely data-oriented fashion, which is very productive and less error-prone.

And because Oracle Coherence offers not only a Java API to interact with the In-Memory Data Grid, but also C++, .NET and REST APIs, you can leverage several types of clients and applications to trigger business rules execution. In fact, I created a very small C++ application using Microsoft Visual Studio to test this behavior. The application code below inserts 1K customers into the In-Memory Data Grid, with an average transaction latency of ~5 ms, using a VM with 3 vCores and 10 GB of RAM.

#include "stdafx.h"
#include <windows.h>
#include <cstring>
#include <iostream>
#include <sstream>
#include <string>
#include "Customer.hpp"

#include "coherence/lang.ns"
#include "coherence/net/CacheFactory.hpp"
#include "coherence/net/NamedCache.hpp"

using namespace coherence::lang;
using coherence::net::NamedCache;
using coherence::net::CacheFactory;

int NUMBER_OF_ENTRIES = 1000;

__int64 currentTimeMillis()
{
	static const __int64 magic = 116444736000000000;
	SYSTEMTIME st;
	GetSystemTime(&st);
	FILETIME   ft;
	SystemTimeToFileTime(&st,&ft);
	__int64 t;
	memcpy(&t,&ft,sizeof t);
	return (t - magic)/10000;
}

int _tmain(int argc, _TCHAR* argv[])
{

	std::string _customerKey;
	String::View customerKey;
	__int64 startTime = 0, endTime = 0;
	__int64 elapsedTime = 0;
	
	// The handle is named "customers" to match the put/get calls below.
	NamedCache::Handle customers = CacheFactory::getCache("customers");

	startTime = currentTimeMillis();
	for (int i = 0; i < NUMBER_OF_ENTRIES; i++)
	{
		std::ostringstream stream;
		stream << "customer-" << (i + 1);
		_customerKey = stream.str();
		customerKey = String::create(_customerKey);
		std::string firstName = _customerKey;
		std::string agency = "3158";
		std::string account = "457899";
		char custType = 'P';
		double balance = 98000;
		Customer customer(customerKey, firstName,
			agency, account, custType, balance);
		Managed<Customer>::View customerWrapper = Managed<Customer>::create(customer);
		customers->put(customerKey, customerWrapper);
		Managed<Customer>::View result =
			cast<Managed<Customer>::View> (customers->get(customerKey));
	}
	endTime = currentTimeMillis();
	elapsedTime = endTime - startTime;

	std::cout << std::endl;
	std::cout << "   Elapsed Time..................: "
		<< elapsedTime << " ms" << std::endl;
	std::cout << std::endl;

	CacheFactory::shutdown();
	getchar();
	return 0;

}

An Alternative Version of the Interceptor for MDS Scenarios

The interceptor created in this article uses the Oracle Business Rules Java API to read the dictionary directly from the file system. This approach implies two things: first, that the repository for the dictionary will be the file system; second, that the authoring and management of the dictionary will be done through JDeveloper. This can mean losing some of the BRMS's power, since business users won't feel comfortable authoring their rules in a technical environment such as JDeveloper, and administrators won't be able to see who changed what, since virtually anyone can open the file in JDeveloper and change its contents.

A better way to manage this is to store the dictionary in an MDS repository, which is part of the Oracle SOA Suite platform. Storing the dictionary in MDS allows business users to interact with business rules through the SOA Composer, a very nice web tool that is much simpler and easier to use than JDeveloper. Administrators can also track down changes, since everything in MDS is audited, transactional and securely controlled: you have to log in to the console to get access to the composer.

I have implemented another version of the interceptor that makes full use of the power of Oracle SOA Suite and MDS repositories. The MDSRulesInterceptor.java implementation has been under test for over a month and is performing quite well, just like the FSRulesInterceptor.java implementation. I will post that implementation here in the future, but for now just keep in mind the powerful things that can be done with Oracle Business Rules and the Coherence In-Memory Data Grid. Oracle Fusion Middleware really rocks, doesn't it?


Sunday Jul 08, 2012

The Developers Conference 2012: Presentation about CEP & BAM

This year I once again had the pleasure of being one of the speakers at TDC ("The Developers Conference"). I have spoken at this event for three years now. This year the main theme of the SOA track was EDA ("Event-Driven Architecture"), and I decided to deliver a comprehensive presentation about one of my favorite personal subjects: real-time intelligence using Complex Event Processing. The theme of the presentation was "Business Intelligence in Real-time using CEP & BAM", and I would like to share it here. The material is in Portuguese, since it was a Brazilian event held in São Paulo.

Since my presentation contains a lot of videos, I decided to share the material as a Youtube video, so you can pause, rewind and replay it as many times as you want. I strongly recommend that before you start watching, you change the video quality setting to 1080p High Definition.

Saturday Sep 24, 2011

Welcome to the Middleware Place

Welcome to my blog. The purpose of this blog is to provide vital information for you and your team about Oracle Fusion Middleware technologies, making success a common word in the vocabulary of your projects. I will try to post regular entries on this blog about subjects like tips, architectural guidance, best practices and examples from real-world experience with Oracle Fusion Middleware technologies such as Tuxedo, Coherence, Application Grid, WebCenter, SOA, BPM, EDA and Exalogic.

A little bit about myself: I started my career in 1997 as a software engineer, and software development using distributed objects, EAI and enterprise middleware has been my passion ever since. I have always been curious about how technologies can be used and combined to create an architectural foundation for software-intensive systems, and I specialized in this subject. After more than a decade working in consulting firms and system integrators as a software engineer, developer, architect and team leader, I moved on to software vendors: JBoss (by Red Hat), Progress Software, and currently Oracle Corporation, definitely one of the biggest middleware vendors in the world; IMHO, the biggest.

About

Ricardo Ferreira is just a regular person who lives in Brazil and is passionate about technology, movies and his whole family. He currently works at Oracle Corporation.
