Friday Mar 28, 2014

One Queue to Rule them All

Using a Single Queue for Multiple Message Types with SOA Suite

Problem Statement

You use a single JMS queue for sending multiple message types / service requests, and a single JMS queue for receiving multiple message types / service requests.  You have multiple SOA JMS Adapter interfaces for reading and writing these queues.  Within a composite it is random which interface gets a message from the JMS queue.  Having multiple adapter instances writing to a single queue is not a problem; the problem is only with multiple readers, because each reader gets the first message on the queue.

Background

The JMS Adapter is unaware of who receives the messages.  Each adapter instance just takes the message from the queue and delivers it to its own configured interface, one interface per adapter instance.  The SOA infrastructure is then responsible for routing that message, usually via a database table and an in-memory notification message, to a component within a composite.  Each message will create a new composite, but the BPEL engine and Mediator engine will attempt to match callback messages to the appropriate Mediator or BPEL instance.
Note that message type, including XML document type, has nothing to do with the preceding statements.

The net result is that if you have a sequence of two receives from the same queue using different adapters then the messages will be split equally between the two adapters, meaning that half the time the wrong adapter will receive the message.  This blog entry looks at how to resolve this issue.

Note that the same problem occurs whenever you have more than one adapter listening to the same queue, whether they are in the same composite or in different composites.  The solution in this blog entry is also relevant to that use case.

Solutions

In order to deliver the messages to the correct interface we need to identify the interface they should be delivered to.  This can be done by using JMS properties.  For example, the JMSType property can be used to identify the type of the message.  A message selector can be added to the JMS inbound adapter that will cause the adapter to filter out messages intended for other interfaces.  For example, suppose we need to call three services that are implemented in a single application:
  • Service 1 receives messages on the single outbound queue from SOA; it sends responses back on the single inbound queue.
  • Similarly, Service 2 and Service 3 also receive messages on the single outbound queue from SOA; they send responses back on the single inbound queue.
First we need to ensure the messages are delivered to the correct adapter instance.  This is achieved as follows:
  • The inbound JMS adapter is configured with a JMS message selector.  The message selector might be "JMSType='Service1'" for responses from Service 1; similarly the selector would be "JMSType='Service2'" for the adapter waiting on a response from Service 2.  The message selector ensures that each adapter instance will retrieve the first message from the queue that matches its selector.
  • The sending service needs to set the JMS property (JMSType in our example) that is used in the message selector, as in the sketch below.
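As a concrete illustration, here is a minimal plain-JMS sketch of both sides of this contract, assuming a JNDI-bound connection factory and the queue names used in the sample later in this post.  This is not the adapter's own code, just the equivalent behaviour expressed in plain JMS:

import javax.jms.*;
import javax.naming.InitialContext;

public class JmsTypeExample {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();
        // JNDI names are assumptions based on the sample later in this post
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/TestQueue2");

        Connection conn = cf.createConnection();
        try {
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            conn.start();

            // Sending side: the service stamps the response with its message type
            TextMessage response = session.createTextMessage("<response/>");
            response.setJMSType("Service1");
            session.createProducer(queue).send(response);

            // Receiving side: the selector plays the role of the adapter's message
            // selector, so only messages with a matching JMSType are delivered here
            MessageConsumer consumer = session.createConsumer(queue, "JMSType='Service1'");
            Message received = consumer.receive(5000);
            System.out.println("Received message of type " + received.getJMSType());
        } finally {
            conn.close();
        }
    }
}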
Now that our messages are being delivered to the correct interface, we need to make sure that they get delivered to the correct Mediator or BPEL instance.  We do this with correlation.  There are several correlation options:
  1. We can do manual correlation with a correlation set, identifying parts of the outbound message that uniquely identify our instance and matching them with parts of the inbound message to make the correlation.
  2. We can use a Request-Reply JMS adapter, which by default expects the response to contain a JMSCorrelationID equal to the outgoing JMSMessageID.  Although no configuration is required for this on the SOA client side, the service needs to copy the incoming JMSMessageID to the outgoing JMSCorrelationID, as shown in the sketch below.
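On the service side this correlation is a one-line copy; a minimal sketch (the helper class and variable names are illustrative):

import javax.jms.JMSException;
import javax.jms.Message;

public class CorrelationHelper {
    // Copy the incoming JMSMessageID to the outgoing JMSCorrelationID so that
    // the Request-Reply JMS adapter can match the response to its request.
    public static void correlateReply(Message request, Message reply) throws JMSException {
        reply.setJMSCorrelationID(request.getJMSMessageID());
    }
}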

Special Case - Request-Reply Synchronous JMS Adapter

When using a synchronous Request-Reply JMS adapter we can omit the message selector, because the Request-Reply JMS adapter will immediately listen with a message selector on the correlation ID rather than processing the incoming message asynchronously.
The synchronous request-reply will block the BPEL process thread and hold open the BPEL transaction until a response is received, so this should only be used when you expect the request to be completed in a few seconds.
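To make the mechanism concrete, here is a minimal plain-JMS sketch of the pattern the synchronous adapter applies internally; the session and queue objects are assumed to exist, and this is an illustration rather than the adapter's actual code:

import javax.jms.*;

public class SyncRequestReply {
    // Send a request, then block waiting for the reply whose JMSCorrelationID
    // matches the request's JMSMessageID - the same selection the adapter performs.
    public static Message requestReply(Session session, Queue requestQueue,
                                       Queue replyQueue, String payload,
                                       long timeoutMillis) throws JMSException {
        MessageProducer producer = session.createProducer(requestQueue);
        try {
            TextMessage request = session.createTextMessage(payload);
            producer.send(request);
            // a selector on the correlation ID replaces the JMSType selector
            String selector = "JMSCorrelationID = '" + request.getJMSMessageID() + "'";
            MessageConsumer consumer = session.createConsumer(replyQueue, selector);
            try {
                return consumer.receive(timeoutMillis); // blocks, so keep replies fast
            } finally {
                consumer.close();
            }
        } finally {
            producer.close();
        }
    }
}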

The JCA Connection Factory used must point to a non-XA JMS Connection Factory and must have the isTransacted property set to “false”.  See the documentation for more details.

Sample

I developed a JDeveloper SOA project that demonstrates using a single queue for multiple incoming adapters.  The overall process flow is shown in the picture below.  The BPEL process on the left receives messages from jms/TestQueue2 and sends messages to jms/TestQueue1.  A Mediator is used to simulate multiple services and also to provide a web interface to initiate the process.  The correct adapter is identified by using JMS message properties and a selector.

 

The flow above shows that the process is initiated from EM using a web service binding on the Mediator.  The Mediator, acting as a client, posts the request to the inbound queue with a JMSType property set to "Initiate".

Inbound Request
  • Client: receives the web service request and posts it to the inbound queue with JMSType='Initiate'.
  • BPEL: the JMS adapter with message selector "JMSType='Initiate'" receives the message and causes a composite to be created.  The composite in turn causes the BPEL process to start executing.  The BPEL process then sends a request to Service 1 on the outbound queue.
  • Key points:
    • The initiate message can be used to initiate a correlation set if necessary.
    • A selector is required to distinguish initiate messages from other messages on the queue.
  • Service: Service 1 receives the request and sends a response on the inbound queue with JMSType='Service1' and JMSCorrelationID set to the incoming JMS message ID.

Separate Request and Reply Adapters
  • BPEL: the JMS adapter with message selector "JMSType='Service1'" receives the message and causes a composite to be created.  The composite uses a correlation set to deliver the message to BPEL, which correlates it with the existing BPEL process.  The BPEL process then sends a request to Service 2 on the outbound queue.
  • Key points:
    • Separate request and reply adapters require a correlation set to ensure that the reply goes to the correct BPEL process instance.
    • A selector is required to distinguish Service 1 response messages from other messages on the queue.
  • Service: Service 2 receives the request and sends a response on the inbound queue with JMSType='Service2' and JMSCorrelationID set to the incoming JMS message ID.

Asynchronous Request-Reply Adapter
  • BPEL: the JMS adapter with message selector "JMSType='Service2'" receives the message and causes a composite to be created.  The composite in turn delivers the message to the existing BPEL process using native JMS correlation.  The BPEL process then sends a request to Service 3 on the outbound queue using a synchronous request-reply.
  • Key points:
    • The asynchronous request-reply adapter does not require a correlation set; the JMS adapter auto-correlates using the correlation ID to ensure that the reply goes to the correct BPEL process instance.
    • A selector is still required to distinguish Service 2 response messages from other messages on the queue.
  • Service: Service 3 receives the request and sends a response on the inbound queue with JMSType='Service3' and JMSCorrelationID set to the incoming JMS message ID.

Synchronous Request-Reply Adapter
  • BPEL: the synchronous JMS adapter receives the response without a message selector, correlates it to the BPEL process using native JMS correlation, and sends the overall response to the outbound queue.
  • Key points:
    • The synchronous request-reply adapter does not require a correlation set; the JMS adapter auto-correlates using the correlation ID to ensure that the reply goes to the correct BPEL process instance.
    • A selector is also not required to distinguish Service 3 response messages from other messages on the queue, because the synchronous adapter is selecting on the expected correlation ID.

Outbound Response
  • Client: receives the response on the outbound queue.

Summary

When using a single JMS queue for multiple purposes bear in mind the following:

  • If multiple receives use the same queue then you need to have a message selector.  The corollary to this is that the message sender must add a JMS property to the message that can be used in the message selector.
  • When using a request-reply JMS adapter then there is no need for a correlation set, correlation is done in the adapter by matching the outbound JMS message ID to the inbound JMS correlation ID.  The corollary to this is that the message sender must copy the JMS request message ID to the JMS response correlation ID.
  • When using a synchronous request-reply JMS adapter then there is no need for the message selector because the message selection is done based on the JMS correlation ID.
  • Synchronous request-reply adapter requires a non-XA connection factory to be used so that the request part of the interaction can be committed separately to the receive part of the interaction.
  • Synchronous request-reply JMS adapter should only be used when the reply is expected to take just a few seconds.  If the reply is expected to take longer then the asynchronous request-reply JMS adapter should be used.

Deploying the Sample

The sample is available to download here and makes use of the following JMS resources:

  • jms/TestQueue (Queue): the outbound queue from the BPEL process.
  • jms/TestQueue2 (Queue): the inbound queue to the BPEL process.
  • eis/wls/TestQueue (JMS adapter connection factory): this can point to an XA or non-XA JMS connection factory such as weblogic.jms.XAConnectionFactory.
  • eis/wls/TestQueue (non-XA JMS adapter connection factory): this must point to a non-XA JMS connection factory such as weblogic.jms.ConnectionFactory and must have isTransacted set to "false".

To run the sample, just use the test facility in the EM console or the soa-infra application.

Thursday Mar 27, 2014

Not Just a Cache

Coherence as a Compute Grid

Coherence is best known as a data grid, providing distributed caching with an ability to move processing to the data in the grid.  Less well known is the fact that Coherence also has the ability to function as a compute grid, distributing work across multiple servers in a cluster.  In this entry, which was co-written with my colleague Utkarsh Nadkarni, we will look at using Coherence as a compute grid through the use of the Work Manager API and compare it to manipulating data directly in the grid using Entry Processors.

Coherence Distributed Computing Options

The Coherence documentation identifies several methods for distributing work across the cluster; see Processing Data in a Cache.  They can be summarized as:

  • Entry Processors
    • The InvocableMap interface, which is inherited by the NamedCache interface, provides support for executing an agent (EntryProcessor or EntryAggregator) on individual entries within the cache.
    • The entries may or may not exist; either way the agent is executed once for each key provided, or, if no key is provided, once for each object in the cache.
    • In Enterprise and Grid editions of Coherence the entry processors are executed on the primary cache nodes holding the cached entries.
    • Agents can return results.
    • One agent executes multiple times per cache node, once for each key targeted on the node.
  • Invocation Service
    • An InvocationService provides support for executing an agent on one or more nodes within the grid.
    • Execution may be targeted at specific nodes or at all nodes running the Invocation Service.
    • Agents can return results.
    • One agent executes once per node (see the sketch after this list).
  • Work Managers
    • A WorkManager class provides a grid-aware implementation of the CommonJ WorkManager, which can be used to run tasks across multiple threads on multiple nodes within the grid.
    • WorkManagers run on multiple nodes.
    • Each WorkManager may have multiple threads.
    • Tasks implement the Work interface and are assigned to specific WorkManager threads to execute.
    • Each task is executed once.
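To make the Invocation Service option concrete, here is a minimal sketch; the service name must match one defined in the cache configuration and is an assumption here:

import com.tangosol.net.AbstractInvocable;
import com.tangosol.net.CacheFactory;
import com.tangosol.net.InvocationService;
import java.util.Map;

// Runs once on each node running the invocation service and reports free heap.
public class FreeMemoryCheck extends AbstractInvocable {
    @Override
    public void run() {
        setResult(Runtime.getRuntime().freeMemory());
    }

    public static void main(String[] args) {
        // "InvocationService" is assumed to be defined in the cache configuration
        InvocationService service =
                (InvocationService) CacheFactory.getService("InvocationService");
        // a null member set means execute on all members running the service
        Map results = service.query(new FreeMemoryCheck(), null);
        System.out.println(results); // member -> free heap in bytes
    }
}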

Three Models of Distributed Computation

The previous section listing the distributed computing options in Coherence shows that there are three distinct execution models:

  • Per Cache Entry Execution (Entry Processor)
    • Execute the agent on the entry corresponding to a cache key.
    • Entries processed on a single thread per node.
    • Parallelism across nodes.
  • Per Node Execution (Invocation Service)
    • Execute the same agent once per node.
    • Agent processed on a single thread per node.
    • Parallelism across nodes.
  • Per Task Execution (Work Manager)
    • Each task executed once.
    • Parallelism across nodes and across threads within a node.

The entry processor is good for operating on individual cache entries.  It is not so good for working on groups of cache entries.

The invocation service is good for performing checks on a node, but is limited in its parallelism.

The work manager is good for operating on groups of related entries in the cache or performing non-cache related work in parallel.  It has a high degree of parallelism.

As you can see the primary choice for distributed computing comes down to the Work Manager and the Entry Processor.
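For example, here is a minimal entry processor sketch; the cache name and the assumption that it holds Integer counters keyed by String are purely illustrative:

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.InvocableMap;
import com.tangosol.util.processor.AbstractProcessor;

// Increments a counter entry in place, executing on the node that owns the entry.
public class IncrementProcessor extends AbstractProcessor {
    @Override
    public Object process(InvocableMap.Entry entry) {
        int next = entry.isPresent() ? ((Integer) entry.getValue()) + 1 : 1;
        entry.setValue(next);
        return next; // returned to the caller as the agent's result
    }

    public static void main(String[] args) {
        NamedCache cache = CacheFactory.getCache("TestCache"); // hypothetical cache name
        // invoke runs the processor on the node where the entry lives
        Object result = cache.invoke("counter1", new IncrementProcessor());
        System.out.println("counter1 = " + result);
    }
}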

Differences between using Entry Processors and Work Managers in Coherence

Degree of parallelization
  • Entry Processors: a function of the number of Coherence nodes.  EntryProcessors are run concurrently across all nodes in a cluster; however, within each node only one instance of the entry processor executes at a time.
  • Work Managers: a function of the number of Work Manager threads.  The Work is run concurrently across all threads in all Work Manager instances.

Transactionality
  • Entry Processors: transactional.  If an EntryProcessor running on one node does not complete (say, due to that node crashing), the entries targeted will be executed by an EntryProcessor on another node.
  • Work Managers: not transactional.  The specification does not explicitly specify what the response should be if a remote server crashes during an execution; the current implementation uses WORK_COMPLETED with a WorkCompletedException as the result.  In case a Work does not run to completion, it is the responsibility of the client to resubmit the Work to the Work Manager.

How is the Cache accessed or mutated?
  • Entry Processors: operations against the cache contents are executed by (and thus within the localized context of) a cache.
  • Work Managers: accesses and changes to the cache are made directly through the cache API.

Where is the processing performed?
  • Entry Processors: in the same JVM where the entries-to-be-processed reside.
  • Work Managers: in the Work Manager server, which may not be the same JVM where the entries-to-be-processed reside.

Network Traffic
  • Entry Processors: a function of the size of the EntryProcessor, one copy of which is transmitted to each cache node.  Typically the size of an EntryProcessor is much smaller than the size of the data transferred across nodes in the Work Manager approach, which makes the EntryProcessor approach more network-efficient and hence more scalable.
  • Work Managers: a function of the number of Work objects, of which multiple may be sent to each server, and of the size of the data set transferred from the backing map to the Work Manager server.

Distribution of “Tasks”
  • Entry Processors: tasks are moved to the location at which the entries-to-be-processed are being managed.  This may result in a random distribution of tasks; the distribution tends to become equitable as the number of entries increases.
  • Work Managers: tasks are distributed equally across the threads in the Work Manager instances.

Implementation of the EntryProcessor or Work class (see the sketch after this table)
  • Entry Processors: create a class that extends AbstractProcessor and implement the process method.
  • Work Managers: create a class that is serializable and implements commonj.work.Work, and implement the run method.

Implementation of “Task”
  • Entry Processors: in the process method, update the cache item based on the key passed into the process method.
  • Work Managers: in the run method, get a reference to the named cache, then do the work: get a reference to the cache item, change the cache item and put it back into the named cache.

Completion Notification
  • Entry Processors: when the NamedCache.invoke method completes, all the entry processors have completed executing.
  • Work Managers: when a task is submitted for execution it executes asynchronously on the Work Manager threads in the cluster.  Status may be obtained by registering a commonj.work.WorkListener class when calling the WorkManager.schedule method; this provides updates when the Work is accepted, started and completed or rejected.  Alternatively, the WorkManager.waitForAll and WorkManager.waitForAny methods allow blocking waits for either all or one result respectively.

Returned Results
  • Entry Processors: java.lang.Object when executed on one cache item, containing the result of the invocation as returned from the EntryProcessor; java.util.Map when executed on a collection of keys, containing the results of invoking the EntryProcessor against each of the specified keys.
  • Work Managers: commonj.work.WorkItem, with three possible outcomes:
    • The Work is not yet complete.  In this case, WorkItem.getResult returns null.
    • The Work started but completed with an exception.  This may have happened because a Work Manager instance terminated abruptly, and is indicated by an exception thrown by WorkItem.getResult.
    • The Work Manager instance indicated that the Work is complete and the Work ran to completion.  In this case, WorkItem.getResult returns a non-null result and throws no exception.

Error Handling
  • Entry Processors: failure of a node results in all the work assigned to that node being executed on the new primary.  This may result in some work being executed twice, but Coherence ensures that the cache is only updated once per item.
  • Work Managers: failure of a node results in the loss of scheduled tasks assigned to that node.  Completed tasks are sent back to the client as they complete.
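The implementation rows above translate into very little code.  Here is a minimal Work sketch using the same hypothetical counter cache as the entry processor example; the Coherence CommonJ WorkManager class is real, but the task logic is an assumption for illustration:

import com.tangosol.coherence.commonj.WorkManager;
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import commonj.work.Work;
import commonj.work.WorkItem;
import java.io.Serializable;
import java.util.Collections;

// A serializable task that reads a cache item, changes it and puts it back.
public class IncrementWork implements Work, Serializable {
    private final String key;

    public IncrementWork(String key) { this.key = key; }

    public boolean isDaemon() { return false; }
    public void release() { }

    public void run() {
        NamedCache cache = CacheFactory.getCache("TestCache"); // hypothetical cache name
        Integer current = (Integer) cache.get(key);
        cache.put(key, current == null ? 1 : current + 1);
    }

    public static void main(String[] args) throws Exception {
        // grid-aware WorkManager: name and thread count as described in the text
        WorkManager manager = new WorkManager("AntonyWork", 2);
        WorkItem item = manager.schedule(new IncrementWork("counter1"));
        // block until the task completes
        manager.waitForAll(Collections.singletonList(item), WorkManager.INDEFINITE);
        System.out.println("Work status: " + item.getStatus());
    }
}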

Fault Handling Extension

Entry processors have excellent error handling within Coherence.  Work Managers less so.  In order to provide resiliency on node failure I implemented a “RetryWorkManager” class that detects tasks that have failed to complete successfully and resubmits them to the grid for another attempt.

A JDeveloper project with the RetryWorkManager is available for download here.  It includes sample code to run a simple task across multiple work manager threads.

To create a new RetryWorkManager that will retry failed work twice, you would use this:

WorkManager manager = new RetryWorkManager("WorkManagerName", 2);  // Change for number of retries; if no retry count is provided then the default is 0.

You can control the number of retries at the individual work level as shown below:

WorkItem workItem = manager.schedule(work);                   // Use number of retries set at WorkManager creation
WorkItem workItem = manager.schedule(work, workListener);     // Use number of retries set at WorkManager creation
WorkItem workItem = manager.schedule(work, 4);                // Change number of retries
WorkItem workItem = manager.schedule(work, workListener, 4);  // Change number of retries

Currently the RetryWorkManager defaults to having 0 threads.  The number of threads can also be changed, as shown below:

WorkItem workItem = manager.schedule(work, workListener, 3, 4);  // Change number of threads (3) and retries (4)
Note that none of this sample code is supported by Oracle in any way, and is provided purely as a sample of what can be done with Coherence.

How the RetryWorkManager Works

The RetryWorkManager delegates most operations to a Coherence WorkManager instance.  It creates a WorkManagerListener to intercept status updates.  On receiving a WORK_COMPLETED callback the listener checks the result to see if the completion is due to an error.  If an error occurred and there are retries left then the work is resubmitted.  The WorkItem returned by scheduling an event is wrapped in a RetryWorkItem.  This RetryWorkItem is updated with a new Coherence WorkItem when the task is retried.  If the client registers a WorkManagerListener then the RetryWorkManagerListener delegates non-retriable events to the client listener.  Finally the waitForAll and waitForAny methods are modified to deal with work items being resubmitted in the event of failure.
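A minimal sketch of the interception idea described above is shown below.  The commonj.work API calls are real, but the class shape and names here are assumptions for illustration, not the actual code in the downloadable sample:

import commonj.work.Work;
import commonj.work.WorkEvent;
import commonj.work.WorkException;
import commonj.work.WorkListener;
import commonj.work.WorkManager;

// Listener that resubmits failed work until the retry budget is exhausted.
public class RetryWorkListener implements WorkListener {
    private final WorkManager delegate;         // the underlying Coherence WorkManager
    private final WorkListener clientListener;  // the client's listener, may be null
    private final Work work;
    private int retriesLeft;

    public RetryWorkListener(WorkManager delegate, WorkListener clientListener,
                             Work work, int retries) {
        this.delegate = delegate;
        this.clientListener = clientListener;
        this.work = work;
        this.retriesLeft = retries;
    }

    public void workAccepted(WorkEvent we) { if (clientListener != null) clientListener.workAccepted(we); }
    public void workStarted(WorkEvent we)  { if (clientListener != null) clientListener.workStarted(we); }
    public void workRejected(WorkEvent we) { if (clientListener != null) clientListener.workRejected(we); }

    public void workCompleted(WorkEvent we) {
        // WORK_COMPLETED with an exception indicates the work did not run to completion
        if (we.getException() != null && retriesLeft > 0) {
            retriesLeft--;
            try {
                delegate.schedule(work, this); // resubmit for another attempt
            } catch (WorkException e) {
                // could not reschedule; surface the original completion to the client
                if (clientListener != null) clientListener.workCompleted(we);
            }
        } else if (clientListener != null) {
            clientListener.workCompleted(we); // non-retriable event, delegate to client
        }
    }
}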

Sample Code for EntryProcessor and RetryWorkManager

The downloadable project contains sample code for running the work manager and an entry processor.

The demo implements a 3-tier architecture:

  1. Coherence Cache Servers
    • Can be started by running RunCacheServer.cmd
    • Runs a distributed cache used by the Task to be executed in the grid
  2. Coherence Work Manager Servers
    • Can be started by running RunWorkManagerServer.cmd
    • Takes no parameters
    • Runs two threads for executing tasks
  3. Coherence Work Manager Clients
    • Can be started by running RunWorkManagerClient.cmd
    • Takes three parameters currently
      • Work Manager name - should be "AntonyWork" - default is "AntonyWork"
      • Number of tasks to schedule - default is 10
      • Time to wait for tasks to complete in seconds - default is 60

The task stores the number of times it has been executed in the cache, so multiple runs will see the counter incrementing.  The choice between EntryProcessor and WorkManager is controlled by changing the value of USE_ENTRY_PROCESSOR between false and true in the RunWorkManagerClient.cmd script.

The SetWorkManagerEnv.cmd script should be edited to point to the Coherence home directory and the Java home directory.

Summary

If you need to perform operations on cache entries and don’t need to have cross-checks between the entries then the best solution is to use an entry processor.  The entry processor is fault tolerant and updates to the cached entity will be performed once only.

If you need to perform generic work that may need to touch multiple related cache entries then the work manager may be a better solution.  The extensions I created in the RetryWorkManager provide a degree of resiliency to deal with node failure without impacting the client.

The RetryWorkManager can be downloaded here.

Thursday Mar 20, 2014

Packt Publishing Buy One Get One Free Offer

Packt Publishing celebrates their 2000th title with a Buy One Get One Free Offer

Great time to get those Packt books you’ve been thinking of buying, like the SOA Suite 11g Developers Guide or the SOA Suite 11g Developers Cookbook.

Sunday Mar 09, 2014

The Impact of Change

Measuring Impact of Change in SOA Suite

Mormon prophet Thomas S. Monson once said:

When performance is measured, performance improves. When performance is measured and reported, the rate of performance accelerates.

(LDS Conference Report, October 1970, p107)

Like everything in life, a SOA Suite installation that is monitored and tracked has a much better chance of performing well than one that is not measured.  With that in mind I came up with a tool to allow the measurement of the impact of configuration changes on database usage in SOA Suite.  This tool can be used to assess the impact of different configurations on both database growth and database performance, helping to decide which optimizations offer real benefit to the composite under test.

Basic Approach

The basic approach of the tool is to take a snapshot of the number of rows in the SOA tables before executing a composite.  The composite is then executed.  After the composite has completed another snapshot is taken of the SOA tables.  This is illustrated in the diagram below:

An example of the data collected by the tool is shown below:

Test Name    Total Tables Changed  Total Rows Added  Notes
AsyncTest1   13                    15                Async interaction with simple SOA composite, one retry to send response.
AsyncTest2   12                    13                Async interaction with simple SOA composite, no retries on sending response.
AsyncTest3   12                    13                Async interaction with simple SOA composite, no callback address provided.
OneWayTest1  12                    13                One-way interaction with simple SOA composite.
SyncTest1    7                     7                 Sync interaction with simple SOA composite.

Note that the first three columns are provided by the tool; the fourth column is just an aide-memoire to identify what the test name actually did.  The tool also allows us to drill into the data to get a better look at what is actually changing, as shown in the table below:

Test Name   Table Name            Rows Added
AsyncTest1  AUDIT_COUNTER         1
AsyncTest1  AUDIT_DETAILS         1
AsyncTest1  AUDIT_TRAIL           2
AsyncTest1  COMPOSITE_INSTANCE    1
AsyncTest1  CUBE_INSTANCE         1
AsyncTest1  CUBE_SCOPE            1
AsyncTest1  DLV_MESSAGE           1
AsyncTest1  DOCUMENT_CI_REF       1
AsyncTest1  DOCUMENT_DLV_MSG_REF  1
AsyncTest1  HEADERS_PROPERTIES    1
AsyncTest1  INSTANCE_PAYLOAD      1
AsyncTest1  WORK_ITEM             1
AsyncTest1  XML_DOCUMENT          2

Here we have drilled into the test case with the retry of the callback to see what tables are actually being written to.

Finally we can compare two tests to see difference in the number of rows written and the tables updated as shown below:

Test Name   Base Test Name  Table Name   Row Difference
AsyncTest1  AsyncTest2      AUDIT_TRAIL  1

Here are the additional tables referenced by this test:

Test Name   Base Test Name  Additional Table Name  Rows Added
AsyncTest1  AsyncTest2      WORK_ITEM              1

How it Works

I created a database stored procedure, soa_snapshot.take_soa_snapshot(test_name, phase), that queries all the SOA tables and records the number of rows in each table.  By running the stored procedure before and after the execution of a composite we can capture the number of rows in the SOA database before and after the composite executes.  I then created a view that shows the difference in the number of rows before and after composite execution.  This view has a number of sub-views that allow us to query specific items.  The schema is shown below:
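A minimal sketch of driving the snapshot from Java over JDBC, assuming the two-parameter procedure signature described above; the connection details are placeholders:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class SnapshotClient {
    // Record the row counts of the SOA tables for a given test and phase.
    static void takeSnapshot(Connection conn, String testName, int phase) throws SQLException {
        try (CallableStatement call =
                     conn.prepareCall("{ call soa_snapshot.take_soa_snapshot(?, ?) }")) {
            call.setString(1, testName);
            call.setInt(2, phase);
            call.execute();
        }
    }

    public static void main(String[] args) throws Exception {
        // placeholder connection details
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//localhost:1521/orcl", "soa_monitor", "welcome1")) {
            takeSnapshot(conn, "AsyncTest1", 1); // phase 1: before running the composite
            // ... invoke the composite here, e.g. via its web service test page ...
            takeSnapshot(conn, "AsyncTest1", 2); // phase 2: after the composite completes
        }
    }
}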

The different tables and views are:

  • CHANGE_TABLE
    • Used to track number of rows in SOA schema, each test case has two or more phases.  Usually phase 1 is before execution and phase 2 is after execution.
    • This is only used by the stored procedure and the views.
  • DELTA_VIEW
    • Used to track changes in number of rows in SOA database between phases of a test case.  This is a view on CHANGE_TABLE.  All other views are based off this view.
  • SIMPLE_DELTA_VIEW
    • Provides number of rows changed in each table.
  • SUMMARY_DELTA_VIEW
    • Provides a summary of total rows and tables changed.
  • DIFFERENT_ROWS_VIEW
    • Provides a summary of differences in rows updated between test cases.
  • EXTRA_TABLES_VIEW
    • Provides a summary of the extra tables and rows used by a test case.
    • This view makes use of a session context, soa_ctx, which holds the test case name and the baseline test case name.  This context is initialized by calling the stored procedure soa_ctx_pkg.set(testCase, baseTestCase).

I created a web service wrapper to the take_soa_snapshot procedure so that I could use SoapUI to perform the tests.

Sample Output

How many rows and tables did a particular test use?

Here we can see how many rows in how many tables changed as a result of running a test:

-- Display the total number of rows and tables changed for each test
select * from summary_delta_view
order by test_name;

TEST_NAME            TOTALDELTAROWS TOTALDELTASIZE TOTALTABLES
-------------------- -------------- -------------- -----------
AsyncTest1                   15              0          13
AsyncTest1noCCIS             15              0          13
AsyncTest1off                 8              0           8
AsyncTest1prod               13              0          12
AsyncTest2                   13              0          12
AsyncTest2noCCIS             13              0          12
AsyncTest2off                 7              0           7
AsyncTest2prod               11              0          11
AsyncTest3                   13              0          12
AsyncTest3noCCIS             13          65536          12
AsyncTest3off                 7              0           7
AsyncTest3prod               11              0          11
OneWayTest1                  13              0          12
OneWayTest1noCCI             13          65536          12
OneWayTest1off                7              0           7
OneWayTest1prod              11              0          11
SyncTest1                     7              0           7
SyncTest1noCCIS               7              0           7
SyncTest1off                  2              0           2
SyncTest1prod                 5              0           5

20 rows selected

Which tables grew during a test?

Here for a given test we can see which tables had rows inserted.

-- Display the tables which grew and show the number of rows they grew by
select * from simple_delta_view
where test_name='AsyncTest1'
order by table_name;
TEST_NAME            TABLE_NAME                      DELTAROWS  DELTASIZE
-------------------- ------------------------------ ---------- ----------
AsyncTest1       AUDIT_COUNTER                           1          0
AsyncTest1       AUDIT_DETAILS                           1          0
AsyncTest1       AUDIT_TRAIL                             2          0
AsyncTest1       COMPOSITE_INSTANCE                      1          0
AsyncTest1       CUBE_INSTANCE                           1          0
AsyncTest1       CUBE_SCOPE                              1          0
AsyncTest1       DLV_MESSAGE                             1          0
AsyncTest1       DOCUMENT_CI_REF                         1          0
AsyncTest1       DOCUMENT_DLV_MSG_REF                    1          0
AsyncTest1       HEADERS_PROPERTIES                      1          0
AsyncTest1       INSTANCE_PAYLOAD                        1          0
AsyncTest1       WORK_ITEM                               1          0
AsyncTest1       XML_DOCUMENT                            2          0
13 rows selected

Which tables grew more in test1 than in test2?

Here we can see the differences in rows for two tests.

-- Return difference in rows updated (test1)
select * from different_rows_view
where test1='AsyncTest1' and test2='AsyncTest2';

TEST1                TEST2                TABLE_NAME                          DELTA
-------------------- -------------------- ------------------------------ ----------
AsyncTest1       AsyncTest2       AUDIT_TRAIL                             1

Which tables were used by test1 but not by test2?

Here we can see tables that were used by one test but not by the other test.

-- Register base test case for use in extra_tables_view
-- First parameter (test1) is test we expect to have extra rows/tables
begin soa_ctx_pkg.set('AsyncTest1', 'AsyncTest2'); end;
/
anonymous block completed
-- Return additional tables used by test1
column TEST2 FORMAT A20
select * from extra_tables_view;
TEST1                TEST2                TABLE_NAME                      DELTAROWS
-------------------- -------------------- ------------------------------ ----------
AsyncTest1       AsyncTest2       WORK_ITEM                               1

 

Results

I used the tool to find out the following.  All tests were run using SOA Suite 11.1.1.7.

The following is based on a very simple composite as shown below:

Each BPEL process is basically the same as the one shown below:

Impact of Fault Policy Retry Being Executed Once

Setting    Total Rows Written  Total Tables Updated
No Retry   13                  12
One Retry  15                  13

When a fault policy causes a retry then the following additional database rows are written:

Table Name   Number of Rows
AUDIT_TRAIL  1
WORK_ITEM    1

Impact of Setting Audit Level = Development Instead of Production

Setting      Total Rows Written  Total Tables Updated
Development  13                  12
Production   11                  11

When the audit level is set at development instead of production then the following additional database rows are written:

Table Name   Number of Rows
AUDIT_TRAIL  1
WORK_ITEM    1

Impact of Setting Audit Level = Production Instead of Off

Setting     Total Rows Written  Total Tables Updated
Production  11                  11
Off         7                   7

When the audit level is set at production rather than off then the following additional database rows are written:

Table Name          Number of Rows
AUDIT_COUNTER       1
AUDIT_DETAILS       1
AUDIT_TRAIL         1
COMPOSITE_INSTANCE  1

Impact of Setting Capture Composite Instance State

Setting  Total Rows Written  Total Tables Updated
On       13                  12
Off      13                  12

When capture composite instance state is on rather than off, no additional database rows are written.  Note, however, that there are other activities that occur when composite instance state is captured.

Impact of Setting oneWayDeliveryPolicy = async.cache or sync

Setting        Total Rows Written  Total Tables Updated
async.persist  13                  12
async.cache    7                   7
sync           7                   7

When choosing async.persist (the default) instead of sync or async.cache then the following additional database rows are written:

Table Name            Number of Rows
AUDIT_DETAILS         1
DLV_MESSAGE           1
DOCUMENT_CI_REF       1
DOCUMENT_DLV_MSG_REF  1
HEADERS_PROPERTIES    1
XML_DOCUMENT          1

As you would expect, the sync mode behaves just like a regular synchronous (request/reply) interaction and creates the same number of rows in the database.  The async.cache setting also creates the same number of rows as a sync interaction, because it stores state in memory and provides no restart guarantee.

Caveats & Warnings

The results above are based on a trivial test case.  The numbers will be different for bigger and more complex composites.  However by taking snapshots of different configurations you can produce the numbers that apply to your composites.

The capture procedure supports multiple steps in a test case, but the views only support two snapshots per test case.

Code Download

The sample project I used is available here.

The scripts used to create the user (createUser.sql), create the schema (createSchema.sql) and sample queries (TableCardinality.sql) are available here.

The Web Service wrapper to the capture state stored procedure is available here.

The sample SoapUI project that I used to take a snapshot, perform the test and take a second snapshot is available here.

