Friday Mar 28, 2014

One Queue to Rule them All

Using a Single Queue for Multiple Message Types with SOA Suite

Problem Statement

You use a single JMS queue for sending multiple message types / service requests, and a single JMS queue for receiving multiple message types / service requests.  You have multiple SOA JMS Adapter interfaces reading and writing these queues.  Within a composite it is effectively random which interface gets a given message from the JMS queue.  Having multiple adapter instances writing to a single queue is not a problem; the problem is only with having multiple readers, because each reader takes the first message on the queue regardless of which interface it was intended for.

Background

The JMS Adapter is unaware of who receives the messages.  Each adapter instance just takes the message from the queue and delivers it to its own configured interface, one interface per adapter instance.  The SOA infrastructure is then responsible for routing that message, usually via a database table and an in-memory notification message, to a component within a composite.  Each message will create a new composite instance, although the BPEL engine and Mediator engine will attempt to match callback messages to the appropriate Mediator or BPEL instance.
Note that message type, including XML document type, has nothing to do with the preceding statements.

The net result is that if you have a sequence of two receives from the same queue using different adapters then the messages will be split equally between the two adapters, meaning that half the time the wrong adapter will receive the message.  This blog entry looks at how to resolve this issue.

Note that the same problem occurs whenever you have more than one adapter listening to the same queue, whether they are in the same composite or in different composites.  The solution in this blog entry is also relevant to that use case.

Solutions

To deliver each message to the correct interface we need a way to identify the interface it should be delivered to.  This can be done by using JMS properties.  For example the JMSType property can be used to identify the type of the message.  A message selector can be added to the JMS inbound adapter that will cause the adapter to filter out messages intended for other interfaces.  For example, suppose we need to call three services that are implemented in a single application:
  • Service 1 receives messages on the single outbound queue from SOA; it sends responses back on the single inbound queue.
  • Similarly Service 2 and Service 3 also receive messages on the single outbound queue from SOA; they send responses back on the single inbound queue.
First we need to ensure the messages are delivered to the correct adapter instance.  This is achieved as follows:
  • The inbound JMS adapter is configured with a JMS message selector.  The message selector might be "JMSType='Service1'" for responses from Service 1.  Similarly the selector would be "JMSType='Service2'" for the adapter waiting on a response from Service 2.  The message selector ensures that each adapter instance will retrieve the first message on the queue that matches its selector.
  • The sending service needs to set the JMS property (JMSType in our example) that is used in the message selector.
Now that our messages are being delivered to the correct interface, we need to make sure that they get delivered to the correct Mediator or BPEL instance.  We do this with correlation.  There are several correlation options:
  1. We can do manual correlation with a correlation set, identifying parts of the outbound message that uniquely identify our instance and matching them with parts of the inbound message to make the correlation.
  2. We can use a Request-Reply JMS adapter which by default expects the response to contain a JMSCorrelationID equal to the outgoing JMSMessageID.  Although no configuration is required for this on the SOA client side, the service needs to copy the incoming JMSMessageID to the outgoing JMSCorrelationID, as sketched in the example below.
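
Both halves of this contract fall on the responding service: it must set the property used in the message selector and copy the request's message ID into the reply's correlation ID.  Below is a minimal sketch of that reply logic using the standard javax.jms API; the names (session, request, payload) and the use of a text message are illustrative assumptions rather than part of the sample:

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.Session;
import javax.jms.TextMessage;

public class ReplyBuilder {
    // Build a reply message that the SOA JMS adapter can both select and correlate.
    // The session and request are assumed to come from the service's own JMS handling.
    public static TextMessage buildReply(Session session, Message request, String payload)
            throws JMSException {
        TextMessage reply = session.createTextMessage(payload);
        // Set the property used by the adapter's message selector,
        // e.g. the adapter waiting with "JMSType='Service1'".
        reply.setJMSType("Service1");
        // Copy the request message ID to the correlation ID so that a
        // request-reply adapter can auto-correlate the response.
        reply.setJMSCorrelationID(request.getJMSMessageID());
        return reply;
    }
}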

Special Case - Request-Reply Synchronous JMS Adapter

When using a synchronous Request-Reply JMS adapter we can omit the message selector because the Request-Reply JMS adapter will immediately do a listen with a message selector for the correlation ID rather than processing the incoming message asynchronously.
The synchronous request-reply will block the BPEL process thread and hold open the BPEL transaction until a response is received, so this should only be used when you expect the request to be completed in a few seconds.
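
In plain JMS terms the synchronous interaction amounts to something like the following.  This is a conceptual sketch of the pattern, not the adapter's actual implementation; the session, producer, and reply queue are assumed to be set up already:

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

public class SyncRequestReply {
    // Send the request, then block listening for the reply whose
    // JMSCorrelationID matches the request's JMSMessageID.
    public static Message requestReply(Session session, MessageProducer producer,
                                       Queue replyQueue, Message request) throws JMSException {
        producer.send(request);
        // JMSMessageID is assigned by the provider during send().
        String selector = "JMSCorrelationID = '" + request.getJMSMessageID() + "'";
        MessageConsumer consumer = session.createConsumer(replyQueue, selector);
        try {
            return consumer.receive(30000); // block for up to 30 seconds
        } finally {
            consumer.close();
        }
    }
}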

The JCA Connection Factory used must point to a non-XA JMS Connection Factory and must have the isTransacted property set to “false”.  See the documentation for more details.

Sample

I developed a JDeveloper SOA project that demonstrates using a single queue for multiple incoming adapters.  The overall process flow is shown in the picture below.  The BPEL process on the left receives messages from jms/TestQueue2 and sends messages to jms/TestQueue.  A Mediator is used to simulate multiple services and also provide a web interface to initiate the process.  The correct adapter is identified by using JMS message properties and a selector.

 

The flow above shows that the process is initiated from EM using a web service binding on the Mediator.  The Mediator, acting as a client, posts the request to the inbound queue with a JMSType property set to "Initiate".

The processing is described step by step below, showing at each stage what the client, the BPEL process, and the services do.

Step 1: Inbound Request
Client: Receives the web service request and posts it to the inbound queue with JMSType='Initiate'.
BPEL: The JMS adapter with message selector "JMSType='Initiate'" receives the message and causes a composite to be created.  The composite in turn causes the BPEL process to start executing.  The BPEL process then sends a request to Service 1 on the outbound queue.
Key Points
  • The initiate message can be used to initiate a correlation set if necessary
  • A selector is required to distinguish initiate messages from other messages on the queue

Step 2: Service 1 Response (Separate Request and Reply Adapters)
Service: Service 1 receives the request and sends a response on the inbound queue with JMSType='Service1' and JMSCorrelationID set to the incoming JMS message ID.
BPEL: The JMS adapter with message selector "JMSType='Service1'" receives the message and causes a composite to be created.  The composite uses a correlation set to deliver the message to BPEL, which correlates it with the existing BPEL process instance.  The BPEL process then sends a request to Service 2 on the outbound queue.
Key Points
  • Separate request and reply adapters require a correlation set to ensure that the reply goes to the correct BPEL process instance
  • A selector is required to distinguish Service 1 response messages from other messages on the queue

Step 3: Service 2 Response (Asynchronous Request-Reply Adapter)
Service: Service 2 receives the request and sends a response on the inbound queue with JMSType='Service2' and JMSCorrelationID set to the incoming JMS message ID.
BPEL: The JMS adapter with message selector "JMSType='Service2'" receives the message and causes a composite to be created.  The composite in turn delivers the message to the existing BPEL process using native JMS correlation.  The BPEL process then sends a request to Service 3 on the outbound queue using a synchronous request-reply.
Key Points
  • The asynchronous request-reply adapter does not require a correlation set; the JMS adapter auto-correlates using the correlation ID to ensure that the reply goes to the correct BPEL process instance
  • A selector is still required to distinguish Service 2 response messages from other messages on the queue

Step 4: Service 3 Response (Synchronous Request-Reply Adapter)
Service: Service 3 receives the request and sends a response on the inbound queue with JMSType='Service3' and JMSCorrelationID set to the incoming JMS message ID.
BPEL: The synchronous JMS adapter receives the response without a message selector and correlates it to the BPEL process using native JMS correlation, then sends the overall response to the outbound queue.
Key Points
  • The synchronous request-reply adapter does not require a correlation set; the JMS adapter auto-correlates using the correlation ID to ensure that the reply goes to the correct BPEL process instance
  • A selector is also not required to distinguish Service 3 response messages from other messages on the queue because the synchronous adapter is doing a selection on the expected correlation ID

Step 5: Outbound Response
Client: Receives the overall response on the outbound queue.

Summary

When using a single JMS queue for multiple purposes bear in mind the following:

  • If multiple receives use the same queue then you need to have a message selector.  The corollary to this is that the message sender must add a JMS property to the message that can be used in the message selector.
  • When using a request-reply JMS adapter then there is no need for a correlation set, correlation is done in the adapter by matching the outbound JMS message ID to the inbound JMS correlation ID.  The corollary to this is that the message sender must copy the JMS request message ID to the JMS response correlation ID.
  • When using a synchronous request-reply JMS adapter then there is no need for the message selector because the message selection is done based on the JMS correlation ID.
  • Synchronous request-reply adapter requires a non-XA connection factory to be used so that the request part of the interaction can be committed separately from the receive part of the interaction.
  • Synchronous request-reply JMS adapter should only be used when the reply is expected to take just a few seconds.  If the reply is expected to take longer then the asynchronous request-reply JMS adapter should be used.

Deploying the Sample

The sample is available to download here and makes use of the following JMS resources:

JNDI Name                  Resource Type                           Notes
jms/TestQueue              Queue                                   Outbound queue from the BPEL process
jms/TestQueue2             Queue                                   Inbound queue to the BPEL process
eis/wls/TestQueue          JMS Adapter Connection Factory          This can point to an XA or non-XA JMS Connection Factory such as weblogic.jms.XAConnectionFactory
eis/wls/TestQueue          Non-XA JMS Adapter Connection Factory   This must point to a non-XA JMS Connection Factory such as weblogic.jms.ConnectionFactory and must have isTransacted set to “false”

To run the sample, just use the test facility in the EM console or the soa-infra application.

Sunday Mar 09, 2014

The Impact of Change

Measuring Impact of Change in SOA Suite

Mormon prophet Thomas S. Monson once said:

When performance is measured, performance improves. When performance is measured and reported, the rate of performance accelerates.

(LDS Conference Report, October 1970, p107)

Like everything in life, a SOA Suite installation that is monitored and tracked has a much better chance of performing well than one that is not measured.  With that in mind I came up with a tool to allow measurement of the impact of configuration changes on database usage in SOA Suite.  This tool can be used to assess the impact of different configurations on both database growth and database performance, helping to decide which optimizations offer real benefit to the composite under test.

Basic Approach

The basic approach of the tool is to take a snapshot of the number of rows in the SOA tables before executing a composite.  The composite is then executed.  After the composite has completed another snapshot is taken of the SOA tables.  This is illustrated in the diagram below:

An example of the data collected by the tool is shown below:

Test Name    Total Tables Changed  Total Rows Added  Notes
AsyncTest1   13                    15                Async interaction with simple SOA composite, one retry to send response.
AsyncTest2   12                    13                Async interaction with simple SOA composite, no retries on sending response.
AsyncTest3   12                    13                Async interaction with simple SOA composite, no callback address provided.
OneWayTest1  12                    13                One-way interaction with simple SOA composite.
SyncTest1    7                     7                 Sync interaction with simple SOA composite.

Note that the first three columns are provided by the tool; the fourth column is just an aide-memoire to identify what the test actually did.  The tool also allows us to drill into the data to get a better look at what is actually changing, as shown in the table below:

Test Name   Table Name            Rows Added
AsyncTest1  AUDIT_COUNTER         1
AsyncTest1  AUDIT_DETAILS         1
AsyncTest1  AUDIT_TRAIL           2
AsyncTest1  COMPOSITE_INSTANCE    1
AsyncTest1  CUBE_INSTANCE         1
AsyncTest1  CUBE_SCOPE            1
AsyncTest1  DLV_MESSAGE           1
AsyncTest1  DOCUMENT_CI_REF       1
AsyncTest1  DOCUMENT_DLV_MSG_REF  1
AsyncTest1  HEADERS_PROPERTIES    1
AsyncTest1  INSTANCE_PAYLOAD      1
AsyncTest1  WORK_ITEM             1
AsyncTest1  XML_DOCUMENT          2

Here we have drilled into the test case with the retry of the callback to see what tables are actually being written to.

Finally we can compare two tests to see difference in the number of rows written and the tables updated as shown below:

Test Name   Base Test Name  Table Name   Row Difference
AsyncTest1  AsyncTest2      AUDIT_TRAIL  1

Here are the additional tables referenced by this test

Test Name   Base Test Name  Additional Table Name  Rows Added
AsyncTest1  AsyncTest2      WORK_ITEM              1

How it Works

I created a database stored procedure, soa_snapshot.take_soa_snaphot(test_name, phase), that queries all the SOA tables and records the number of rows in each table.  By running the stored procedure before and after the execution of a composite we can capture the number of rows in the SOA database before and after the composite executes.  I then created a view that shows the difference in the number of rows before and after composite execution.  This view has a number of sub-views that allow us to query specific items.  The schema is shown below:

The different tables and views are:

  • CHANGE_TABLE
    • Used to track number of rows in SOA schema, each test case has two or more phases.  Usually phase 1 is before execution and phase 2 is after execution.
    • This is only used by the stored procedure and the views.
  • DELTA_VIEW
    • Used to track changes in number of rows in SOA database between phases of a test case.  This is a view on CHANGE_TABLE.  All other views are based off this view.
  • SIMPLE_DELTA_VIEW
    • Provides number of rows changed in each table.
  • SUMMARY_DELTA_VIEW
    • Provides a summary of total rows and tables changed.
  • DIFFERENT_ROWS_VIEW
    • Provides a summary of differences in rows updated between test cases
  • EXTRA_TABLES_VIEW
    • Provides a summary of the extra tables and rows used by a test case.
    • This view makes use of a session context, soa_ctx, which holds the test case name and the baseline test case name.  This context is initialized by calling the stored procedure soa_ctx_pkg.set(testCase, baseTestCase).

I created a web service wrapper to the take_soa_snapshot procedure so that I could use SoapUI to perform the tests.
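
Outside of SoapUI the procedure can equally be driven from any JDBC client.  The sketch below shows one way to wrap the before/after calls; the procedure name follows the definition above, while the JDBC URL and credentials are placeholders you would replace with your own:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;

public class SnapshotClient {
    // Call the snapshot stored procedure described above via JDBC.
    // The connection details are placeholders for your own environment.
    public static void takeSnapshot(String testName, int phase) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/orcl", "soa_monitor", "welcome1");
        try {
            CallableStatement call = conn.prepareCall(
                    "{ call soa_snapshot.take_soa_snaphot(?, ?) }");
            call.setString(1, testName);
            call.setInt(2, phase);
            call.execute();
            call.close();
        } finally {
            conn.close();
        }
    }

    public static void main(String[] args) throws Exception {
        takeSnapshot("AsyncTest1", 1);  // phase 1: before running the composite
        // ... invoke the composite under test here ...
        takeSnapshot("AsyncTest1", 2);  // phase 2: after the composite completes
    }
}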

Sample Output

How many rows and tables did a particular test use?

Here we can see how many rows in how many tables changed as a result of running a test:

-- Display the total number of rows and tables changed for each test
select * from summary_delta_view
order by test_name;

TEST_NAME            TOTALDELTAROWS TOTALDELTASIZE TOTALTABLES
-------------------- -------------- -------------- -----------
AsyncTest1                   15              0          13
AsyncTest1noCCIS             15              0          13
AsyncTest1off                 8              0           8
AsyncTest1prod               13              0          12
AsyncTest2                   13              0          12
AsyncTest2noCCIS             13              0          12
AsyncTest2off                 7              0           7
AsyncTest2prod               11              0          11
AsyncTest3                   13              0          12
AsyncTest3noCCIS             13          65536          12
AsyncTest3off                 7              0           7
AsyncTest3prod               11              0          11
OneWayTest1                  13              0          12
OneWayTest1noCCI             13          65536          12
OneWayTest1off                7              0           7
OneWayTest1prod              11              0          11
SyncTest1                     7              0           7
SyncTest1noCCIS               7              0           7
SyncTest1off                  2              0           2
SyncTest1prod                 5              0           5

20 rows selected

Which tables grew during a test?

Here for a given test we can see which tables had rows inserted.

-- Display the tables which grew and show the number of rows they grew by
select * from simple_delta_view
where test_name='AsyncTest1'
order by table_name;
TEST_NAME            TABLE_NAME                      DELTAROWS  DELTASIZE
-------------------- ------------------------------ ---------- ----------
AsyncTest1           AUDIT_COUNTER                           1          0
AsyncTest1           AUDIT_DETAILS                           1          0
AsyncTest1           AUDIT_TRAIL                             2          0
AsyncTest1           COMPOSITE_INSTANCE                      1          0
AsyncTest1           CUBE_INSTANCE                           1          0
AsyncTest1           CUBE_SCOPE                              1          0
AsyncTest1           DLV_MESSAGE                             1          0
AsyncTest1           DOCUMENT_CI_REF                         1          0
AsyncTest1           DOCUMENT_DLV_MSG_REF                    1          0
AsyncTest1           HEADERS_PROPERTIES                      1          0
AsyncTest1           INSTANCE_PAYLOAD                        1          0
AsyncTest1           WORK_ITEM                               1          0
AsyncTest1           XML_DOCUMENT                            2          0
13 rows selected

Which tables grew more in test1 than in test2?

Here we can see the differences in rows for two tests.

-- Return difference in rows updated (test1)
select * from different_rows_view
where test1='AsyncTest1' and test2='AsyncTest2';

TEST1                TEST2                TABLE_NAME                          DELTA
-------------------- -------------------- ------------------------------ ----------
AsyncTest1           AsyncTest2           AUDIT_TRAIL                             1

Which tables were used by test1 but not by test2?

Here we can see tables that were used by one test but not by the other test.

-- Register base test case for use in extra_tables_view
-- First parameter (test1) is test we expect to have extra rows/tables
begin soa_ctx_pkg.set('AsyncTest1', 'AsyncTest2'); end;
/
anonymous block completed
-- Return additional tables used by test1
column TEST2 FORMAT A20
select * from extra_tables_view;
TEST1                TEST2                TABLE_NAME                      DELTAROWS
-------------------- -------------------- ------------------------------ ----------
AsyncTest1           AsyncTest2           WORK_ITEM                               1

 

Results

I used the tool to find out the following.  All tests were run using SOA Suite 11.1.1.7.

The following is based on a very simple composite as shown below:

Each BPEL process is basically the same as the one shown below:

Impact of Fault Policy Retry Being Executed Once

Setting    Total Rows Written  Total Tables Updated
No Retry   13                  12
One Retry  15                  13

When a fault policy causes a retry then the following additional database rows are written:

Table Name   Number of Rows
AUDIT_TRAIL  1
WORK_ITEM    1

Impact of Setting Audit Level = Development Instead of Production

Setting      Total Rows Written  Total Tables Updated
Development  13                  12
Production   11                  11

When the audit level is set at development instead of production then the following additional database rows are written:

Table Name   Number of Rows
AUDIT_TRAIL  1
WORK_ITEM    1

Impact of Setting Audit Level = Production Instead of Off

Setting     Total Rows Written  Total Tables Updated
Production  11                  11
Off         7                   7

When the audit level is set at production rather than off then the following additional database rows are written:

Table Name          Number of Rows
AUDIT_COUNTER       1
AUDIT_DETAILS       1
AUDIT_TRAIL         1
COMPOSITE_INSTANCE  1

Impact of Setting Capture Composite Instance State

Setting  Total Rows Written  Total Tables Updated
On       13                  12
Off      13                  12

When capture composite instance state is on rather than off then no additional database rows are written; note, however, that there are other activities that occur when composite instance state is captured.

Impact of Setting oneWayDeliveryPolicy = async.cache or sync

Setting        Total Rows Written  Total Tables Updated
async.persist  13                  12
async.cache    7                   7
sync           7                   7

When choosing async.persist (the default) instead of sync or async.cache then the following additional database rows are written:

Table Name            Number of Rows
AUDIT_DETAILS         1
DLV_MESSAGE           1
DOCUMENT_CI_REF       1
DOCUMENT_DLV_MSG_REF  1
HEADERS_PROPERTIES    1
XML_DOCUMENT          1

As you would expect, the sync mode behaves just like a regular synchronous (request/reply) interaction and creates the same number of rows in the database.  The async.cache setting also creates the same number of rows as a sync interaction because it stores state in memory and provides no restart guarantee.

Caveats & Warnings

The results above are based on a trivial test case.  The numbers will be different for bigger and more complex composites.  However by taking snapshots of different configurations you can produce the numbers that apply to your composites.

The capture procedure supports multiple steps in a test case, but the views only support two snapshots per test case.

Code Download

The sample project I used is available here.

The scripts used to create the user (createUser.sql), create the schema (createSchema.sql) and sample queries (TableCardinality.sql) are available here.

The Web Service wrapper to the capture state stored procedure is available here.

The sample SoapUI project that I used to take a snapshot, perform the test and take a second snapshot is available here.

Friday Jan 17, 2014

Clear Day for Cloud Adapters

salesforce.com Adapter Released

Yesterday Oracle released their cloud adapter for salesforce.com (SFDC) so I thought I would talk a little about why you might want it.  I had previously integrated with SFDC using BPEL and the SFDC web interface, so in this post I will explore why the adapter might be a better approach.

Why?

So if I can interface to SFDC without the adapter, why would I spend money on the adapter?  There are a number of reasons, and in this post I will explain just the following three benefits:

  • Auto-Login
  • Non-Polymorphic Operations
  • Operation Wizards

Let's take each one in turn.

Auto-Login

The first obvious benefit is how you connect and make calls to SFDC.  To perform an operation such as query an account or update an address the SFDC interface requires you to do the following:

  • Invoke a login method, which returns a session ID to be placed in the header of all future calls, together with the actual endpoint to call.
  • Invoke the actual operation using the provided endpoint and passing the session ID provided.
  • When finished with calls invoke the logout operation.

Now these are not unreasonable demands.  The problem comes when you try to implement this interface.

Before calling the login method you need the credentials.  These need to be obtained from somewhere; I set them as BPEL preferences but there are other ways to store them.  Calling the login method is not a problem, but you need to be careful in how you make subsequent calls.

First, all subsequent calls must override the endpoint address with the one returned from the login operation.  Second, the provided session ID must be placed into a custom SOAP header.  So you have to copy the session ID into a custom SOAP header and provide that header to the invoke operation, and you also have to override the endpointURI property on the invoke with the provided endpoint.

Finally when you have finished performing operations you have to logout.

In addition to the number of steps you have to code, there is the problem of knowing when to log out.  The simplest thing to do is, for each operation you wish to perform, to execute the login operation followed by the actual working operation and then a logout operation.  The trouble with this is that you are now making three calls every time you want to perform an operation against SFDC, which adds latency to every request.

The adapter hides all this from you, hiding the login/logout operations and allowing connections to be re-used, reducing the number of logins required.  The adapter makes the SFDC call look like a call to any other web service, while internally it uses a session cache to avoid repeated logins.
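
The session-caching idea is easy to picture in code.  The toy sketch below is purely illustrative, built around a hypothetical loginToSfdc call; it is not Oracle's implementation or the real SFDC API:

public class SessionCache {
    private String sessionId;  // session ID returned by the login response
    private String endpoint;   // server endpoint returned by the login response

    // Stand-in for the real SFDC login call; the values are placeholders.
    private void loginToSfdc() {
        this.sessionId = "<session id from login response>";
        this.endpoint = "<server URL from login response>";
    }

    // Log in only when no cached session exists, so repeated operations
    // avoid the login/logout round trips described above.
    public synchronized String getSessionId() {
        if (sessionId == null) {
            loginToSfdc();
        }
        return sessionId;
    }

    public synchronized String getEndpoint() {
        if (sessionId == null) {
            loginToSfdc();
        }
        return endpoint;
    }
}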

Non-Polymorphic Operations

The standard operations in the SFDC interface provide a base object return type, the sObject.  This could be an Account or a Campaign, for example, but the operations always return the base sObject type, leaving it to the client to make sure they process the correct return type.  Similarly, requests also use polymorphic data types.  In BPEL this often requires that the returned sObject is copied to a variable of a more specific type to simplify processing of the data.  If you don’t do this then you can still query fields within the specific object, but the SOA tooling cannot check them for you.

The adapter identifies the type of request and response and also helps build the request for you with bind parameters.  This means that you are able to build your process to actually work with the real data structures, not the abstract ones.  This is another big benefit in my view!

Operation Wizards

The SFDC API is very powerful.  Translation: the SFDC API is very complex.  With great power comes complexity, to paraphrase Uncle Ben (amongst others).  The adapter groups the operations into logical collections and then provides additional help in selecting from within those collections and providing the correct parameters for the chosen operation.

Installing

Installation takes place in two parts.  The design time is deployed into JDeveloper and the run time is deployed into the SOA Suite Oracle Home.  The adapter is available for download here and the installation instructions and documentation are here.  Note that you will need OPatch to install the adapter.  OPatch can be downloaded from Oracle Support Patch 6880880.  Don’t use the OPatch that ships with JDeveloper and SOA Suite; if you do you may see an error like:

Uncaught exception
oracle.classloader.util.AnnotatedNoClassDefFoundError:
      Missing class: oracle.tip.tools.ide.adapters.cloud.wizard.CloudAdapterWizard

You will want OPatch 11.1.0.x.x.  Make sure you download the correct 6880880: it must be for 11.1.x, as that is the version of JDeveloper and SOA Suite, and it must be for the platform you are running on.

Apart from getting the right OPatch the installation is very straight forward.

So don’t be afraid of SFDC integration any more, cloud integration is clear with the SFDC adapter.

Monday Dec 30, 2013

Going Native with JCA Adapters

Formatting JCA Adapter Binary Contents

Sometimes you just need to go native and play with binary data rather than XML.  This occurs commonly when using JCA adapters; perhaps the file to be written is in binary format, or the TCP messages written by the Socket Adapter are in binary format.  Although the adapter has no problem converting Base64 data into raw binary, it is a little tricky to get the data into Base64 format in the first place, so this blog entry will explain how.

Adapter Creation

When creating most adapters (application & DB being the exceptions) you have the option of choosing the message format.  By making the message format “opaque” you are telling the adapter wizard that the message data will be provided as a base-64 encoded string and the adapter will convert this to binary and deliver it.

This results in a WSDL message defined as shown below:

<wsdl:types>
<schema targetNamespace="http://xmlns.oracle.com/pcbpel/adapter/opaque/"
        xmlns="http://www.w3.org/2001/XMLSchema" >
  <element name="opaqueElement" type="base64Binary" />
</schema>
</wsdl:types>
<wsdl:message name="Write_msg">
    <wsdl:part name="opaque" element="opaque:opaqueElement"/>
</wsdl:message>

The Challenge

The challenge now is to convert our data into a base-64 encoded string.  For this we have to turn to the service bus and MFL.

Within the service bus we use the MFL editor to define the format of the binary data.  In our example we will have variable length strings that start with a 1 byte length field as well as 32-bit integers and 64-bit floating point numbers.

The example below shows a sample MFL file to describe the above data structure:

<?xml version='1.0' encoding='windows-1252'?>
<!DOCTYPE MessageFormat SYSTEM 'mfl.dtd'>
<!--   Enter description of the message format here.   -->
<MessageFormat name='BinaryMessageFormat' version='2.02'>
    <FieldFormat name='stringField1' type='String' delimOptional='y' codepage='UTF-8'>
        <LenField type='UTinyInt'/>
    </FieldFormat>
    <FieldFormat name='intField' type='LittleEndian4' delimOptional='y'/>
    <FieldFormat name='doubleField' type='LittleEndianDouble' delimOptional='y'/>
    <FieldFormat name='stringField2' type='String' delimOptional='y' codepage='UTF-8'>
        <LenField type='UTinyInt'/>
    </FieldFormat>
</MessageFormat>

Note that we can define the endianness of the multi-byte numbers; in this case they are specified as little endian (Intel format).

I also created an XML version of the MFL that can be used in interfaces.

The XML version can then be imported into a WSDL document to create a web service.

Full Steam Ahead

We now have all the pieces we need to convert XML to binary and deliver it via an adapter using the process shown below:

  • We receive the XML request; in the sample code it is delivered as a web service.
  • We then convert the request data into MFL format XML using an XQuery and store the result in a variable (mflVar).
  • We then convert the MFL formatted XML into binary data (internally this is held as a java byte array) and store the result in a variable (binVar).
  • We then convert the byte array to a base-64 string using javax.xml.bind.DatatypeConverter.printBase64Binary and store the result in a variable (base64Var).
  • Finally we replace the original $body contents with the output of an XQuery that matches the adapter expected XML format.

The diagram below shows the OSB pipeline that implements the above.

A Wrinkle

Unfortunately we can only call static Java methods that reside in a jar file imported into service bus, so we have to provide a wrapper for the printBase64Binary call.  The below Java code was used to provide this wrapper:

package antony.blog;

import javax.xml.bind.DatatypeConverter;

public class Base64Encoder {
    public static String base64encode(byte[] content) {
        return DatatypeConverter.printBase64Binary(content);
    }
    public static byte[] base64decode(String content) {
        return DatatypeConverter.parseBase64Binary(content);
    }
}
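
As a quick sanity check, the wrapper can be exercised from a simple main method (a hypothetical test, not part of the sample download):

import java.util.Arrays;

public class Base64EncoderTest {
    public static void main(String[] args) {
        byte[] data = {0x01, 0x02, 0x03, 0x04};
        String encoded = antony.blog.Base64Encoder.base64encode(data);
        System.out.println(encoded);  // prints AQIDBA==
        byte[] decoded = antony.blog.Base64Encoder.base64decode(encoded);
        System.out.println(Arrays.equals(data, decoded));  // prints true
    }
}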

Wrapping Up

Sample code is available here and consists of the following projects:

  • BinaryAdapter – JDeveloper SOA Project that defines the JCA File Adapter
  • OSBUtils – JDeveloper Java Project that defines the Java wrapper for DataTypeConverter
  • BinaryFileWriter – Eclipse OSB Project that includes everything needed to try out the steps in this blog.

The OSB project needs to be customized to have the logical directory name point to something sensible.  The project can be tested using the normal OSB console test screen.

The following sample input (note 16909060 is 0x01020304)

<bin:OutputMessage xmlns:bin="http://www.example.org/BinarySchema">
    <bin:stringField1>First String</bin:stringField1>
    <bin:intField>16909060</bin:intField>
    <bin:doubleField>1.5</bin:doubleField>
    <bin:stringField2>Second String</bin:stringField2>
</bin:OutputMessage>

Generates the following binary data file, displayed using “hexdump -C”.  You can pick out the little-endian int (04 03 02 01), the little-endian double (00 00 00 00 00 00 f8 3f, which is 1.5), and the two strings, each preceded by its 1-byte length (0c and 0d).

$ hexdump -C 2.bin
00000000  0c 46 69 72 73 74 20 53  74 72 69 6e 67 04 03 02  |.First String...|
00000010  01 00 00 00 00 00 00 f8  3f 0d 53 65 63 6f 6e 64  |........?.Second|
00000020  20 53 74 72 69 6e 67                              | String|

Although we used a web service writing through to a file adapter, we could equally well have used the socket adapter to send the data to a TCP endpoint.  Similarly the source of the data could be anything.  The same principle can be applied to decode binary data: just reverse the steps and use the Java method parseBase64Binary instead of printBase64Binary.

Thursday Dec 26, 2013

List Manipulation in Rules

Generating Lists from Rules

Recently I was working with a customer that wanted to use rules to do validation.  The idea was to pass a document to the rules engine and get back a list of violations, or an empty list if there were no violations.  It turns out that there were a couple more steps required than I expected, so I thought I would share my solution in case anyone else is wondering how to return lists from the rules engine.

The Scenario

For the purposes of this blog I modeled a very simple shipping company document that has two main parts.  The Package element contains information about the actual item to be shipped: its weight, type of package and destination details.  The Billing element details the charges applied.

For the purpose of this blog I want to validate the following:

  • A residential surcharge is applied to residential addresses.
  • A residential surcharge is not applied to non-residential addresses.
  • The package is of the correct weight for the type of package.

The Shipment element is sent to the rules engine and the rules engine replies with a ViolationList element that has all the rule violations that were found.

Creating the Return List

We need to create a new ViolationList within rules so that we can return it from within the decision function.  To do this we create a new global variable – I called it ViolationList – and initialize it.  Note that I also had some globals that I used to allow changing the weight limits for different package types.

When the rules session is created it will initialize the global variables and assert the input document, the Shipment element.  However, within rules our ViolationList variable has an uninitialized internal List that is used to hold the actual list of Violation elements.  We need to initialize this to an empty RL.list in the Decision Function's “Initial Actions” section.

We can then assert the global variable as a fact to make it available to be the return value of the decision function.  After this we can now create the rules.

Adding a Violation to the List

If a rule fires because of a violation then we need to add a Violation element to the list.  The easiest way to do this, without having the rule check the ViolationList directly, is to create a function that adds the Violation to the global variable ViolationList.

The function creates a new Violation and initializes it with the appropriate values before appending it to the list within the ViolationList.

When a rule fires it is then just necessary to call the function to add the violation to the list.

In the example above if the address is a residential address and the surcharge has not been applied then the function is called with an appropriate error code and message.

How it Works

Each time a rule fires we can add the violation to the list by calling the function.  If multiple rules fire then we will get multiple violations in the list.  We can access the list from a function because it is a global variable.  Because we asserted the global variable as a fact in the decision function's initial actions it is picked up by the decision function as a return value.  When all possible rules have fired, the decision function will return all asserted ViolationList elements, which in this case will always be exactly one because we only assert it in the initial actions.

What Doesn’t Work

A return from a decision function is always a list of the element you specify, so you may be tempted to just assert individual Violation elements and get those back as a list.  That will work if there is at least one violation, but the decision function must always return at least one element, so if there are no violations an error is thrown.

Alternative

Instead of having a completely separate return element you could have the ViolationList as part of the input element and then return the input element from the decision function.  This would work, but now you would be copying most of the input variables back into the output variable.  I prefer a cleaner, more function-like interface that makes it easier to handle the response.

Download

Hope this helps someone.  A sample composite project is available for download here.  The composite includes some unit tests.  You can run these from the EM console and then look at the inputs and outputs to see how things work.

Friday Dec 20, 2013

Supporting the Team

SOA Support Team Blog

Some of my former colleagues in support have created a blog to help answer common problems for customers.  One way they are doing this is by creating better landing zones within My Oracle Support (MOS).  I just used the blog to locate the landing zone for database related issues in SOA Suite.  I needed to get the purge scripts working on 11.1.1.7 and I couldn’t find the patches needed to do that.  A quick look on the blog and I found a suitable entry that directed me to the Oracle Fusion Middleware (FMW) SOA 11g Infrastructure Database: Installation, Maintenance and Administration Guide (Doc ID 1384379.1) in MOS.  Lots of other useful stuff on the blog so stop by and check it out, great job Shawn, Antonella, Maria & JB.

Tuesday Oct 08, 2013

Getting Started with Oracle SOA B2B Integration: A Hands-On Tutorial

Book: Getting Started with Oracle SOA B2B Integration: A Hands-On Tutorial

Before OpenWorld I received a copy of a new book by Scott Haaland, Alan Perlovsky & Krishnaprem Bhatia entitled Getting Started with Oracle SOA B2B Integration: A Hands-On Tutorial.  A free download of Chapter 3 is available to help you get a feel for the style of the book.

A useful new addition to the growing library of Oracle SOA Suite books, it starts off by putting B2B into context and identifying some common B2B message patterns and messaging protocols.  The rest of the book then takes the form of tutorials on how to use Oracle B2B interspersed with useful tips, such as how to set up B2B as a hub to connect different trading partners, similar to the way a VAN works.

The book goes a little beyond a tutorial by providing suggestions on best practice, giving advice on what is the best way to do things in certain circumstances.

I found the chapter on reporting & monitoring to be particularly useful, especially the BAM section, as I find many customers are able to use BAM reports to sell a SOA/B2B solution to the business.

The chapter on Preparing to Go-Live should be read closely before the go-live date; at the very least pay attention to the “Purging data” section.

Not being a B2B expert I found the book helpful in explaining how to accomplish tasks in Oracle B2B, and also in identifying the capabilities of the product.  Many SOA developers, myself included, view B2B as a glorified adapter, and in many ways it is, but it is an adapter with amazing capabilities.

The editing seems a little loose, the language is strange in places and there are references to colors on black and white diagrams, but the content is solid and helpful to anyone tasked with implementing Oracle B2B.

Friday Jun 28, 2013

SOA Suite 11g Developers Cookbook Published

SOA Suite 11g Developers Cookbook Available

Just realized that I failed to mention that Matt's and my most recent book, the SOA Suite 11g Developers Cookbook, was published over Christmas last year!

In some ways this was an easier book to write than the Developers Guide; the hard bit was deciding what recipes to include.  Once we had decided that, the writing of the book was pretty straightforward.

The book focuses on areas that we felt we had neglected in the Developers Guide, and so there is more about Java integration and OSB, both of which we see a lot of questions about when working with customers.

Amazon has a couple of reviews.

Table of Contents

Chapter 1: Building an SOA Suite Cluster
Chapter 2: Using the Metadata Service to Share XML Artifacts
Chapter 3: Working with Transactions
Chapter 4: Mapping Data
Chapter 5: Composite Messaging Patterns
Chapter 6: OSB Messaging Patterns
Chapter 7: Integrating OSB with JSON
Chapter 8: Compressed File Adapter Patterns
Chapter 9: Integrating Java with SOA Suite
Chapter 10: Securing Composites and Calling Secure Web Services
Chapter 11: Configuring the Identity Service
Chapter 12: Configuring OSB to Use Foreign JMS Queues
Chapter 13: Monitoring and Management

More Reviews

In addition to the Amazon Reviews I also found some reviews on GoodReads.

Friday Oct 12, 2012

Oracle SOA Suite 11g Administrator's Handbook

SOA Administration Book

I have just received a copy of the “Oracle SOA Suite 11g Administrator's Handbook” so as soon as I have read it I will let you know what I think.  In the meantime the first thing that struck me was the author credentials.  Although as far as I remember I have never met either of them, I have read Ahmed's blog postings and they are a great community resource, so immediately I am well disposed towards the book.  Similarly Arun is an employee of my friend and co-author Matt Wright, and I have heard good things about him from Rubicon Red people.

A first glance at the table of contents looks encouraging.  I particularly like their approach to performance tuning, where they give a clear, concise explanation of the knobs they are using.

More when I have read more.

Wednesday Oct 10, 2012

Following the Thread in OSB

Threading in OSB

The Scenario

I recently led an OSB POC where we needed to get high throughput from an OSB pipeline that had the following logic:

1. Receive Request
2. Send Request to External System
3. If Response has a particular value
  3.1 Modify Request
  3.2 Resend Request to External System
4. Send Response back to Requestor

All looks very straightforward and no nasty wrinkles along the way.  The flow was implemented in OSB as follows (see diagram for more details):

  • Proxy Service to Receive Request and Send Response
  • Request Pipeline
    •   Copies Original Request for use in step 3
  • Route Node
    •   Sends Request to External System exposed as a Business Service
  • Response Pipeline
    •   Checks Response to Check If Request Needs to Be Resubmitted
      • Modify Request
      • Callout to External System (same Business Service as Route Node)

The Proxy and the Business Service were each assigned their own Work Manager, effectively giving each of them their own thread pool.

The Surprise

Imagine our surprise when, on stressing the system, we saw it lock up with large numbers of blocked threads.  The reason for the lock-up is due to some subtleties in the OSB threading model, which is the topic of this post.

 

Basic Thread Model

OSB goes to great lengths to avoid holding on to threads.  Let's start by looking at how OSB deals with a simple request/response routing to a business service in a route node.

Most Business Services are implemented by OSB in two parts.  The first part uses the request thread to send the request to the target.  In the diagram this is represented by the thread T1.  After sending the request to the target (the Business Service in our diagram) the request thread is released back to whatever pool it came from.  A multiplexor (muxer) is used to wait for the response.  When the response is received the muxer hands off the response to a new thread that is used to execute the response pipeline, this is represented in the diagram by T2.

OSB allows you to assign different Work Managers, and hence different thread pools, to each Proxy Service and Business Service.  In our example we have the “Proxy Service Work Manager” assigned to the Proxy Service and the “Business Service Work Manager” assigned to the Business Service.  Note that the Business Service Work Manager is only used to assign the thread that processes the response; it is never used to process the request.

This architecture means that while waiting for a response from a business service there are no threads in use, which makes for better scalability in terms of thread usage.

First Wrinkle

Note that if the Proxy and the Business Service both use the same Work Manager then there is potential for starvation.  For example:

  • Request Pipeline makes a blocking callout, say to perform a database read.
  • Business Service response tries to allocate a thread from thread pool but all threads are blocked in the database read.
  • New requests arrive and contend with responses arriving for the available threads.

Similar problems can occur if the response pipeline blocks for some reason, maybe a database update for example.

Solution

The solution to this is to make sure that the Proxy and Business Service use different Work Managers so that they do not contend with each other for threads.

Do Nothing Route Thread Model

So what happens if there is no route node?  In this case OSB just echoes the Request message as a Response message, but what happens to the threads?  OSB still uses a separate thread for the response, but in this case the Work Manager used is the Default Work Manager.

So this is really a special case of the Basic Thread Model discussed above, except that the response pipeline will always execute on the Default Work Manager.

 

Proxy Chaining Thread Model

So what happens when the route node is actually calling a Proxy Service rather than a Business Service?  Does the second Proxy Service use its own thread or does it re-use the thread of the original Request Pipeline?

Well, as you can see from the diagram, when a route node calls another proxy service the original Work Manager is used for both request pipelines.  Similarly the response pipeline uses the Work Manager associated with the ultimate Business Service invoked via a Route Node.  This actually fits the earlier description I gave of Business Services, and by extension Route Nodes: they use the request thread to send the request to the target.

Call Out Threading Model

So what happens when you make a Service Callout to a Business Service from within a pipeline?  The documentation says that “The pipeline processor will block the thread until the response arrives asynchronously” when using a Service Callout.  What this means is that the target Business Service is called using the pipeline thread, and the response is also handled by the pipeline thread.  This implies that the pipeline thread blocks waiting for a response.  It is the handling of this response that behaves in an unexpected way.

When a Business Service is called via a Service Callout, the calling thread is suspended after sending the request, but unlike the Route Node case the thread is not released; it waits for the response.  The muxer uses the Business Service Work Manager to allocate a thread to process the response, but in this case processing the response means getting the response and notifying the blocked pipeline thread that the response is available.  The original pipeline thread can then continue to process the response.

Second Wrinkle

This leads to an unfortunate wrinkle.  If the Business Service is using the same Work Manager as the Pipeline then it is possible for starvation or a deadlock to occur.  The scenario, reproduced in the toy Java sketch after the list, is as follows:

  • Pipeline makes a Callout and the thread is suspended but still allocated
  • Multiple Pipeline instances using the same Work Manager are in this state (common for a system under load)
  • Response comes back but all Work Manager threads are allocated to blocked pipelines.
  • Response cannot be processed and so pipeline threads never unblock – deadlock!
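
The same starvation pattern can be reproduced in plain Java with a single fixed-size thread pool standing in for a shared Work Manager (a toy illustration only, nothing OSB-specific):

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class StarvationDemo {
    public static void main(String[] args) {
        // A single fixed-size pool plays the role of one shared Work Manager.
        final ExecutorService wm = Executors.newFixedThreadPool(2);

        // Each "pipeline" task submits its "response" handling to the same
        // pool and then blocks waiting for it, just like a Service Callout
        // whose Business Service shares the pipeline's Work Manager.
        for (int i = 0; i < 2; i++) {
            wm.submit(new Runnable() {
                public void run() {
                    Future<String> response = wm.submit(new Callable<String>() {
                        public String call() {
                            return "response";
                        }
                    });
                    try {
                        // Blocks forever: both pool threads are occupied by
                        // pipelines, so no thread is free to run a response.
                        response.get();
                    } catch (InterruptedException e) {
                        // ignored for the demo
                    } catch (ExecutionException e) {
                        // ignored for the demo
                    }
                }
            });
        }
        System.out.println("Both pipelines submitted - the pool is now deadlocked");
    }
}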

Solution

The solution to this is to make sure that any Business Services used by a Callout in a pipeline use a different Work Manager to the pipeline itself.

The Solution to My Problem

Looking back at my original workflow we see that the same Business Service is called twice, once in a Routing Node and once in a Response Pipeline Callout.  This was what was causing my problem: the response pipeline was running on the Business Service Work Manager, but the Service Callout needed a thread from that same Work Manager to handle its response.  Eventually the Response Pipeline hogged all the available threads and no callout responses could be processed.

The solution was to create a second Business Service pointing to the same location as the original Business Service, the only difference was to assign a different Work Manager to this Business Service.  This ensured that when the Service Callout completed there were always threads available to process the response because the response processing from the Service Callout had its own dedicated Work Manager.

Summary

  • Request Pipeline
    • Executes on Proxy Work Manager (WM) Thread so limited by setting of that WM.  If no WM specified then uses WLS default WM.
  • Route Node
    • Request sent using Proxy WM Thread
    • Proxy WM Thread is released before getting response
    • Muxer is used to handle response
    • Muxer hands off response to Business Service (BS) WM
  • Response Pipeline
    • Executes on Routed Business Service WM Thread so limited by setting of that WM.  If no WM specified then uses WLS default WM.
  • No Route Node (Echo functionality)
    • Proxy WM thread released
    • New thread from the default WM used for response pipeline
  • Service Callout
    • Request sent using proxy pipeline thread
    • Proxy thread is suspended (not released) until the response comes back
    • Notification of response handled by BS WM thread so limited by setting of that WM.  If no WM specified then uses WLS default WM.
      • Note this is a very short lived use of the thread
    • After notification by callout BS WM thread that thread is released and execution continues on the original pipeline thread.
  • Route/Callout to Proxy Service
    • Request Pipeline of callee executes on requestor thread
    • Response Pipeline of caller executes on response thread of requested proxy
  • Throttling
    • Request message may be queued if limit reached.
    • Requesting thread is released (route node) or suspended (callout)

So what this means is that you may get deadlocks caused by thread starvation if you use the same thread pool for the business service in a route node and the business service in a callout from the response pipeline, because the callout will need a notification thread from the same thread pool as the response pipeline.  This was the problem we were having.
You get a similar problem if you use the same work manager for the proxy request pipeline and a business service callout from that request pipeline.
It also means you may want to have different work managers for the proxy and business service in the route node.
Basically you need to think carefully about how threading impacts your proxy services.

References

Thanks to Jay Kasi, Gerald Nunn and Deb Ayers for helping to explain this to me.  Any errors are my own and not theirs.  Also thanks to my colleagues Milind Pandit and Prasad Bopardikar who travelled this road with me.

Friday Jul 13, 2012

Deploying Fusion Order Demo on 11.1.1.6

How to Deploy Fusion Order Demo on SOA Suite 11.1.1.6

We needed to build a demo for a customer, so why not use the Fusion Order Demo (FOD) and modify it to do some extra things?  Great idea, let me install it on one of my Linux servers, I said…

It turns out there are a few gotchas, so here is how I installed it on a Linux server with JDeveloper on my Windows desktop.

Task 1: Install Oracle JDeveloper Studio

I already had JDeveloper 11.1.1.6 with SOA extensions installed so this was easy.

Task 2: Install the Fusion Order Demo Application

First thing to do is to obtain the latest version of the demo from OTN, I obtained the R1 PS5 release.

Gotcha #1 – my WinZip wouldn’t unzip the file, so I had to use 7-Zip.

Task 3: Install Oracle SOA Suite

On the domain modify the setDomainEnv script by adding “-Djps.app.credential.overwrite.allowed=true” to JAVA_PROPERTIES and restarting the Admin Server.

Also set the JAVA_HOME variable and add Ant to the path.

I created a domain with separate SOA and BAM servers and also set up the Node Manager to make it easier to stop and start components.

Taking a Look at the WebLogic Fusion Order Demo Application

Note that when opening the composite you will get warnings because the components are not yet deployed to MDS.

Deploying Fusion Order Demo

Task 1: Create a Connection to an Oracle WebLogic Server

If some tests complete when you test the connection to the WebLogic domain but other tests fail, for example the JSR-88 tests, then you may need to go into the console and, under each server's Configuration->General->Advanced settings, set the “External Listen Address” to the name that JDeveloper uses to access the managed server.

Task 2: Create a Connection to the Oracle BAM Server

I can’t understand why customers wouldn’t want to use BAM.  Monitor Express makes it a matter of a few clicks to provide real time process status information to the business.

Oh yes!  I remember now, several customers' IT staff have told me they don’t want the business seeing this data because they will hassle the IT department if something goes wrong, and BAM lets them see it going wrong in real time…

Task 3: Install the Schema for the Fusion Order Demo Application

When editing the Infrastructure->MasterBuildScript->Resources build.properties make sure that you set jdeveloper.home to the jdeveloper directory underneath the directory that you installed JDeveloper into.  I installed JDeveloper Studio into “C:\JDev11gPS5” so my jdeveloper.home is “C:\JDev11gPS5\jdeveloper”.

Gotcha #2 – the ant script throws an error partway through but does not report it at the end, so check carefully for the following error:

oracle.jbo.JboException: JBO-29000: Unexpected exception caught: java.lang.NoClassDefFoundError, msg=oracle/jdbc/OracleClob

This occurs because the build.xml does not include the ojdbc6dms library, which has CLOB support, so in JDeveloper add it to the “oracle.jdbc.path” path in the Infrastructure->DatabaseSchema->Resources build.xml file.

<path id="oracle.jdbc.path">
  <fileset dir="${jdeveloper.home}/../wlserver_10.3/server/lib">
    <include name="ojdbc6.jar"/>
  </fileset>
  <fileset dir="${jdeveloper.home}/../oracle_common/modules/oracle.jdbc_11.1.1">
    <include name="ojdbc6dms.jar"/>
  </fileset>
</path>

Rerun the ant script from Infrastructure/Ant with a target of “buildAll” and it should now complete without errors.

Task 4: Set the Configuration Property for the Store Front Module

Nothing to watch out for here.

Task 5: Edit the Database Connection

Nothing to watch out for here.

Task 6: Deploy the Store Front Module

There is an additional step when deploying, you will be asked for an MDS repository to use.  Best to use the MDS-SOA repository and put the content in its own partition.

Gotcha #3 – when prompted, select the mds-soa MDS repository and choose the soa partition.  Note that this is an MDS partition, not a SOA partition.

Note that when you deploy the StoreFrontServiceSDO_Services application it will populate the local WebLogic LDAP with the demo users.  If this step fails it will be because you forgot to set the “-Djps.app.credential.overwrite.allowed=true” parameter and restart the Admin Server.

Task 7: Deploy the WebLogic Fusion Order Demo Application

Set up the environment for BAM.

When editing the WebLogicFusionOrderDemo->bin->Resources build.properties make sure that you set oracle.home to the jdeveloper directory underneath the directory that you installed JDeveloper into.  I installed JDeveloper Studio into “C:\JDev11gPS5” so my oracle.home is “C:\JDev11gPS5\jdeveloper”.

Gotcha #5a – Make sure that you create directories on the server for the FileAdapter to use as a file directory and a separate control directory and make sure you set the corresponding properties in the build.properties file:

  • orderbooking.file.adapter.dir
  • orderbooking.file.adapter.control.dir

Gotcha #5b – Also make sure you set the following properties in the build.properties file:

  • soa.domain.name
  • managed.server.host
  • managed.server.rmi.port
  • soa.db.username
  • soa.db.password
  • soa.db.connectstring

Note that the soa.server.oracle.home property must be set to the ORACLE_SOA_HOME (usually Oracle_SOA1 under the MW_HOME) on the server.
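
Putting Gotchas #5a and #5b together, the relevant build.properties entries might look like the sketch below.  All values are illustrative, not taken from the demo; substitute your own hosts, paths and credentials:

orderbooking.file.adapter.dir=/u01/fod/filein
orderbooking.file.adapter.control.dir=/u01/fod/filecontrol
soa.domain.name=soa_domain
managed.server.host=soahost.example.com
managed.server.rmi.port=8001
soa.db.username=DEV_SOAINFRA
soa.db.password=welcome1
soa.db.connectstring=dbhost.example.com:1521:orcl
soa.server.oracle.home=/u01/app/oracle/middleware/Oracle_SOA1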

Gotcha #6 – I found that the deployment failed unless I went into the console and, under each server's Configuration->Protocols->IIOP->Advanced settings, set the “Default IIOP Username” and “Default IIOP Password” to the weblogic user.

Gotcha #7 – when deploying BAM objects in the seedBAMServerObjects activity I got an exception “java.lang.NoClassDefFoundError: org/apache/commons/codec/binary/Base64”, which occurs because the BAM installation under JDeveloper does not have all the required libraries.  To fix this, copy the commons-codec-1.3.jar file from ORACLE_SOA_HOME/bam/modules/oracle.bam.third.party_11.1.1 on the server machine to ORACLE_JDEV_HOME/bam/modules/oracle.bam.third.party_11.1.1 on the JDeveloper machine.

Gotcha #8 – when deploying BAM objects in the seedBAMServerObjects activity I got an error “BAM-02440: ICommand is unable to connect to the Oracle BAM server because user credentials (username/password) have not been specified.”.  The quick way to fix this is to change to the directory where the import script was created on the JDeveloper machine (ORACLE_JDEV_HOME\bam\dataObjects\load) and run the load script after setting JAVA_HOME:

..\..\bin\icommand -CMDFILE ImportFODBamObjects.xml
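
Putting that together as a sketch (the JDK path is illustrative, and ORACLE_JDEV_HOME stands for your JDeveloper install directory):

cd /d %ORACLE_JDEV_HOME%\bam\dataObjects\load
set JAVA_HOME=C:\Java\jdk1.6.0_29
..\..\bin\icommand -CMDFILE ImportFODBamObjects.xml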

I am sure that if I had spent more time in the ant scripts I could have found what was wrong with the script for deploying this.

Running Fusion Order Demo

You are now ready to place an order through the frontend app at http://soahost:soaport/StoreFrontModule/faces/home.jspx.  The BAM dashboard is available for you to monitor the progress of your order and EM is all set to let you monitor the health of the processes.  Enjoy studying a relatively complex example that demonstrates many best practices such as use of MDS.

Friday Mar 16, 2012

Memory Efficient Windows SOA Server

Installing a Memory Efficient SOA Suite 11.1.1.6 on Windows Server

Well 11.1.1.6 is now available for download so I thought I would build a Windows Server environment to run it.  I will minimize the memory footprint of the installation by putting all functionality into the Admin Server of the SOA Suite domain.

Required Software

  • 64-bit JDK
  • SOA Suite
    • If you want 64-bit then choose “Generic” rather than “Microsoft Windows 32bit JVM” or “Linux 32bit JVM”
    • The SOA Suite download page has links to all the required software.
    • If you choose “Generic” then the Repository Creation Utility link does not show, you still need this so change the platform to “Microsoft Windows 32bit JVM” or “Linux 32bit JVM” to get the software.
    • Similarly if you need a database then you need to change the platform to get the link to XE for Windows or Linux.

If possible I recommend installing a 64-bit JDK as this allows you to assign more memory to individual JVMs.

XE on Windows will work, but it is better if you can use a full Oracle database, because XE's limitations sometimes cause it to run out of space with large or multiple SOA deployments.

Installation Steps

The following flow chart outlines the steps required in installing and configuring SOA Suite.

The steps in the diagram are explained below.

64-bit?

Is a 64-bit installation required?  The Windows & Linux installers will install 32-bit versions of the Sun JDK and JRockit.  A separate JDK must be installed for 64-bit.

Install 64-bit JDK

The 64-bit JDK can be either Hotspot or JRockit.  You can choose either JDK 1.7 or 1.6.

Install WebLogic

If you are using 64-bit then install WebLogic using “java -jar wls1036_generic.jar”.  Make sure you include Coherence in the installation; the easiest way to do this is to accept the “Typical” installation.
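
As a sketch, the full sequence with a 64-bit JDK might look like this (the JDK and download paths are illustrative):

set JAVA_HOME=C:\Java\jdk1.6.0_29
cd /d C:\downloads
%JAVA_HOME%\bin\java -jar wls1036_generic.jar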

SOA Suite Required?

If you are not installing SOA Suite then you can jump straight ahead and create a WebLogic domain.

Install SOA Suite

Run the SOA Suite installer and point it at the existing Middleware Home created for WebLogic.  Note that to run the SOA installer on Windows the user must have admin privileges.  I also found that on Windows Server 2008R2 I had to start the installer from a command prompt with administrative privileges; granting it privileges when it ran caused it to ignore the jreLoc parameter.

Database Available?

Do you have access to a database into which you can install the SOA schemas?  SOA Suite requires access to an Oracle database (it is supported on other databases but I would always use an Oracle database).

Install Database

I use an 11gR2 Oracle database to avoid XE limitations.  Make sure that you set the database character set to be Unicode (AL32UTF8).  I also disabled the new security settings because they get in the way for a developer database.  Don’t forget to check the DB init parameters: the number of processes should be at least 150, and the number of sessions should either be left unset or be set to at least 200.
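
As a sketch, you can check and adjust these parameters from SQL*Plus as SYSDBA (spfile changes need a database restart to take effect):

show parameter processes
alter system set processes=150 scope=spfile;
alter system set sessions=200 scope=spfile;
shutdown immediate
startup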

Run RCU

The SOA Suite database schemas are created by running the Repository Creation Utility.  Install the “SOA and BPM Infrastructure” component to support SOA Suite.  If you keep the schema prefix as “DEV” then the config wizard is easier to complete.
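
RCU is a separate download; assuming you extracted it to a directory called rcuHome, on Windows it is launched with:

rcuHome\BIN\rcu.bat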

Run Config Wizard

The Config wizard creates the domain which hosts the WebLogic server instances.  To get a minimum footprint SOA installation choose the “Oracle Enterprise Manager” and “Oracle SOA Suite for developers” products.  All other required products will be automatically selected.

The “for developers” installs target the appropriate components at the AdminServer rather than creating a separate managed server to house them.  This reduces the number of JVMs required to run the system and hence the amount of memory required.  This is not suitable for anything other than a developer environment as it mixes the admin and runtime functions together in a single server.  It also takes a long time to load all the required modules, making start up a slow process.

If it exists I would recommend running the config wizard found in the “oracle_common/common/bin” directory under the middleware home.  This should have access to all the templates, including SOA.
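
On Windows that corresponds to running something like the following, substituting your own middleware home for MW_HOME:

MW_HOME\oracle_common\common\bin\config.cmd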

If you also want to run BAM in the same JVM as everything else then you need to “Select Optional Configuration” for “Managed Servers, Clusters and Machines”.

To target BAM at the AdminServer, delete the “bam_server1” managed server that is created by default; BAM will then be targeted at the AdminServer.

Installation Issues

I had a few problems when I came to test everything in my mega-JVM.

  • The following applications were not targeted, so I needed to target them at the AdminServer:
    • b2bui
    • composer
    • Healthcare UI
    • FMW Welcome Page Application (11.1.0.0.0)

How Memory Efficient is It?

On a Windows 2008R2 Server running under VirtualBox I was able to bring up both the 11gR2 database and SOA/BPM/BAM in 3G memory.  I allocated a minimum of 512M to the PermGen and a minimum of 1.5G for the heap.  The settings from setSOADomainEnv are shown below:

set DEFAULT_MEM_ARGS=-Xms1536m -Xmx2048m
set PORT_MEM_ARGS=-Xms1536m -Xmx2048m

set DEFAULT_MEM_ARGS=%DEFAULT_MEM_ARGS% -XX:PermSize=512m -XX:MaxPermSize=768m
set PORT_MEM_ARGS=%PORT_MEM_ARGS% -XX:PermSize=512m -XX:MaxPermSize=768m

I arrived at these numbers by monitoring JVM memory usage in JConsole.

Task Manager showed total system memory usage at 2.9G – just below the 3G I allocated to the VM.

Performance is not stellar but it runs and I could run JDeveloper alongside it on my 8G laptop, so in that sense it was a result!

Tuesday Mar 13, 2012

Packt Oracle Discount Month

Packt Publishing are celebrating the publication of their 50th Oracle book by offering an exclusive discount throughout March on all Oracle books.  Here is the link which explains it in detail: Packt's Oracle Campaign, https://www.packtpub.com/news/hit-the-oracle-packtpot.  If you haven’t bought mine and Matt’s Oracle SOA Suite 11g R1 Developer's Guide then now is a great time to get it from Packt.  Packt have recently published a number of new Oracle SOA Suite books, including an Oracle Service Bus 11g Development Cookbook by those smart guys in the Netherlands and Switzerland, and an Oracle BAM 11gR1 Handbook by my friend Pete Wang.

Wednesday Dec 28, 2011

Too Much Debug

Remains of a Roast Turkey

Well it is Christmas and, as is traditional in England at least, we had a roast turkey dinner.  And of course no matter how big your family, turkeys come in only two sizes: massively too big or enormously too big!  So by the third day of Christmas you are ready never to eat turkey again until Thanksgiving.  Your trousers no longer fit around the waist, your sweater is snug around the midriff, and your children start talking about the return of the Blob.

And my point?  Well just like the food world, sometimes in the SOA world too much of a good thing is bad for you.  I had just extended my BPM domain with OSB only to discover that I could no longer start the BAM server, or the newly configured OSB server.  The error message I was getting was:

starting weblogic with Java version:
FATAL ERROR in native method: JDWP No transports initialized, jvmtiError=AGENT_ERROR_TRANSPORT_INIT(197)
ERROR: transport error 202: bind failed: Address already in use
ERROR: JDWP Transport dt_socket failed to initialize, TRANSPORT_INIT(510)
JDWP exit error AGENT_ERROR_TRANSPORT_INIT(197): No transports initialized [../../../src/share/back/debugInit.c:690]
Starting WLS with line:
C:\app\oracle\product\FMW\JDK160~2\bin\java -client -Xdebug -Xnoagent -Xrunjdwp:transport=dt_socket,address=8453,server=y,suspend=n

The mention of JDWP points to a problem with debug settings, and sure enough, in a development domain the setDomainEnv script is set up to enable debugging of OSB.  The problem is that the setting applies to all servers started with the setDomainEnv script, when it should really only apply to the OSB servers.  There is a blog entry by Jiji Sasidharan that explains this and provides a fix.  However that fix disables the debug flag for all servers, not just the non-OSB servers.  So I offer my extension to this fix, which changes the setDomainEnv script from:

set debugFlag=true

to:

rem Added so that only OSB server starts in debug mode
if "%SERVER_NAME%"=="osb_server1" (
    set debugFlag=true
)

This enables debugging to occur on managed server osb_server1 (this should match the name of one of your OSB servers to enable debugging).  It does not enable the debug flag for any other server, including other OSB servers in a cluster.  After making this change it may be necessary to restart the Admin Server because it is probably bound to the debug port.
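
As a variant sketch, batch substring syntax can enable debug for any server whose name starts with “osb”; note that servers started on the same machine would still clash over the single debug port set in the script:

rem Enable debug mode for any server whose name starts with "osb"
if /i "%SERVER_NAME:~0,3%"=="osb" (
    set debugFlag=true
)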

So the moral of this tale is don’t eat too much turkey, don’t abuse the debug flag, but make sure you can get the benefits of debugging.

Have a great new year!

Thursday Sep 29, 2011

Mapping the Java World Part I

How to Customise Java/XML Mapping in SOA Suite
Part I Setting Up EclipseLink MOXy

The Challenge

During a recent POC, the customer asked us to integrate with several of their backend EJBs, which hosted legacy code talking to backend systems.  The EJB interfaces could not be changed, and had to be called a certain way to be successful.  The customer was looking to phase out another set of "integration EJBs", written over the last few years, that orchestrated their core backend EJBs.  SOA Suite will allow them to drastically reduce their development time, and provide much better visibility into the processes as they run.  Given their past experiences (writing their integration EJBs was painful and long-winded), one of the key requirements was to support their legacy EJBs without writing code in the SOA environment.  At first glance this seemed impossible, because we couldn't change anything on the EJB side.  In addition many of the parameters to the interfaces contained either incomplete annotations or methods that didn't follow the JavaBeans spec.  This was not previously a problem for them because all their EJBs ran in the same JVM and were making local Java calls.

We decided to use a powerful yet obscure feature of SOA Suite to do the mapping.  Section 49.7 of the SOA Suite Developer's Guide mentions this marriage between EclipseLink MOXy and SOA Suite, but it does more than advertised, and works outside of the Spring Framework components.  We used this functionality to "fix" the things we had mapping issues with in the customer code.  Additionally, we used the framework for other helpful tasks, such as changing namespaces, fixing arrays, and removing unnecessary mappings.

In this article we'll cover the theory behind the use of this functionality, basic setup and usage, and several examples to get you started.

Background

When we use an EJB Reference or a Spring component in SOA Suite we usually want to wire it to a non-Java resource.  When we do this JDeveloper uses JAXB to create an XML representation of the parameters and return values of the methods in the Java interface we are using.  In this article we will show how to override those mappings.  Overriding the default generation of mappings allows us to specify target namespaces, rationalize the structure of the data and remove unneeded properties in the Java classes.  Some things we may want to customize include:

  • Specifying concrete implementations for abstract classes and interfaces in the interface
    • This allows us to map to Java objects in the interface which cannot be instantiated directly.  For example, we often have lists of abstract classes or interfaces; by specifying the possible concrete implementations of these classes we can generate an XML schema that includes the additional properties available only through the concrete classes.
  • Hiding unwanted properties
    • This allows us to remove properties that are not needed for our implementation, or not needed because they are convenience properties such as the length of an array or collection which can easily be derived from the underlying array or collection.
  • Providing wrappers for arrays and collections
    • The default mapping for an array or collection is to provide a list of repeating elements.  We can modify the mapping to provide a wrapper element that represents the whole array or collection, with the repeating elements appearing a level down inside it (see the sketch after this list).
  • Changing WSDL namespaces
    • It is often necessary to change the namespaces in a generated WSDL to match a corporate standard or to avoid conflicts with other components that are being used.
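
As a taster, a minimal OXM sketch of the wrapper and property-hiding customizations might look like the following; the productCount property is illustrative, not from the sample code:

<xml-bindings xmlns="http://www.eclipse.org/eclipselink/xsds/persistence/oxm">
  <java-types>
    <java-type name="soa.cookbook.QuoteRequest">
      <java-attributes>
        <!-- Wrap the repeating product elements in a single products element -->
        <xml-element java-attribute="products" name="product">
          <xml-element-wrapper name="products"/>
        </xml-element>
        <!-- Hide a derived convenience property -->
        <xml-transient java-attribute="productCount"/>
      </java-attributes>
    </java-type>
  </java-types>
</xml-bindings>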

Approach

SOA Suite allows us to describe in XML how we want a Java interface to be mapped from Java objects into XML.  The file that does this is called an “Extended Mapping” (EXM) file.  When generating a WSDL and its associated XML Schema from a Java interface, SOA Suite looks for an EXM file corresponding to the Java interface it is generating from.  Without this file the mapping is the “default” generation, which simply attempts to take each field and method in the Java code and map it to an XML type in the resulting WSDL.  The EXM file is used to describe or clarify the mappings to XML and uses EclipseLink MOXy to provide an XML version of Java annotations.  This means that we can apply the equivalent of Java annotations to Java classes referenced from the interface, giving us complete control over how the XML is generated.  This is illustrated in the diagram, which shows how the WSDL interface mapping depends on the Java interface of the EJB reference or Spring component being wired (obviously), but is modified by the EXM file, which in turn may embed or reference an XML version of JAXB annotations (using EclipseLink MOXy).

The mapping will automatically take advantage of any class annotations in the Java classes being mapped, but the XML descriptions can override or add to these annotations, allowing us fine grained control over our XML interface.  This allows for changes to be made without touching the underlying Java code.

Setup

Using the mapper out of the box is fairly simple.  Suppose you set up an EJB Reference or Spring component inside your composite.  You'd like to call this from a Mediator or BPEL process, which expects to operate on a WSDL.  Simply drag a wire from the BPEL process or Mediator to the EJB or Spring component and you should see a WSDL generated for you, which contains the equivalent of the Java component's business interface and all required types.  This happens automatically, as the tool goes through the classes in your target.

But what if you get a "Schema Generation Error", or the generated WSDL isn't correct?  As discussed earlier, there may be a number of changes we need or want to make to the mapping.  In order to use an "Extended Mapping" (EXM) file, we need to do the following:

  1. We need to register the EclipseLink MOXy schema with JDeveloper.  Under JDeveloper Tools->Preferences->XML Schemas we click Add… to register the schema as an XML extension.

    The schema is found in a jar file located at <JDEV_HOME>/modules/org.eclipse.persistence_1.1.0.0_2-1.jar, inside the jar at /xsd/eclipselink_oxm_2_1.xsd, so the location we register is jar:file:/<JDEV_HOME>/modules/org.eclipse.persistence_1.1.0.0_2-1.jar!/xsd/eclipselink_oxm_2_1.xsd, where <JDEV_HOME> is the location where you installed JDeveloper.

    NOTE: This will also work with an .oxm extension instead of .xml, but if you use an .oxm extension then in each project that uses it you must add a rule to copy .oxm files from the source directory to the output directory when compiling.
  2. Change the order of the source paths in your SOA Project to have SCA-INF/src first.  This is done using the Project Properties…->Project Source Paths dialog.  All files related to the mapping will go here.  For example, <Project>/SCA-INF/src/com/customer/EXM_Mapping_EJB.exm, where com.customer is the associated java package.
  3. We will now use a wizard to generate a base mapping file.
    1. Launch the wizard New XML Document from XML Schema (File->New->All Technologies->General->XML Document from XML Schema).
    2. Specify a file with the name of the Java Interface and an .exm extension in a directory corresponding to the Java package of the interface under SCA-INF/src.  For example, if your EJB adapter defined soa.cookbook.QuoteInterface as the remote interface, then the directory should be <Project>/SCA-INF/src/soa/cookbook and so the full file path would be <Project>/SCA-INF/src/soa/cookbook/QuoteInterface.exm.  By using the exm extension we are able to Use Registered Schema which will automatically map to the correct schema so that future steps in the wizard will understand what we are doing.
    3. The weblogic-wsee-databinding schema should already be selected; select a root element of java-wsdl-mapping and generate to a Depth of 3.  This gives us a basic file to start working with.
  4. As recommended by Oracle, separate out the mappings per package by using the toplink-oxm-file element.  This will allow you, per package, to define re-usable mapping files outside of the EXM file.  Since the EXM file and the embedded mappings have different XML root elements, defining them separately allows JDeveloper to provide validation and completion; a sample include is shown below:

    <?xml version="1.0" encoding="UTF-8" ?>
    <java-wsdl-mapping xmlns="http://xmlns.oracle.com/weblogic/weblogic-wsee-databinding">
      <xml-schema-mapping>
        <toplink-oxm-file file-path="./mappings.xml" java-package="soa.cookbook"/>
      </xml-schema-mapping>
    </java-wsdl-mapping>

  5. Create an "OXM Mapping" file to store custom mappings.  As mentioned, these files are per package, separate from the EXM files, and re-usable.  We can use the "New XML Document from Schema" to create these as well.  In this case, they will have an XML or OXM extension, use the persistence registered schema (http://www.eclipse.org/eclipselink/xsds/persistence/oxm), and be stored relative to the EXM file.  That is, they can go in the same directory, or in other directories, as long as you refer to them by relative path from the EXM file.

    <?xml version="1.0" encoding="UTF-8" ?>
    <xml-bindings xmlns="http://www.eclipse.org/eclipselink/xsds/persistence/oxm">
      <!-- Set target Namespace via namespace attribute -->
      <xml-schema namespace="http://cookbook.soa.mapping/javatypes"
                  element-form-default="QUALIFIED"/>
    </xml-bindings>

  6. In the newly created OXM Mapping file, we can use completion and validation to ensure we follow the EclipseLink MOXy documentation.  For example, to declare that a field is to be transient and not show up in the WSDL mapping, do this:

    <?xml version="1.0" encoding="UTF-8" ?>
    <xml-bindings xmlns="http://www.eclipse.org/eclipselink/xsds/persistence/oxm">
      <!-- Set target Namespace via namespace attribute -->
      <xml-schema namespace="http://cookbook.soa.mapping/javatypes"
                  element-form-default="QUALIFIED"/>
      <java-types>
        <java-type name="soa.cookbook.QuoteRequest">
          <java-attributes>
            <!-- Can remove mappings by making them transient via xml-transient element -->
            <xml-transient java-attribute="product"/>
          </java-attributes>
        </java-type>
      </java-types>
    </xml-bindings>

  7. Once complete, delete any existing wires to the Java components and rewire.  You should notice the dialog box change to indicate that an extended mapping file was used.

  8. As an iterative process, changing a mapping is quite easy.  Simply delete the wire, and JDeveloper will offer to delete the WSDL file, which you should do.  Make updates to the EXM and OXM Mapping files as required, and rewire.  Often this occurs several times during development, as there are multiple reasons to make changes.  We will cover some of these in our next entry.

Summary

The extended mapping file puts the SOA composite developer in control of his own destiny when it comes to mapping Java into XML.  It frees him from the tyranny of Java developer specified annotations embedded in Java source files and allows the SOA developer to customize the mapping for his own needs.  In this blog entry we have shown how to set up and use extended mapping in SOA Suite composites.  In the next entry we will show some of the power of this mapping.

Patches

To use this effectively you need to download the following patch from MetaLink:

  • 12984003 - SOA SUITE 11.1.1.5 - EJB ADAPTER NONBLOCKINGINVOKE FAILS WITH COMPLEX OBJECTS (Patch)
    • Note that this patch includes a number of fixes, including a performance fix for the Java/XML mapping in SOA Suite.

Sample Code

There is a sample application uploaded as EXMdemo.zip.  Unzip the provided file and open the EXMMappingApplication.jws in JDeveloper.  The application consists of two projects:

  • EXMEJB
    • This project contains an EJB and needs to be deployed before the other project.  This EJB provides an EJB wrapper around a POJO used in the Spring component in the EXMMapping project.
  • EXMMapping
    • This project contains a SOA composite which has a Spring component that is called from a Mediator; there is also an EJB reference.  The Mediator calls the Spring component or the EJB based on an input value.  Deploy this project after deploying the EJB project.

Key files to examine are listed below:

  • QuoteInterface.java in package soa.cookbook
    • This is the same interface implemented by the Spring component and the EJB.
    • It takes a QuoteRequest object as an input parameter and returns a QuoteResponse object.
  • Quote.java_diagram in package soa.cookbook
    • A UML class diagram showing the structure of the QuoteRequest and QuoteResponse objects.
  • EXM_Mapping_EJB.exm in package soa.cookbook
    • EXM mapping file for EJB.
    • This is used to generate the EXM_Mapping_EJB.wsdl file.
  • QuoteInterface.exm in package soa.cookbook
    • EXM mapping file for Spring component.
    • This is used to generate the QuoteInterface.wsdl file.
  • mappings.xml in package soa.cookbook
    • Contains mappings for QuoteRequest and QuoteResponse objects.
    • Used by both EXM files (they both include it, showing how we can re-use mappings).
    • We will cover the contents of this file in the next installment of this series.

Sample request message

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body xmlns:ns1="http://cookbook.soa.mapping/types">
    <ns1:quoteRequest>
      <ns1:products>
        <ns1:product>Product Number 1</ns1:product>
        <ns1:product>Product Number 2</ns1:product>
        <ns1:product>Product Number 3</ns1:product>
      </ns1:products>
      <ns1:requiredDate>2011-09-30T18:00:00.000-06:00</ns1:requiredDate>
      <!-- provider should be “EJB” or “Spring” to select the appropriate target -->
      <ns1:provider>EJB</ns1:provider>
    </ns1:quoteRequest>
  </soap:Body>
</soap:Envelope>

Co-Author

This blog article was co-written with my colleague Andrew Gregory.  If this becomes a habit I will have to change the title of my blog!

Acknowledgements

Blaise Doughan, the Eclipse MOXy lead was extremely patient and helpful as we worked our way through the different mappings.  We also had a lot of support from David Twelves, Chen Shih-Chang, Brian Volpi and Gigi Lee.  Finally thanks to Simone Geib and Naomi Klamen for helping to co-ordinate the different people involved in researching this article.

