Threading It All Together

or Thread Usage by BPEL Processes

Based on comments I have received and questions I have been asked, it seems that there is a lot of confusion around thread usage by BPEL processes.  So I thought I would unravel a few mysteries about threading in BPEL.  Understanding the threading model is important because it can affect the scalability of both your processes and the BPEL engine itself.
How threads are allocated depends on the interaction pattern of the partner link that activates the receive activity.

Interaction Patterns

The interaction pattern can be thought of as how the BPEL process is called.  There are two types of interaction pattern: one-way, and two-way, or request-response as it is sometimes known.

One-Way Interaction

A one-way interaction pattern means that the process is invoked and then left to run, possibly returning the results via another interaction, possibly not.  We often think of this as asynchronous because the caller does not wait for the process to complete.  Don't confuse the actual interaction pattern with the way the process works.  For example, many processes have a request-reply process model that is implemented through two one-way interactions: the client calls the process and continues working; the process does its job and then calls the client back to pass the results.  The process has a request-reply model but it works through two one-way interactions.  A one-way interaction pattern is characterised by a process that has a receive activity but no corresponding reply activity.  A request-reply process built from one-way interaction patterns would be characterised by a receive activity with no corresponding reply, but with a corresponding invoke that delivers the callback.
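As a rough sketch (the partner link, port type and operation names here are illustrative, not taken from the actual processes), the skeleton of such a process might look like this:
  <!-- One-way interaction: a receive with no matching reply -->
  <receive name="receiveInput" partnerLink="client"
           portType="client:RequestPortType" operation="initiate"
           variable="inputVariable" createInstance="yes"/>

  <!-- ... do the work ... -->

  <!-- The result is returned via a second one-way interaction back to the caller -->
  <invoke name="callbackClient" partnerLink="client"
          portType="client:CallbackPortType" operation="onResult"
          inputVariable="outputVariable"/>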

Two-Way or Request-Response Interaction

A request-response interaction pattern means that the process is invoked and the caller then waits for the process to return a result before the caller continues its own processing.  We often think of this as synchronous because the caller appears to get the result immediately.  A request-response interaction pattern is characterised by a process that has a receive activity with a corresponding reply activity.   Note that it is possible to have a process that combines an initial request-response interaction to return an initial result, such as a correlation token, with a later one-way interaction that returns the final result to the caller.
More information on what this looks like is available in the BPEL docs.
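As a rough sketch (names again illustrative), the receive and reply pair on the same partner link and operation looks like this:
  <!-- Request-response interaction: receive the request ... -->
  <receive name="receiveInput" partnerLink="client"
           portType="client:RequestPortType" operation="process"
           variable="inputVariable" createInstance="yes"/>

  <!-- ... do the work ... -->

  <!-- ... and reply on the same partner link and operation -->
  <reply name="replyOutput" partnerLink="client"
         portType="client:RequestPortType" operation="process"
         variable="outputVariable"/>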

Instrumentation

To investigate thread usage in BPEL processes I wrote a small amount of Java code that is embedded in the process inside a bpelx:exec activity.  The code is shown below and stores the thread group and thread name of the currently executing thread into a variable called "CurrentThread".
  <bpelx:exec name="Java_Embedding_1" language="java" version="1.3">
    <![CDATA[
      // Capture the thread group and name of the thread executing this activity
      Thread t = Thread.currentThread();
      setVariableData("CurrentThread",
                      t.getThreadGroup().getName()+":"+t.getName());
    ]]>
  </bpelx:exec>
This variable is then surfaced in the output of the process.
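For reference, the "CurrentThread" variable is just a string that is copied into the output message; a sketch of what that might look like (the variable, part and element names are illustrative) is:
  <variable name="CurrentThread" type="xsd:string"/>

  <assign name="copyThreadInfo">
    <copy>
      <from variable="CurrentThread"/>
      <to variable="outputVariable" part="payload"
          query="/client:processResponse/client:result"/>
    </copy>
  </assign>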

Simple Request-Response Thread Usage

My first test was to run a simple request-reply interaction pattern in a process.  In this case the thread is taken from the inbound HTTP listener thread pool and a single thread is used to receive the request and process it.  The thread group is "HTTPThreadGroup" and the actual thread I got was "AJPRequestHandler-RMICallHandler-56".  The last digits vary.  So in this case the BPEL server receives the request and does all of the processing within the process on that same thread.  This process is available as ThreadExplorer1.zip.

Simple One-Way Thread Usage

My next test was to run a simple one-way interaction pattern, with another one-way interaction to return the result.  In this case the message is received on one thread and then passed on to a pool of worker threads to execute the BPEL process.  The thread group is "main" and the actual thread I got was "WorkExecutorWorkerThread-120".  The last digits vary.  So in this case the BPEL server received the request and then queued it for subsequent execution.  This allows for greater scalability by limiting thread usage within the BPEL server.  This process is available as ThreadExplorer2.zip.

Combination of Request-Reply and One-Way Thread Usage

The obvious next question is what happens when I have both interaction patterns in the same process.  Well that is what I did in my next test.  I created a process that returns an immediate result (request-response interaction) and then later posts a further response as a one-way interaction.  In this case the message is received on the listener thread and processed on that thread up to the reply.  At that point the remainder of the work is passed off to a worker thread.  This is exactly as one would expect from combining the two previous usages.  This process is available as ThreadExplorer3.zip.

Effect of Flow on Threading

The next question is what happens in a flow activity.  Surely this will cause multiple threads to be spawned?  Well, by default the answer is no.  The ThreadExplorer4 process uses a request-reply interaction and within the processing it has a 3-way flow activity.  All branches of the flow are executed on the same thread.  This is exactly the same as if we had used a request-response interaction with a while activity instead of a flow.  This may be counter-intuitive but the reasoning behind this is as follows:
  • For scalability we want to limit thread usage
  • A flow is really intended to support waiting for multiple events, such as responses from services invoked through pairs of one-way interactions, so we don't need to execute the branches in true parallel; pseudo-parallel will do.
  • If one branch of a flow is waiting for an event, the same thread can start executing the next branch of the flow, avoiding the need for a JVM thread context switch.
So there is little real need for a truly parallel flow statement.  This process is available as ThreadExplorer4.zip.
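For reference, a 3-way flow like the one described has roughly this shape (a sketch, not the actual ThreadExplorer4 source; branch contents elided); by default a single thread simply works through the branches one after another:
  <flow name="Flow_1">
    <sequence name="Branch_1">
      <!-- branch 1 activities, executed first -->
    </sequence>
    <sequence name="Branch_2">
      <!-- branch 2 activities, executed next on the same thread -->
    </sequence>
    <sequence name="Branch_3">
      <!-- branch 3 activities, executed last on the same thread -->
    </sequence>
  </flow>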

More Detail on the Effect of Flow on Threading

If we put a wait or receive into our flows then we find that the thread itself is not suspended, but is returned to the thread pool.  If there are other legs of the flow capable of execution then the thread will be used to execute those before being returned to the worker pool.  The same behaviour applies when a leg of a flow wakes up due to an event such as a timer expiring (wait activity) or a message arriving (receive activity).  This is all shown in the ThreadExplorer5 process, which has a 3-way flow that uses the initial worker thread to put all three legs into a wait activity and then uses a different worker thread when they wake up to carry on their processing.  This process is available as ThreadExplorer5.zip.
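A sketch of that structure (not the actual ThreadExplorer5 source; the wait durations are illustrative) would be:
  <flow name="Flow_1">
    <sequence>
      <wait name="Wait_1" for="'PT10S'"/>
      <!-- resumed on a different worker thread after the timer fires -->
    </sequence>
    <sequence>
      <wait name="Wait_2" for="'PT10S'"/>
      <!-- likewise resumed on a worker thread -->
    </sequence>
    <sequence>
      <wait name="Wait_3" for="'PT10S'"/>
      <!-- likewise resumed on a worker thread -->
    </sequence>
  </flow>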

Creating True Parallelism in a Flow

One common use case for flow is to make multiple request-reply calls in parallel to reduce their latency.  But as we saw in the previous section this doesn't work out of the box, so how can it be achieved?  If we add a property "nonBlockingInvoke" to the partner link we are calling and set it to "true" then BPEL will use a separate thread to make the call.  This is shown in ThreadExplorer6.  Note that when you run this the initial request is processed on the thread that received the message, a listener thread.  When we do the invokes on a partner link with the nonBlockingInvoke property set to true we see that those invokes are processed on a separate thread.  Because we are calling another BPEL process the server does some optimisation and uses the same thread for all three invokes.  If the partner link were on a different server then three threads would have been used.  Note that these threads are all worker threads, not the thread from the listener.  Finally, when the three flow legs join together a new worker thread is used to carry on processing.  This process is available as ThreadExplorer6.zip.
You can read more about the nonBlockingInvoke property in the BPEL documentation.
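On the 10g release this property is typically set on the partner link binding in the process's bpel.xml deployment descriptor; a sketch, assuming that descriptor style (the partner link name and WSDL location are illustrative), would be:
  <partnerLinkBindings>
    <partnerLinkBinding name="SlowService">
      <property name="wsdlLocation">SlowService.wsdl</property>
      <!-- make invokes over this partner link run on their own worker thread -->
      <property name="nonBlockingInvoke">true</property>
    </partnerLinkBinding>
  </partnerLinkBindings>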

More on Thread Processing in BPEL

You can read more about thread processing in BPEL in the documentation, which gives more of the reasoning behind why the cases above behave as they do.  Hope this was useful to you.

Comments:

Hi, a very nice article about the BPEL-threading model. The provided BPELs are very helpful! Thanks! Dietrich

Posted by Dietrich Schroff on February 20, 2008 at 06:34 AM MST #

Hi Antony, Thanks for this great blog-entry. It would be interesting to look at the thread-behaviour of a process hierarchy with focus on the thread-sharing between the processes. For example: - Sync1 calls Sync2 - Async1 calls Async2 - ASync1 calls Sync1 - Sync1 calls ASync1 - ASync1 calls Async2 calls Sync1 (containing a wait-activity) Best Regards, Harald Reinmüller

Posted by Harald Reinmueller on February 27, 2008 at 08:50 PM MST #
