Threading It All Together
By Antony Reynolds on Feb 20, 2008
or Thread Usage by BPEL Processes

Based on comments I have received and questions I have been asked, it seems that when it comes to thread usage by BPEL processes there are a lot of confused people around. So I thought I would unravel a few mysteries about threading in BPEL. Understanding the threading model is important because it can affect the scalability both of your processes and of the BPEL engine itself.
How threads are allocated depends on the interaction pattern of the link activating the receive activity.
Interaction Patterns

The interaction pattern can be thought of as how the BPEL process is called. There are two types of interaction pattern: one-way, and two-way (also known as request-response).
One-Way Interaction

A one-way interaction pattern means that the process is invoked and then left to run, possibly returning the results via another interaction, possibly not. We often think of this as asynchronous because the caller does not wait for the process to complete. Don't confuse the actual interaction pattern with the way the process works. For example, many processes have a request-reply process model that is implemented through two one-way interactions: the client calls the process and continues working; the process does its job and then calls the client back to pass on the results. The process has a request-reply model but it works through two one-way interactions. A one-way interaction pattern is characterised by a process that has a receive activity but no corresponding reply activity. A request-reply process built from one-way interaction patterns is characterised by a receive activity with no corresponding reply, but with a corresponding invoke that delivers the callback.
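A minimal sketch of what such a process skeleton might look like in BPEL source; the partner link, port type, operation, and variable names here are illustrative, not taken from the article:

```xml
<sequence>
  <!-- One-way receive: creates the instance; note there is no matching reply -->
  <receive name="receiveInput" partnerLink="client"
           portType="client:ThreadExplorer" operation="initiate"
           variable="inputVariable" createInstance="yes"/>

  <!-- ... the process does its work here ... -->

  <!-- The result goes back as a second one-way interaction: an invoke on the
       client's callback port type, not a reply -->
  <invoke name="callbackClient" partnerLink="client"
          portType="client:ThreadExplorerCallback" operation="onResult"
          inputVariable="outputVariable"/>
</sequence>
```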
Two-Way or Request-Response Interaction

A request-response interaction pattern means that the process is invoked and the caller then waits for the process to return a result before continuing its own processing. We often think of this as synchronous because the caller appears to get the result immediately. A request-response interaction pattern is characterised by a process that has a receive activity with a corresponding reply activity. Note that it is possible to have a process that combines an initial request-response interaction, returning an initial result such as a correlation token, with a later one-way interaction that returns the final result to the caller.
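By contrast, a request-response process replies on the same operation it received on. A sketch, with names again invented for illustration:

```xml
<sequence>
  <receive name="receiveInput" partnerLink="client"
           portType="client:ThreadExplorer" operation="process"
           variable="inputVariable" createInstance="yes"/>

  <!-- ... work happens here while the caller blocks ... -->

  <!-- Reply on the same partner link and operation as the receive -->
  <reply name="replyOutput" partnerLink="client"
         portType="client:ThreadExplorer" operation="process"
         variable="outputVariable"/>
</sequence>
```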
More information on what this looks like is available in the BPEL docs.
Instrumentation

To investigate thread usage in BPEL processes I wrote a small amount of Java code that is embedded into the process inside an exec activity. The code is shown below and stores the currently executing thread group and thread name into a variable called "CurrentThread".
<bpelx:exec name="Java_Embedding_1" language="java" version="1.3">
  Thread t = Thread.currentThread();
  setVariableData("CurrentThread", t.getThreadGroup().getName() + ":" + t.getName());
</bpelx:exec>
This variable is then surfaced in the output of the process.
Simple Request-Response Thread Usage

My first test was to run a simple request-reply interaction pattern in a process. In this case the thread is taken from the inbound HTTP listener thread pool and a single thread is used to receive the request and process it. The thread group is "HTTPThreadGroup" and the actual thread I got was "AJPRequestHandler-RMICallHandler-56" (the final digits vary). So in this case the BPEL server receives the request on the same thread that is used for all processing within the process. This process is available as ThreadExplorer1.zip.
Simple One-Way Thread Usage

My next test was to run a simple one-way interaction pattern, with another one-way interaction to return the result. In this case the message is received on one thread and then passed on to a pool of worker threads to execute the BPEL process. The thread group is "main" and the actual thread I got was "WorkExecutorWorkerThread-120" (the final digits vary). So in this case the BPEL server received the request and then queued it for subsequent execution. This allows for greater scalability by limiting thread usage within the BPEL server. This process is available as ThreadExplorer2.zip.
Combination of Request-Reply and One-Way Thread Usage

The obvious next question is what happens when both interaction patterns appear in the same process, and that is what I tried next. I created a process that returns an immediate result (request-response interaction) and then later posts a further response as a one-way interaction. In this case the message is received on the listener thread and processed on that thread up to the reply. At that point the remainder of the work is passed off to a worker thread. This is exactly what one would expect from combining the two previous behaviours. This process is available as ThreadExplorer3.zip.
Effect of Flow on Threading

The next question is what happens in a flow activity. Surely this will cause multiple threads to be spawned? By default the answer is no. The ThreadExplorer4 process uses a request-reply interaction and within its processing it has a three-way flow activity. All branches of the flow are executed on the same thread. This is exactly the same as if we had used a request-response interaction with a while activity instead of a flow. This may be counter-intuitive, but the reasoning behind it is as follows:
- For scalability we want to limit thread usage
- A flow is really intended to support waiting for multiple events, such as responses from services called through pairs of one-way interactions, so we don't need to execute the branches in true parallel; pseudo-parallelism will do.
- If one branch of a flow is waiting for an event the same thread can start executing the next branch of the flow, avoiding the need for a JVM thread context switch.
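To make the default behaviour concrete, here is a sketch of the shape of flow used in such a test (activity names are invented for illustration). By default the three branches run one after another on a single thread:

```xml
<flow name="Flow_1">
  <sequence>
    <assign name="Branch1_Work"/>  <!-- runs first, on the inbound thread -->
  </sequence>
  <sequence>
    <assign name="Branch2_Work"/>  <!-- runs when branch 1 completes or blocks -->
  </sequence>
  <sequence>
    <assign name="Branch3_Work"/>  <!-- runs last, still on the same thread -->
  </sequence>
</flow>
```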
More Detail on the Effect of Flow on Threading

If we put a wait or receive into our flows then we find that the thread itself is not suspended, but is returned to the thread pool. If there are other legs of the flow capable of execution then the thread will be used to execute those before being returned to the worker pool. The same behaviour applies when a leg of a flow wakes up due to an event such as a timer expiring (wait activity) or a message arriving (receive activity). This is all shown in the ThreadExplorer5 process, which has a three-way flow that uses the initial worker thread to put all three legs into a wait activity and then uses a different worker thread when they wake up to carry on their processing. This process is available as ThreadExplorer5.zip.
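A sketch of what one such waiting leg might look like (names and duration invented for illustration); the comments mark where the thread is handed back and picked up again:

```xml
<flow name="Flow_1">
  <sequence>
    <!-- The worker thread is returned to the pool for the duration of the wait -->
    <wait name="Wait_1" for="'PT10S'"/>
    <!-- Execution resumes here, possibly on a different worker thread -->
    <assign name="RecordThread_1"/>
  </sequence>
  <!-- ...two more legs with the same wait/assign shape... -->
</flow>
```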
Creating True Parallelism in a Flow

One common use case for flow is to make multiple request-reply calls in parallel to reduce their latency. But as we saw in the previous section this doesn't work out of the box, so how can it be achieved? If we add a property "nonBlockingInvoke" to the partner link we are calling and set it to "true" then BPEL will use a separate thread to make the call. This is shown in ThreadExplorer6. Note that when you run this, the initial request is processed on the thread that received the message, a listener thread. When we do the invokes on a partner link with the nonBlockingInvoke property set to true, we see that those invokes are processed on a separate thread. Because we are calling another BPEL process the server does some optimisation and uses the same thread for all three invokes; if the partner link were on a different server then three threads would have been used. Note that these threads are all worker threads, not the listener thread. Finally, when the three flow legs join together, a new worker thread is used to carry on processing. This process is available as ThreadExplorer6.zip.
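In the 10g-era deployment descriptor, one place this property can be set is on the partner link binding in bpel.xml. A sketch, with the process id, partner link name, and WSDL file name invented for illustration:

```xml
<BPELSuitcase>
  <BPELProcess id="ThreadExplorer6" src="ThreadExplorer6.bpel">
    <partnerLinkBindings>
      <partnerLinkBinding name="SlowService">
        <property name="wsdlLocation">SlowServiceRef.wsdl</property>
        <!-- Invoke this partner on a separate worker thread -->
        <property name="nonBlockingInvoke">true</property>
      </partnerLinkBinding>
    </partnerLinkBindings>
  </BPELProcess>
</BPELSuitcase>
```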
You can read more about the nonBlockingInvoke in the BPEL documentation.