I recently led an OSB POC where we needed to get high throughput from an OSB pipeline that had the following logic:
1. Receive Request
2. Send Request to External System
3. If Response has a particular value
3.1 Modify Request
3.2 Resend Request to External System
4. Send Response back to Requestor
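The flow above can be sketched in plain Python. OSB pipelines are configured rather than coded, so everything here is illustrative: `call_external` stands in for the external system and `needs_retry` for the "particular value" being checked.

```python
def handle(request, call_external):
    """Sketch of the pipeline logic; call_external is a hypothetical
    stand-in for the external system invoked by the OSB pipeline."""
    response = call_external(request)            # 2. send request to external system
    if response.get("needs_retry"):              # 3. response has a particular value
        request = dict(request, retried=True)    # 3.1 modify the request
        response = call_external(request)        # 3.2 resend to the external system
    return response                              # 4. send response back to requestor

# Usage with a stub external system that asks for one retry:
def stub(req):
    return {"needs_retry": not req.get("retried"), "value": 42}

print(handle({}, stub))  # → {'needs_retry': False, 'value': 42}
```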
It all looks very straightforward, with no nasty wrinkles along the way. The flow was implemented in OSB as follows (see diagram for more details):
The Proxy and the Business Service were each assigned their own Work Manager, effectively giving each of them their own thread pool.
Imagine our surprise when, on stressing the system, we saw it lock up, with large numbers of blocked threads. The lock-up is due to some subtleties in the OSB thread model, which are the topic of this post.
OSB goes to great lengths to avoid holding on to threads. Let's start by looking at how OSB deals with a simple request/response routing to a business service in a route node.
Most Business Services are implemented by OSB in two parts. The first part uses the request thread to send the request to the target. In the diagram this is represented by the thread T1. After sending the request to the target (the Business Service in our diagram) the request thread is released back to whatever pool it came from. A multiplexor (muxer) is used to wait for the response. When the response is received the muxer hands off the response to a new thread that is used to execute the response pipeline, this is represented in the diagram by T2.
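This two-part split can be mimicked with two thread pools in Python. Everything here is an analogy, not OSB code: the two pools stand in for the Work-Manager thread pools, and the `Timer` stands in for the muxer waiting for the response to arrive.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

request_pool = ThreadPoolExecutor(2, thread_name_prefix="T1-request")
response_pool = ThreadPoolExecutor(2, thread_name_prefix="T2-response")
names = {}
done = threading.Event()

def send_request():
    # T1 sends the request to the target, then goes straight back to its
    # pool -- no thread is held while waiting for the response.
    names["request"] = threading.current_thread().name
    # The "muxer": when the response arrives, hand it to the response pool.
    threading.Timer(0.1, lambda: response_pool.submit(run_response_pipeline, "ok")).start()

def run_response_pipeline(body):
    # T2, a fresh thread from a different pool, runs the response pipeline.
    names["response"] = threading.current_thread().name
    done.set()

request_pool.submit(send_request)
done.wait(timeout=5)
print(names["request"], "->", names["response"])
```

The point of the sketch is that the request and response run on different threads from different pools, and no thread is parked in between.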
OSB allows you to assign different Work Managers, and hence different thread pools, to each Proxy Service and Business Service. In our example we have the “Proxy Service Work Manager” assigned to the Proxy Service and the “Business Service Work Manager” assigned to the Business Service. Note that the Business Service Work Manager is only used to assign the thread that processes the response; it is never used to process the request.
This architecture means that while waiting for a response from a business service there are no threads in use, which makes for better scalability in terms of thread usage.
Note that if the Proxy and the Business Service both use the same Work Manager then there is potential for starvation. For example:
1. A burst of requests arrives and the shared Work Manager's threads are dispatched to run the request pipelines.
2. Responses start to arrive, but the threads they need are busy with, or queued behind, request work.
3. Response processing is starved until the request load eases off.
Similar problems can occur if the response pipeline blocks for some reason, maybe a database update for example.
The solution to this is to make sure that the Proxy and Business Service use different Work Managers so that they do not contend with each other for threads.
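The starvation can be shown with a single-threaded pool in Python. This is only a model of the shared Work Manager, with `request` and `response` tasks standing in for the request and response pipelines:

```python
from concurrent.futures import ThreadPoolExecutor
import time

order = []
shared = ThreadPoolExecutor(1)   # one Work Manager shared by both pipelines

def request(i):
    order.append(f"req{i}")
    time.sleep(0.05)             # request-pipeline work occupies the thread

def response(i):
    order.append(f"resp{i}")

# A burst of requests is queued before any response can be scheduled,
# so the response is starved behind all of the request work.
for i in range(3):
    shared.submit(request, i)
shared.submit(response, 0)
shared.shutdown(wait=True)
print(order)  # → ['req0', 'req1', 'req2', 'resp0']
```

With a second `ThreadPoolExecutor` for responses, `resp0` would run as soon as it was submitted instead of queuing behind the request burst.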
So what happens if there is no route node? In this case OSB just echoes the Request message as a Response message, but what happens to the threads? OSB still uses a separate thread for the response, but in this case the Work Manager used is the Default Work Manager.
So this is really a special case of the Basic Thread Model discussed above, except that the response pipeline will always execute on the Default Work Manager.
So what happens when the route node is actually calling a Proxy Service rather than a Business Service? Does the second Proxy Service use its own thread, or does it re-use the thread of the original request pipeline?
Well, as you can see from the diagram, when a route node calls another proxy service the original Work Manager is used for both request pipelines. Similarly, the response pipeline uses the Work Manager associated with the ultimate Business Service invoked via a Route Node. This fits with the earlier description of Business Services, and by extension Route Nodes: they “… use the request thread to send the request to the target”.
So what happens when you make a Service Callout to a Business Service from within a pipeline? The documentation says that “The pipeline processor will block the thread until the response arrives asynchronously” when using a Service Callout. What this means is that the target Business Service is called using the pipeline thread, and the response is also handled by the pipeline thread, which blocks waiting for it. It is the handling of this response that behaves in an unexpected way.
When a Business Service is called via a Service Callout, the calling thread is suspended after sending the request, but unlike the Route Node case the thread is not released; it waits for the response. The muxer uses the Business Service Work Manager to allocate a thread to process the response, but in this case processing the response means getting the response and notifying the blocked pipeline thread that the response is available. The original pipeline thread can then continue to process the response.
This leads to an unfortunate wrinkle. If the Business Service is using the same Work Manager as the pipeline then it is possible for starvation or a deadlock to occur. The scenario is as follows:
1. Pipeline threads from the shared Work Manager make Service Callouts and block, waiting for their responses.
2. The responses arrive, but the muxer needs a thread from the same Work Manager just to deliver the notifications.
3. Every thread in the Work Manager is blocked at step 1, so no notification can be delivered, no callout can complete, and the system deadlocks.
The solution to this is to make sure that any Business Services used by a Callout in a pipeline use a different Work Manager to the pipeline itself.
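Both the deadlock and the fix can be demonstrated with thread pools in Python. Again this is an analogy: the pools model the Work Managers, the `Event` models the blocked pipeline thread, and the `Timer` models the muxer delivering the notification.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

def callout(notify_pool):
    """A pipeline thread sends a request, then blocks awaiting notification.
    The 'muxer' (modelled by a Timer) must obtain a thread from notify_pool
    just to deliver the notification to the blocked caller."""
    response_ready = threading.Event()
    threading.Timer(0.1, lambda: notify_pool.submit(response_ready.set)).start()
    return response_ready.wait(timeout=1.0)  # True only if notified in time

# Same Work Manager for the pipeline and the callout's response handling:
shared = ThreadPoolExecutor(1)
shared_result = shared.submit(callout, shared).result()
print(shared_result)   # False: the pool's only thread is blocked inside
                       # callout(), so the notification task queues forever

# Separate Work Managers -- the fix described above:
pipeline = ThreadPoolExecutor(1)
business = ThreadPoolExecutor(1)
separate_result = pipeline.submit(callout, business).result()
print(separate_result)  # True: the notification runs on the other pool
```

The timeout on `wait` is only there so the demonstration terminates; a real deadlocked callout would block indefinitely.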
Looking back at my original workflow, we see that the same Business Service is called twice: once in a Route Node and once in a Service Callout from the response pipeline. This was the cause of my problem. The response pipeline was running on the Business Service Work Manager, but the Service Callout needed threads from that same Work Manager to handle its responses; eventually the response pipeline hogged all the available threads and no callout responses could be processed.
The solution was to create a second Business Service pointing to the same location as the original Business Service, the only difference was to assign a different Work Manager to this Business Service. This ensured that when the Service Callout completed there were always threads available to process the response because the response processing from the Service Callout had its own dedicated Work Manager.
So what this means is that you may get deadlocks caused by thread starvation if you use the same thread pool for the business service in a route node and for a business service called out from the response pipeline, because the callout needs a notification thread from the same pool the response pipeline is running on. This was the problem we were having.
You get a similar problem if you use the same work manager for the proxy request pipeline and a business service callout from that request pipeline.
It also means you may want to have different work managers for the proxy and business service in the route node.
Basically you need to think carefully about how threading impacts your proxy services.
Thanks to Jay Kasi, Gerald Nunn and Deb Ayers for helping to explain this to me. Any errors are my own and not theirs. Also thanks to my colleagues Milind Pandit and Prasad Bopardikar who travelled this road with me.