Thursday Feb 07, 2008

BPEL4People and WS-HumanTask submitted to OASIS

If you've used BPEL in anger, you know that it has limitations. One of the major ones I've encountered is the lack of workflow support. Real business processes often involve human actions, and humans don't really look like web services! Nor do web services perform roles the way humans do: there is no question of assigning a work item to a web service, whereas work item assignment for people can involve some complex considerations, including organizational structures.

BPEL4People and WS-HumanTask are a pair of specifications that have recently been submitted to OASIS for standardization. These important specifications address many of the workflow-related problems of using WS-BPEL, providing a standard way of modeling and executing business processes that include so-called human tasks.

One of my first comments to the WS-BPEL TC, when I joined it way back in 2003, was that BPEL needed to accommodate assignment of work items to humans. Unfortunately this was dismissed as "out of scope" (and that is probably a good thing, or the TC would still be active today!). I am gratified that other participants in the BPEL community realized that this was a real problem, and I'm grateful today to join them in submitting these two new specifications for standardization.

Thursday Aug 02, 2007

SOA lessons learned from business process automation

Business process automation taught us to separate business process logic from execution of business activities. Today, it teaches us about reuse of services in SOA-based systems.

SOA services != RPC

So where do services come from?

That's actually a more profound question than it appears at first blush, especially if you are a hands-on, code-first sort of developer like me. Profound, for it goes directly to the value (potential value, perhaps) of a system built using a service-oriented architecture.

What is the value of SOA? Reuse. That has always been its strength, or at least its promise. But this is not the same as code reuse, the holy grail of software design since the invention of the subroutine.

Where, then, do services come from? We must look not to code, or other artifacts of service implementation. Instead, we must look to business processes. The elements of reuse in a business process are business functions. Reusable business functions are what make the quick, reliable creation (or modification) of business processes possible. Business agility (I hate the phrase; it sounds like market-speak) depends on such services. Shorter time to market, high quality, avoiding needless code creation: all things that help create business value.

How do you create such services? Simply by looking at business processes, mining the common activities that comprise them. Do some processes involve a credit check of a customer? Sounds like a candidate for a service. Notable SOA thinkers have developed nuanced models of services in a layered fashion (John Crupi's "lasagna" model, for instance), but the basic principle applies: the high-value reuse occurs at the business process level. It's what makes modern SOA worthwhile, and not simply a language-neutral way of packaging libraries: reuse at the service composition level.

This is in sharp contrast to how a lot of services are identified and implemented today. Many developers, with a very code-centric world view, naturally look to their available reusable code assets (libraries, APIs to useful applications, and the like) when they turn to service creation. After all, these are proven candidates for reuse already, and wrapping them with more technology-neutral interfaces (web services and the like) should make them even more reusable. Very logical.

Logical, but off target. This completely misses the value of reusable services, as discussed above. Repackaging code assets is really an exercise in RPC. Does it create the benefits of reusable services that can be used for process composition? No, for such code assets are the fine-grained stuff from which business functions *may* be created, but they aren't those business functions themselves. It is rather pointless to use a lot of RPC to write something that is more easily, and efficiently, realized in native languages (C++, Java, etc.).

So what is the lesson here? As the title of this piece says, RPC is not equal to SOA services. Services must be designed from a different centre than code reuse. Process composition is the proper centre of service design.

This is, in many ways, what the business process automation folks have been telling us for years, but that is for another entry.

Monday Jun 04, 2007

ZapThink should rethink SCA and JBI

Jason Bloomberg has made some rather unkind remarks about SCA and JBI. While he is entitled to his opinion, his remarks are not well supported by any arguments or statements of fact that he puts forward. Although he asserts that there must be a better way to do SOA, he is remarkably silent about what that would look like. He should share any constructive ideas he has on this subject. He should also take a closer look at the specifications; he will find many things, including the fact that JBI and SCA are complementary technologies.

Tuesday Apr 24, 2007

What makes a service reusable?

A typical scenario: a department in a company deploys a service into its production environment. It turns out the service is quite useful, and gets reused in a variety of business processes supported by the department. We have a success story, a poster child for SOA adoption, right? All too often the answer is "wrong!"

Why wrong? Just wait for the other shoe to drop... the service is too successful, and is reused by consumers far and wide in the company. Suddenly it slows down, adversely affecting business processes. People get grumpy, fingers are pointed, and emergency plans must be hatched to scale the service to cope with its own success.

This scenario plays itself out every day, and it reveals a number of deficiencies that all point to the need for both design- and run-time governance of any SOA system. I'll save design-time governance for another day, and dwell on some aspects of run-time governance of the system.

First, a definition, then an assertion. A policy enforcement mechanism (PEM) is a mechanism in the SOA infrastructure where run-time policies (security, SLA and the like) are enforced. The PEM can enforce access controls, ensure schema compliance, guard against over-use of a service, load balance, do content-based routing and much more.

Now the assertion: a service isn't really reusable unless it is offered through a PEM. The PEM gives you operational control of the service that is completely unrelated to the logical function provided by the service. SOA enables reuse, but unregulated reuse can result in an inability to control that reuse in any fashion. In some environments that might be just fine, but in most commercial environments such a collegial situation is rare, and the risks of suffering a meltdown (as illustrated in the little scenario above) quite real, and very costly (not to mention career-limiting :-) ).

This bears restatement: a service offered without run-time policy enforcement mechanisms is not truly reusable.

Regulating reuse is just the tip of the iceberg. It is an instance of a more general problem: ensuring that services comply with all applicable policies within the production environment. Examples include auditing requirements, privacy controls (e.g., all SSNs are to be encrypted), and even service mediation (versioning, or perhaps protocol). And just to make things worse, compliance is a moving target: legal requirements change (both over time, and by legal jurisdiction), as do internal policies.

This sounds like it is pointing to a massive application maintenance headache. How in the world do I write services that comply with all these extra, changing requirements, when it is tough enough just getting the core service functions right in a timely fashion? How do I respond to what are operational issues and needs from within the normal software development life cycle?

Fortunately, there is a better way than brute-force, code-first solutions. The good old trick of "separation of concerns" is very handy here. Our PEM can be completely removed from service providers, and placed into a Policy Enforcement Point (PEP) in the SOA infrastructure. This separates the concerns neatly: the service provider concentrates on supplying the core service, and nothing more. The PEP "protects" that core service by acting as an intermediary between the service provider and the service consumer, enforcing all applicable policies so that the provider will only receive messages that are compliant. Further, the policies enforced by the PEP can be changed without affecting the service provider, allowing policies to be changed quickly and frequently, and completely outside the service's software development life cycle.
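To make the separation concrete, here is a minimal, hypothetical sketch of a PEP in Python. All of the names here are mine, invented for illustration, not any product's API: the provider stays policy-free, while policies are attached to (and swapped at) the enforcement point.

```python
# Illustrative sketch of a Policy Enforcement Point (PEP) guarding a
# service provider; names are invented, not a real product API.

class PolicyViolation(Exception):
    """Raised when a message fails a policy check at the PEP."""

class PolicyEnforcementPoint:
    def __init__(self, provider):
        self._provider = provider      # the "core" service, policy-free
        self._policies = []            # changeable without touching the provider

    def add_policy(self, policy):
        """A policy is a callable that raises PolicyViolation on failure."""
        self._policies.append(policy)

    def invoke(self, message):
        for policy in self._policies:
            policy(message)            # enforce before the provider sees anything
        return self._provider(message)

# Example policies: a simple access check, and a crude per-consumer limiter.
def require_consumer_id(message):
    if "consumer_id" not in message:
        raise PolicyViolation("missing consumer_id")

def make_rate_limiter(max_calls):
    counts = {}
    def limit(message):
        cid = message.get("consumer_id")
        counts[cid] = counts.get(cid, 0) + 1
        if counts[cid] > max_calls:
            raise PolicyViolation(f"consumer {cid} exceeded {max_calls} calls")
    return limit
```

Because the policies live at the PEP, tightening a rate limit or adding an audit check never touches the provider's code, or its software development life cycle.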

The service consumer, obviously, deals not with the service provider, but the PEP. The service offered by the PEP is often referred to as a "virtual service".

There is a lot more to be said about this. If you are interested in practical, policy-guarded service reuse (and that should be most of you SOA folks), and you are attending JavaOne in May, please attend TS-8459, titled Service Virtualization: Separating Business Logic from Policy Enforcement, Wednesday May 9th, at 6:35 PM. I'll be co-presenting this talk with Scott Morrison, from Layer 7 Technologies, who has a lot of experience with service virtualization in SOA systems. I hope to see you there!

Wednesday Mar 14, 2007

How to learn WS-BPEL 2.0?

Nearly four years ago I joined the WS-BPEL 2.0 technical committee, and from nearly the first day people have been asking me, "how do I learn BPEL?" At long last I have a good answer -- the NetBeans 5.5 enterprise pack (EP). The EP includes an integrated version of Open ESB, including its WS-BPEL 2.0 service engine.

The EP includes an intuitive interactive graphical process diagram editor. If you really like the XML language the TC crafted, there is two-way editing (between the graphic and text views of the process). This makes writing process definitions easier, since it handles a lot of the "boilerplate" XML, and makes it much harder to make a mistake. This can make learning the language syntax less painful, and shorten the learning curve. (The EP includes WSDL and XML Schema editors with similar capabilities, making these related document types easier to learn and create.)

Syntax isn't the whole story, though. What about the run-time dynamics of WS-BPEL? How can you learn them? How can you gain insight into the internal workings of the language? The best way I've seen to date is to use the EP's BPEL debugger. This allows you to do all the things you'd expect: stopping process instances, inspecting variables, setting breakpoints, etc. This lets you see what is happening "under the hood" of the BPEL service engine, answering the common question "why did it do that?" in a way that is very instructive.

If you are curious about WS-BPEL 2.0, check out the Enterprise Pack, and take the BPEL debugger for a test drive.

FYI, the WS-BPEL 2.0 service engine itself comes from Open ESB 2.0.

Monday Feb 05, 2007

WS-BPEL 2.0 (almost) out of the oven

After over 3-1/2 years, the WS-BPEL technical committee (TC) at OASIS finally hit a milestone we have all been awaiting for a very long time: the TC has submitted WS-BPEL 2.0 to OASIS for approval as an OASIS specification.

We will soon see some official announcements, webinars, etc., to properly introduce the completed specification. I'll just say that the new, soon-to-be standard is a huge improvement over the previous non-standard specification, BPEL4WS 1.1. Not only is the language more clearly and rigorously defined, but it is also far more consistent, powerful, portable, and, in general, more useful. It has certainly been worth the wait!

During this 3-1/2 year journey, I've come to better know and admire the TC members. I am also impressed with the willingness of the participating corporations to contribute time and talent (not to mention intellectual property) to the cause of creating a truly open web service-based business process language. I feel honored to work with such people, and know that we "done good."

For me this hasn't been a mere intellectual exercise. Sun has "skin in the game", as a colleague of mine is fond of saying. WS-BPEL is supported in Open ESB, the Java EE 5 SDK (which includes a subset of Open ESB), and in the NetBeans 5.5 Enterprise Pack. I still am impressed with the Enterprise Pack; the interactive WS-BPEL debugger is, to me, still a Cool Thing. So take WS-BPEL 2.0 out for a spin; it is The Way to add stateful processes to your web services.

Friday Jun 30, 2006

What's right with SCA?

My last posting expressed some of my misgivings about SCA, but also alluded to some features that I consider to be useful in SCA: the model-driven approach, the "smart wires" for connecting consumers to providers, and the fractal composition pattern.

Thursday Jun 29, 2006

What's wrong with SCA?

Service Component Architecture 0.9 is flawed in several ways. Its flawed definition of services makes building or using a service-oriented architecture problematic. Its aim to support multiple languages is laudable, but SCA is too complex and doesn't address key interoperability issues.

Tuesday Jun 06, 2006

Implementing Service-Oriented Architectures (SOA) with the Java EE 5 SDK

There is a very good article on the Sun Developer Network, entitled Implementing Service-Oriented Architectures (SOA) with the Java EE 5 SDK. This is a nice walk-through, showing exactly how to develop a composite application using various Java EE 5 technologies, orchestrated by WS-BPEL 2.0. This really shows off the NetBeans 5.5 support for Java EE 5, and its built-in support for using Open ESB and GlassFish/Sun Java System AS 9.

This article gives the complete tour of using the various technologies. I like the way the use of WS-BPEL 2.0 is folded into the tools; even debugging is integrated, allowing simultaneous debugging of the BPEL process logic and Java-based logic (such as EJBs). If you want to get your feet wet in the new world of composite service applications, this article is a very good starting point.

Friday Apr 21, 2006

Faults, Errors and WSDL

Dave Orchard has recently blogged at length about error/fault abstractions from SOAP (over HTTP) "leaking" into abstract WSDL service descriptions. While I won't comment on the specific issues he raises (and he raises a lot of them -- check it out!), his comments raised, in my mind, a more general issue: how do we handle errors in a distributed SOA?

The introduction of a network into any computing system raises the complexity enormously. The eight fallacies of distributed computing are a good summary of where the complexities come from. The average programmer is pretty used to composing an application out of reusable pieces of code from libraries, frameworks, etc. So when he writes

    x = foo(y);
to call a library function foo, he isn't too concerned about errors occurring. However, if foo is implemented as a call to, say, a web service, we now have to consider a variety of issues. Number one on the list of eight fallacies is "the network is reliable." So now I have to ask, what happens when the call to foo fails? Obviously, the function needs to be able to convey the fact that it failed. For bonus points, it ought to tell us something about the failure. Also, is it possible that the service provider actually received the request to perform "foo", and it was the response that got dropped? Or was the request lost? These considerations introduce a lot of new application states, possible error recovery paths, etc. Complexity.
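As a sketch of what this means for the caller (with a made-up TransportError standing in for whatever the transport layer actually throws), the simple x = foo(y) grows retry logic, and even then retrying is only safe if the operation is idempotent:

```python
# Hypothetical sketch: what "x = foo(y)" grows into once foo is remote.
# The transport here is simulated; real code would use HTTP, JMS, etc.

class TransportError(Exception):
    """The network failed us; we can't know if the provider saw the request."""

def call_with_retries(remote_call, request, attempts=3):
    """Retry on transport errors. NOTE: this is only safe if the service
    is idempotent, because a 'lost response' means the provider may have
    already performed the work before we retry."""
    last = None
    for _ in range(attempts):
        try:
            return remote_call(request)
        except TransportError as e:
            last = e       # request lost? response lost? the caller can't tell
    raise last
```

The comment in the middle is the whole point: the caller alone cannot distinguish a lost request from a lost response, which is why these extra application states leak into the design.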

How does this complexity fit into a WSDL description of foo? Should the abstract service description (portType or WSDL 2.0 interface) describe the possible faults that the underlying binding may (perhaps must) "leak"? If the answer is "yes", then the picture is pretty grim: the details of the binding must pollute the abstract service definition, and all our efforts to separate the two are pointless.

If you are familiar with JBI, you know that the JBI expert group answered the above question with a firm "no". JBI separates service providers from communication bindings, such that a service can be bound to multiple endpoints simultaneously, and that the service remains completely decoupled from the bindings.

So how does JBI handle the errors that arise from bindings? The answer lies in the JBI MessageExchange interface, and a couple of observations:

  • WSDL-described faults are a "normal" part of the message exchange pattern (MEP). There is nothing magic about such faults, and, in particular, WSDL fault messages should not be used to convey binding-specific errors. A fault does not cause an abrupt termination of the MEP.
  • Binding errors (don't call them faults -- that's just confusing) are not part of the standard WSDL MEPs. They represent an abrupt end to the MEP.

JBI's MessageExchange interface models this directly. In addition to all the normal, MEP-described states that an on-going exchange can enter, JBI adds an error state that abruptly ends the exchange. The cause of the error is detailed using a standard Java exception.
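The distinction can be sketched with a toy state machine, loosely modeled on JBI's MessageExchange; the names below are illustrative, not the javax.jbi API.

```python
# Toy model of the fault-vs-error distinction: a fault is a normal,
# WSDL-described MEP message; an error abruptly terminates the exchange.
# Illustrative names only -- not the javax.jbi.messaging API.

ACTIVE, DONE, ERROR = "ACTIVE", "DONE", "ERROR"

class MessageExchange:
    def __init__(self):
        self.status = ACTIVE
        self.out_message = None
        self.fault = None      # WSDL-described fault: part of the MEP
        self.error = None      # binding/transport error: ends the MEP

    def set_out(self, message):
        assert self.status == ACTIVE
        self.out_message = message
        self.status = DONE

    def set_fault(self, fault):
        # A fault is a *normal* outcome described by the service's WSDL.
        assert self.status == ACTIVE
        self.fault = fault
        self.status = DONE

    def set_error(self, exc):
        # Almost any MEP state can transition abruptly to ERROR (the
        # network is not reliable); the cause is carried as an exception.
        self.error = exc
        self.status = ERROR
```

Note the asymmetry: set_out and set_fault are legal only while the exchange is active, but set_error can fire from anywhere, which is exactly the "worst-case scenario" described below.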

Why did we do this? Because we know that a standard WSDL description of a service is not sufficient for describing all the possible states an interaction between consumer and provider will enter when conducted through a binding that uses a network. Implementations of consumers (and providers) must provide for these extra states. A good implementation will minimize the "leakage" of binding-driven error information. JBI's approach is, in a sense, the "worst-case scenario": from almost every state of the WSDL-described MEP an abrupt transition to the terminal error state can occur. This decouples the provider and consumer, but certainly doesn't eliminate the need to deal with failures. The eight fallacies have not been repealed!

Monday Apr 17, 2006

JBI and ESB

I am often asked what the relationship is between Enterprise Service Bus (ESB) and Java™ Business Integration (JBI). There is often an assumption that JBI defines an ESB, but this isn't true. JBI defines a part of an ESB: the service container.

The service container is the point where integration really happens: where IT assets (applications, protocols, databases, even data files) are turned into providers of services, consumers of services, or even both. Service containers have to deal with a wide variety of technologies, and "map" them to (and from) a standard services model.

JBI is the perfect means for constructing such service containers. It provides a standardized, plug-in architecture for bringing the right technologies to bear on particular integration tasks. The WSDL services model built into JBI is perfectly aligned with the standard services model needed for the ESB. Pragmatically speaking, building service containers from standard components is far more economical than custom-building them, or using proprietary adapters.

Service containers are not the whole ESB story. The ability to create a distributed set of service containers, link them with reliable messaging infrastructure, route messages intelligently, and administer the whole centrally: these are all features outside the service container itself (and thus outside the scope of JBI 1.0).

Several open-source projects have taken this basic idea, and are in the process of creating cool new ESBs. These include:

My apologies to any projects that I missed. (Note that there are at times complex inter-relationships between some of these projects.) There are also commercial products in the pipeline, but vendors are typically less open about their development efforts, so they will have to speak for themselves.

As you can see, JBI has found a place in the world of ESB! This is a great benefit to users of both the open-source ESBs and the commercial ones: JBI standardizes what is easily the most complex (and costly) piece of an integration fabric. By avoiding having to reinvent application adapters for each new ESB implementation, ESB architects can concentrate on innovating in what their ESBs do. This delivers more value to customers. Standardized integration components lower costs, and help avoid the hazards of vendor lock-in. So if you are looking for an ESB, look for JBI support. If you are exploring the ideas around ESB and SOA, check out the open-source projects listed above.

Tuesday Apr 11, 2006

What Is Enterprise Service Bus?

Enterprise Service Bus (ESB) is a way to create a service-oriented architecture. Leaving aside the marketing wars between various ESB vendors (and wanna-be vendors), the following are useful definitions of an ESB, and ones that we use in the Open ESB project.

In a Sentence

An Enterprise Service Bus (ESB) is a distributed middleware system for integrating enterprise IT assets using a service-oriented approach.

In a Paragraph

An Enterprise Service Bus (ESB) is a distributed infrastructure used for enterprise integration. It consists of a set of service containers, which integrate various types of IT assets. The containers are interconnected with a reliable messaging bus. Service containers adapt IT assets to a standard services model, based on XML message exchange using standardized message exchange patterns. The ESB provides services for transforming and routing messages, as well as the ability to centrally administer the distributed system.

As a Bullet List

An Enterprise Service Bus (ESB) is an infrastructure used for enterprise integration using a service-oriented approach. Its main features are:

  • It has a set of service containers, used to adapt a wide variety of IT assets to the ESB.
  • It has a reliable messaging system, used to allow the service containers to interact.
  • It has a standard (WSDL) services model, used for inter-container interaction. That is, all adapted assets are modelled as services. An asset can be a provider of services, a consumer of services, or both. The services model is based on message exchange.
  • It uses messages that are exchanged between containers using standard message exchange patterns.
  • It uses messages that consist of XML data, plus message metadata.
  • It provides message transformation services.
  • It provides message routing services.
  • It provides security features to control access to services.
  • It is centrally administered, despite being a distributed system.
  • It allows incremental changes to services without requiring shutdown or other disturbance to system availability.
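The container-plus-bus shape above can be roughly illustrated with a toy, in-memory stand-in (not any real ESB's API): containers offer adapted assets as named services, and consumers address the bus, never a provider directly.

```python
# Toy, in-memory stand-in for an ESB's core shape. A real ESB adds
# reliable messaging, XML payloads, transformation, routing rules, etc.

class Bus:
    def __init__(self):
        self._providers = {}

    def register(self, service_name, provider):
        """A service container offers an adapted asset as a named service."""
        self._providers[service_name] = provider

    def exchange(self, service_name, message):
        """Route a message exchange to whichever container provides the
        named service; the consumer never sees the provider directly."""
        return self._providers[service_name](message)

bus = Bus()
# One container adapts a "legacy" asset; consumers find it only by name.
bus.register("price-lookup", lambda msg: {"sku": msg["sku"], "price": 995})
```

Because consumers bind to a service name rather than a provider, a container can be replaced or moved without disturbing anything else on the bus, which is what makes the incremental-change bullet above achievable.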
As I mentioned at the start, ESB is a way to create a SOA, but not the only one. As we have demonstrated at Project Open ESB, JBI is an important element in constructing an ESB, but is not an ESB by itself. Open ESB isn't unique in this regard; the open JBI standard is the basis of several ESBs, both open source and closed.

Logical vs Physical Design

Back in the bad old days life was pretty simple. Applications lived in a single computer process. Then came the idea that multiple processes could be used to attack certain problems. This rapidly evolved to encompass the use of multiple computers. Moore's law (and networking) made it seemingly inevitable.

While hardware advances made horizontal and vertical scaling possible, software lagged. More recent high-level languages have included support for distributed computing (e.g., Java RMI), but these required that the developer decide, during application development, how the application was to be distributed. Changing the physical distribution of the application was a huge pain, since it touched a lot of code. Even middleware technologies suffered from this shortcoming.

The problem is the combination of writing a logical application (defining what is to be done), and physically partitioning it (defining where particular tasks are done). Ideally, these ought to be two separate steps, such that physical distribution of the application doesn't affect the logic of the application at all. In a really cool universe, the distribution can be changed on-the-fly, at run-time.

The closest I've seen any language come to this ideal was the Forte O-O 4GL, TOOL. You developed the application using a repository full of the code artifacts you needed for your application. After designing and debugging the application logic, usually in a single "node" environment (a single address space), you went through a separate "partitioning workshop", where you decided where the pieces of the application would run in your particular environment. This didn't affect the application code at all; application logic and physical distribution were completely orthogonal.

Those Forte Software guys were very clever. I joined Forte in 1997, when they were already working on R3 of this stuff. To me it was like magic. I'd spent years creating distributed C/C++ apps. If I'd had Forte, back then, I'd have been able to create those distributed applications in a fraction of the time, and I'd probably have less grey hair today!

But time marches on. The creation of distributed applications is changing yet again. The olde way of creating large applications is from reusable code entities, frameworks, and even generated code (MDA, anyone?), and partitioning the result for execution in a distributed environment. The new way is termed "composite application development", and is founded on service-oriented architecture. Applications are now (largely) aggregations of (reusable) services, plus some connective glue. Services provide coarse-grained functions. Like the Forte 4GL, distribution of functions (services) within a SOA is orthogonal to the actual application logic. Calling a locally provided service is semantically identical to calling a remotely provided one. (This is one of the reasons JBI has only a single service invocation API, which hides locality from the consumer & provider.)
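A minimal sketch of such a locality-hiding invocation idea (names invented for illustration, not JBI's actual API): the consumer resolves a service by name and gets a plain callable, whether the provider is in-process or remote.

```python
# Sketch of a single, locality-hiding invocation API. The consumer's
# code is identical for local and remote providers, so partitioning
# decisions never touch application logic. Illustrative names only.

class ServiceRegistry:
    def __init__(self):
        self._local = {}
        self._remote = {}   # name -> stub that would do network I/O

    def bind_local(self, name, fn):
        self._local[name] = fn

    def bind_remote(self, name, stub):
        self._remote[name] = stub

    def lookup(self, name):
        # The consumer gets a plain callable either way.
        return self._local.get(name) or self._remote[name]

registry = ServiceRegistry()
registry.bind_local("tax", lambda amount: round(amount * 0.07, 2))
# A real remote binding would wrap a network call; here we fake one:
registry.bind_remote("credit-check",
                     lambda customer: {"customer": customer, "ok": True})
```

Moving "tax" from a local binding to a remote one is a change to the registry, not to any consumer: the 4GL partitioning-workshop idea, recast in SOA terms.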

Developing large applications is tough enough. Being able to disregard distribution issues during development and maintenance is truly a blessing to the developer, who has enough things on his plate. Forte Software managed to deliver such benefits to developers using the vehicle of a proprietary 4GL. Today we can realize the same advantages in application development using standards-based SOA.

Why SOA?

Why bother with SOA? What benefits can it possibly generate for users? Isn't it just another instance of vendors hyping the latest acronym du jour?

At its core, SOA is a good, sensible idea. It has existed in one form or another since the 1960s. The technical "perfect storm" of the Internet, TCP/IP, HTTP and XML has created the conditions for yet another incarnation of SOA to emerge. Due to the almost universal support for those technologies, this version of SOA has the potential to have a wider, longer-lasting impact than any previous one.

SOA is a software system structuring principle, based on the concept of services, which are offered (and described) by service providers. Service consumers read service descriptions, and using ONLY those descriptions, can interact with a service provider, gaining access to whatever functions it provides.

That's a very general definition. Interaction between a consumer and provider requires some common computational infrastructure. In our shiny, new 21st century version of SOA, this means XML message exchange using web protocols, usually HTTP over TCP/IP. Service description is accomplished using WSDL. (This is my definition of SOA; others have much different (and usually more complex) ideas about modern SOA.)

That's a very brief description of what SOA is. But why bother? In a word: decoupling. In the history of software engineering, coupling between "pieces" of a software system is like nails through pieces of wood: it makes modification of the system design difficult, expensive, and time-consuming.

Software systems are normally decomposed into some sort of modules (I'd use the term 'components', but that has become a loaded word!). Good decomposition leads to modules that have just a few, well-understood associations with other modules. Software architects talk about "connections" between such modules. Obviously the more connections there are, the more difficult it is to modify one of the modules without disturbing the others. Too much coupling between modules manifests itself in several ways:

  • The need for co-ordinated software updates across two or more (sometimes many more!) modules. This can be a serious problem in a distributed system.
  • Unexpected side effects. We've all seen this one. Make a change to one module, and something apparently completely unrelated breaks. Why? Complex (sometimes invisible) coupling. What you don't know can hurt you!
  • High costs for software maintenance. Changes are difficult, and thus expensive.
  • Difficulty in "evolving" a system over time. Reuse, change, and refactoring are all difficult when changes of any sort are difficult.
These are all consequences of "nails", often hidden ones, confining your system to its current design. Consider the following snippet of code:
    z = foo.getW().getX().getY().getZ();
This is a long nail, hard-coding the association between five separate classes. (Karl Lieberherr has a lot to say about this, by the way. Check out the Law of Demeter.) This is an example from O-O, but larger-scale systems often exhibit the same close coupling, but in ways that are not so obvious.
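For illustration, here is one way to pull that nail, with placeholder classes standing in for the chain above: each object delegates, so the caller knows only its immediate collaborator.

```python
# Pulling the "long nail": instead of foo.getW().getX().getY().getZ(),
# which couples the caller to a whole chain of classes, ask the nearest
# object to do the work. Class names are placeholders from the snippet.

class Z:
    def __init__(self, value):
        self.value = value

class Y:
    def __init__(self, z):
        self._z = z
    def z_value(self):      # Y knows about Z; its callers need not
        return self._z.value

class Foo:
    def __init__(self, y):
        self._y = y
    def z_value(self):
        # Foo delegates; the caller now knows only Foo.
        return self._y.z_value()

foo = Foo(Y(Z(42)))
z = foo.z_value()           # one association instead of five
```

The cost is a little delegation boilerplate; the benefit is that rearranging the internals of the chain no longer touches any caller.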

How to avoid those "nails"? Decoupling is the first step. You must minimize how much each module "knows" about the other modules in the system. At a very fine-grained scale, this is what OOP introduced: type abstraction helped hide class implementation details from "users" of a class. Similarly, SOA introduces mechanisms to help maximize decoupling at a coarse-grained scale.

The service consumer (in SOA) bases all of its "knowledge" about a provider on the WSDL the provider has made available (typically through publication). This represents everything the consumer needs to know, so we have a small, well-defined interface between the two entities. Normally services are very coarse-grained (think "submit purchase order"); fine-grained services (like "find square root") don't make a lot of sense when I've got a perfectly good run-time library.

The consumer, dealing with only XML messages, has no knowledge of how the provider implements the service. That's good: this avoids sneaky sorts of technical coupling. It also makes changes, such as switching to a different service provider, very easy: the declared interface to the provider is either compatible with the old one, or not.

The provider is similarly decoupled from the consumer. In fact, the provider can make very few assumptions about the nature of the consumer, especially why the consumer is asking for a particular service. (There can, of course, be higher-level application context information concerning this (a BPEL process instance, for example).) All of this adds up to a very good set of circumstances for reuse of the service in many different use cases. Indeed, when we talk about service composition, we are relying on this ability to reuse services in new ways without affecting the service implementation. Decoupling is the ticket to providing this capability.

So what do I get from using a SOA?

  • Reuse and composition. This is particularly powerful for creating new business processes quickly and reliably.
  • Recomposition. The ability to alter existing business processes or other applications based on service aggregation.
  • The ability to incrementally change the system. Switching service providers, extending services, modifying service providers and consumers. All of these can be done safely, due to well-controlled coupling.
  • The ability to incrementally build the system. This is especially true of SOA-based integration.
As my colleague Mike Wright is fond of saying, SOA gives you the ability to refactor your enterprise incrementally. That certainly makes SOA worthwhile, don't you think?


