Mash-Ups and Dynamically Provisioned Services

As I have been watching all of the discussions about mash-ups, I have been wondering whether the traditional integration mechanisms employed by the developer community are really well suited to this new environment, in which services (yours and others') are embraced, combined, and extended to deliver some new aggregate value proposition.

I really view mash-ups as a way to take someone's intellectual property and extend it to address some new use case for which the original designer may or may not have designed. This extension creates numerous problems. One is licensing (approved use), which I'll allow the attorneys in the audience to argue about. More interesting to me is the context under which the component in question was designed to be used, and the mechanisms to elaborate that context to help the mash-up developer understand the critical "ilities" (reliability, scalability, availability, serviceability) of using their application in a "production" environment.

As I looked for analogous problem domains, I happened upon the integrated circuit industry, an industry that has been moving from discrete semiconductors to integrated circuits to Systems-on-a-Chip (SoC). The proliferation of specialized "cores" (provided by IP-backed designs), and the recognition that customers are looking for single-chip solutions for cost, space, and power reasons, has driven the evolution of a set of processes and tools for combining these cores into a single system, and in doing so has forced substantial changes in the Electronic Design Automation (EDA) field.

I hypothesize that a similar revolution will unfold for the “service oriented” world that is being espoused by just about every rag, and in most every IT shop. Taking cues from EDA, it might look something like this:


Assemble:
Some components will need to be developed, allowing the expression of business processes and critical IC in a programmatic language. However, it is anticipated that as components are added to an Internet-enabled, distributed registry (think of the catalogs we used to receive from IC vendors), developers will become focused on assembly to enable business processes, and proven business process patterns will become building blocks at the next higher level.

Tooling/Paradigm: specific business service assembly tools plus Business Process Modeling, providing both component development and component assembly at the functional layer, with extensions to the domain model, specific to the component abstraction, that allow systemic constraints to be suitably defined. The analog in chip design seems to be VHDL/Verilog and SystemC.
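To make the registry idea concrete, here is a minimal sketch in Python of what a catalog entry might carry; the ServiceComponent structure and all of its field names are invented for illustration, not any real registry API. The point is that a published component describes both its functional interface and its systemic constraints, so assembly tools have something to reason about beyond wiring.

```python
# Illustrative only: what a registry catalog entry might carry, so that
# assembly tools can reason about systemic constraints, not just wiring.
from dataclasses import dataclass, field

@dataclass
class ServiceComponent:
    name: str
    provides: list        # operations this component exposes
    requires: list        # operations it expects other components to supply
    constraints: dict = field(default_factory=dict)  # the "ilities"

# A hypothetical catalog entry:
credit_check = ServiceComponent(
    name="credit-check",
    provides=["scoreApplicant"],
    requires=["fetchCreditReport"],
    constraints={"max_latency_ms": 500, "min_uptime": 0.999},
)
```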

Verify:
Once a workflow/business process has been composed, the developer needs to be able to verify that it behaves as intended (before worrying about the systemic constraints). The output of the verify step should be a constraining graph, based upon what we know about new and existing components, that allows the system to plan for the process deployment. Over time, as component sub-systems emerge, they will be pre-verified (eBay model).

Tooling/Paradigm: there needs to be a set of tools/processes that can be run to ensure that interfaces are appropriately wired, and test cases executed to ensure that the appropriate functional result is achieved. The analog in chip design is Functional Verification.
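As a sketch of the structural half of that verification, assuming the ServiceComponent shape from the earlier sketch: every required interface must be matched by some provided one, the service analog of checking that every net in a chip design is driven. Functional test execution would follow this check.

```python
# Illustrative only: structural verification over a list of
# ServiceComponent objects (the invented shape from the earlier sketch).
def verify_wiring(assembly):
    """Raise if any required interface in the assembly is left dangling."""
    provided = {op for c in assembly for op in c.provides}
    dangling = [(c.name, op)
                for c in assembly
                for op in c.requires
                if op not in provided]
    if dangling:
        raise ValueError(f"unwired interfaces: {dangling}")
    # functional test cases would run here, once the wiring is sound
    return True
```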

Synthesize:
Each of the components has systemic constraints; the system now needs to leverage rules/policies to determine the overarching constraints which best characterize the defined model. In this way the system can begin to understand how things like transaction performance (viewed as latency) and high availability (viewed as uptime) can be elaborated, and tradeoff decisions (cost/time) made with the help of the developer.

Tooling/Paradigm: once the system is functionally defined, the constraints need to be organized to ensure performance, and discrepancies resolved. This results in a systemic design in which no constraints remain "at odds" with one another. The analog in chip design is Design Synthesis.
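A toy version of that constraint roll-up might look like the following. The serial-composition assumptions (latencies add, availabilities multiply) and the fixed latency budget are deliberate simplifications for illustration, not a real synthesis engine.

```python
# Illustrative constraint synthesis: fold per-component constraints into
# system-level figures and flag anything "at odds" with the budget.
def synthesize(assembly, latency_budget_ms=1000):
    total_latency = sum(c.constraints.get("max_latency_ms", 0)
                        for c in assembly)
    uptime = 1.0
    for c in assembly:
        uptime *= c.constraints.get("min_uptime", 1.0)  # serial composition
    conflicts = []
    if total_latency > latency_budget_ms:
        conflicts.append(f"latency {total_latency}ms exceeds "
                         f"budget {latency_budget_ms}ms")
    # conflicts go back to the developer for cost/time tradeoff decisions
    return {"latency_ms": total_latency, "uptime": uptime,
            "conflicts": conflicts}
```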

Plan:
Now that the constraints are fully understood, we can begin to group components and map them against known capabilities of the infrastructure, selecting appropriate provisioning and operational policies/rules, and bringing those component-based plans together into a federated construct that can be used by the observability and management systems to deploy the system.
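A naive planner, just to give the flavor: map each component onto the cheapest host whose advertised capabilities satisfy its constraints. The infrastructure description here is a hypothetical structure, not any real provisioning API.

```python
# Illustrative planning step: match component constraints against
# advertised infrastructure capabilities, preferring the cheapest fit.
def plan(assembly, infrastructure):
    """infrastructure: a list of dicts like
    {'host': 'node-a', 'uptime': 0.9999, 'cost': 3} (hypothetical)."""
    placements = {}
    for c in assembly:
        fits = [h for h in infrastructure
                if h["uptime"] >= c.constraints.get("min_uptime", 0.0)]
        if not fits:
            raise ValueError(f"no host satisfies constraints of {c.name}")
        placements[c.name] = min(fits, key=lambda h: h["cost"])["host"]
    # the federated plan is handed to observability/management for deployment
    return placements
```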

Execute:
Once the plan has been developed, it can be delivered to an executor. There should be (at a minimum) two execution types: try it and do it. "Try it" should allow the interfaces to be exercised so that the plan can be validated/verified against "production-like" conditions, at which point the plan can be certified to run at scale. This two-step process is critical, as it will help us maintain control of (unintentionally) rogue applications that may not behave well. Furthermore, execution includes mandatory monitoring/auditability that can enable an operator to re-plan over time for better Service Level performance at lower cost.
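Sketching the two execution types as a single gate, with the actual deployment and monitoring calls stubbed out since those would be infrastructure-specific:

```python
# Illustrative executor: "try" exercises the plan against a
# production-like target; only a certified plan is run with "do".
def execute(placements, mode="try"):
    assert mode in ("try", "do")
    for component, host in placements.items():
        if mode == "try":
            print(f"[try] exercising {component} interfaces on {host}")
        else:
            print(f"[do] deploying {component} to {host} at scale")
            # mandatory monitoring/audit hooks would attach here, feeding
            # re-planning for better Service Level performance at lower cost
```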

Thanks for reading. As I stated above, this is just elaborating an analogy; whether it proves valuable in SOA is yet to be seen, but I'd love your comments.
