Tuesday Feb 07, 2012

11g Purging White paper

It's finally released!!! The 11g white paper on purging is now available on OTN. This white paper was written by Michael Bousamra of Oracle SOA development, with contributions from me and Sai of SOA development. You can find the 11g white paper here.

As always comments welcome!

Deepak

Tuesday Nov 29, 2011

List of all states from COMPOSITE_INSTANCE, CUBE_INSTANCE, DLV_MESSAGE tables

In many of my engagements I get asked repeatedly about the states of composites in 11g and how to decipher them, especially when troubleshooting issues around purging. I have compiled a list of all the states from the COMPOSITE_INSTANCE, CUBE_INSTANCE, DLV_MESSAGE, and MEDIATOR_INSTANCE tables. These are the primary tables used when running BPEL composites, and their rows can be correlated using the ECID.

Composite State Values


COMPOSITE_INSTANCE States

State Description
0 Running
1 Completed
2 Running with faults
3 Completed with faults
4 Running with recovery required
5 Completed with recovery required
6 Running with faults and recovery required
7 Completed with faults and recovery required
8 Running with suspended
9 Completed with suspended
10 Running with faults and suspended
11 Completed with faults and suspended
12 Running with recovery required and suspended
13 Completed with recovery required and suspended
14 Running with faults, recovery required, and suspended
15 Completed with faults, recovery required, and suspended
16 Running with terminated
17 Completed with terminated
18 Running with faults and terminated
19 Completed with faults and terminated
20 Running with recovery required and terminated
21 Completed with recovery required and terminated
22 Running with faults, recovery required, and terminated
23 Completed with faults, recovery required, and terminated
24 Running with suspended and terminated
25 Completed with suspended and terminated
26 Running with faulted, suspended, and terminated
27 Completed with faulted, suspended, and terminated
28 Running with recovery required, suspended, and terminated
29 Completed with recovery required, suspended, and terminated
30 Running with faulted, recovery required, suspended, and terminated
31 Completed with faulted, recovery required, suspended, and terminated
32 Unknown
64 -


Any value in the range of 32 to 63 indicates that the composite instance state has not been enabled, but the instance state is updated for faults, aborts, etc.
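
Since the values above behave like bit flags (1 = completed, 2 = faults, 4 = recovery required, 8 = suspended, 16 = terminated, 32 = state not enabled), a state can be decoded with BITAND. Here is a minimal sketch - the ID/ECID/STATE column names are my recollection of the 11g SOAINFRA schema, so verify them against your version:

-- Decode the COMPOSITE_INSTANCE state bitmask (sketch; column names assumed)
SELECT id, ecid, state,
       CASE WHEN BITAND(state, 1)  = 1 THEN 'completed' ELSE 'running' END AS lifecycle,
       CASE WHEN BITAND(state, 2)  > 0 THEN 'Y' ELSE 'N' END AS has_faults,
       CASE WHEN BITAND(state, 4)  > 0 THEN 'Y' ELSE 'N' END AS recovery_required,
       CASE WHEN BITAND(state, 8)  > 0 THEN 'Y' ELSE 'N' END AS suspended,
       CASE WHEN BITAND(state, 16) > 0 THEN 'Y' ELSE 'N' END AS terminated,
       CASE WHEN BITAND(state, 32) > 0 THEN 'Y' ELSE 'N' END AS state_not_enabled
  FROM composite_instance;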

CUBE_INSTANCE States

State Description
0 STATE_INITIATED
1 STATE_OPEN_RUNNING
2 STATE_OPEN_SUSPENDED
3 STATE_OPEN_FAULTED
4 STATE_CLOSED_PENDING_CANCEL
5 STATE_CLOSED_COMPLETED
6 STATE_CLOSED_FAULTED
7 STATE_CLOSED_CANCELLED
8 STATE_CLOSED_ABORTED
9 STATE_CLOSED_STALE
10 STATE_CLOSED_ROLLED_BACK


DLV_MESSAGE States

State Description
0 STATE_UNRESOLVED
1 STATE_RESOLVED
2 STATE_HANDLED
3 STATE_CANCELLED
4 STATE_MAX_RECOVERED

In 11g the Invoke_Messages table is no longer there, so to distinguish between a new message (invoke) and a callback (DLV), the DLV_MESSAGE table has a DLV_TYPE column that defines the type of message:

DLV_TYPE States


State Description
1 Invoke Message
2 DLV Message

MEDIATOR_INSTANCE States

State Description
0 No faults, but there still might be running instances
1 At least one case is aborted by user
2 At least one case is faulted (non-recoverable)
3 At least one case is faulted and one case is aborted
4 At least one case is in recovery required state
5 At least one case is in recovery required state and at least one is aborted
6 At least one case is in recovery required state and at least one is faulted
7 At least one case is in recovery required state, one faulted, and one aborted
>= 8 and < 16 Running
>= 16 Stale
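
Since all of these tables carry the ECID, here is a rough sketch of pulling every row related to a single flow in one query. The column names (CIKEY, MESSAGE_GUID, COMPONENT_STATE, etc.) are from memory of the 11g schema, so treat this as a starting point rather than gospel:

-- Gather all rows for one flow by ECID (sketch; verify column names first)
SELECT 'COMPOSITE_INSTANCE' AS src, TO_CHAR(id) AS row_id, TO_CHAR(state) AS state
  FROM composite_instance WHERE ecid = :ecid
UNION ALL
SELECT 'CUBE_INSTANCE', TO_CHAR(cikey), TO_CHAR(state)
  FROM cube_instance WHERE ecid = :ecid
UNION ALL
SELECT 'DLV_MESSAGE', message_guid, TO_CHAR(state) || ' (type ' || TO_CHAR(dlv_type) || ')'
  FROM dlv_message WHERE ecid = :ecid
UNION ALL
SELECT 'MEDIATOR_INSTANCE', id, TO_CHAR(component_state)
  FROM mediator_instance WHERE ecid = :ecid;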

In my next blog posting I will walk through the lifecycle of a BPEL process using the above states for the following use cases:

- New BPEL process - initial Receive activity

- Callback BPEL process - mid-level Receive activity

As always comments and questions welcome!

Deepak

Friday May 27, 2011

How to "turn off EDN" if its not being used in a SOA deployment

Recently a client told me that they would like to turn off EDN since they were not planning on using it in their production application. My first suggestion was to turn off the EDN datasource; however, that datasource is used by other components in the Fusion Middleware (FMW) stack and has to stay on. Consequently, here are the steps on how to "turn off", or pause, EDN.

1. Open the EM console

2. Navigate to System MBean Browser > Application Defined MBeans > oracle.as.soainfra.config > Server: SOA_cluster_name > EDNConfig > edn

EDN MBean

and set the Paused attribute to True. If true, the EDN listener/poller threads are decreased to 0, which effectively stops delivering events. This works for both EDN-DB and EDN-JMS.

If you would like to do this through WLST, the commands are listed below:

WLST commands to enable/disable EDN event delivery:

    • sca_enableEventsDelivery()
    • sca_disableEventsDelivery()

(start WLST script, for example, $MW_HOME/AS11gR1SOA/common/bin/wlst.sh)

1. connect('<user>','<password>','t3://<soa server>:<port>')

2. custom()

3. cd('oracle.as.soainfra.config/oracle.as.soainfra.config:name=edn,type=EDNConfig,Application=soa-infra')

4. set('Paused', 1)

5. exit()

Note: In step 4, changing it to set('Paused', 0) resumes EDN event delivery.

Please note: This does not require a restart of the Managed Servers to take effect.

Enjoy,

DA

Monday Dec 06, 2010

Oracle 10g BPEL dehydration store purge strategies

After many months of deliberation, here is the white paper I have been working on that defines 10g BPEL dehydration store purge strategies. You can find it here on OTN. The main goal of the paper is to give SOA administrators a way to identify their SOA data footprint and choose the purging strategy that best meets their requirements.

Please give it a read and I would welcome any feedback for further editions. I am currently working on an 11g purging white paper due for release in early 2011.

Thanks,

Deepak

Friday Sep 10, 2010

11g Dynamic partnerlink example

In one of my blog postings I mentioned using dynamic partnerlinks in 11g but didn't post an example, since 11g was not out then. Well, here is the example. I checked it against Sean Carey's example from the BPEL Cookbook, and there is only one big change: since there is no bpel.xml anymore, all references to static WSDLs are now located in composite.xml. So in my very simple example:

<reference name="HelloWorld" ui:wsdlLocation="HelloWorld.wsdl">
  <interface.wsdl interface="
http://xmlns.oracle.com/HelloWorld/HelloWorlf/HelloWorld#wsdl.interface(HelloWorld)"
                  callbackInterface="http://xmlns.oracle.com/HelloWorld/HelloWorlf/HelloWorld#wsdl.interface(HelloWorldCallback)"/>
  <binding.ws port="
http://xmlns.oracle.com/HelloWorld/HelloWorlf/HelloWorld#wsdl.endpoint(client/HelloWorld_pt)"
              location="HelloWorld.wsdl"/>
</reference>

There is still a reference to the local WSDL (not the remote one), which is used as a static interface for the actual WSDL that is passed at runtime. None of the other artifacts change in 11g, i.e. the EndpointReference variable, the ServiceName, the Address, and the assign to the partnerlink all stay the same.

In my example I am including a SequentialProcess.wsdl which is not used in the project, but it can be used as a template for defining a static WSDL for future projects. At the moment my GoDynamicBPEL process has the values for ServiceName and Address set at design time, but these can be changed to pick up the values at runtime instead to make the process truly dynamic.

The project can be downloaded from here:

As always comments and feedback welcome.

DA

Tuesday Aug 31, 2010

How to automate deletion of XML_DOCUMENT partitions in 10g SOA

I just recently returned from the Middle East, where I was visiting a client to conduct a performance analysis. As part of this exercise I spent a lot of time going over their purging strategies for their BPEL dehydration store. Since I had just finished writing the 10g BPEL purging strategy white paper, this was the perfect ground to test some of its theories. After some careful analysis we concluded that we would have to use the hybrid approach - the multi-looped purge script plus partitioning for XML_DOCUMENT - as our purging strategy. While conducting this exercise I came up with an interesting way to automate the deletion (dropping of partitions) for XML_DOCUMENT.

For this exercise we need to use the partitioning scheme and verify scripts described in Michael Bousamra's partitioning white paper. I am not going to go into the semantics of the hybrid approach (which is covered in my strategy white paper) but will focus on how to automate the deletion of XML_DOCUMENT partitions. Currently the verify script takes an array with the names of the partitions to check for deletion - this is a manual task the DBA has to perform, i.e. the partition names have to be provided by hand. The problem with this approach is that someone has to not only remember all the partition names that are ready for deletion but also keep track of partitions that were not previously dropped.

For example, say there are six monthly partitions - partitionA, partitionB, partitionC, partitionD, partitionE, partitionF - and they are dropped on a monthly basis. Let's assume that in the first month partitionA was not dropped since there were still running BPEL instances. In the second month the DBA now has to pass two names to the verify script: partitionA and partitionB. Let's assume partitionB was dropped but partitionA was not (long-running BPEL processes); then in the third month the DBA has to pass partitionA and partitionC to the verify script, and so on. So the DBA has to remember not only what names to pass to the verify script but also which partitions were not dropped in past purge cycles. As you can imagine, this can quickly get complicated with a lot of partitions in the mix. Here is how this can be automated:

1. Create a new table called XML_DOC_PARTITION_STATE with 4 columns in the ORABPEL schema. The column names are: PartitionName, StartDate, ExpiryDate, isDropped

XML_DOC_PARTITION_STATE

PartitionName StartDate  ExpiryDate isDropped
partitionA    01-01-2010 31-01-2010 N
partitionB    01-02-2010 28-02-2010 Y
partitionC    01-03-2010 31-03-2010 N
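
A minimal DDL sketch for this table - the column names match the listing above, while the data types and constraints are simply my suggestion:

CREATE TABLE xml_doc_partition_state (
  partitionname VARCHAR2(30)        NOT NULL,
  startdate     DATE                NOT NULL,
  expirydate    DATE                NOT NULL,
  isdropped     CHAR(1) DEFAULT 'N' NOT NULL,
  CONSTRAINT xml_doc_part_state_pk PRIMARY KEY (partitionname),
  CONSTRAINT xml_doc_part_state_ck CHECK (isdropped IN ('Y','N'))
);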

2. Create a DB trigger to automatically populate the above table whenever a new XML_DOCUMENT partition is created.
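
One way to do this is a DDL trigger on the ORABPEL schema. The skeleton below is only a sketch: it assumes you have the ADMINISTER DATABASE TRIGGER privilege, that the DDL text fits in 4000 characters, and it leaves the parsing of the partition name and date range as an exercise:

CREATE OR REPLACE TRIGGER xml_doc_partition_trg
AFTER ALTER ON orabpel.SCHEMA
DECLARE
  l_lines ora_name_list_t;
  l_count PLS_INTEGER;
  l_ddl   VARCHAR2(4000) := '';
BEGIN
  IF ora_dict_obj_name = 'XML_DOCUMENT' THEN
    l_count := ora_sql_txt(l_lines);            -- reassemble the triggering DDL
    FOR i IN 1 .. NVL(l_count, 0) LOOP
      l_ddl := l_ddl || l_lines(i);
    END LOOP;
    IF INSTR(UPPER(l_ddl), 'ADD PARTITION') > 0 THEN
      -- parse the partition name and its date range out of l_ddl, then:
      -- INSERT INTO xml_doc_partition_state
      --        (partitionname, startdate, expirydate, isdropped)
      -- VALUES (<parsed name>, <range start>, <range end>, 'N');
      NULL;
    END IF;
  END IF;
END;
/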

3. Create a SQL query that reads from XML_DOC_PARTITION_STATE based on the expiry date and state, and passes the names of the partitions to the verify script. (If you like, you can embed this query directly into the verify script - DC_EXEC_VERIFY.) The SQL would look like this:

SELECT PARTITIONNAME FROM XML_DOC_PARTITION_STATE WHERE EXPIRYDATE < SYSDATE-1 AND ISDROPPED='N';

The goal is to select all the partition names that have not been dropped and meet the purging date criteria. So based on the above example partitionA and partitionC would be selected and passed to the DC_EXEC_VERIFY.sql.
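
If you embed the query in the verify script, it can feed the partition array directly via BULK COLLECT. A sketch, assuming the script's collection is a simple table of names like the doc_drv_list used in the excerpts below:

DECLARE
  TYPE name_list_t IS TABLE OF VARCHAR2(30);
  doc_drv_list name_list_t;
BEGIN
  SELECT partitionname
    BULK COLLECT INTO doc_drv_list
    FROM xml_doc_partition_state
   WHERE expirydate < SYSDATE - 1
     AND isdropped = 'N';
  -- hand doc_drv_list to the verify logic (DC_EXEC_VERIFY)
END;
/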

4. At the moment the verify script creates a report stating whether a partition can be dropped or not; the actual deletion happens outside of the verify scripts. This can be automated by changing DC_VERIFY.sql to issue the ALTER TABLE command directly in the verify script, doing both in one shot:

THEN
  IF (Ci_Ref_Part_Ok(UPPER(doc_drv_list(i))))
  THEN
    IF (Ad_Part_Ok(UPPER(doc_drv_list(i))))
    THEN
      UTL_FILE.Put_Line (PART_HANDLE,'PASS: ALL DOCUMENTS ARE UNREFERENCED THUS THE');
      UTL_FILE.Put_Line (PART_HANDLE,'        XML_DOCUMENT PARTITION CAN BE DROPPED');
      Delete_Partition(UPPER(doc_drv_list(i)));   -- added: drop the partition in the same pass
    ELSE
      UTL_FILE.Put_Line (PART_HANDLE,'FAIL: AUDIT_DETAILS TABLE HAS ACTIVE DOCUMENTS');
      UTL_FILE.Put_Line (PART_HANDLE,'         THUS THE XML_DOCUMENT PARTITION CANNOT BE DROPPED');

The Delete_Partition call just does the following (pseudo code):

ALTER TABLE XML_DOCUMENT DROP PARTITION <doc_drv_list(i)>  -- the current partition name the script is looping over

The above mechanism generates the report and drops the partition in the same pass, instead of doing the two at separate times.
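
For reference, here is a minimal sketch of what Delete_Partition could look like as a local procedure inside DC_VERIFY.sql - the procedure name comes from the excerpt above, while the dynamic SQL and the DBMS_ASSERT guard are my own suggestion (index maintenance considerations are omitted):

PROCEDURE Delete_Partition (p_partition_name IN VARCHAR2) IS
BEGIN
  -- SIMPLE_SQL_NAME guards against input that is not a valid simple SQL name
  EXECUTE IMMEDIATE 'ALTER TABLE xml_document DROP PARTITION '
                    || DBMS_ASSERT.SIMPLE_SQL_NAME(p_partition_name);
END Delete_Partition;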

5. Once a partition has been dropped, update the XML_DOC_PARTITION_STATE table to set the isDropped column to 'Y' for that partition. So the code in DC_VERIFY.sql would look like:

THEN
  IF (Ci_Ref_Part_Ok(UPPER(doc_drv_list(i))))
  THEN
    IF (Ad_Part_Ok(UPPER(doc_drv_list(i))))
    THEN
      UTL_FILE.Put_Line (PART_HANDLE,'PASS: ALL DOCUMENTS ARE UNREFERENCED THUS THE');
      UTL_FILE.Put_Line (PART_HANDLE,'        XML_DOCUMENT PARTITION CAN BE DROPPED');
      Delete_Partition(UPPER(doc_drv_list(i)));
      Update_State_Table(UPPER(doc_drv_list(i)));  -- added: record the drop in the state table

where the Update_State_Table procedure just updates the state for that partition (pseudo code):

UPDATE XML_DOC_PARTITION_STATE SET ISDROPPED='Y' WHERE PARTITIONNAME = doc_drv_list(i);

COMMIT;
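
Fleshed out as a local procedure (again just a sketch, with the name taken from the excerpt above), Update_State_Table would bind the partition name rather than inlining it:

PROCEDURE Update_State_Table (p_partition_name IN VARCHAR2) IS
BEGIN
  UPDATE xml_doc_partition_state
     SET isdropped = 'Y'
   WHERE partitionname = p_partition_name;
  COMMIT;
END Update_State_Table;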

So, using the example above, if both partitionA and partitionC were dropped, XML_DOC_PARTITION_STATE would look like this:

XML_DOC_PARTITION_STATE

PartitionName StartDate  ExpiryDate isDropped
partitionA    01-01-2010 31-01-2010 Y
partitionB    01-02-2010 28-02-2010 Y
partitionC    01-03-2010 31-03-2010 Y

Summary:

By using the state-table approach you can automate the purging of XML_DOCUMENT partitions: there is no need to track or remember the partition names, and the partitions can be dropped directly from the verify script. This same methodology can be applied to other partitioned tables to help with automated purging.

As always would love to hear your comments and or questions.

DA!

Monday Jul 26, 2010

Dynamic BPEL PartnerLinks and Dynamic Routing of BPEL processes - a powerhouse combination

This is a subject that I have been working on for quite some time, and the more I have investigated it, the more I have realized that this model is very powerful in today's ever-changing business world. It allows your BPEL processes to be very dynamic, both from a development point of view and from a runtime point of view, by adding an additional intangible runtime value. Let's review each of these individually first. Please note that this blog post will not contain any code examples.

Dynamic BPEL PartnerLinks

In a lot of development environments we sometimes do not know what service we are going to call or, for that matter, where that service resides. That makes it very hard at development time to provide an endpoint to our BPEL process, so the questions to answer are: what service do we call, and where does it live?

This is where dynamic partnerlinks come in handy: they resolve both questions by providing the answers at runtime (magically ;o) ). Well, not really magically - these values are passed to the BPEL process at runtime, and using the WS-Addressing feature of BPEL PM we can invoke and receive callbacks from these dynamic endpoints without having to provide their values at design time. The only caveat is that you need to know the data model of the service you will be calling, i.e. its XSD or a variant of it. For an example please visit the BPEL Cookbook on OTN. (I have a working example of dynamic partnerlinks for 11g, which I will post at a later date.)

Dynamic Routing of BPEL processes

This aspect of the model relates to using some sort of business rules engine, which routes your workflow based on business logic/events, all of it dynamic. The only two things given are the start and the end; what path is taken to get from one to the other will not be known at design time - it is only at runtime that all will unfold. There are no caveats here, but you do need to design and develop your business rules and make them available to BPEL to allow for dynamic routing of your workflows.

Combined Model

Now if we take dynamic partnerlinks and combine them with dynamic routing, you end up with a double whammy: a fully dynamic BPEL process AND workflow!! How many times have you wished for this? Well, wait no longer, cuz here she is!!

It's a little hard to imagine, but the benefits of this model are enormous, for example:

- minimum impact on your parent BPEL process: even if you change your endpoints, or even the work that the service does, there is no impact on the calling BPEL process

- the only contract between the BPEL process and its partnerlinks is the XSD; as long as that stays constant, nothing else matters to the BPEL process

- by making all your partnerlinks dynamic (or most, or some of them - you may still have to keep some static PLs), you can change the workflow without worrying about which services get called, since they all conform to the same XSD contract

- this model allows you to quickly change the partnerlinks and the workflow (rules) with no impact on the BPEL process, since the rules and partnerlinks are external to the process - no redeployment!

- imagine being able to point to another version of your service and have it called only when a certain rule is satisfied, while still having the older service/rule up and running - and doing all of this with NO DOWN TIME!!

Of course, with all the benefits come some costs :) and the one cost (not bad odds, one cost vs. so many benefits) is the contract between services, i.e. the XSD. For this whole model to work you need to come up with a common XSD for all the dynamic services that will be called, also known as a canonical XSD. Now, this canonical XSD can be a simpler version of a "true" canonical XSD - not as many elements in it, just enough to satisfy the endpoint. But it still means work on your part to come up with this legal contract, and it may mean "modifying" services to meet the requirement. Some of the services you are calling may not be owned by you, so it may get tricky unless you can get a confirmation that the XSDs will not change in the near future (and that the endpoint/implementation will not change that frequently either).

The thing to note here is that this model works best for use cases where you are expecting a lot of change - for example in the legal industry, or in policy-driven industries (insurance, banking) - since the very dynamic nature of these businesses means they may need to change their processes very frequently. If, however, your business needs are static and do not expect much change, this model may prove too "hectic", though it will not hurt; if you are looking to implement this approach, it would be prudent to do a feasibility exercise first.

Another benefit of this approach, and in my view one of the most important, is for all your long-running BPEL processes. My definition of a long-running BPEL process (aka an "active durable process") is a process that cannot survive a rolling upgrade (usually 2 days), i.e. it has to finish where it started.

Such processes can run for months or years, and the problem with active durable processes is that if you have to change their implementation, or if you have to migrate, you need to wait for them to finish. With this model, you can turn your active durable processes into multiple short-running dynamic BPEL processes (a small design change), so any change to their implementation will have no impact at all on the BPEL processes.

So here ya go, a powerhouse at your disposal. As always comments and feedback welcome!

Deepak

Metrics to gather when doing performance exercises for Oracle SOA Suite

About

Expect the unexpected - from technology to a personal perspective on things and issues that matter to me, technology or otherwise. Primarily this blog is tailored to SOA issues that I encounter on a daily basis, and ways to analyze and resolve them with a smile. I will try to keep things light yet enlightening, and I welcome any and all feedback!! Deepak Arora
I am a Consulting Solutions Architect in the coveted A-Team at Oracle, primarily dealing with J2EE, SOA, and the whole Fusion Middleware stack, and specializing in performance, SOA architecture, and best practices for the Oracle FMW suite of products.
