Monday Jul 22, 2013

The ateamsoab2b blogs have been moved to www.ateam-oracle.com

The A-Team has a new web site: The A-Team Chronicles

All of our articles and posts from various locations, including this site, have been consolidated there. Please visit the new site at www.ateam-oracle.com

Please note that all ateamsoab2b posts will be removed from this site on August 30th.

Thanks,

Pete Farkas

Thursday Apr 04, 2013

Case Management Part 3: Runtime Lifecycle of a Project

Now that we understand what Case Management is and the anatomy of an Oracle BPM 11g PS6 Case Management project, we can look at the simplified lifecycle of a project at runtime: how the stakeholder interacts, what happens when a Case Activity is triggered, what happens when it ends, and so on.

Case Management Runtime Lifecycle

[Read More]

Wednesday Apr 03, 2013

Case Management Part 2: Anatomy of a Project

In Oracle BPM 11g PS6, BPM Studio (JDeveloper) is the design-time environment for Case Management. This blog entry describes the make-up of a Case Management project in BPM Studio, stepping through all the associated terms and properties, but stops short of giving recommendations or best practices, which will follow in a later blog entry.

BPM Studio: Case Management Project 

[Read More]

Tuesday Apr 02, 2013

Case Management Part 1: An Introduction

With the release of PS6 on 1st April, Case Management made its appearance. In this series of blogs I intend to:

  • introduce the concept of case management
  • explain the anatomy of a case management project in BPM 11g
  • explain the lifecycle of a typical case management project at runtime
  • give pointers as to best practices in the design of a case management project

Case Management Part 1: An Introduction

[Read More]

Wednesday Mar 27, 2013

B2B Agreement Life-Cycle Management, Part 2 - Best Practices for High Volume Deployment of Agreements

Introduction

In Part 1 of the B2B Agreement Life-Cycle Management Series, we looked at the best practices for importing high volumes of agreement metadata into the repository [1]. In this post, we will take a look at the best practices for deploying the agreement metadata for run-time processing.

Background

B2B 11g supports the notion of agreements, which serve as the binding contract between different partners and documents defined within the repository. In order to facilitate run-time processing of messages, these agreements must be deployed so that the corresponding metadata can be queried for runtime validation of message content.

In production systems with a large B2B repository containing many partners, documents and agreements, system performance can be severely affected if the deployment of the agreements is not managed properly.

This note highlights the best practices for deployment of agreements when large numbers of partners and documents, in excess of several hundred, must be maintained within the B2B repository.

Symptoms

The run-time processing of inbound messages typically goes through a validation phase, where the message contents are tested against the metadata pre-defined for the particular document.

Usually these operations complete fairly quickly. However, as the number of deployed agreements within the repository goes up to several hundreds and beyond, the time to complete internal processing by the engine could go up by a significant amount. If that happens, it may be worth looking into the state of the MDS labels active within the repository.

Remedy

Bulk Deploy

All the agreements created for all trading partners and all document definitions within the B2B repository can be deployed by a single invocation of the ant utility for B2B deployment. This ensures that only 1 active label is created for all the agreements. Thus, the number of labels within MDS is kept at a minimum and does not add processing overhead to system performance when labels need to be referred to at runtime for metadata retrieval.

Multi-Agreement Deploy

In certain situations, the operating constraints on a production system may limit the possibility of carrying out a bulk deploy, as mentioned previously. In that case, it might still be helpful to deploy multiple agreements in batches. The greater the batch size, the smaller the performance overhead, since larger batches reduce the number of active MDS labels. The key objective here is to minimize the number of active MDS labels as much as possible.
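As an illustrative sketch of this batching idea (the agreement names and batch size below are hypothetical placeholders), the comma-separated -Dtpanames argument for each batch could be generated like this:

```python
# Sketch: group agreement names into comma-separated batches for
# "ant -f ant-b2b-util.xml b2bdeploy -Dtpanames=...". The agreement
# names below are hypothetical; substitute your own.

def batch_deploy_commands(agreements, batch_size):
    """Yield one ant b2bdeploy command line per batch of agreements."""
    for i in range(0, len(agreements), batch_size):
        batch = ",".join(agreements[i:i + batch_size])
        yield f'ant -f ant-b2b-util.xml b2bdeploy -Dtpanames="{batch}"'

agreements = [f"Partner{n}_Agr_In" for n in range(1, 6)]  # 5 sample names
for cmd in batch_deploy_commands(agreements, batch_size=2):
    print(cmd)
```

With 5 agreements and a batch size of 2, this emits 3 command lines, i.e. 3 active MDS labels instead of 5 for one-at-a-time deployment.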

Results

Let us take a look at the number of active MDS labels created as a result of our 2 types of agreement deployment operations. The examples cited here use the command-line utilities that are available as part of the B2B 11g install. These operations can also be performed via the B2B console, after selecting multiple agreements for deployment. However, for production systems, it is always recommended to develop custom scripts based on the available command-line utilities.

In the first case, the 2 agreements were deployed individually, one at a time.

  • ant -f ant-b2b-util.xml b2bdeploy -Dtpanames="MarketInc_OracleServices_X12_4010_850_Agr_In" 
  • ant -f ant-b2b-util.xml b2bdeploy -Dtpanames="MarketInc_OracleServices_X12_4010_997_Agr_Out"

When we investigate the B2B_LIFECYCLE table, we see that 2 MDS labels were created, one for each of the 2 agreement deployments. Note that both of these agreements and labels are in the active state.


SQL> select state, count(*), count(distinct label)
from b2b_lifecycle
group by state
/

STATE             COUNT(*) COUNT(DISTINCTLABEL)
---------------   ---------------- ------------------------------------------
Active                           2                                         2

SQL>


Alternatively, when both of these agreements were deployed together, only 1 active MDS label was created for both the active agreements. 

  • ant -f ant-b2b-util.xml b2bdeploy -Dtpanames="MarketInc_OracleServices_X12_4010_850_Agr_In,MarketInc_OracleServices_X12_4010_997_Agr_Out"

SQL> select state, count(*), count(distinct label)
from b2b_lifecycle
group by state
/

STATE             COUNT(*) COUNT(DISTINCTLABEL)
---------------    ---------------- ------------------------------------------
Active                           2                                         1

SQL>


The basic syntax for bulk deploy is the same as shown earlier, but without any agreement names on the command line, e.g.

  • ant -f ant-b2b-util.xml b2bdeploy

For a few agreements, bulk deployment can also be achieved via the B2B console UI by selecting all the agreements before clicking for deployment. In general, however, the UI is not recommended for deployment; as a best practice, scripts based on the command-line utilities should be developed for deployment of agreements.

After the deployment is completed, all the previously active agreements and labels go into an inactive state. A standard maintenance procedure should be implemented to purge the inactive agreements and labels for proper housekeeping.

Summary

The key objective is to minimize the number of active MDS labels generated as a result of deployment of agreements for run-time processing. Ideally, a bulk deploy would result in 1 active MDS label for the entire repository.

In situations where bulk deploy is not possible, minimizing the number of active MDS labels should still be a top priority. Techniques like deploying several agreements as a comma-separated list in one command-line invocation should therefore be used whenever possible.

Acknowledgements

The material posted here has been compiled with the help from B2B Engineering and Product Management teams.

References

1. B2B Agreement Life-Cycle Mgmt Part 1 - Best Practices for High Volume Import of CPA Metadata into B2B Repository. https://blogs.oracle.com/ateamsoab2b/entry/best_practices_for_high_volume

Sunday Mar 10, 2013

EDN Debugging

A customer asked me how to debug EDN. This blog shows how to debug EDN and the tools that can be used to do so.

1. Using EDN-DB-LOG

EDN comes with a useful EDN DB logging servlet to view logging information generated by the EDN component. It is only available for EDN-DB, which is based on AQ; it will not work for EDN with JMS. The servlet uses a table called “EDN_LOG_MESSAGES” in the SOA_INFRA schema. It logs the operations on the “main” event_queue and the oaoo-queue with timestamp information.

The default URL is http://<host_name>:<port_number>/soa-infra/events/edn-db-log.  In this servlet, you can enable, disable and clear logs, but you need to have the administrative role in order to access the servlet.  This is a good tool for displaying dynamic counts of un-dequeued events (potentially "stuck") in the "main" and "OAOO" queues. The log also provides information about the EDN bus when it is connected to the AQ DB.  In the screenshot below, “EVENT_SEQ:202” shows that the EDN bus is being started.

When logging is enabled, the EDN_LOG_MESSAGES table will be populated with messages until logging is disabled, so it is inadvisable to leave logging turned on for large volumes of events. It is recommended to clear the log regularly.

Messages in the log are grouped together. Usually the first line in a group indicates what operation is being performed (e.g. enqueuing an event or handling an event that has been dequeued); the event sequence number is used to group the messages, and each group is highlighted in the same color. In the screenshots below, “EVENT_SEQ:204” is dequeuing an event and “EVENT_SEQ:205” is enqueuing an event.

2. Database tables

The second method is to examine the database tables. You can check the count of potentially “stuck” events currently in the following queue tables:

  • EDN_EVENT_QUEUE_TABLE – This table is for “EDN_EVENT_QUEUE” AQ. Every event published is temporarily enqueued into this table.
  • EDN_OAOO_DELIVERY_TABLE – This table only stores events with an “OAOO” (one-and-only-one) delivery target. The event is temporarily enqueued into this table for the EDN_OAOO_QUEUE AQ.

An event with an OAOO delivery target travels through both tables: first it is stored in EDN_EVENT_QUEUE_TABLE and then in EDN_OAOO_DELIVERY_TABLE.

This example shows the event enq'ed in "edn_event_queue".

Another alternative is to check the count from the following database views:

  • AQ$EDN_EVENT_QUEUE_TABLE: There are two rows for every event enqueued into "edn_event_queue".
  • AQ$EDN_OAOO_DELIVERY_TABLE: There is one row for every event enqueued into "edn_oaoo_queue". 

This example shows further details about that event which is deq'ed by the subscribers of "edn_event_queue".

The AQ$EDN_EVENT_QUEUE_TABLE.MSG_STATE column shows the state of the message. The states are listed below:

  • 0 (Ready) – The message is ready to be processed, i.e., either the delay time of the message has passed or the message did not have a delay time specified.
  • 1 (Wait) – The delay specified by message_properties_t.delay while executing dbms_aq.enqueue has not been reached.
  • 2 (Processed) – The message has been successfully processed (dequeued) but will remain in the queue until the retention_time specified for the queue while executing dbms_aqadm.create_queue has been reached.
  • 3 (Expired) – The message was not successfully processed (dequeued) within either 1) the time specified by message_properties_t.expiration while executing dbms_aq.enqueue or 2) the maximum number of dequeue attempts (max_retries) specified for the queue while executing dbms_aqadm.create_queue.
  • 8 (Deferred) – Buffered messages enqueued by a Streams Capture process.
  • 10 (Buffered Expired) – User-enqueued expired buffered messages.

If the subscriber type is 2 when there are no subscribers for the message, and there is no transaction id due to an invalid transaction, the message will be marked as UNDELIVERABLE.

When the message state is Expired, AQ$EDN_EVENT_QUEUE_TABLE.EXPIRATION_REASON will be populated with one of the following values:

  • Messages to be cleaned up later
  • MAX_RETRY_EXCEEDED
  • TIME_EXPIRATION
  • INVALID_TRANSACTION
  • PROPAGATION_FAILURE
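When scripting health checks against these views, the MSG_STATE codes can be decoded with a small lookup; a minimal sketch, with the code/name pairs taken from the state table above:

```python
# Map AQ MSG_STATE codes (as documented in the state table above)
# to readable names, for use in monitoring scripts.
MSG_STATE_NAMES = {
    0: "Ready",
    1: "Wait",
    2: "Processed",
    3: "Expired",
    8: "Deferred",
    10: "Buffered Expired",
}

def describe_msg_state(code):
    """Return the readable name for an AQ MSG_STATE code."""
    return MSG_STATE_NAMES.get(code, f"Unknown ({code})")

print(describe_msg_state(3))  # Expired
```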

3. Server Logs

The third method is to use the EM log configuration and log viewer. There are a few logger names related to EDN:

  • oracle.integration.platform.blocks.event
  • oracle.integration.platform.blocks.event.saq
  • oracle.integration.platform.blocks.event.jms

You can set log level to one of the following to capture more details:

    • TRACE:1 (FINE) - Logs the event content details, XPath filter results, and event enqueue, dequeue, publish and send operations.
    • TRACE:16 (FINER) - Logs the begin, commit and rollback statements of the XA transaction (for OAOO) and the retry count.
    • TRACE:32 (FINEST) - All of the above.

    The log level changes take effect immediately without a server restart. However, if you want the changes to persist across server restarts, make sure to check “Persist Log Level State Across Component Restarts” prior to restarting.

    At FINER or FINEST level, you may see log entries like "Began XA for OAOO." and "Rolled back XA for OAOO." These are normal messages of OAOO event delivery when there are no events waiting to be delivered. They are NOT error conditions. You may turn off these messages by setting the Java logging level to "TRACE:1 (FINE)" or a higher value. All detailed logging goes into the SOA server's diagnostic.log file configured in EM.  Below is a snippet of the diagnostic log showing the event delivery to an OAOO subscriber:


    [SRC_METHOD: finerBeganXA] Began XA for OAOO.
    [SRC_METHOD: fineEventPublished] Received event: Subject: ... Sender: .... Event: ...
    [SRC_METHOD: fineFilterResults] Filter [XPath Filter: …] for subscriber "..." returned true/false
    [SRC_METHOD: fineDequeuedEvent] Dequeued event, Subject: ... [source type ..]: business-event...
    [SRC_METHOD: fineOAOOEnqueuedEvent] Enqueued OAOO event, Subject: ... [source: ..., target: ... ]: business-event...
    [SRC_METHOD: fineOAOODequeuedEvent] Dequeued OAOO event, Subject: ... [source: ..., target: ...]: business-event...
    [SRC_METHOD: finerInsertedTryCount] Inserted try count for msgId: .... Status: ...
    [SRC_METHOD: finerRemovedTryCount] Removed try count for msgId: ...
    [SRC_METHOD: fineSentOAOOEvent] Sent OAOO event [QName: ... to target: ...]: business-event...
    [SRC_METHOD: fineCommittedOAOODelivery] Committed OAOO Delivery, Subject: ... [source: ..., target: ...]: business-event...
    [SRC_METHOD: finerBeganXA] Began XA for OAOO.
    [SRC_METHOD: finerRolledbackXA] Rolled back XA for OAOO.
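When scanning a large diagnostic.log for OAOO activity, entries like those above can be filtered on the SRC_METHOD marker; a minimal sketch (the sample lines mirror the snippet above):

```python
import re

# Extract the SRC_METHOD name from diagnostic.log lines such as
# "[SRC_METHOD: finerBeganXA] Began XA for OAOO."
SRC_METHOD_RE = re.compile(r"\[SRC_METHOD:\s*(\w+)\]")

def oaoo_methods(lines):
    """Return the SRC_METHOD names found in the given log lines."""
    return [m.group(1) for line in lines
            if (m := SRC_METHOD_RE.search(line))]

sample = [
    '[SRC_METHOD: finerBeganXA] Began XA for OAOO.',
    '[SRC_METHOD: fineSentOAOOEvent] Sent OAOO event ...',
    '[SRC_METHOD: finerRolledbackXA] Rolled back XA for OAOO.',
]
print(oaoo_methods(sample))
```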

    In some cases, more than one method may be necessary to assist in the debugging process. Below is a comparison of server and DB logging that may help you evaluate which method(s) are most suitable in your environment.

    Server Logging

    • EDN will generate standard Java logging messages when events are published, when they are pulled from persistent storage and when they are delivered.
    • The logger used by EDN depends on the implementation. For instance, EDN-DB uses "oracle.integration.platform.blocks.event.saq" and EDN-JMS uses "oracle.integration.platform.blocks.event.jms".
    • As in all Java logging, messages are written at different log levels from ERROR to FINEST. The most detailed messages (including the event body) use FINEST.
    • Loggers can also be configured in logging.xml in your config directory.

    DB Logging

    • If you are using EDN-DB, a lot of debugging information may not be accessible, because much of the activity occurs in the database and cannot be logged by the server. Hence, a servlet web page that accesses the debug logging table is provided to assist the debugging process. The page is located at: http://hostname:port/soa-infra/events/edn-db-log, and you do need the administrative role to access the servlet page.
    • There are commands on the servlet web page to enable and disable logging and for clearing the log table. The table will be filled with messages, so it is inadvisable to leave logging turned on for large amounts of events. It is recommended to clear the log regularly.
    • Messages in the log are grouped together. Usually the first line in the group will indicate what operation is being performed (e.g. enqueuing an event or handling an event that has been dequeued).

    Mediator Instance Tracking

    Mediator supports three modes for instance tracking by changing the audit level in EM->SOA->SOA-INFRA->SOA Administration->Mediator Properties:

    1. Off - No instance tracking for successfully completed instances; however, instances and faulted instances are still created in this mode. No audit trail is created with this flag.
    2. Production - Instance tracking is enabled for all instances. All audit details are logged, except the details of assign activities; payloads are not captured.
    3. Development - Instance tracking is enabled for all instances. All audit details are logged, and the payloads are also captured.

    The following tables are used by Mediator to store the instance and audit trail data:

    1. MEDIATOR_INSTANCE - This table contains one row for each mediator instance. Each instance has a unique id. It stores ecid, composite instance id and parent component id from normalized message and overall state of an instance in the component_state column.  The component state depends on the combination of the mediator case instance states, the states are listed here.
    2. MEDIATOR_CASE_INSTANCE - This table contains one row for each mediator routing rule and fault information for a routing rule is also stored.  Each case instance has one unique id.  It stores mediator instance id and case name, related fault information and information pertaining to retries.  This is the base table for executing automatic retries using fault policies.
    3. MEDIATOR_CASE_DETAIL - This table contains multiple rows for each routing rule and stores the mediator audit trail XML as a BLOB for each routing rule. The case detail rows are bound together by case id. It stores the case detail state and the audit trail for each case detail. The state of the latest case detail is the current state of the case.
    4. MEDIATOR_AUDIT_DOCUMENT - This table stores the payload at each stage of the mediator message flow; payloads are stored only when the instance tracking audit level is set to "Development". Each row in this table stores the payload at a point in the message flow, e.g., the transformed payload or the payload being sent to the target service.

    Below is a screenshot of a basic mediator project with 2 routing rules, which polls an XML file from an input folder, transforms the content, and writes the XML file to an output folder.

    When the mediator receives a message, it creates a mediator instance; then, depending on the number of routing rules, one or more case instances are created in the MEDIATOR_CASE_INSTANCE table. The engine then initializes the audit trail and stores it as an XML document. After each processing point (e.g. transformation, filter evaluation, etc.), it appends the trail messages to the audit trail XML and persists it to the audit trail tables (MEDIATOR_CASE_DETAIL.AUDIT_TRAIL and/or MEDIATOR_AUDIT_DOCUMENT); the mediator instance state is then updated.

    1. When the mediator instance kicks off, a composite instance is created in the COMPOSITE_INSTANCE table, and a unique ECID is assigned to the instance.

    select * from composite_instance where ecid='1b7e5955c26b51de:-56440391:13d41f410c6:-8000-000000000000144b'

    2. Using the ECID, you can retrieve the mediator instance data and the component state from the mediator instance table. From this point onward, MEDIATOR_INSTANCE.ID is used to retrieve the mediator case data.

    select * from mediator_instance where ecid='1b7e5955c26b51de:-56440391:13d41f410c6:-8000-000000000000144b'

    3. Depending on the number of routing rules, the mediator stores each routing rule separately in the MEDIATOR_CASE_INSTANCE table, and MEDIATOR_CASE_INSTANCE.ID is used to retrieve the case detail for each routing rule. In the above example, there are 2 routing rules.

    select * from mediator_case_instance where instance_id = 'C64B82E086BB11E2BFBE1B53FB1929E1';

    4. The audit trail of each routing rule is stored in the MEDIATOR_CASE_DETAIL table in compressed format.

    select * from mediator_case_detail where instance_id = 'C64B82E086BB11E2BFBE1B53FB1929E1';

    Below is the XML data stored in the MEDIATOR_CASE_DETAIL.AUDIT_TRAIL column. In the example below, two routing rules were executed. The first routing rule's condition evaluated to “false”, so the second routing rule was executed. The second routing rule's condition was satisfied; subsequently the message was transformed and published to the destination. If you have the audit trail level set to “Development”, you can use the audit id in the case trail to retrieve the payload from the MEDIATOR_AUDIT_DOCUMENT table for further investigation.

    CASE_ID= C64BA9F086BB11E2BFBE1B53FB1929E1

    <case_trail>
      <event type="inputPayloadReceived" status="Completed"
             parentId="C64B82E086BB11E2BFBE1B53FB1929E1" date="1362615182063"
             auditId="C64BA9F086BB11E2BFBE1B53FB1929E1">
        <message>MediatorAudit_29</message>
      </event>
    </case_trail>

    CASE_ID= C66EE96086BB11E2BFBE1B53FB1929E1

    <case_trail>
      <event type="case" id="C66EE96086BB11E2BFBE1B53FB1929E1"
             parentId="C64B82E086BB11E2BFBE1B53FB1929E1" caseName="USCustomer.Write"
             date="1362615182073" auditId="C64BA9F086BB11E2BFBE1B53FB1929E1">
        <message>MediatorAudit_0#USCustomer.Write</message>
      </event>
      <event type="condition" status="Completed"
             parentId="C66EE96086BB11E2BFBE1B53FB1929E1" date="1362615182074"
             auditId="C64BA9F086BB11E2BFBE1B53FB1929E1">
        <message>MediatorAudit_1#false#$in.CustomerData/imp1:CustomerData/Country='US'</message>
      </event>
    </case_trail>


    CASE_ID= C670700086BB11E2BFBE1B53FB1929E1

    <case_trail>
      <event type="case" id="C670700086BB11E2BFBE1B53FB1929E1"
             parentId="C64B82E086BB11E2BFBE1B53FB1929E1"
             caseName="CanadaCustomer.Write" date="1362615182083"
             auditId="C64BA9F086BB11E2BFBE1B53FB1929E1">
        <message>MediatorAudit_0#CanadaCustomer.Write</message>
      </event>
      <event type="condition" status="Completed"
             parentId="C670700086BB11E2BFBE1B53FB1929E1" date="1362615182083"
             auditId="C64BA9F086BB11E2BFBE1B53FB1929E1">
        <message>MediatorAudit_1#true#$in.CustomerData/imp1:CustomerData/Country='CA'</message>
      </event>
      <event type="transform" status="Completed"
             parentId="C670700086BB11E2BFBE1B53FB1929E1" date="1362615182102"
             auditId="C67292E086BB11E2BFBE1B53FB1929E1">
        <message>MediatorAudit_3#Customer#xsl/CustomerData_To_Customer_2.xsl</message>
      </event>
      <event type="publish" status="Completed"
             parentId="C670700086BB11E2BFBE1B53FB1929E1" date="1362615182124"
             auditId="C67292E086BB11E2BFBE1B53FB1929E1" parentRefId="mediator:C64B82E086BB11E2BFBE1B53FB1929E1:C670700086BB11E2BFBE1B53FB1929E1:oneway">
        <message>MediatorAudit_9#Write#CanadaCustomer</message>
      </event>
    </case_trail>
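Since the audit trail is plain XML, case_trail documents like those above can be inspected with standard tooling once retrieved; a minimal sketch using Python's xml.etree (the sample document is abridged from the CanadaCustomer.Write trail above):

```python
import xml.etree.ElementTree as ET

# Abridged case_trail document, based on the CanadaCustomer.Write
# audit trail shown above.
CASE_TRAIL = """<case_trail>
  <event type="case" id="C670700086BB11E2BFBE1B53FB1929E1"
         parentId="C64B82E086BB11E2BFBE1B53FB1929E1" caseName="CanadaCustomer.Write">
    <message>MediatorAudit_0#CanadaCustomer.Write</message>
  </event>
  <event type="condition" status="Completed"
         parentId="C670700086BB11E2BFBE1B53FB1929E1">
    <message>MediatorAudit_1#true#$in.CustomerData/imp1:CustomerData/Country='CA'</message>
  </event>
  <event type="publish" status="Completed"
         parentId="C670700086BB11E2BFBE1B53FB1929E1">
    <message>MediatorAudit_9#Write#CanadaCustomer</message>
  </event>
</case_trail>"""

def summarize_trail(xml_text):
    """Return (type, status) pairs for every event in a case_trail."""
    root = ET.fromstring(xml_text)
    return [(e.get("type"), e.get("status")) for e in root.findall("event")]

print(summarize_trail(CASE_TRAIL))
```

This makes it easy to spot, for example, which routing-rule conditions completed and whether a publish event was reached.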

    Wednesday Jan 30, 2013

    OSB Performance Tuning - RouterRuntimeCache

    Many customers start out with smaller projects for an initial release.  Typically, these applications require 20-30 proxy services.  But as later phases of the project roll out, the number of proxy services can increase drastically.  The RouterRuntimeCache is a cache implemented by OSB to improve performance by eliminating or reducing the time spent compiling proxy pipelines.

    By default, OSB will not compile a pipeline until a request message for a given service is received.  Once it has been compiled, the pipeline is cached in memory for re-use.  You have probably noticed in testing that the first request to a service takes longer to respond than subsequent requests, and this is a big part of the reason.  Since free heap space is often at a premium, this cache cannot be infinite in size, so it has a built-in limit.  When the cache is full, the least recently used entry is released and the pipeline currently being requested takes its place.  The next time a request comes in for the service whose pipeline was released, that pipeline has to be re-compiled and placed in cache, again forcing out the least recently used pipeline.  Once a pipeline is placed in cache, it is never removed unless forced out by a full cache as above, or unless the service is updated, forcing it to be recompiled.

    The default size limit of the RouterRuntimeCache is 100 entries (or pipelines).  It is limited by the number of services in the cache, not the memory used by the cache, so the amount of memory used by a full cache will vary greatly based on the complexity of the services, the extent and complexity of inline XQuery, etc.  If your project grows beyond 100 proxy services, system performance can degrade significantly if the cache size is not increased to hold all frequently used services.

    Unfortunately, tuning this cache is not exposed through the OSB console.  As of 11g PS5, the only way to set this parameter is via a system property specified on the Java command line.  The property name is com.bea.wli.sb.pipeline.RouterRuntimeCache.size.  For example,

    “java … -Dcom.bea.wli.sb.pipeline.RouterRuntimeCache.size=500 … weblogic.Server …”. 

    In this example, OSB will cache 500 proxies, instead of the default 100.  Because increasing the RouterRuntimeCache.size value will require more space in the heap to hold the additional proxies, be aware that you may need to reevaluate your JVM memory settings to allow OSB to continue to perform optimally.

    Tuesday Jan 29, 2013

    SOA Suite for Healthcare Integration startup errors due to expired passwords.

    Background

    SOA Suite for Healthcare Integration involves starting up the managed servers, which are, in turn, dependent on valid connections to databases. In many low-maintenance environments, such as VirtualBox images distributed for training and workshops, the database passwords are likely to expire after a certain period. The resulting SOA startup error then becomes a road-block for users not familiar with the database dependencies.

    This note describes the step-by-step instructions to resolve the password expiry errors and get the SOA Suite for Healthcare Integration environment back up and running.

    Symptoms

    The errors seen on startup of SOA server could look like the following:

    • Caused by: javax.ejb.CreateException: SDP-25700: An unexpected exception was caught.
    • Cause: weblogic.common.resourcepool.ResourceDeadException:
    • weblogic.common.ResourceException: Could not create pool connection.
    • The DBMS driver exception was: ORA-28001: the password has expired

    Remedy

    The error is caused by the fact that the database passwords in the image are set to expire after a definite period. To get past the issue, the passwords for the following database users have to be reset:

    • DEV_SOAINFRA
    • DEV_MDS
    • DEV_ORASDPM

    These users with DEV_ prefix are the default but could vary in other situations, where custom prefixes may have been used during installation of SOA Suite repository. 

    All the passwords can be set to their original value, e.g. welcome1, by logging into a SQL*Plus session as a DBA and using the following command for each database user, one at a time:

    • SQL> Alter user <username from above list> identified by welcome1;
    The above procedure is applicable to all database users. Alternatively, when a database user with an expired password attempts to log in to a SQL*Plus session, the session itself will prompt for a new password.
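As a small sketch, the ALTER USER statements for all three schemas can be generated in one pass and then pasted into (or piped to) a SQL*Plus session; the DEV_ prefix and welcome1 password follow the example above and should be adjusted for installations with custom prefixes or passwords:

```python
# Generate ALTER USER statements for the SOA schema users listed above.
# The DEV_ prefix and welcome1 password mirror the example in this post;
# adjust both for your own installation.

def reset_statements(users, password="welcome1"):
    """Return one ALTER USER statement per schema user."""
    return [f"ALTER USER {u} IDENTIFIED BY {password};" for u in users]

for stmt in reset_statements(["DEV_SOAINFRA", "DEV_MDS", "DEV_ORASDPM"]):
    print(stmt)
```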

    Results

    Below is an excerpt from a terminal session captured on the VirtualBox image distributed for SOA Suite for Healthcare Integration training. It shows how the passwords were reset using the approaches mentioned earlier.


    [oracle@soahc ~]$ . oraenv
    ORACLE_SID = [orcl] ?
    The Oracle base for ORACLE_HOME=/u01/DBInstall/product/11.2.0/dbhome_1 is /u01/DBInstall
    [oracle@soahc ~]$ sqlplus

    SQL*Plus: Release 11.2.0.1.0 Production on Date...

    Copyright (c) 1982, 2009, Oracle.  All rights reserved.

    Enter user-name: system
    Enter password: welcome1
    ERROR:
    ORA-28001: the password has expired


    Changing password for system
    New password: welcome1
    Retype new password: welcome1
    Password changed

    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options

    SQL> alter user DEV_SOAINFRA identified by welcome1;

    User altered.

    SQL> alter user DEV_MDS identified by welcome1;

    User altered.

    SQL> alter user DEV_ORASDPM identified by welcome1;

    User altered.

    SQL> conn / as sysdba
    Connected.

    SQL>


    Wednesday Jan 16, 2013

    Oracle SOA for Healthcare - Setting Endpoint Acknowledgement Acceptance

    Acknowledgment acceptance in SOA Suite for Healthcare is globally set through the console at the document level. Currently there is no way to override the global setting for an endpoint in the SOA for Healthcare console. The illustration below shows acknowledgment acceptance being set in the document type definition in the SOA for Healthcare console.

    [Read More]

    Friday Dec 21, 2012

    Oracle B2B – b2b.rowLockingForCorrelation

    In cases where the latency between the outbound EDI document and the subsequent inbound acknowledgment is very low, a race condition will likely occur when B2B tries to update the document state in the B2B_BUSINESS_MESSAGE table. The result is that the status of the original EDI document either remains in a MSG_WAIT_FA or MSG_WAIT_ACK state indefinitely if no retry value has been set, or evolves to MSG_ERROR after the retry interval and all retry attempts have expired.

    [Read More]

    Saturday Dec 15, 2012

    B2B Agreement Life-Cycle Management, Part 1 - Best Practices for High Volume CPA Import Operations with ebXML

    Introduction

    This is Part 1 of the B2B Life-Cycle Management Series. The series will discuss the best practices around various aspects of the complete B2B agreement life cycle. This post discusses the first part, which relates to the import of data artifacts into the repository. The specific use case discussed here involves CPAs for ebXML data, but the concepts are fairly generic and the process can be extended to any document or protocol.

    Background

    B2B 11g supports the ebXML messaging protocol, where multiple CPAs can be imported via command-line utilities.

    This note highlights one aspect of the best practices for CPA import, when large numbers of CPAs, in excess of several hundred, must be maintained within the B2B repository.

    Symptoms

    The import of a CPA is usually a 2-step process: first, create a soa.zip file using the b2bcpaimport utility based on a CPA properties file, and then use b2bimport to import it into the B2B repository.  The commands are provided below:

    1. ant -f ant-b2b-util.xml b2bcpaimport -Dpropfile="<Path to cpp_cpa.properties>" -Dstandard=true
    2. ant -f ant-b2b-util.xml b2bimport -Dlocalfile=true -Dexportfile="<Path to soa.zip>" -Doverwrite=true

    Usually the first command completes fairly quickly regardless of the number of CPAs in the repository. However, as the number of trading partners in the repository grows, the second command can take up to ~30 secs per operation. This can add up to a significant amount of time if hundreds of CPAs need to be imported into a production system within a limited downtime/maintenance window.

    Remedy

    In situations where a large number of entries must be imported, it is best to set up a staging environment and run the import operation for each individual CPA against an empty repository. Since this is done in an empty repository, the time taken for completion should be reasonable.

    After all the partner profiles have been imported, a full repository export can be taken to capture the metadata for all the entries in one file. 

    If this single file with all the partner entries is imported in a loaded repository, the total time taken for import of all the CPAs should see a dramatic reduction.
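    The staging workflow above can be sketched as a shell script. This is only an illustrative sketch, not a supported tool: the directory and file names (cpas/, soa.zip, all_partners.zip) are made up, and the script defaults to a dry run that prints the ant commands instead of executing them.

```shell
#!/bin/sh
# Sketch of the staged CPA import workflow; all file names are illustrative.
# Set DRY_RUN=0 to execute the real ant commands against a staging repository.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "$@"            # dry run: show the command without executing it
    else
        "$@"
    fi
}

# Step 1: import each CPA individually into the EMPTY staging repository.
for props in cpas/*.properties; do
    run ant -f ant-b2b-util.xml b2bcpaimport -Dpropfile="$props" -Dstandard=true
    run ant -f ant-b2b-util.xml b2bimport -Dlocalfile=true -Dexportfile="soa.zip" -Doverwrite=true
done

# Step 2: export the whole staging repository into a single archive.
run ant -f ant-b2b-util.xml b2bexport -Dexportfile="all_partners.zip" -Dlocalfile=true

# Step 3: import the single archive into the loaded production repository.
run ant -f ant-b2b-util.xml b2bimport -Dlocalfile=true -Dexportfile="all_partners.zip" -Doverwrite=true
```

    Steps 1 and 2 run against the staging environment; only step 3 touches the production repository, which is what keeps the maintenance window short.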

    Results

    Let us take a look at the numbers to see the benefit of this approach. With a pre-loaded repository of ~400 partners, the individual import time for each entry takes ~30 secs. So, if we had to import another 100 partners, the individual entries will take ~50 minutes (100 times ~30 secs). On the other hand, if we prepare the repository export file of the same 100 partners from a staging environment earlier, the import takes about ~5 mins.

    The total processing time for loading the metadata, especially in a production environment, can thus be shortened by almost a factor of 10.

    Summary

    The following diagram summarizes the entire approach and process.

    Acknowledgements

    The material posted here has been compiled with the help from B2B Engineering and Product Management teams.

    Monday Dec 10, 2012

    Oracle B2B - Synchronous Request Reply

    Beginning with Oracle SOA Suite PS5 (11.1.1.6), B2B supports synchronous request reply over http using the b2b/syncreceiver servlet. I’m attaching a demo to this blog which includes a SOA composite archive that needs to be deployed using JDeveloper, a B2B repository with two agreements that need to be deployed using the B2B console, and a test xml file that gets sent to the b2b/syncreceiver servlet using your favorite SOAP test tool.

    [Read More]

    SOA 11g Technology Adapters – ECID Propagation

    Have you ever needed to view related component instances in Enterprise Manager, only to find them broken apart due to the use of technology adapters? If so, you might be interested in a new feature introduced in SOA Suite 11.1.1.7 (PS6). The Oracle adapter JCA framework has been enhanced to propagate the ECID. This blog describes the new feature, explains how to configure your composite to enable it, and illustrates what you will see via a simple example.
    [Read More]

    Tuesday Dec 04, 2012

    How to Achieve OC4J RMI Load Balancing

    This is an old Oracle SOA and OC4J 10G topic; in fact, it is not even a SOA topic per se. Questions about RMI load balancing arise when you develop custom web applications that access human tasks running on a remote SOA 10G cluster. Having returned from a customer who faced challenges with OC4J RMI load balancing, I felt there is still some confusion in the field about how OC4J RMI load balancing works. Hence I decided to dust off an old tech note that I wrote a few years back and share it with the general public.

    Here is the tech note:

    Overview

    A typical use case in Oracle SOA is that you are building a web-based, custom human task UI that interacts with the task services housed in a remote BPEL 10G cluster. Or, more generically, you are building a web-based Java application that needs to interact with the EJBs in a remote OC4J cluster. In either case, you are talking to an OC4J cluster as an RMI client, and you must immediately ask yourself the following questions:

    1. How do I make sure that the web application, as an RMI client, evenly distributes its load across all the nodes in the remote OC4J cluster?

    2. How do I make sure that the web application, as an RMI client, is resilient to node failures in the remote OC4J cluster, so that in the unlikely case that one of the remote OC4J nodes fails, my web application will continue to function?

    That is the topic of how to achieve load balancing with OC4J RMI client.

    Solutions

    You need to configure and code RMI load balancing in two places:

    1. The provider URL can be specified as a comma-separated list of URLs, so that the initial lookup lands on one of the available servers.

    2. Choose a proper value for the oracle.j2ee.rmi.loadBalance property, which, alongside the PROVIDER_URL property, is one of the JNDI properties passed to the JNDI lookup (http://docs.oracle.com/cd/B31017_01/web.1013/b28958/rmi.htm#BABDGFBI).

    More details below:

    About the PROVIDER_URL

    The job of the JNDI property java.naming.provider.url is, when the client looks up a new context for the very first time in the client session, to provide a list of RMI contexts.

    The value of the JNDI property java.naming.provider.url is either a single URL or a comma-separated list of URLs:
    • A single URL. For example: opmn:ormi://host1:6003:oc4j_instance1/appName1
    • A comma-separated list of multiple URLs. For example: opmn:ormi://host1:6003:oc4j_instance1/appName, opmn:ormi://host2:6003:oc4j_instance1/appName, opmn:ormi://host3:6003:oc4j_instance1/appName

    When the client looks up a new Context for the very first time in the client session, it sends a query to the OPMN server referenced by the provider URL. The OPMN host and port specify the destination of that query, and the OC4J instance name and appName are effectively its “where clause”.

    When the PROVIDER URL references a single OPMN server

    Let's consider the case where the provider URL references only a single OPMN server of the destination cluster. In this case, that single OPMN server receives the query and returns a list of the qualified Contexts from all OC4J instances within the cluster, even though there is only a single OPMN server in the provider URL. A Context represents a particular starting point at a particular server for subsequent object lookups.

    For example, if the URL is opmn:ormi://host1:6003:oc4j_instance1/appName, then OPMN will return the following contexts:

    • appName on oc4j_instance1 on host1
    • appName on oc4j_instance1 on host2
    • appName on oc4j_instance1 on host3
    (provided that host1, host2 and host3 are all in the same cluster)

    Please note that:
    • One OPMN server is sufficient to find the list of all contexts from the entire cluster that satisfy the JNDI lookup query. You can verify this by shutting down appName on host1 and observing that OPMN on host1 will still return appName on host2 and appName on host3.

    When the PROVIDER URL references a comma-separated list of multiple OPMN servers


    When the JNDI property java.naming.provider.url references a comma-separated list of multiple URLs, the lookup returns exactly the same thing as with a single OPMN server: a list of qualified Contexts from the cluster.

    The purpose of having multiple OPMN servers is to provide high availability during initial context creation: if OPMN on host1 is unavailable, the client will try the lookup via OPMN on host2, and so on. After the initial lookup returns and caches a list of contexts, the JNDI URL(s) are no longer used in the same client session. That explains why removing the 3rd URL from the list of JNDI URLs will not stop the client from reaching the EJB on the 3rd server.


    About the oracle.j2ee.rmi.loadBalance Property

    After the client acquires the list of contexts, it caches it at the client side as the “list of available RMI contexts”. This list includes all the servers in the destination cluster and stays in the cache until the client session (JVM) ends. RMI load balancing against the destination cluster happens at the client side, as the client switches between the members of this list.

    Whether and how often the client refreshes the Context from this list depends on the value of oracle.j2ee.rmi.loadBalance. The documentation at http://docs.oracle.com/cd/B31017_01/web.1013/b28958/rmi.htm#BABDGFBI lists all the available values for oracle.j2ee.rmi.loadBalance:

    • client: If specified, the client interacts with the OC4J process that was initially chosen at the first lookup for the entire conversation.
    • context: Used for a Web client (servlet or JSP) that will access EJBs in a clustered OC4J environment. If specified, a new Context object for a randomly-selected OC4J instance will be returned each time InitialContext() is invoked.
    • lookup: Used for a standalone client that will access EJBs in a clustered OC4J environment. If specified, a new Context object for a randomly-selected OC4J instance will be created each time the client calls Context.lookup().
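    Putting the two settings together, a client's JNDI environment might be assembled as below. This is a sketch: the host names and application name are illustrative, and the commented-out InitialContext creation additionally requires Oracle's OC4J client libraries (the oracle.j2ee.rmi.RMIInitialContextFactory class) on the classpath.

```java
import java.util.Hashtable;
import javax.naming.Context;

public class RmiLoadBalanceExample {

    // Builds the JNDI environment for an OC4J RMI lookup.
    // urls: comma-separated opmn:ormi URLs; policy: client | context | lookup
    public static Hashtable<String, String> buildEnv(String urls, String policy) {
        Hashtable<String, String> env = new Hashtable<>();
        // Oracle's OC4J context factory; class is not part of the JDK
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "oracle.j2ee.rmi.RMIInitialContextFactory");
        env.put(Context.PROVIDER_URL, urls);            // java.naming.provider.url
        env.put("oracle.j2ee.rmi.loadBalance", policy); // client-side balancing policy
        return env;
    }

    public static void main(String[] args) {
        Hashtable<String, String> env = buildEnv(
                "opmn:ormi://host1:6003:oc4j_instance1/appName,"
              + "opmn:ormi://host2:6003:oc4j_instance1/appName",
                "lookup");
        System.out.println(env.get(Context.PROVIDER_URL));
        // Context ctx = new javax.naming.InitialContext(env); // needs OC4J client jars
        // Object taskService = ctx.lookup("MyTaskServiceBean");
    }
}
```

    With loadBalance set to "lookup", each Context.lookup() call picks a random instance from the cached list, which is usually the right choice for a standalone client.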


    Please note that regardless of the setting of the oracle.j2ee.rmi.loadBalance property, the “refresh” only occurs at the client. The client can only choose from the “list of available contexts” that was returned and cached at the very first lookup. That is, the client merely gets a new Context object from the cached “list of available RMI contexts”; it will NOT go to the OPMN server again to get the list. That also implies that if you add a node to the server cluster AFTER the client’s initial lookup, the client will not know about it, because neither the server nor the client initiates a refresh of the “list of available servers” to reflect the new node.

    About High Availability (i.e. Resilience Against Node Failure of Remote OC4J Cluster)

    What we have discussed above is about load balancing. Let's also discuss high availability.

    This is how high availability works in RMI: when the client uses a Context but gets an exception, such as a closed socket, it knows that the server referenced by that Context is problematic and will try another unused Context from the “list of available contexts”. Again, this is the list that was returned and cached at the very first lookup in the entire client session.
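    The failover behavior can be sketched generically. The Supplier list below stands in for the client-side cache of Contexts (it is not an OC4J API); the loop simply tries each cached entry in turn until one works, which is essentially what the RMI client does when a server goes down.

```java
import java.util.List;
import java.util.function.Supplier;

public class ClientFailover {

    // Tries each cached "context" in order; on failure, moves to the next one.
    // T stands in for whatever the lookup returns (e.g. an EJB home).
    public static <T> T lookupWithFailover(List<Supplier<T>> cachedContexts) {
        RuntimeException last = null;
        for (Supplier<T> ctx : cachedContexts) {
            try {
                return ctx.get();   // e.g. context.lookup("MyTaskServiceBean")
            } catch (RuntimeException e) {
                last = e;           // this node is down; try the next context
            }
        }
        throw new IllegalStateException("all cached contexts failed", last);
    }

    public static void main(String[] args) {
        // host1 is "down": its supplier throws; host2 answers.
        List<Supplier<String>> contexts = List.of(
                () -> { throw new RuntimeException("socket closed (host1)"); },
                () -> "bean-from-host2");
        System.out.println(lookupWithFailover(contexts)); // prints bean-from-host2
    }
}
```

    Note that, just as described above, the list being iterated is fixed at creation time: a node added after the list was built is never tried.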
    About

    This is the blog for the Oracle FMW Architects team, fondly known as the A-Team. The A-Team is the central, technical, outbound team within the FMW Development organization, working with Oracle's largest and most important customers. We support Oracle Sales, Consulting and Support when deep technical and architectural help is needed from Oracle Development.
    Primarily this blog is tailored to SOA issues (BPEL, OSB, BPM, Adapters, CEP, B2B, JCAP) that are encountered by our team. Expect real solutions to customer problems encountered during customer engagements.
    We will highlight best practices, workarounds, architectural discussions, and topics that are relevant in the SOA technical space today.
