Tuesday Apr 26, 2016

Testing Integrated SOA Gateway's Service Invocation Framework (SIF)


Integrated SOA Gateway's Service Invocation Framework (SIF) is based on the Business Event System's capability to subscribe to the Java function Invoke Web Service (WebServiceInvokerSubscription.java) in order to invoke a web service.

This blog goes over the steps to deploy one of the standard database packages (FND_USER_PKG) as a web service so that it can be consumed by the Invoke Web Service subscription, and to perform the invocation from the Business Event System administrator page.

Note: This content applies to EBS versions 12.1.3 and 12.2. 

1. Enable and deploy the interface to be published as web service

Connect to EBS with the responsibility Integrated SOA Gateway and go to the Repository. There, expand Application Technology and then Application Object Library. Select User.

Now the list of interfaces for User is displayed. Click on the one with internal name FND_USER_PKG:

From the list of PL/SQL procedures and functions, select the box for 'Test User Name' and click on Generate, and then Deploy. After the generation is complete, a link to the WSDL file is enabled; this is the URL used to consume the service. Copy the WSDL link or simply click on it:

2. Test the invocation from outside EBS

SOAP UI is a nice tool to quickly test the service's proper deployment, and it only requires the URL to the WSDL generated above.

Notice that upon invocation the response of the Web Service can be seen on the panel on the right.
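If SOAP UI is not at hand, the same check can be scripted from a SQL*Plus session with UTL_HTTP. This is only a sketch: the endpoint URL is a hypothetical placeholder (take the real one from the soap:address element in the WSDL), the envelope contents are abbreviated, and the database must be allowed to reach the host (network ACLs on 11g and later):

```sql
declare
  -- Hypothetical endpoint; use the one published in the generated WSDL
  l_url      varchar2(200) := 'http://ebs.example.com:8000/webservices/SOAProvider/plsql/fnd_user_pkg/';
  l_envelope varchar2(4000);
  l_request  utl_http.req;
  l_response utl_http.resp;
  l_buffer   varchar2(32767);
begin
  l_envelope := '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
             || '<soap:Header><!-- wsse:Security header goes here --></soap:Header>'
             || '<soap:Body><!-- InputParameters payload goes here --></soap:Body>'
             || '</soap:Envelope>';
  l_request := utl_http.begin_request(l_url, 'POST', 'HTTP/1.1');
  utl_http.set_header(l_request, 'Content-Type', 'text/xml; charset=utf-8');
  utl_http.set_header(l_request, 'Content-Length', length(l_envelope));
  utl_http.write_text(l_request, l_envelope);
  l_response := utl_http.get_response(l_request);
  utl_http.read_text(l_response, l_buffer);
  dbms_output.put_line(l_buffer);  -- the SOAP response envelope
  utl_http.end_response(l_response);
end;
/
```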

3. Define a business event with a subscription to Invoke Web Service

Now test the invocation from within EBS by using the Business Event System itself. Simply create a business event and create a subscription to Web Service Invocation. Then follow the train of configuration. 

Select the service, port and operation to be invoked by this subscription:

and then

4. Test the invocation from Business Event System

Now that the event and the subscription are created, query the event and click on the Test event icon/link.

On this form three parameters are required:

- Event Key, which can be anything

- Event parameter WFBES_INPUT_HEADER set to the following:

<wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" 
  env:mustUnderstand="1" xmlns:env="http://schemas.xmlsoap.org/soap/envelope/">
  <wsse:UsernameToken xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
    <wsse:Username>SYSADMIN</wsse:Username>
    <wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">sysadmin</wsse:Password>
  </wsse:UsernameToken>
</wsse:Security>

- Event payload (XML), for instance invoking the operation for user SYSADMIN:

<tes:InputParameters xmlns:tes="http://xmlns.oracle.com/apps/fnd/soaprovider/plsql/fnd_user_pkg/testusername/">
  <tes:X_USER_NAME>SYSADMIN</tes:X_USER_NAME>
</tes:InputParameters>


By clicking on Raise in Java the subscription is executed in the same session and the response is provided immediately:

Notice the response is the same as the one seen in SOAP UI.
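The same test can also be performed programmatically with WF_EVENT.Raise from a SQL*Plus session. A minimal sketch, where the event name is a hypothetical placeholder and the two XML strings are abbreviated versions of the header and payload above:

```sql
declare
  l_parameters wf_parameter_list_t;
begin
  -- WFBES_INPUT_HEADER carries the wsse:Security header shown above
  WF_EVENT.AddParameterToList('WFBES_INPUT_HEADER',
                              '<wsse:Security ...>...</wsse:Security>',
                              l_parameters);
  WF_EVENT.Raise(p_event_name => 'xx.oracle.apps.demo.user.test',  -- hypothetical event name
                 p_event_key  => 'TEST_KEY_001',                   -- can be anything
                 p_event_data => '<tes:InputParameters ...>...</tes:InputParameters>',
                 p_parameters => l_parameters);
  commit;  -- deferred subscriptions are then executed by the agent listener
end;
/
```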


Oracle Business Event System provides the capability to invoke SOAP web services, as well as the means to test them using its own features.

Friday Aug 16, 2013

Understanding Workflow Errors - Part 2

This post is a follow-up to Understanding Workflow Errors - Part 1 and deals with some of the most common errors raised when the Workflow Engine APIs are used.

The way to trap these errors and continue processing is the same in each case: evaluate the content of the global variable WF_CORE.error_name.

Error message: 3103 Attribute 'ATTRIBUTE' does not exist for item 'TYPE/KEY'


The most likely reason for this error is a change in the Workflow definition. For instance, an attribute is added to the definition while there are open workflows that do not have it. An attempt to set a value for that attribute then succeeds for newly created processes, but fails for the ones created before the change.

Whenever possible these regressions should be foreseen and addressed in the design of the Workflow definition. A possible workaround is to add supporting PL/SQL code that checks for the exception and, when the attribute is missing, adds it on the fly before setting it:

    begin
      WF_ENGINE.SetItemAttrText(itemtype, itemkey, 'NEW_ATTRIBUTE', l_value);
    exception
      when others then
        if WF_CORE.error_name = 'WFENG_ITEM_ATTR' then
          -- older items do not have the attribute; add it, then set the value
          WF_ENGINE.AddItemAttr(itemtype, itemkey, 'NEW_ATTRIBUTE');
          WF_ENGINE.SetItemAttrText(itemtype, itemkey, 'NEW_ATTRIBUTE', l_value);
        else
          raise;
        end if;
    end;

Error message: 3120 Activity 'TYPE/ACTID' has no performer


All performers of notification activities are taken from the view WF_ROLES. The most typical reason for this error is that the originally assigned user/role has expired. While the correct fix is to unexpire the performer and let the workflow complete, there are two possible workarounds:
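Before applying either workaround, it helps to confirm that the performer is indeed gone from WF_ROLES (expired roles are filtered out of this view). JSMITH below is the example user from the workarounds:

```sql
-- An expired user/role no longer appears in WF_ROLES
select name, display_name, orig_system, orig_system_id
from   WF_ROLES
where  name = 'JSMITH';
-- no rows selected --> the performer has been expired or removed
```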

1. Change the value of the attribute holding the name of the performer. For instance, if there is an item attribute RECIPIENT pointing to the expired user JSMITH, change it to an active user/role, say JHOPKINS. You will need to call this from a SQL*Plus session:

   begin
     WF_ENGINE.SetItemAttrText(<type>, <key>, 'RECIPIENT', 'JHOPKINS');
     commit;
   end;

2. The other alternative is to update ALL the workflow runtime tables, changing everything related to JSMITH to the taking-over user JHOPKINS:

  declare
    l_attributes     wf_parameter_list_t;
    l_orig_system_id number := 29984; -- for instance, the orig_system_id of JSMITH
  begin
    WF_EVENT.AddParameterToList('OLD_USER_NAME', 'JSMITH', l_attributes);
    WF_EVENT.AddParameterToList('USER_NAME', 'JHOPKINS', l_attributes);
    WF_LOCAL_SYNCH.Propagate_Role(p_orig_system    => <orig system>, --for instance, 'FND_USR' or 'PER'
                                  p_orig_system_id => l_orig_system_id,
                                  p_attributes     => l_attributes);
    commit;
  end;

As always, use care and try these APIs on a test environment before using them in production.

Error message: 3133 Activity instance 'NAME' is not a notified activity for item 'TYPE/KEY'.


When the processing of a workflow needs to stop and wait for something to happen, a BLOCK function can be used. When the block function runs, the activity goes into NOTIFIED status. Then, to resume processing, an explicit call to WF_ENGINE.CompleteActivity must be made. But if the activity is not in NOTIFIED status, the API raises WFENG_NOT_NOTIFIED.
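For reference, the resuming call looks like the sketch below; the item type, item key, activity label and result code are all placeholders to be replaced with real values:

```sql
begin
  -- Completes the blocked activity; raises WFENG_NOT_NOTIFIED
  -- if the activity instance is not in NOTIFIED status
  WF_ENGINE.CompleteActivity('DEMOTYPE',   -- hypothetical item type
                             'KEY-001',    -- hypothetical item key
                             'BLOCK',      -- label of the blocked activity
                             null);        -- result code, if the activity has one
  commit;
end;
/
```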

There is no straightforward action to deal with this error. Rather, troubleshooting is needed to see why the activity did not go into NOTIFIED status. Sometimes the reason is that the workflow process continued to the next activity, leaving it complete, or never reached it at all. A good way to understand what the workflow process has done is to run the seeded script wfstat.sql. Check the output and follow the execution path.
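The seeded script ships with the Applications file system and is run from a SQL*Plus session connected as APPS; it dumps the status of every activity of a single workflow item (the exact prompts may vary slightly by release):

```sql
-- wfstat.sql prompts for the item type and item key of the workflow to analyze
SQL> @$FND_TOP/sql/wfstat.sql
```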

Tuesday Aug 03, 2010

Understanding AQs in Workflow

This blog post contributed by Shivdas Tomar.

In Oracle Workflow, Advanced Queues are used for inter-process communication, background processing, sending messages to external agents and so on. Many times the messages may not be dequeued by the consumers such as Workflow Agent Listeners or Workflow Background Engines. Typical symptoms include:


  • Workflow Agent Listeners or Background Engines are not able to dequeue messages in READY state.
  • Messages remain in WAIT state even after the "delay" interval has passed.
  • Processed messages are not purged after the "retention" interval has passed.
  • Queue propagation is not happening.


Always check the AQ view (AQ$<Queue_Table_Name>) for the message state, not the underlying table directly.
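A quick way to see the distribution of message states, using the WF_DEFERRED queue table as an example:

```sql
-- Message states as exposed by the AQ view (not the base table)
select msg_state, count(*)
from   APPLSYS.AQ$WF_DEFERRED
group  by msg_state
order  by msg_state;
```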

  • If messages are in READY state but consumers such as Workflow Agent Listeners and Workflow Background Engines are not able to dequeue them, there might be issues with the message recipients or AQ subscribers. Check the values of the CONSUMER_NAME and ADDRESS columns in that AQ view, and then the subscriber rule sets.
  • If event messages are not moving from one state to another, check that the AQ_TM_PROCESSES parameter's value is greater than 0.
Note: In 10gR1 and later database releases, AQ time management and many other background processes are automatically controlled by a coordinator-slave architecture called the Queue Monitor Coordinator (QMNC). QMNC dynamically spawns slaves named qXXX depending on the system load.
If you want to disable the Queue Monitor Coordinator, you must set AQ_TM_PROCESSES = 0 in your pfile or spfile. Oracle strongly recommends that you DO NOT set AQ_TM_PROCESSES = 0: an explicit zero is always respected by the database, and if you are using Oracle Streams it can cause serious problems. So either remove the AQ_TM_PROCESSES parameter altogether or do not set its value to zero.
  • Check whether processes named like 'qmnc' or 'ora_qXXX' are running at the OS level. If no such processes are running, bounce the Oracle Database after removing AQ_TM_PROCESSES or changing its value to a non-zero value as recommended in the Oracle Applications setup documentation.

  • If Queue propagation is not happening, job_queue_processes parameter's value should be checked.
Note: In 11gR1 or later database releases, the init.ora parameter JOB_QUEUE_PROCESSES does not need to be set for AQ propagations. AQ propagation is now handled by DBMS_SCHEDULER jobs rather than DBMS_JOBS.
Propagation takes advantage of the event based scheduling features of DBMS_SCHEDULER for better scalability. If the value of the JOB_QUEUE_PROCESSES database initialization parameter is zero, then that parameter does not influence the number of Oracle Scheduler jobs that can run concurrently. However, if the value is non-zero, it effectively becomes the maximum number of scheduler jobs and job queue jobs that can run concurrently. If a non-zero value is set, it should be large enough to accommodate a scheduler job for each Messaging Gateway agent to be started.
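Both parameters mentioned above can be checked in a single query; the values shown will of course differ from instance to instance:

```sql
-- 0 for aq_tm_processes means time management has been explicitly disabled
select name, value
from   v$parameter
where  name in ('aq_tm_processes', 'job_queue_processes');
```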

Gathering Stats for Workflow's Queue Tables

This blog post contributed by Ajay Verma.

Statistics gathering is an important activity: it enables the cost-based optimizer (CBO) to choose an execution plan from the various possible ones with which a particular SQL statement can be run. It is recommended to gather statistics for an object when it is in a steady state, i.e. when the corresponding tables hold a fair amount of data, so that the resulting statistics lead the CBO to pick the best execution plan.

From 10g onwards, Oracle Database introduced the Automatic Optimizer Statistics Collection feature, in which a scheduled job, GATHER_STATS_JOB, runs nightly and gathers stats for tables with either empty or stale statistics using the DBMS_STATS package APIs. This is indeed a great feature, but for highly volatile tables such as AQ/Streams tables it is quite possible that, when the stats collection job runs, these tables do not hold data representative of their full load period. One scenario is stats collection on queue tables while they are empty, with the optimizer consequently choosing poor execution plans when the tables hold a lot of data during normal workload hours.

As a direct fix for the above issue, from Database 10gR2 onwards the stats of an AQ table are locked immediately after its creation so that the automatic stats collector does not change them later. This can be confirmed as below:

SQL> CREATE TYPE event_msg_type AS OBJECT   
                  (  name            VARCHAR2(10), 
                     current_status  NUMBER(5),  
                     next_status     NUMBER(5)  );
     /
Type created.
--Create Queue Table (parameter values below are illustrative)
SQL> EXEC DBMS_AQADM.create_queue_table( -
>      queue_table        => 'test_queue_tab', -
>      queue_payload_type => 'event_msg_type')
PL/SQL procedure successfully completed. 
--Create Queue
SQL> EXEC DBMS_AQADM.create_queue(queue_name =>'test_queue', -
>      queue_table => 'test_queue_tab')
PL/SQL procedure successfully completed.
--Check for locks on stats
SQL> select owner, table_name, stattype_locked
     from dba_tab_statistics
     where stattype_locked is not null
     and owner='APPS' and table_name='TEST_QUEUE_TAB';

OWNER      TABLE_NAME       STATT
---------- ---------------- ----- 
APPS       TEST_QUEUE_TAB   ALL

--Gather stats
SQL> EXEC dbms_stats.gather_table_stats('APPS','TEST_QUEUE_TAB')
BEGIN dbms_stats.gather_table_stats('APPS','TEST_QUEUE_TAB'); END; 
ERROR at line 1: 
ORA-20005: object statistics are locked (stattype = ALL) 
ORA-06512: at "SYS.DBMS_STATS", line 18408 
ORA-06512: at "SYS.DBMS_STATS", line 18429 
ORA-06512: at line 1

As we can see above, the queue table stats are locked and hence further stats gathering fails. We can still go ahead and use the force parameter provided in the DBMS_STATS package to override any lock on statistics:

SQL> EXEC dbms_stats.gather_table_stats('APPS','TEST_QUEUE_TAB', force=>TRUE)
PL/SQL procedure successfully completed.

The recommended practice is to gather statistics with a representative queue message load and then keep the stats locked, so they are not picked up by the automatic stats collector.

What happens in Apps Instance?

From Oracle Apps 11i (on top of Oracle Database 10gR2) onwards, the automatic statistics gathering job GATHER_STATS_JOB is disabled using $APPL_TOP/admin/adstats.sql, as can be confirmed below:

SQL> select job_name, enabled from DBA_SCHEDULER_JOBS 
     where job_name = 'GATHER_STATS_JOB';

JOB_NAME                       ENABLE
------------------------------ ------
GATHER_STATS_JOB               FALSE

Oracle Apps provides separate concurrent programs which make use of procedures in the FND_STATS package to gather statistics for Apps database objects. FND_STATS is basically a wrapper around DBMS_STATS and is recommended by Oracle for stats collection in an Oracle Apps environment because of the flexibility it provides in identifying objects with empty or stale stats. The modification threshold (the % of changed rows after which stats are considered stale) is fixed at 10% in DBMS_STATS, while it can vary from 0 to 100% in the case of FND_STATS.

But FND_STATS doesn't provide the force parameter to override a lock on statistics and hence fails while gathering statistics for a queue table whose stats are locked:

SQL> select owner, table_name, stattype_locked ,last_analyzed
  2   from dba_tab_statistics where table_name='WF_DEFERRED'
  3  and owner='APPLSYS';

OWNER          TABLE_NAME      STATT LAST_ANAL
-------------- --------------- ----- ---------
APPLSYS        WF_DEFERRED     ALL   23-FEB-09
SQL> exec fnd_stats.gather_table_stats('APPLSYS','WF_DEFERRED');
BEGIN fnd_stats.gather_table_stats('APPLSYS','WF_DEFERRED'); END;
ERROR at line 1:
ORA-20005: object statistics are locked (stattype = ALL)
ORA-06512: at "APPS.FND_STATS", line 1505
ORA-06512: at line 1	

The same error is observed on running the Gather Table Statistics concurrent program for such queue tables. The Gather Schema Statistics concurrent program in general skips a table in case any error is encountered while gathering its stats, and hence queue tables with locked stats are also skipped.

Hence, currently, in order to carry out stats collection for Workflow queue tables whose stats are locked, the approach should be to temporarily unlock the queue table using DBMS_STATS's unlock API when the table has a representative load (so that correct stats are gathered), gather the statistics using FND_STATS, and lock it again.

SQL> begin
  2    dbms_stats.unlock_table_stats('APPLSYS','WF_DEFERRED');
  3    fnd_stats.gather_table_stats('APPLSYS','WF_DEFERRED');
  4    dbms_stats.lock_table_stats('APPLSYS','WF_DEFERRED');
  5  end;
  6  /
PL/SQL procedure successfully completed.
SQL> select owner, table_name, stattype_locked ,last_analyzed
  2  from dba_tab_statistics where table_name='WF_DEFERRED'
  3  and owner='APPLSYS';

OWNER            TABLE_NAME       STATT LAST_ANAL
---------------- ---------------- ----- -------------
APPLSYS          WF_DEFERRED      ALL   12-APR-09

Another approach could be to unlock the WF queue table stats during the adpatch run itself, since Oracle's automatic stats collection feature is disabled in Apps environments and stats collection in general occurs through the Apps-provided concurrent programs. What do you think?


This blog is dedicated to bringing the latest information on Oracle Workflow new features, best practices, troubleshooting techniques and important fixes directly from its Development Team.

