What is the Java cache feature: framework, features, administration, and how it ties in with the Workflow Business Event System (BES)


In a fast-paced world, we all want to cache things for faster access: let's keep this here, so that I won't have to go around the world to get it. Well, this concept has been applied to the JVM components in the application server realm as well.

While we already have an excellent article by Mike Shaw on Steven Chan's million dollar blog:
http://blogs.oracle.com/schan/2007/05/01/, I thought I would expound upon the Java caching feature in general and talk about how it works, how it is configured, and how it is administered. Also, the relationship between Java caching and the Workflow Business Event System is not made very clear in the existing articles.


Most of the content in this article has been taken from the Oracle Applications Java Caching Framework Developer's Guide, so I am not going to lay any great claim to its originality. At the same time, I will just say that I have tried to present the information there in a more digestible format for Application DBAs like me, who want to get up to speed with the various features being spewed out by the FND team.

Distributed Java Cache - what does the framework provide...

The Caching Framework provides following features:

  • Synchronization: Caching Framework takes care of all
    the synchronization issues in a multi-threaded environment.

  • Distributed Caching: Distributed caching allows one JVM
    to inform all the other JVMs about updates to any cached data. This ensures
    consistency of the cached data across all the JVMs.

  • Data Partition: This feature allows the cached data to
    be partitioned based on any partition key. This is useful for JVMs in hosting
    environments running against a Virtual Private Database (VPD). The Caching
    Framework ensures that the cache data is partitioned based on the security group.
  • Database Invalidation: This feature ensures that when
    the data in the database is updated, the corresponding cached data in the
    mid-tier is also updated. This is useful when the updates to the database
    bypass the JVMs, such as updates through Forms or SQL*Plus.

  • Event Support: This feature allows custom event handlers
    to be invoked when certain events such as invalidation or updates to the
    cached data occur. This is useful for maintaining consistency across inter-dependent
    cached data.

  • Cache Administration: Caching Framework comes with administration User
    interface, which is available under the Functional Administrator responsibility. This interface
    can be used to perform administrative operations including changing the time-out values for
    cache components, looking at cache usage statistics, and clearing caches.

  • Cache Diagnostics: Cache diagnostics is a set of diagnostic tests that can
    identify some of the common problems that may occur when using the Caching Framework.

Another good representation of the communication between components is:


How does it work?

By default, the distributed caching works by sending the
invalidation message to other JVMs. The list of machines is maintained in the
database. As each JVM starts, it makes an entry into this list (if it is not
already present). When the other JVMs receive this invalidation message, they
mark the object invalid if it is present. When the object is requested in the
other JVMs, it is loaded from the database at that time.
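The invalidation flow just described (each JVM registering itself in a shared list, broadcasting invalidation messages on update, and reloading from the database on the next access) can be sketched as a toy model in plain Java. All class and method names below are mine for illustration; this is not the real FND CacheManager implementation:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of distributed invalidation: each "Jvm" keeps a local cache,
// and an update in one Jvm sends an invalidation to every other registered Jvm.
public class DistributedCacheDemo {

    // Stand-in for the machine list maintained in the database.
    static final List<Jvm> REGISTRY = new ArrayList<>();

    // Stand-in for the database table backing the cached data.
    static final Map<String, String> DATABASE = new HashMap<>();

    static class Jvm {
        final Map<String, String> localCache = new HashMap<>();

        Jvm() { REGISTRY.add(this); }            // JVM registers itself on startup

        String get(String key) {                 // cache miss -> load from "database"
            return localCache.computeIfAbsent(key, DATABASE::get);
        }

        void update(String key, String value) {  // local update broadcasts invalidation
            DATABASE.put(key, value);
            localCache.put(key, value);
            for (Jvm other : REGISTRY) {
                if (other != this) other.localCache.remove(key);  // mark invalid elsewhere
            }
        }
    }

    public static void main(String[] args) {
        DATABASE.put("PROFILE_X", "old");
        Jvm a = new Jvm(), b = new Jvm();
        System.out.println(b.get("PROFILE_X")); // prints "old" (loaded on miss)
        a.update("PROFILE_X", "new");           // invalidates b's copy
        System.out.println(b.get("PROFILE_X")); // prints "new" (reloaded on next access)
    }
}
```

Note that the invalidation message carries only the key, not the new value; the other JVMs pay the cost of a reload only if and when the object is actually requested again.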

DB Invalidation provides a general mechanism to originate
events from the database. For Caching Framework, these events are used to
invalidate objects in a component cache.

Distributed caching ensures that these updates are seen in all the
middle-tiers. However, in some cases, these updates can happen outside of the middle-tier such as,
updates through the Forms server or concurrent programs. In such cases, the DB invalidation
mechanism can be used to ensure that the cached data is invalidated when the corresponding
database data changes. 

The following is a typical process flow for database invalidation:

  1. The user JVM (Apache/ Jserv) caches data such as profiles. The cached data is a
    representation of some profiles in the database.

  2. The user performs an action, such as updating a profile value through Jinitiator, which
    causes the data in the database to be updated while bypassing the user JVM.

  3. The cached profile value must now be marked as invalid. Subsequent access to the profile in
    the user JVM causes a cache miss, which in turn results in the updated value being loaded from the
    database.
How the Oracle Workflow Business Event System (BES) is related to this...

The Workflow Business Event System provides the necessary infrastructure support for notifying
the middle-tier of database data changes. The notification is originated by raising a
business event with the correct key. Caching Framework provides the support for processing the
notification and invalidating the corresponding cached data in all the user JVMs.

It is important to know that:
  • Every time an event is raised, it is processed in a separate thread.
  • The events are processed by the Java Deferred Agent Listener running in a standalone JVM (GSM)
    started by a concurrent process. This JVM must be running all the time. This JVM sends a
    distributed invalidation message causing the corresponding key in all of the JVMs to be marked
    invalid.
Let's see an example of processing a WF BES event:
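As a toy illustration of this event processing (all class, map, and method names here are mine; the real implementation lives in the oracle.apps.fnd caching classes), the handler receives the event name and key, maps the event to a component cache, converts the event key to cache keys, and invalidates them:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy sketch of what the cache event handler does when the Java Deferred
// Agent Listener delivers a BES event.
public class BesCacheHandlerDemo {

    // event name -> (app short name, component cache key), as would be
    // defined in the component cache's metadata
    static final Map<String, String[]> EVENT_TO_COMPONENT = new HashMap<>();
    static {
        EVENT_TO_COMPONENT.put("oracle.apps.fnd.profile.value.update",
                               new String[] {"FND", "PROFILE_CACHE"});
    }

    // stand-in for one component cache in the local JVM
    static final Map<String, String> PROFILE_CACHE = new HashMap<>();

    // stand-in for the loader's stringToKey(): parse the event key
    static List<String> stringToKey(String eventKey) {
        return Arrays.asList(eventKey);   // trivially one cache key per event here
    }

    static void onBusinessEvent(String eventName, String eventKey) {
        String[] target = EVENT_TO_COMPONENT.get(eventName);
        if (target == null) return;                   // no component cache subscribed
        for (String key : stringToKey(eventKey)) {
            PROFILE_CACHE.remove(key);                // CacheManager.invalidate analogue
            System.out.println("invalidated " + target[1] + " key=" + key);
        }
    }

    public static void main(String[] args) {
        PROFILE_CACHE.put("10001:0:0:FND_SSO_LOCAL_LOGIN_MASK", "32");
        onBusinessEvent("oracle.apps.fnd.profile.value.update",
                        "10001:0:0:FND_SSO_LOCAL_LOGIN_MASK");
        // the stale profile value is gone from the cache
        System.out.println(PROFILE_CACHE.containsKey("10001:0:0:FND_SSO_LOCAL_LOGIN_MASK"));
    }
}
```

This mirrors the sequence of log messages shown later in the Verifying Workflow Event Processing section: event received, event mapped to app/component/loader, stringToKey called, then CacheManager.invalidate invoked.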



Configuring DB Invalidation Feature

This feature is built using the business event support provided by Oracle Workflow,
the apps caching infrastructure and the underlying distributed caching functionality
provided by IAS cache.

When an update to the database data happens, a workflow event is raised. The
Workflow JVM running the Java Deferred Agent Listener processes this event. This
JVM then issues a distributed invalidation message to all the other JVMs running
against the same database, and the cached data in those JVMs is invalidated. The following
configuration is necessary for this feature:

  1. Caching Framework must be running in distributed mode. This is
    the default configuration of Caching Framework: the
    -DLONG_RUNNING_JVM=true parameter is set, which ensures that Caching Framework
    runs in distributed mode. For backward compatibility, the same can be ensured
    through the equivalent legacy cache configuration setting.

  2. The Workflow Java Deferred Agent Listener must be running. This runs as
    a thread in a special workflow service container JVM and processes the business
    events raised in the database. It should be running by
    default. To verify, make sure that Oracle Applications Manager (OAM) is running,
    log in to SSA, and perform the following steps:
    1. Select the System Administrator responsibility.
    2. Select Workflow (OAM).
    3. Select the icon next to Agent Listeners.
    4. Query for Workflow Java Deferred
      Agent Listener by selecting Name
      from the Filter dropdown list.

      Note: If the Status column
      shows Running then skip the following steps.

    5. Select the Workflow Agent Listener Service link (under the Container column).
    6. Select Workflow Agent Listener Service and, if the status is
      not running, select Start from the dropdown list. Then select Go.

      Note: Make sure the status changes to the green icon
      representing a running state.

    7. Return to the Workflow Java Deferred Agent Listener page. Select Start from the dropdown list (under Actions) and then select Go.

      The Status column should show Running.

How to check if a component is enabled for caching

Follow the steps below to check whether the distributed flag is set for a particular
component cache:
  1. Log in to the HTML Application Administrator Console (sysadmin/ sysadmin).
  2. Select the Performance tab and then select Components, which is
    located on left side navigation bar.
  3. Choose the correct application from the View dropdown list and then select the appropriate link under Component Identifier.
  4. Verify the Distributed Mode checkbox is
    checked. To enable distributed caching for the component cache, this box needs
    to be checked.
  5. If you change the setting, Apache/ Jserv must be restarted for the setting
    to take effect.

Cache Administration

The following Java -D parameters must be added to make Java caching work. For
Apache/ Jserv, these are specified in the jserv.properties file as:
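The original snippet did not survive here; as an illustration (the log directory path below is hypothetical, and this assumes Jserv's usual wrapper.bin.parameters convention for passing JVM options), the entries would look something like:

```
# jserv.properties -- JVM options for the Apache/Jserv JVMs
# (path below is illustrative; use a directory writable by the JVM owner)
wrapper.bin.parameters=-DLONG_RUNNING_JVM=true
wrapper.bin.parameters=-DAPPLRGF=/u01/app/applcsf/log
```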


The list of parameters is as below:

  • -DAPPLRGF=<a writeable log directory>

    The writeable log directory can be the Jserv log directory. A file with
    the name javacache.log is created under this directory. If it is not specified, the
    java current working directory is the default. If the default directory
    is not writeable, an error message is written to the jserv log file without
    any adverse effect on the functionality.


Note: The above cache deployment steps are required
only if you are using either the distributed caching feature or the cache event handling feature.
If your environment is AutoConfig-enabled, apply the tech stack rollup patch H (3416234).

Testing and Troubleshooting

Testing the Component Cache

To troubleshoot Caching Framework, use the Diagnostic
Framework. The diagnostic tests for Caching Framework can be accessed from
the Oracle Diagnostics UI:


To log in to the system, use sysadmin/ sysadmin. Alternatively, the same can
be accessed by logging into AppsLocalLogin as sysadmin, selecting either the
CRM ETF Administration or CRM HTML Administration responsibility, and then
selecting the Diagnostics link under Setup.

There are two types of diagnostic tests:

  1. Basic tests verify the basic configuration related to Caching Framework.
    These tests do not require any user inputs and are more suitable for the system administrator
    to run. To access them, select the Basic tab, choose CRM Foundation from the
    Application dropdown list, and then select Caching Framework on the left side. Select
    Run Without Prerequisite to run the test, then select the icon in the results
    column to see the output. This test prints out the basic information about Caching
    Framework's configuration and component caches. This test is more applicable to Apps DBAs like us.

  2. Advanced tests can be used to troubleshoot a specific component cache.
    To access them, select the Advanced tab, select CRM Foundation
    from the Application dropdown list, and then select Cache Component Diagnostic Test
    on the left side. You can run tests on individual component caches.
Testing Database Invalidation

This section discusses how to:

    * Perform database invalidation tests.
    * Bounce the workflow JVM.
    * Verify the workflow event is getting raised.
    * Verify workflow event processing.
    * Verify Java object cache log file location.

Performing Database Invalidation Tests

If you have implemented database invalidation, test your implementation as follows:

  1. Perform an action that causes updates to the database data while bypassing the user JVM
    (typically Apache/ Jserv), such as updating a profile through Jinitiator.

    To test, the user must update the data in the database through a UI or directly. Also,
    the user must ensure that an event is raised when the update occurs.

    Note: The event can be raised using the PL/SQL API or through the
    Workflow Administrator Web Application UI.

  2. Access the corresponding cached object. If database invalidation works correctly, you will see
    the updated value.

    To test, the value of the data in the user JVM must be accessed. It should be the new updated
    value. Alternatively, you can examine the keys in the cache through the Advanced Diagnostics test.
    The key that corresponds to the data that is getting updated in step 1 above must not be
    present in the cache after the update occurs.

    Note: Because the processing of events is asynchronous, you must
    wait at least one or two minutes after updating the data to see the effect in the middle-tier data.

For database invalidation to work correctly, all the different underlying pieces must work as
expected. Some of these pieces belong to the infrastructure and some are supplied by the user. All
the seed data must be in place, the background services must be running, and the runtime
must behave as expected. Diagnostic tests are provided for checking the configuration. The
Cache DB Invalidation test, under the Basic diagnostics tab, can be run
to verify that the general infrastructure configuration is in place.

The CacheComponent DB Event Invalidation test, under the Advanced diagnostics tab, can be run for
a specific component cache for which the database invalidation functionality is being tested. The
poplist lists only the component caches that have a corresponding workflow event defined. If your
component cache does not appear, check the component cache definition to make sure that the business
event is defined.

If all the diagnostic tests pass but the functionality still does not work, check the runtime
behavior by:

  • Bouncing the workflow JVM.
  • Verifying the workflow event is getting raised.
  • Verifying the workflow event is processed correctly.
  • Verifying the Java object cache log file location.

Bouncing the Workflow JVM

After applying the appropriate patch, the workflow JVM that is running the Java Deferred Agent
Listener needs to be bounced.

Note: This is not the Apache Jserv process.

To bounce the workflow JVM:

  1. Log in to SSA and select the System Administrator responsibility.

  2. Select Workflow under Oracle Applications Manager (OAM).

  3. Select the Agent Listeners icon and locate the row where the
    Name column is Workflow Java Deferred Agent Listener.

  4. Select the Workflow Agent Listener Service under the
    Container column.

  5. Verify that the radio button for the Workflow Agent Listener Service row under the
    Select column is selected.

  6. Select Restart from the second dropdown list and then select Go.

  7. Select OK. The State column changes
    to Restarting.

  8. After a short delay, select Reload. The
    State column changes to Activated.

Verifying the Workflow Event is Getting Raised

To verify that the workflow event is getting raised:

  1. Issue the following query:

    select to_char(enq_time, 'yyyy-mm-dd hh24:mi:ss'),
           to_char(deq_time, 'yyyy-mm-dd hh24:mi:ss'),
           msg_state, user_data
      from applsys.aq$wf_java_deferred
     where enq_time > to_date('2004-11-1 11:19:00', 'yyyy-mm-dd hh24:mi:ss')
     order by enq_time desc;


    • The to_date(..) value should not be more than a few
      seconds greater than the time 't' at which the update was performed.

    • This query should return at least one row where the string representation of
      user_data value contains text of the form
      ('BES_EVENT_NAME', 100, 'oracle.apps.fnd.menu.entry.insert') or
      ('BES_EVENT_NAME', 100, 'oracle.apps.fnd.menu.entry.update').

    • For profile update it would be
      ('BES_EVENT_NAME', 100, 'oracle.apps.fnd.profile.value.update').

    • The third value, oracle.apps.fnd.profile.value.update,
      is the 'event name'. The user_data should also contain similar text for the
      event key, where the third value corresponds to the data
      that got updated/ created.

    • The value '10001:0:0:FND_SSO_LOCAL_LOGIN_MASK' is the
      'event key'.

  2. The value of the msg_state column begins as Ready and then
    changes to Processed.

    Note: You can also attempt to raise an
    event through the Workflow Administrator Web Application UI by selecting
    Events, searching for the desired event, and then using the icon under
    the Test column in the search results.

Verifying Workflow Event Processing

Event processing can be confirmed by examining the log files. This requires changing the log
level of the workflow JVM that is running the Java Deferred Agent Listener, which processes the
business events.

To change the log level:

  1. Log in to the Oracle Applications Manager Dashboard -> Site Map -> Workflow -> Service Components.

  2. Locate the row where the Name column is
    Workflow Java Deferred Agent Listener.

  3. Select the checkbox under the Select column, which is located
    next to the row mentioned in the previous step.

  4. Select Edit.

  5. Select Next to go to a second page. Select Procedure from the
    Log Level dropdown list.

    Note: No changes are required if the
    Log Level value is Procedure or Statement.

  6. Select Finish.

  7. Restart the Workflow Agent Listener service.

To examine the runtime logs:

  1. Cause a database invalidation event by performing an action that updates the data. See Performing Database Invalidation Tests above.

  2. Log in to the Oracle Applications Manager Dashboard -> Site Map -> Workflow -> Service Components.

  3. Locate the row where the Name column is
    Workflow Java Deferred Agent Listener.

  4. Select View Log.

    The log file contains the following
    series of messages:

    • Business Event received in the cache handler: <event name> with the workflow
      context: <context value>.
    • Business Event=<event name> key=<event key> corresponds to the
      app=<Application Short Name> and component key=<Cache Component Key> and loader
      class=<loader class name>.
    • Just about to call stringToKey with key=<event key>.
    • Obtained the keys <keys>.
    • Invoking
      CacheManager.invalidate for component=<Cache Component Key>
      app=<Application Short Name> and key=<list of keys>.

      Note: This message should appear at least once.

Interesting related information on Metalink

It looks like more Metalink notes have been released in the recent past for diagnosing issues: e.g. Investigating NoClassDefFoundError in eBusiness 11i when users login (Metalink Note 455366.1) and Diagnosing database invalidation issues with Java Cache for eBusiness Suite (Metalink Note 455194.1).


