Monday Feb 24, 2014

Introduction to bpel.rs and bpel.sps Diagnostic Dumps in PS6

There are a couple of new BPEL Diagnostic Dumps that haven't been discussed previously and offer some interesting value. These dumps are bpel.rs and bpel.sps. Here I'll describe what they collect and how to use them. Note that these dumps are available starting in SOA Suite 11.1.1.7 (PS6).


As with all diagnostic dumps, these can be executed from WLST and this is probably the most convenient way to do so.

Steps:
  1. Navigate to <MIDDLEWARE_HOME>/oracle_common/common/bin and execute 'wlst.sh' (or .cmd).
  2. Connect to a server that is running SOA Suite with connect('user','pwd','t3://host:port')
  3. Execute the listDumps command to view the available SOA dumps, 'listDumps(appName='soa-infra')'. You should see bpel.sps and bpel.rs in the list.
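If you capture the listDumps() output to a file, a quick sanity check in plain Python might look like the following sketch. The dump names come from this post; the helper itself is hypothetical:

```python
# Verify that bpel.rs and bpel.sps appear in a captured
# listDumps(appName='soa-infra') listing (one dump name per line).
LISTING = """bpel.rs
bpel.sps
soa.composite
soa.wsdl"""

def has_bpel_dumps(listing, required=("bpel.rs", "bpel.sps")):
    available = set(line.strip() for line in listing.splitlines())
    return all(name in available for name in required)

print(has_bpel_dumps(LISTING))  # True when both dumps are listed
```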



The data collected by these dumps may seem familiar, especially if you've used DMS, but they do offer a unique capability in that you can specify a duration over which the data is collected. This differs from other SOA dumps, which provide aggregate data from when the component was deployed, and from DMS, which provides aggregate data from when the server was started. First I'll describe the data they collect.


bpel.sps provides aggregate performance statistics for SYNCHRONOUS BPEL processes, such as min, max and average processing time. By default the results are provided in a table format, organized by composite process.


bpel.rs provides request level statistics (min, max and avg), but it breaks the stats down by steps in the BPEL engine. So if you have a composite with a BPEL process and you collect this dump over some duration when there is load, you'll get an XML-style output that lists each step the BPEL engine took to process each request, and every step will have associated statistics. There will only be one entry per process/step, with a 'count' attribute indicating how many times that step was run. Since it's listing every step in the process, this is a convenient way to see the performance of every activity in the process.


The most unique aspect of these dumps is the ability to specify a time range in which to collect the data. This capability is coupled with the BPEL property statsLastN in a mutually exclusive manner that can be confusing so I'll try to explain it.


If statsLastN is configured for the BPEL engine then you cannot use the duration arguments with these dumps. statsLastN is configured in Fusion Middleware Control under BPEL Properties and then 'More Properties' and specifies the number of BPEL instances to keep statistics on for each deployed process. When enabled, these dumps will provide only the statistics for those instances. So if you have statsLastN set to '10', when you run either of these dumps you will see the statistics for the 10 most recent instances.


If statsLastN is disabled then you must provide the duration argument and optionally the buffer size. The command looks like this:

executeDump(name='bpel.sps', appName='soa-infra', outputFile='/home/oracle/tmp/BPEL_SPS.txt', args={'duration':'10', 'buffer':'1000'})

The duration is in seconds and the buffer specifies how many entries to store.

Once started in this way, the dump will run for 10 seconds and then provide the output. If there was no load then the statistics will be empty and if statsLastN is enabled you will receive an error.


The output of bpel.sps is self-explanatory as it's just a simple table, but bpel.rs is more complicated so I wanted to offer a sample:


<stats key="TestHarnessProject:L1SyncBPEL:main:receiveInput:105" min="0" max="1" average="0.2" count="30">
<stats key="monitor-send-activity-data" min="0" max="0" average="0.0" count="60">
</stats>
<stats key="sensor-send-variable-data" min="0" max="0" average="0.0" count="30">
</stats>
<stats key="sensor-send-activity-data" min="0" max="0" average="0.0" count="60">
</stats>
</stats>
<stats key="TestHarnessProject:L1SyncBPEL:main:replyOutput:387" min="0" max="1" average="0.33" count="30">
<stats key="monitor-send-activity-data" min="0" max="0" average="0.0" count="60">
</stats>
<stats key="sensor-send-activity-data" min="0" max="0" average="0.0" count="60">
</stats>
</stats>
<stats key="TestHarnessProject:L2SyncBPEL:main:If1:Sequence1:L2SyncBPEL_FileBranch_Assign:99" min="0" max="5" average="0.93" count="30">
<stats key="sensor-send-variable-data" min="0" max="0" average="0.0" count="120">
</stats>
<stats key="sensor-send-activity-data" min="0" max="1" average="0.01" count="60">
</stats>
<stats key="monitor-send-activity-data" min="0" max="0" average="0.0" count="60">
</stats>
</stats>
<stats key="TestHarnessProject:L2SyncBPEL:main:If1:If1:93" min="0" max="1" average="0.06" count="30">
</stats>
<stats key="TestHarnessProject:L1SyncBPEL:main:If1:elseif:Sequence4:L1SyncBPEL_L2SyncBPELBranch_Embedded:243" min="5001" max="5006" average="5002.46" count="30">
<stats key="sensor-send-activity-data" min="0" max="0" average="0.0" count="60">
</stats>
<stats key="monitor-send-activity-data" min="0" max="0" average="0.0" count="60">
</stats>
</stats>
<stats key="TestHarnessProject:L1SyncBPEL:main:If1:elseif:Sequence4:L1SyncBPEL_L2SyncBPELBranch_Assign2:267" min="1" max="2" average="1.06" count="30">
<stats key="monitor-send-activity-data" min="0" max="0" average="0.0" count="60">
</stats>
<stats key="sensor-send-variable-data" min="0" max="0" average="0.0" count="120">
</stats>
<stats key="sensor-send-activity-data" min="0" max="1" average="0.01" count="60">
</stats>
</stats>
<stats key="TestHarnessProject:L1SyncBPEL:main:L1SyncBPEL_Assign1:106" min="0" max="1" average="0.33" count="30">
<stats key="sensor-send-activity-data" min="0" max="1" average="0.01" count="60">
</stats>
<stats key="sensor-send-variable-data" min="0" max="0" average="0.0" count="30">
</stats>
<stats key="monitor-send-activity-data" min="0" max="0" average="0.0" count="60">
</stats>
</stats>


Here you can see the individual entries for the 'TestHarnessProject' BPEL activities. Every time these activities are run, the stats get updated. If you have a lot of BPEL processes with a lot of activities this output will get very long, so it's good to know the names of what you're interested in, whether it's the project or even the specific activities.
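Since bpel.rs emits a flat sequence of <stats> elements, it is easy to rank the slowest steps with a small script. The two sample keys below are taken from the dump above; the parsing helper itself is my own sketch, not part of the product:

```python
import xml.etree.ElementTree as ET

# Two <stats> entries borrowed from the sample dump above.
DUMP = """<stats key="TestHarnessProject:L1SyncBPEL:main:receiveInput:105" min="0" max="1" average="0.2" count="30"></stats>
<stats key="TestHarnessProject:L1SyncBPEL:main:If1:elseif:Sequence4:L1SyncBPEL_L2SyncBPELBranch_Embedded:243" min="5001" max="5006" average="5002.46" count="30"></stats>"""

def slowest_steps(dump_text, top=5):
    # The dump has no single root element, so wrap it before parsing.
    root = ET.fromstring("<dump>%s</dump>" % dump_text)
    rows = [(s.get("key"), float(s.get("average")), int(s.get("count")))
            for s in root.iter("stats")]
    return sorted(rows, key=lambda r: r[1], reverse=True)[:top]

for key, avg, count in slowest_steps(DUMP):
    print("avg=%s count=%d key=%s" % (avg, count, key))
```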


As always I encourage you to try this out and have a look at the output. The ability to collect current data in this way is very useful to diagnose sporadic performance issues and see how your applications are performing under various load conditions.

Friday Dec 27, 2013

Basic MQ 7.5 Configuration for SOA Suite 11g (MQ Adapter)

I recently went through an exercise to install, configure and interact with MQ 7.5 through a composite and thought I would share my notes. This is not meant to be a full tutorial on MQ configuration with SOA but more of a cheat sheet to avoid wasting a lot of time like I did. I also make no claim to be an MQ admin so if you see something that's wrong or would benefit from clarification, please add a comment.


MQ Steps:
  1. Download MQ 7.5 from the IBM site (90 day trial license, Windows)

  2. Ran the setup, which was straightforward. There were several questions concerning a domain controller, but I indicated that I don't have any running Windows 2000

  3. Started MQ Explorer and created a Queue Manager. Specified Transmission and DeadLetter queues along with exposing an external channel

  4. Created my queues: Transmission, DeadLetter, LocalQueue1, LocalQueue2

  5. Created a 'Server-connection' channel. Note that the Channel may have a status of 'Inactive' and a red arrow pointing down. I spent a ton of time trying to get the arrow to be green and pointing up with a status of 'Active' but in the end it didn't seem to matter.

  6. Created a new user on the Windows machine that I wanted to connect to MQ as remotely

  7. Added the user to the Channel 'Specific Profiles' Authority Records and gave them all of the authorizations. Right click on the Channel -> Object Authorities -> Manage Authority Records -> Specific Profiles. There should be a default Specific Profile you can edit.

  8. Note: If you want to disable the security for the Channel you can go into the Authority Records and delete the default profile

  9. Using the command 'setmqaut -m <Queue Manager Name> -t qmgr -p <user@pcname> +all' I granted full MQ privs to the new user (command is in <MQ_HOME>/bin)



SOA Steps:
  1. Add the MQ jar files to <DOMAIN_HOME>/lib: com.ibm.mq.commonservices.jar, com.ibm.mq.headers.jar, com.ibm.mq.jar, com.ibm.mq.jmqi.jar, com.ibm.mq.pcf.jar (all can be obtained from the MQ Installation)

  2. Created a new composite application with input and output MQ Adapter instances

  3. The adapters both used the default 'eis/MQ/MQAdapter' JNDI from the default connection pool, but be careful because the default JNDI in the Adapter wizard is 'eis/MQ/MQSeriesAdapter'. If you don't change it or create a new connection pool then the adapter will fail.

  4. Other values in the adapter wizard were defaults or obvious (queue name and my schema)

  5. In the WLS Console I went into the MQ Adapter deployment -> Configuration -> Connection Pool -> Default Pool to set some properties

  6. Set channelName, hostName, password, queueManagerName, userID. Everything else I left as default.

  7. Deployed and ran the composite, verifying that the message ended up in the right MQ queue



I ran into some errors along the way:

MQJE001: Completion Code '2', Reason '2035': Due to insufficient permissions for the user I was trying to connect as. To resolve this I had to add the permissions using the 'setmqaut' command mentioned above. To make it simple I just granted full admin.

java.util.MissingResourceException: We document that you can get this if you don't have the correct MQ jar files in <DOMAIN_HOME>/lib. In my case it was due to a missing or wrong Channel Name configured in the connection pool properties.

Admittedly much of this is documented across various resources but I couldn't find anything that provided it all together. Now when I need to do this again in a year and have forgotten everything I'll have this blog post to come back to.

If everything works then you'll see the messages appear in the queue through MQ Explorer, where you can then inspect the payload, etc.

Wednesday Nov 27, 2013

How to get Work Manager Information in a WLS or OSB Thread Dump

To analyze issues related to performance or work load, it would be nice to know which work manager is running a given thread. In Oracle Service Bus, a dispatch policy defined for a proxy or a business service maps to a work manager definition in WLS, so while analyzing a thread dump you want to know which work manager is running each thread.

Starting with WebLogic Server version 10.3.6, work manager information is shown in a thread dump if work manager / self tuning debugging is enabled. This debug flag can be enabled in the WLS console:

Go to http://<admin-server>:<admin-port>/console -> Environment -> <your server> -> Debug tab:


Enable the DebugSelfTuning debug flag in the work manager section.

After that, work manager information is automatically added to a thread dump. This is a sample stack trace which has work manager information included:

"[ACTIVE] ExecuteThread: '14' for queue: 'weblogic.kernel.Default (self-tuning)' for workmanager: XBus Kernel@null@MyCustomWorkManager" id=123 idx=0x1d4 tid=21936 prio=5 alive, waiting, native_blocked, daemon
-- Waiting for notification on: java/lang/Object@0xe574a240[fat lock]
at jrockit/vm/Threads.waitForNotifySignal(JLjava/lang/Object;)Z(Native Method)
at java/lang/Object.wait(J)V(Native Method)
at java/lang/Object.wait(Object.java:485)
at com/bea/wli/sb/pipeline/PipelineContextImpl$SynchronousListener.waitForResponse(PipelineContextImpl.java:1620)
^-- Lock released while waiting: java/lang/Object@0xe574a240[fat lock]
at com/bea/wli/sb/pipeline/PipelineContextImpl.dispatchSync(PipelineContextImpl.java:562)
at stages/transform/runtime/WsCalloutRuntimeStep$WsCalloutDispatcher.dispatch(WsCalloutRuntimeStep.java:1391)
at stages/transform/runtime/WsCalloutRuntimeStep.processMessage(WsCalloutRuntimeStep.java:236)
at com/bea/wli/sb/pipeline/debug/DebuggerRuntimeStep.processMessage(DebuggerRuntimeStep.java:74)
at com/bea/wli/sb/stages/StageMetadataImpl$WrapperRuntimeStep.processMessage(StageMetadataImpl.java:346)
at com/bea/wli/sb/stages/impl/SequenceRuntimeStep.processMessage(SequenceRuntimeStep.java:33)
at com/bea/wli/sb/pipeline/PipelineStage.processMessage(PipelineStage.java:84)
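The work manager name can be pulled straight out of the thread header line when post-processing a dump. A hypothetical Python helper, with the header string abbreviated from the sample above:

```python
import re

# Abbreviated thread header from the sample stack trace above.
HEADER = ("\"[ACTIVE] ExecuteThread: '14' for queue: "
          "'weblogic.kernel.Default (self-tuning)' for workmanager: "
          "XBus Kernel@null@MyCustomWorkManager\" id=123 idx=0x1d4")

WM_PATTERN = re.compile(r'for workmanager: (.+?)"')

def work_manager(thread_header):
    # Return the work manager name, or None for threads without one.
    match = WM_PATTERN.search(thread_header)
    return match.group(1) if match else None

print(work_manager(HEADER))  # XBus Kernel@null@MyCustomWorkManager
```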

 

Monday Nov 11, 2013

Scripted SOA Diagnostic Dumps for PS6 (11.1.1.7)

When you upgrade to SOA Suite PS6 (11.1.1.7) you acquire a new set of Diagnostic Dumps in addition to what was available in PS5. With more than a dozen to choose from and not wanting to run them one at a time, this blog post provides a sample script to collect them all quickly and hopefully easily. There are several ways that this collection could be scripted and this is just one example.




What is Included:
  • wlst.properties: Ant Properties
  • build.xml
  • soa_diagnostic_script.py: Python Script


What is Collected:
  • 5 contextual thread dumps at 5 second intervals
  • Diagnostic log entries from the server
  • WLS Image which includes the domain configuration and WLS runtime data
  • Most of the SOA Diagnostic Dumps including those for BPEL runtime, Adapters and composite information from MDS


Instructions:
  1. Download the package and extract it to a location of your choosing
  2. Update the properties file 'wlst.properties' to match your environment
  3. Run 'ant' (must be on the path)
  4. Collect the zip package containing the files (by default it will be in the script.output location)


Properties Reference:
  • oracle_common.common.bin: Location of oracle_common/common/bin
  • script.home: Location where you extracted the script and supporting files
  • script.output: Location where you want the collections written
  • username: User name for server connection
  • pwd: Password to connect to the server
  • url: T3 URL for server connection, '<host>:<port>'
  • dump_interval: Interval in seconds between thread dumps
  • log_interval: Duration in minutes that you want to go back for diagnostic log information
Script Package

Dynamic Monitoring Service (DMS) Configuration Dumping and CPU Utilization


There was recently a report of CPU spikes on a system that were occurring at precise 3 hour intervals. Research revealed that the spikes were the result of the Dynamic Monitoring Service generating a metrics dump and writing it under the server 'logs' folder for every WLS server in the domain. This blog provides some information on what this is for and how to control it.


The Dynamic Monitoring Service is a facility in FMW (JRF to be more precise) that collects runtime data on the components deployed to WebLogic. Each component is responsible for how much or how little they use the service, and SOA collects a fair amount of information. To view what is collected on any running server you can use the following URL, http://host:port/dms/Spy, and log in with admin credentials.


DMS is essentially always running and collecting this information at runtime. To protect against loss of this data it also runs automatic backups, by default at the 3 hour interval mentioned above. Most of the management options for DMS are exposed through WLST, but these settings are not, so we must edit the dms_config.xml file, which can be found at DOMAIN_HOME/config/fmwconfig/servers/<server_name>/dms_config.xml.


The contents are fairly short and at the bottom you will find the following entry:

<dumpConfiguration>
    <dump intervalSeconds="10800" maxSizeMBytes="75" enabled="true"/>
</dumpConfiguration>

The interval of 10800 seconds corresponds to the 3 hours and the maximum size is 75MB. The file is written as an archive to DOMAIN_HOME/servers/<server_name>/logs/metrics. This archive contains the dump in XML format.


You can disable the dumps altogether by simply setting the 'enabled' value to 'false', or of course you can modify the other parameters to suit your needs. Disabling the dumps will NOT impact DMS collections or display at runtime. It will only eliminate these periodic backups.
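The edit itself is trivial; as a sketch, here is how the 'enabled' flag could be flipped programmatically with Python's ElementTree. The fragment below ignores any XML namespaces the real dms_config.xml may declare, so treat it as illustrative only:

```python
import xml.etree.ElementTree as ET

# Stripped-down stand-in for dms_config.xml.
CONFIG = """<dms_config>
  <dumpConfiguration>
    <dump intervalSeconds="10800" maxSizeMBytes="75" enabled="true"/>
  </dumpConfiguration>
</dms_config>"""

def disable_metric_dumps(xml_text):
    root = ET.fromstring(xml_text)
    dump = root.find("./dumpConfiguration/dump")
    dump.set("enabled", "false")  # only the periodic backup is disabled
    return ET.tostring(root).decode("ascii")

print(disable_metric_dumps(CONFIG))
```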


Wednesday Oct 16, 2013

Transaction Boundaries and Rollbacks in Oracle SOA Suite

A new eCourse/video is available in the Oracle Learning Library, "Transaction Boundaries and Rollbacks in Oracle SOA Suite" .

The course covers:

  • Definition of transaction, XA, Rollback and transaction boundary.
  • BPEL transaction boundaries from a fault propagation point of view
  • Parameters bpel.config.transaction and bpel.config.oneWayDeliveryPolicy for the configuration of both synchronous and asynchronous BPEL processes.
  • Transaction behavior in Mediator
  • Rollback scenarios based on type of faults
  • Rollback using bpelx:rollback within a <throw> activity.

The video is accessible here

Thursday Jul 25, 2013

Using the New RDA 8 with SOA Suite 11g

RDA 8 was released on July 23rd and this blog post is a brief summary of the new, smaller profiles available for SOA Suite and Service Bus 11g.

Install RDA 8

You can download the package from here.

Extract the package to a location of your choosing. It is recommended that if you want to install RDA 8 in the same location as a previous RDA installation, first delete the existing 'rda' directory. You will lose your previous configuration and this is expected when moving from RDA 4.x to 8. When subsequently updating RDA 8.x it is expected that you will be able to preserve your configuration by saving the 'output.cfg' file.

Configure and Run RDA 8

We have added 2 new profiles in RDA 8, one for SOA Suite and one for Service Bus. The purpose of these profiles is to simplify configuration, shorten the collection time and reduce the size of the resultant package.

There are 2 options for each profile, 'offline' (default) and 'online'. The 'online' collections will include a few items that require a connection to the running server and are generally only needed in special circumstances. The SOA Suite 'online' collection adds thread dumps along with soa-infra MDS, composite and WSDL configuration information. The Service Bus 'online' collection adds thread dumps and Service Bus specific Diagnostic Dumps available starting in PS6 (cache contents, JMS and MQ async message tables).

It is recommended that the ORACLE_HOME and DOMAIN_HOME environment variables be set as this will speed and simplify the profile configuration. This can be done by running 'source <DOMAIN_HOME>/bin/setDomainEnv.sh' (or .cmd on Windows).

Steps for SOA Suite:
  1. From a command prompt enter the command: rda.sh (cmd) -CRP -p FM11g_SoaMin
  2. If the environment variables are set appropriately you can hit 'enter' at every prompt and the collection will run automatically. Otherwise continue with the steps.
  3. Accept the default value of the first prompt asking about the network domain
  4. Enter ORACLE_HOME as the Middleware Home location
  5. Confirm or enter a new location for the domain. By default it is <ORACLE_HOME>/user_projects/domains
  6. Choose whether you want an 'offline' or 'online' collection. The default is 'offline'.
  7. If you have more than one domain in the domains location you will next be asked to choose which domain / domains to analyze.
  8. Select the domain servers to include. The default is all of them.
  9. Choose whether you want to run the OCM collection. For SOA this can be set to 'n' but the default is 'y'

Steps for Service Bus: (Identical except for the name of the profile)
  1. From a command prompt in the RDA installation location enter the command: rda.sh (cmd) -CRP -p FM11g_OsbMin
  2. If the environment variables are set appropriately you can hit 'enter' at every prompt and the collection will run automatically. Otherwise continue with the steps.
  3. Accept the default value of the first prompt asking about the network domain
  4. Enter ORACLE_HOME as the Middleware Home location
  5. Confirm or enter a new location for the domain. By default it is <ORACLE_HOME>/user_projects/domains
  6. Choose whether you want an 'offline' or 'online' collection. The default is 'offline'.
  7. If you have more than one domain in the domains location you will next be asked to choose which domain / domains to analyze.
  8. Select the domain servers to include. The default is all of them.
  9. Choose whether you want to run the OCM collection. For SOA this can be set to 'n' but the default is 'y'

The collected files are written to RDA_HOME/output and the zipped package is written to RDA_HOME. If you are interested in viewing the collection you can go to RDA_HOME/output and drag the file RDA__start.htm into a browser.


Friday May 17, 2013

Introduction and Troubleshooting of SOA 11g Database Adapter


SOA 11g Adapters

Oracle SOA Suite 11g Adapters allow Middleware service engines (BPEL, BPM, OSB, etc) to communicate with backend systems like E-Business Suite, Siebel, SAP, Databases, Messaging Systems (MQSeries and Oracle Advanced Queuing), Tuxedo, CICS, etc


SOA provides different types of adapters: Technology, Legacy, Packaged Application and Others. It also allows the creation of custom adapters.



SOA 11g Database Adapter

The Database Adapter enables service engines to communicate with database end points: Oracle databases, or any other relational database that follows the ANSI SQL standard and provides JDBC drivers. Some of the supported databases are:

* Oracle 8i and above
* IBM DB/2
* Informix
* Clarion
* Clipper
* Cloudscape
* DBASE
* Dialog
* Essbase
* FOCUS Data Access
* Great Plains
* Microsoft SQL Server
* MUMPS (Digital Standard MUMPS)
* PROGRESS
* Red Brick
* RMS
* SAS Transport Format
* Sybase
* Teradata
* Unisys DMS 1100/2200
* UniVerse
* Navision Financials (ODBC 3.x)
* Nucleus
* Paradox
* Pointbase

The database adapter supports inbound and outbound interactions. It is based on standards like J2EE Connector Architecture (JCA), Extensible Markup Language (XML), XML Schema Definition (XSD), and Web Service Definition Language (WSDL).

The adapter is deployed as a RAR file into WebLogic. The figure below shows how to check the deployment. It can also be deployed in any application server that supports these standards, such as WebSphere and JBoss.



Other features associated with the Database Adapter:

  • Uses TopLink to map database tables and data into XML.
  • Exposes DML operations (Merge, Select, Insert, and Update) as Web services. It also supports stored procedures and Pure SQL.
  • Supports polling strategies to avoid duplicate reads (Physical Delete, Logical Delete, Sequencing Tables and Files).
  • Supports transactions to keep the database in a healthy state. Changes to the database are rolled back in case of an error.
  • Streams large payloads. The payload is not stored in memory.
  • Schema validation.
  • High availability. Supports Active-Active or Active-Passive clusters.
  • Performance tuning.

To integrate the Database Adapter with BPEL, create a SOA composite in JDeveloper and drag and drop the adapter to the composite's Services or References region. This will create an inbound or outbound interaction, respectively.



When the adapter component is added to the composite the configuration wizard will open. Through the wizard we define the connection to the database, the type of operation (insert, update, poll, etc), performance parameters, retry logic, etc.

Once the configuration is done, JDeveloper creates a series of SOA artifacts. These files are used by the composite to communicate with the adapter instance during runtime. Some of the artifacts are:

  • <serviceName>.wsdl: This is an abstract WSDL, which defines the service end point in terms of the name of the operations and the input and output XML elements.
  • <serviceName>_table.xsd: This contains the XML file schema for these input and output XML elements. Both these files form the interface to the rest of the SOA project.
  • <serviceName>_or-mappings.xml: It is a TopLink specific file, which is used to describe the mapping between a relational schema and the XML schema. It is used at run time.
  • <serviceName>_db.jca: This contains the internal implementation details of the abstract WSDL. It has two main sections, location and operations. Location is the JNDI name of an adapter instance, that is, eis/DB/SOADemo. Operations describe the action to take against that end point, such as INSERT, UPDATE, SELECT, and POLL.
  • <serviceName>.properties: Created when tables are imported; information about them is saved here. Based on the properties in the db.jca file and the linked or-mappings.xml file, the runtime generates the correct SQL to execute, parses the input XML, and builds an output XML document matching the XSD file.
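As a quick illustration of reading one of these artifacts, here is a hypothetical sketch that pulls the JNDI location out of a minimal <serviceName>_db.jca fragment. The real file carries more elements and attributes, and the namespace below is an assumption:

```python
import xml.etree.ElementTree as ET

# Hypothetical, stripped-down db.jca content; the namespace is assumed.
JCA = """<adapter-config xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata">
  <connection-factory location="eis/DB/SOADemo"/>
</adapter-config>"""

NS = "{http://platform.integration.oracle/blocks/adapter/fw/metadata}"

def jndi_location(jca_text):
    # Return the JNDI name of the adapter instance from the jca file.
    root = ET.fromstring(jca_text)
    factory = root.find(NS + "connection-factory")
    return factory.get("location")

print(jndi_location(JCA))  # eis/DB/SOADemo
```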

Troubleshooting

The basic step to troubleshoot the adapter is to set the oracle.soa.adapter logger level to TRACE:32 (FINEST) in the Fusion Middleware Control Console.

Once this is done, reproduce the issue and check the SOA managed server log file MW_HOME/user_projects/domains/<domain_name>/servers/<soa-server>/logs/soa-diagnostic.log

Look for JCABinding and BINDING.JCA-xxxx strings. You should see messages like these:
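A hypothetical grep-style helper for pulling the BINDING.JCA-xxxx codes out of soa-diagnostic.log. The log lines below are made up for illustration; only the code format comes from the text above:

```python
import re

# Fabricated sample log lines, for illustration only.
LOG = """[2013-05-17T10:00:00] [soa_server1] [ERROR] ... BINDING.JCA-12345 hypothetical failure
[2013-05-17T10:00:01] [soa_server1] [NOTIFICATION] JCABinding connection opened"""

def jca_error_codes(log_text):
    # Unique BINDING.JCA-xxxx codes, preserving first-seen order.
    seen, codes = set(), []
    for code in re.findall(r"BINDING\.JCA-\d+", log_text):
        if code not in seen:
            seen.add(code)
            codes.append(code)
    return codes

print(jca_error_codes(LOG))  # ['BINDING.JCA-12345']
```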



If a BINDING.JCA error occurs, go to the My Oracle Support knowledge base and search for it. Remember, this is the same knowledge base used by Oracle support engineers when solving Service Requests.


Other References

Wednesday Apr 03, 2013

Diagnostics Enhancements in SOA Suite 11g PS6 (11.1.1.7)

What's new with Diagnostics in PS6?



Interval Sampling

Purpose: To collect Diagnostic Dumps at specified intervals for as long as they need to be collected.
Activation: By default there are samplings configured for the Diagnostic Dumps 'jvm.threads' and 'jvm.histogram' but they will not begin collecting samples until an Incident is generated on the server. New samples can be started and managed through WLST.

NOTE: Dump sampling stays idle on a healthy server that has not yet generated a DFW Incident. To activate it anyway, change the 'DumpSamplingIdleWhenHealthy' setting. Steps:
  1. Login in the EM Console
  2. Right click on the domain name in the left menu and select 'System MBean Browser'
  3. Navigate to the MBean: 'Application Defined MBeans' -> 'oracle.dfw' -> 'Domain: <domain>' -> 'oracle.dfw.jmx.DiagnosticsConfigMbean'
  4. Select the 'DiagnosticsConfig' mbean
  5. Change the value of the 'DumpSamplingIdleWhenHealthy' attribute to 'false'
  6. Click 'Apply'. No restart is necessary for the change to take effect


Collection: The samples are loaded into memory and maintained there until the sampling session is stopped. Collecting the current sample archive requires only a simple WLST command.

Example Commands:
Start and connect to a running server with WLST:
  1. Start WLST with '/oracle_common/common/bin/wlst.sh'
  2. Connect to the running server with 'connect('<user>','<pwd>','t3://<host>:<port>')'

List the currently running dump samples:

wls:/soasingle/serverConfig> listDumpSamples()

(default output)
Name:JVMThreadDump
Dump Name:jvm.threads
Application Name:
Sampling Interval:5
Rotation Count:10
Dump Implicitly:true
Append Samples:true
Dump Arguments:timing=true, context=true


Name:JavaClassHistogram
Dump Name:jvm.classhistogram
Application Name:
Sampling Interval:1800
Rotation Count:5
Dump Implicitly:false
Append Samples:true
Dump Arguments:
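The listing is simple 'Key:Value' lines separated by blank lines, so it is easy to post-process. A hypothetical parser sketch over an abbreviated copy of the output above:

```python
# Abbreviated listDumpSamples() output from above.
SAMPLE_LISTING = """Name:JVMThreadDump
Dump Name:jvm.threads
Sampling Interval:5
Rotation Count:10

Name:JavaClassHistogram
Dump Name:jvm.classhistogram
Sampling Interval:1800
Rotation Count:5"""

def parse_samples(text):
    # Blank lines separate samples; each non-blank line is Key:Value.
    samples, current = [], {}
    for line in text.splitlines():
        if not line.strip():
            if current:
                samples.append(current)
                current = {}
            continue
        key, _, value = line.partition(":")
        current[key] = value
    if current:
        samples.append(current)
    return samples

for sample in parse_samples(SAMPLE_LISTING):
    print(sample["Name"], sample["Sampling Interval"])
```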


Collect the current memory buffer for the 'JVMThreadDump' sample:

wls:/soasingle/serverConfig> getSamplingArchives(sampleName='JVMThreadDump',outputFile='<file>')

wrote 194751 bytes to <file>

The archive is written to the specified location as a zip. In the case of JVMThreadDumps the archive contains a .dmp file containing the actual thread dumps and a text file listing the sampling parameters and the dumps that were taken.


Start a new sampling session:

wls:/soasingle/serverConfig> addDumpSample(sampleName='ThreadsSample',diagnosticDumpName='jvm.threads',samplingInterval=20,rotationCount=20, toAppend=true,args={'context' : 'true'})

ThreadsSample is added

You can then confirm the sampling session with the 'listDumpSamples()' command.


Kill a sampling session:

wls:/soasingle/serverConfig> removeDumpSample(sampleName='ThreadsSample')

Removed ThreadsSample

You can confirm that the sampling session is gone by again running the 'listDumpSamples()' command.




Contextual SOA Thread Dump Translation


In PS6 we now have the ability to generate thread dumps from DFW that will tell us which composite and component a particular active SOA thread is running. This is best illustrated through an example.

Execute the jvm.threads Diagnostic Dump from WLST against an active SOA Server:

NOTE: You MUST pass the 'context' argument in order to have the SOA context information included in the dump.

wls:/soasingle/serverConfig> executeDump(name='jvm.threads',outputFile='<file>',args={'context' : 'true'})

(no output in the command window)

Go to the location where you wrote the file and open it. The file will look like a typical thread dump but scroll past the stack traces and you'll find some CPU statistics for each thread and below that the SOA context information. Here we list a partial stack for an active SOA thread followed by the corresponding context entry:

"[ACTIVE] ExecuteThread: '4' for queue: 'weblogic.kernel.Default (self-tuning)'"
id=69
idx=0x100 tid=27191 prio=5 alive, in native, daemon
    at jrockit/net/SocketNativeIO.readBytesPinned(Ljava/io/FileDescriptor;[BIII)I(Native Method)
    at jrockit/net/SocketNativeIO.socketRead(SocketNativeIO.java:32)
    at java/net/SocketInputStream.socketRead0(Ljava/io/FileDescriptor;[BIII)I(SocketInputStream.java)
    at java/net/SocketInputStream.read(SocketInputStream.java:129)
    at oracle/net/nt/MetricsEnabledInputStream.read(TcpNTAdapter.java:730)
    at oracle/net/ns/Packet.receive(Packet.java:302)
    at oracle/net/ns/DataPacket.receive(DataPacket.java:108)
    at oracle/net/ns/NetInputStream.getNextPacket(NetInputStream.java:317)
    at oracle/net/ns/NetInputStream.read(NetInputStream.java:262)
    ....


===== THREAD CONTEXT INFORMATION =====
id        ECID                              RID / Context Values
-----------------------------------------------------------------------------------------------------------------------------
id=2059   11d1def534ea1be0:39fa34d1:....0   dbRID=0:6
id=2095   11d1def534ea1be0:39fa34d1:....0   dbRID=0:6
id=60     11d1def534ea1be0:39fa34d1:....0   dbRID=0:6
id=69     11d1def534ea1be0:39fa34d1:....0   WEBSERVICE_PORT.name=execute_pt
                                            dbRID=0:9
                                            composite_name=DiagProject
                                            component_instance_id=DF4389F07C6611E2BFBECB6C185E5342
                                            component_name=TestProject_BPELProcess1
                                            J2EE_MODULE.name=fabric
                                            WEBSERVICE_NAMESPACE.name=http://xmlns.oracle.com/TestApp/DiagProject/TestProject_Mediator1
                                            activity_name=AQ_Java_Embedding1:BxExe3:405
                                            J2EE_APP.name=soa-infra
                                            WEBSERVICE.name=TestProject_Mediator1_ep
                                            composite_instance_id=182858


The entries of primary interest are:
  1. Thread ID: This is the ID of the thread from the thread dump entry. We list them all but only provide the context information for SOA threads
  2. composite_name: The name of the composite that is running in that thread
  3. component_name: The name of the component inside the composite that is running in the thread
  4. activity_name: The name of the BPEL activity that is currently running
  5. composite_instance_id: The instance id of the composite that can be looked up in EM
There are obviously some other entries, but these are the ones that we think will be of most benefit.
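Because the context entries are plain key=value pairs, they are easy to fold into a dictionary for scripted analysis. A hypothetical sketch using values from the sample above:

```python
# Context values for one thread, copied from the sample above.
CONTEXT = """composite_name=DiagProject
component_name=TestProject_BPELProcess1
activity_name=AQ_Java_Embedding1:BxExe3:405
composite_instance_id=182858"""

def parse_context(block):
    # Split each 'key=value' line on the first '=' only, since values
    # (e.g. activity names) can themselves contain special characters.
    return dict(line.split("=", 1) for line in block.splitlines() if "=" in line)

info = parse_context(CONTEXT)
print(info["composite_name"], info["activity_name"])
```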

NOTE: In the initial release you may see instances where Mediator is listed as the running component when it ought to be BPEL. In these instances the 'activity_name' should still be accurate and list the running BPEL activity




BPEL Diagnostic Dumps


There are 3 new Diagnostic Dumps for BPEL in PS6:
  • bpel.dispatcher: Dumps the current state of the BPEL thread pools including Engine, Invoke and System.
  • bpel.adt: Dumps the average time that messages for asynchronous BPEL processes wait in the DLV_MESSAGE table before being consumed by an Engine thread
  • bpel.apt: Dumps the average processing time for all BPEL components in every deployed composite

Examples:

List the available SOA Diagnostic Dumps after starting WLST with '<MW_HOME>/oracle_common/common/bin/wlst.sh' and connecting to a running SOA server.

wls:/soasingle/serverConfig> listDumps(appName='soa-infra')

adf.ADFConfigDiagnosticDump
bpel.apd
bpel.apt
bpel.dispatcher
mediator.resequencer
soa.adapter.connpool
soa.adapter.ra
soa.adapter.stats
soa.composite
soa.composite.trail
soa.config
soa.db
soa.edn
soa.env
soa.wsdl


(To list the dumps that are part of the system default)

wls:/soasingle/serverConfig> listDumps()

dfw.samplingArchive
dms.ecidctx
dms.metrics
http.requests
jvm.classhistogram
jvm.flightRecording
jvm.threads
odl.activeLogConfig
odl.logs
odl.quicktrace
wls.image



Execute bpel.dispatcher:

wls:/soasingle/serverConfig> executeDump(name='bpel.dispatcher',appName='soa-infra',outputFile='<file>')

(There is no console output)

The output is a text file in the specified location. Sample

The report first lists the BPEL thread pools along with their configured sizes. It then breaks them down by statistics such as number of active threads, processed messages, errored messages, etc. Keep in mind that synchronous requests are handled by the default WLS Work Manager and therefore will not be reflected here. These pools handle callbacks (Invoke) and asynchronous requests (Engine).


Execute bpel.adt:

wls:/soasingle/serverConfig> executeDump(name='bpel.adt',appName='soa-infra',outputFile='<file>')

(There is no console output)

The output is a text file in the specified location. No sample is available at the moment; the report looks identical to bpel.apt, listing the asynchronous BPEL instances and their average delay in processing messages from the DLV_MESSAGE table.


Execute bpel.apt:

wls:/soasingle/serverConfig> executeDump(name='bpel.apt',appName='soa-infra',outputFile='<file>')

(There is no console output)

The output is a text file in the specified location. Sample

The report lists the deployed BPEL components and their avg processing times in seconds.




Adapter Diagnostic Dumps


In PS6 there are 3 new Adapter Diagnostic Dumps:
  • soa.adapter.connpool: Dumps the current adapter connection pool statistics
  • soa.adapter.ra: Dumps the adapter Connection Factory configuration parameters
  • soa.adapter.stats: Dumps the current DMS statistics for all deployed adapter instances


Execute soa.adapter.connpool:
wls:/soasingle/serverConfig> executeDump(name='soa.adapter.connpool',appName='soa-infra',outputFile='<file>')

(There is no console output)

The output is a text file in the specified location. Sample

The report groups the connection pool statistics by the adapter instance so you'll see a header including the instance name followed by the list of statistics.


Execute soa.adapter.ra:
wls:/soasingle/serverConfig> executeDump(name='soa.adapter.ra',appName='soa-infra',outputFile='<file>')

(There is no console output)

The output is a text file in the specified location. Sample

The report groups the Connection Factory configurations by deployed adapter so again you'll see the header with the Adapter type followed by the configuration parameters.


Execute soa.adapter.stats:
wls:/soasingle/serverConfig> executeDump(name='soa.adapter.stats',appName='soa-infra',outputFile='<file>')

(There is no console output)

The output is a text file in the specified location. Sample

The report lists statistics such as min time, max time and avg time along with number of processed / error instances. They are grouped by adapter instance.
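The min / max / avg figures in these reports are plain summary statistics. As a point of reference, a minimal sketch of how such a summary is derived from raw per-instance timings (the sample timing values below are invented for illustration):

```python
def summarize(timings_ms):
    """Return the min/max/avg statistics an adapter stats report shows."""
    return {
        "count": len(timings_ms),
        "min_ms": min(timings_ms),
        "max_ms": max(timings_ms),
        "avg_ms": sum(timings_ms) / float(len(timings_ms)),
    }

# Hypothetical per-message processing times for one AQ adapter instance
print(summarize([380, 421, 460]))
```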




Service Bus Diagnostic Dumps


In PS6 there are 3 new Service Bus Diagnostic Dumps:
  • OSB.derived-cache: Dumps the current statistics and contents of all Service Bus derived caches
  • OSB.jms-async-table: Dumps the current contents of the JMS async table which stores message correlation IDs, timestamps, etc
  • OSB.mq-async-table: Dumps the current contents of the MQ async table which stores similar information to the JMS async table


Execute OSB.derived-cache:
wls:/soasingle/serverConfig> executeDump(name='OSB.derived-cache',appName='OSB',outputFile='<file>')

(There is no console output)

The output is a text file in the specified location and the specific caches dumped are:

  • ArchiveClassLoader: Cache of dependency-aware archive class-loaders
  • ArchiveSummary: Cache of archive summaries
  • CodecFactory: Cache of codec factories
  • EffectiveWSDL: Cache of the EffectiveWSDL objects that are derived from the service or wsdl resources of Proxy or Business services
  • Flow_Info: Cache of flow info objects
  • LightweightEffectiveWSDL: Cache of the EffectiveWSDL objects that are derived from the service or wsdl resources of Proxy or Business services
  • MflExecutor: Cache of the MFL executors
  • RouterRuntime: Cache of the compiled router runtimes for proxy services
  • RuntimeEffectiveWSDL: Cache of the Session valid EffectiveWSDL objects that are derived from the service or wsdl resources of Proxy or Business services
  • RuntimeEffectiveWSPolicy: Cache of WS Policies for Proxy or Business services
  • SchemaTypeSystem: Type system cache for MFL, XS and WSDLs
  • ServiceAlertsStatisticInfo: Cache of service alert statistics for Proxy or Business services
  • ServiceInfo: Cache of compile service info for Proxy or Business services and WSDLs
  • Wsdl_Info: Cache of WSDL Info objects
  • WsPolicyMetadata: Cache of compiled WS Policy Metadata
  • XMLSchema_Info: XML Schema info cache for xml schema objects
  • XqueryExecutor: Cache of Xquery executors
  • XsltExecutor: Cache of XSLT executors
  • alsb.transports.ejb.bindingtype: Cache of EJB Binding Info for EJB Business Services
  • alsb.transports.jejb.business.bindingtype: Cache of JEJB Binding Info
  • alsb.transports.jejb.proxy.bindingtype: Cache of JEJB Binding Info



Execute OSB.jms-async-table:
wls:/soasingle/serverConfig> executeDump(name='OSB.jms-async-table',appName='OSB',outputFile='<file>')

(There is no console output)

The output is a text file in the specified location.


Execute OSB.mq-async-table:
wls:/soasingle/serverConfig> executeDump(name='OSB.mq-async-table',appName='OSB',outputFile='<file>')

(There is no console output)

The output is a text file in the specified location.




Mediator Diagnostic Dump


There is one new Diagnostic Dump for Mediator which dumps the content of the resequencer table to help in troubleshooting resequencer issues. To execute:

wls:/soasingle/serverConfig> executeDump(name='mediator.resequencer',appName='soa-infra',outputFile='<file>')

(There is no console output)

The output is a text file in the specified location.


Troubleshooting Example with the new Features


Most of the new Diagnostic Dumps report performance related information so we will use a performance example. Our composite called 'DiagProject' is failing to process messages at the expected rate. What might be the problem?

We can start at a number of places but here is a list of Diagnostic Dumps to collect up front:

wls:/soasingle/serverConfig> executeDump(name='bpel.dispatcher',appName='soa-infra',outputFile='<file>')

wls:/soasingle/serverConfig> executeDump(name='bpel.apt',appName='soa-infra',outputFile='<file>')

wls:/soasingle/serverConfig> executeDump(name='soa.adapter.connpool',appName='soa-infra',outputFile='<file>')

wls:/soasingle/serverConfig> executeDump(name='soa.adapter.stats',appName='soa-infra',outputFile='<file>')

wls:/soasingle/serverConfig> executeDump(name='jvm.threads',outputFile='<file>',args={'context' : 'true'})
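Rather than typing the five executeDump commands one by one, they can be wrapped in a short script. WLST scripts are Jython, so the structure below carries over directly; here the executeDump function is injected as a parameter so the helper can also be exercised outside a WLST session, and the output directory and file names are assumptions:

```python
# Sketch of a batch collector for the five dumps listed above. In a WLST
# session you would call collect_dumps(executeDump, '/tmp/diag').
DUMPS = [
    ("bpel.dispatcher", "soa-infra"),
    ("bpel.apt", "soa-infra"),
    ("soa.adapter.connpool", "soa-infra"),
    ("soa.adapter.stats", "soa-infra"),
]

def collect_dumps(execute_dump, out_dir):
    """Run each dump and return the list of output files written."""
    files = []
    for name, app in DUMPS:
        out = "%s/%s.txt" % (out_dir, name)
        execute_dump(name=name, appName=app, outputFile=out)
        files.append(out)
    # jvm.threads is a system-level dump: no appName, and we ask for
    # the thread context information as well
    out = "%s/jvm.threads.txt" % out_dir
    execute_dump(name="jvm.threads", outputFile=out, args={"context": "true"})
    files.append(out)
    return files

# Demo outside WLST: record the calls instead of executing them
calls = []
collect_dumps(lambda **kw: calls.append(kw), "/tmp/diag")
print(len(calls))
```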

So what do we have?
  • bpel.dispatcher shows that all of the BPEL thread pools are fine and running without a backlog, so the bottleneck is not a lack of available BPEL threads
  • bpel.apt clearly shows that the average execution time of the BPEL component is over 5 seconds, which is much longer than expected
  • soa.adapter.connpool shows that only one adapter, AQ, is running and that its connection pool is healthy
  • soa.adapter.stats again shows that the only adapter running is AQ and that its performance is quite good, with an average processing time of 421ms
  • jvm.threads may actually be the first thing you want to look at. The context information at the bottom makes it clear that every thread is running the BPELProcess1 component, indicating that it is taking by far the most time to complete. Looking a little closer at the 'activity_name' we can see that many threads are running an embedded activity, which is indeed where the delay is (Thread.sleep()).

So clearly the bottleneck is within the BPEL process itself and on further review you find that someone left a 5 second Wait activity in the process. Contrived to be sure but hopefully still somewhat helpful.


Tuesday Mar 12, 2013

Using the WLS Harvester and Console Extension with SOA Suite 11g

Contents

Overview

There are occasions when we want to see how metrics trend over a period of time, controlling both which metrics are collected and for how long. The combination of the WLDF (WebLogic Diagnostic Framework) Harvester and the WebLogic Console Extension offers this capability, and the purpose of this blog post is to describe the components and walk through a configuration all the way to data export.

What is the Harvester?
The Harvester is a component of WLDF that collects a specified metric or set of metrics at a designated interval. You configure the collection through the WLS Console; once configured, collection runs in the background at runtime and the results are persisted to the file system (by default) in binary format. From there they can be exported at any time to an XML file, as described in the section 'Exporting the Metrics'.

What is the WLS Console Extension?
The Console Extension is an add-on to the WLS Console that is packaged with the WLS installation. After installation it is accessible from the Console itself, and its purpose is to provide real time reporting on monitored system metrics, including those that are manually configured in WLDF. Below we'll see how to configure a view and what the display of the data looks like.



Installing the Console Extension

As mentioned, the Console Extension is included with the default WebLogic Server installation. We're going to copy the jar from the installation location to the domain:
  1. Find the file 'diagnostics-console-extension.jar' in 'WL_HOME\server\lib\console-ext'
  2. Copy the file to 'DOMAIN_HOME/console-ext'
  3. Restart the Admin Server
After restart login to the WLS Console and look for the 'WLDF Console Extension' tab at the top left:



                          





Now, it's a bit strange but when you click on the tab it will display a page that instructs you to enter the following URL in your browser: 'http://<host>:<port>/console/dashboard'. Here we simply open a new tab in the browser and enter the URL. The landing page looks like this:


                          





By default you can expand the 'Built-in Views' on the left and select one. We select 'JVM Runtime Heap' and then click on the green 'Start' icon to start displaying collected metrics on the heap. After a few minutes our page looks like this:


                          





You can see at the bottom of the chart that we're tracking 2 metrics, 'Heap Size Current' and 'Heap Free Current'. The sampling and display periods are configurable and we'll see below how to create our own chart. To stop the monitor simply click the 'Stop' icon at the top left and if you have more than one monitoring session you can click the icon for 'Stop All'.





Configuring the Collected Metrics

  1. Login to the WebLogic Console ('http://<host>:<port>')
  2. In the left menu expand 'Diagnostics'
  3. Select 'Diagnostic Modules'
  4. Select the Diagnostic Module which is 'Module-FMWDFW' by default
  5. Select the 'Collected Metrics' tab which brings you here:



                          





First notice that you can set the sampling period on this page. The period affects both the configured Collected Metrics modules and any configured WLDF Watches, which can be seen under the 'Watches and Notifications' tab. Here we want to create a new 'Collected Metrics' module, so select the 'New' button, which takes you to this screen:


                          





Accept the default value 'ServerRuntime' and click 'Next' for this screen:


                          





Here we're going to select the runtime mbean that we're interested in monitoring. For this example I've chosen 'soainfra_message_processing', which you can find by simply scrolling down the list. Click 'Next' and you land here:


                          





This screen shows the available attributes on the mbean that we can monitor. The attributes are cropped in the selection list but we'll see the full names by mousing over them. Here we're going to select 'requestProcessingTime_avg', 'requestProcessingTime_maxTime' and 'requestProcessingTime_minTime'. It should look like this:


                          





Click 'Next' for the following screen:


                          





Here we will have multiple selections for the actual mbean instances we want to monitor. Again they are cropped in the selection list but a mouseover reveals the full names. We select the one that contains 'bpel.type=soainfra_message_processing' as follows:


                          





Click 'Finish' and you're done. You end up back at the landing page for 'Collected Metrics' and you now have an 'enabled' module that is ready for activation.


                          





Here you just click 'Activate Changes' and you're collecting metrics.




Viewing the Collected Metrics

Steps:
  • Go to the WLDF Console landing page and make sure that the 'View List' tab is selected
  • On the left select 'My Views' so that it is highlighted
  • We're now going to create a new view so just click on the 'New View' icon (page with a plus sign) above 'Built-in Views'
  • Fill in the name of the view below 'My Views'
The screen should now look like this:


                          





You can see that there is only a chart title on the right. Now click on the 'Metric Browser' tab at the top where we'll populate the table. The page will start off with a lot of entries on the left but we'll be filtering them.


                          





We need to make some selections on the left:
  • Servers: Select the relevant server that you want to collect the metrics from. Here we have a single server domain
  • Select 'Collected Metrics Only'
Your left menu will now look like this:


NOTE: If you are doing this on a SOA server that has NOT processed any composite instances since the last restart, you may not see your Collected Metrics module appear. Simply run an instance of the composite to completion and try again. This is because the underlying DMS metric upon which it is based is not created until the first composite instance completes.


                          





There is an arrow just to the right of the chart name. Click it and select 'New Chart'. We now get a blank chart. From here we can simply drag the three attributes from the bottom left into the chart area and they will be added to the right pane at the bottom. The chart should now look like this:


                          





We're ready to start collecting some metrics. Click the green 'Start' icon and apply some load to the composite. In my case I modulate the wait time of the BPEL process and end up with a chart that looks like this:


                          





The beginning of the chart shows data points packed together all at the same level. This is from the background data collection that we configured in 'Collected Metrics' and there was no variable load.




Exporting the Metrics

Monitoring the data in the WLDF Console is great but there may be a need to export the data such that it can be manipulated. In order to do this we need to use WLST.

The metrics are stored in a binary file by default and this file is kept in 'DOMAIN_HOME/servers/<server>/data/store/diagnostics'. The file will have a name like 'WLS_DIAGNOSTICS0000000.DAT'. We need this for the WLST command.
  • To use this particular command the server needs to be stopped, because a running server holds a continuous lock on the DAT file
  • Start WLST from '<FMW_HOME>/oracle_common/common/bin/wlst.sh'
  • Command: exportDiagnosticData(logicalName="HarvestedDataArchive",exportFileName="<Path>/Harvester.xml", storeDir="<MW_HOME>/user_projects/domains/soasingle/servers/AdminServer/data/store/diagnostics")
  • The data is exported in XML format to the specified location
Explanation of command parameters:
  • logicalName: These are predefined values and in this case we need to use 'HarvestedDataArchive'. This persistence mechanism is used for other things so there are 2 additional options that we won't cover.
  • exportFileName: File you want the data exported to
  • storeDir: The location of the DAT file you want to export. WLS does offer customization of this location but that is beyond the scope of this post
The XML output is quite raw and looks like this:



                          





NOTE: The DAT file is cumulative, so if you start a Collected Metrics module, delete it and start a new one, the new information will be appended to the same DAT file. To start with a clean DAT file you'll need to stop the server and delete the existing one first, or use your own tooling to filter the entries of interest from the combined XML.
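Since the export is raw XML, your own filtering tool can be a few lines of script. A sketch, with the caveat that the <sample> element and its attributes below are an illustrative stand-in rather than the actual WLDF export schema; adapt the tag and attribute names to what your exported file really contains:

```python
import xml.etree.ElementTree as ET

# Sketch: filter the samples for one metric out of an exported archive.
# NOTE: the <sample> element and its attributes are illustrative only.
def samples_for(xml_text, metric):
    root = ET.fromstring(xml_text)
    return [(s.get("timestamp"), float(s.get("value")))
            for s in root.iter("sample") if s.get("name") == metric]

# Tiny illustrative stand-in for an exported Harvester.xml
demo = """
<archive>
  <sample name="requestProcessingTime_avg" timestamp="t1" value="5.1"/>
  <sample name="requestProcessingTime_maxTime" timestamp="t1" value="9.8"/>
  <sample name="requestProcessingTime_avg" timestamp="t2" value="5.3"/>
</archive>
"""
print(samples_for(demo, "requestProcessingTime_avg"))
```

The metric names mirror the attributes selected in the Collected Metrics configuration above.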


Friday Feb 15, 2013

Follow Up to Diagnostic Frameworks Presentation at the DOAG in Nuernberg

On November 22, Natascha and I had the opportunity to present about WLS/SOA Diagnostic Frameworks for Administrators and Developers at the DOAG (German Oracle Users Group) in Nuernberg. This post is a follow up in order to provide the material that was covered for your reference.

Thanks to everyone who listened to the session and for the interesting discussions and questions around Diagnostic Frameworks, RDA and Selective Tracing.

The link to the German material can be found here:

The English version of the presentation is available from the following link:

The viewlet for selective tracing can be downloaded from the following link:

If you have any feedback on diagnostic frameworks or tools, please let us know.

Monday Feb 11, 2013

Authorization Model in SOA Suite 11g

Figuring out how the authorization works in SOA Suite 11g between the WebLogic Console and Enterprise Manager can seem daunting. This blog post aims to clarify how the two parts work together and hopefully demonstrates that it is not as complicated as it may first appear.

In SOA Suite 11g there is one Authentication stack and 2 Authorization stacks:

  1. Authentication is handled by WebLogic Server and is based on the order and control flags set for the Authentication Providers in the Security Realm.
  2. Authorization is split between the Global Role definitions in WebLogic Server and the SOA Application Roles in Fusion Middleware Control (EM). WLS Roles govern the interactions in the WLS Console while the SOA Roles control permissions on SOA resources / activities. In most cases the users will need access to both.

Let's describe the authorization stacks independently:

In WLS there are Global Roles defined out of the box that apply to the WebLogic Console. For our purposes we will focus on the 'Admin' Global Role as it has a counterpart in EM and is representative of the other roles as well. In the standard domain this role has a single membership condition which is for the pre-configured Group 'Administrators'. This means that any user who is a member of a group called 'Administrators' will be granted the permissions of the 'Admin' Global Role in WLS. This is important because in order for a user to login to the WLS or EM consoles they must have permissions for at least one of the WLS Global Roles, either through a Group or individual association.

To access the WLS Roles:
  1. Login to the WLS Console (http://host:port/console)
  2. Select 'Security Realm' on the left
  3. By default there is a single realm called 'myrealm'. Select it.
  4. Select the 'Roles and Policies' tab
  5. Expand 'Global Roles' in the list
  6. Expand 'Roles'
  7. The 'Admin' Role is at the top. To view the expression select 'View Role Conditions' on the right

In the default configuration the conditions should look like this:





In SOA EM we find several Application Roles available for the soa-infra application, one of which is 'SOAAdmin'. Out of the box this role has a single member, the 'Administrators' Group from WLS. Permissions provided by this role include composite deployment and instance management within EM, among other things. Only Users and Groups may be associated with the Application Roles in EM. This means that a user can be granted the permissions of the WLS 'Admin' Global Role explicitly in WLS but will NOT automatically have composite deployment privileges unless they are a member of the WLS 'Administrators' Group or have been explicitly added to the 'SOAAdmin' role in EM.

To access the EM soa-infra Roles:
  1. Log in to EM (http://host:port/em)
  2. Expand 'SOA' on the left
  3. Right click on 'soa-infra'
  4. Select 'Security' and then 'Application Roles'
  5. Leave the text box blank and click the search arrow icon to the right
  6. You get the full list of Application Roles for the soa-infra application. 'SOAAdmin' is at the top
  7. Select 'SOAAdmin' and then click the 'Edit' button
  8. You can see the members of the role. By default there is only the 'Administrators' Group from WLS
Here's what it should look like:





The bridge between the two Authorization stacks is the system-jazn-data.xml file, which can be found under <DOMAIN_HOME>/config/fmwconfig. By default this file stores references to all of the application deployments and the associated EM roles. Here is the partial entry for soa-infra:

<application locale="en_US">
   <name>soa-infra</name>
   <app-roles>
      <app-role>
         <name>SOAAdmin</name>
         <display-name>SOA Admin Role</display-name>
         <description>SOA application admin role, has full privilege for performing any
         operations including security related</description>
         <guid>88293480FE7711E08F7531B0B4CEDB15</guid>
         <class>oracle.security.jps.service.policystore.ApplicationRole</class>
         <members>
            <member>
               <class>weblogic.security.principal.WLSGroupImpl</class>
               <name>Administrators</name>
            </member>
         </members>
      </app-role>
      ...

Here we're only looking at the first role associated with the soa-infra application, 'SOAAdmin'. We also see the 'members' element specifying that the WebLogic 'Administrators' Group is associated. The 'class' entry indicates the type of object 'Administrators' is. If we were to add a User to the role in EM we would then see an additional 'member' entry with a User class type.
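Because the role memberships live in plain XML, they can also be inspected programmatically. A minimal sketch that reads the member names of an app-role from a trimmed, well-formed fragment like the one above:

```python
import xml.etree.ElementTree as ET

def role_members(xml_text, role_name):
    """Return the member names of the given app-role."""
    root = ET.fromstring(xml_text)
    for role in root.iter("app-role"):
        if role.findtext("name") == role_name:
            return [m.findtext("name") for m in role.iter("member")]
    return []

# Trimmed, well-formed fragment modeled on the soa-infra entry above
fragment = """
<application>
  <name>soa-infra</name>
  <app-roles>
    <app-role>
      <name>SOAAdmin</name>
      <members>
        <member>
          <class>weblogic.security.principal.WLSGroupImpl</class>
          <name>Administrators</name>
        </member>
      </members>
    </app-role>
  </app-roles>
</application>
"""
print(role_members(fragment, "SOAAdmin"))
```

On a real domain you would point ET.parse at the system-jazn-data.xml file itself.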

Please note that in a clustered environment we highly recommend that this information be moved to either DB or LDAP persistence using OID. More information on changing the configured persistence type can be found here.

All of this works very well in the standard domain using the Default Authenticator (Embedded LDAP). The 'Administrators' Group is already there, it's associated with the WLS 'Admin' Global Role and has been added to the 'SOAAdmin' Application Role in EM. If you're using a different Authentication Provider, however, you must configure these connections yourself. Let's go into more detail using Active Directory as the generic example.

Let's assume that the AD Authentication Provider is configured in WLS, is at the top of the Authenticators list and has its Control Flag set to 'REQUIRED'. You can see the users in the WLS console but none of them can login to the console or EM as themselves. This is because they are not yet associated with a Global Role in WLS and there are two options:

  1. Create a Group in AD that these users will be added to and then add that Group to the WLS 'Admin' Global Role in the WLS Console. If you name the AD Group 'Administrators' then you will not have to add it to the Global Role conditions because it is already there.
  2. Add the users individually to the WLS 'Admin' Global Role in the WLS Console

This will give the AD 'admin' users the ability to login to the WLS Console and EM.

Now when these users attempt to deploy a composite from either JDeveloper or EM they get an error saying that they must have permissions of 'SOAAdmin' or 'SOAOperator'. This is because the users are not yet granted the permissions on the soa-infra application through a default or custom Role in EM. 'SOAAdmin' and 'SOAOperator' are examples of default Roles that are immediately available. There are 2 options:

  1. Add the AD Group to the existing 'SOAAdmin' or 'SOAOperator' Application Role in EM. If the AD Group was called 'Administrators' then they won't get the error in the first place because the supporting configuration is in place out of the box for the 'Administrators' Group.
  2. Add the AD users individually to the SOAAdmin or SOAOperator Application Role in EM

This action bridges the gap between the 2 Authorization stacks and allows the full range of expected permissions. The below diagram may help to visualize the relationship:





In releases prior to 11.1.1.6 the explicit addition of the Group or Users to the Application Roles in EM is not required as long as they are associated with the WLS Global Role, along with the standard 'Administrators' or 'Operators' Group (Admin and Operator roles respectively). We discourage you from taking advantage of this as 11.1.1.6 changes the behavior, requiring that the Groups / Users be explicitly added to the EM Roles as described here.

Wednesday Jan 23, 2013

How to Configure Multiple Authentication Providers for Oracle SOA Suite Human Workflow

If you need to configure SOA Human Workflow 11.1.1.4 or higher with multiple authentication providers, please follow Note 1520268.1, SOA 11g: How to Configure Multiple Authentication Providers for Oracle SOA Suite Human Workflow.

SOA Human Workflow uses the underlying Oracle Platform Security Services (OPSS) for the security layer. Until 11.1.1.4, OPSS had a limitation of only one authentication provider on WebLogic: it supported only users / groups from the very first authentication provider in the list. This restriction was removed from 11.1.1.4 onwards. Now you can configure multiple providers by setting the virtualize flag in the idstore instance.

Please follow the recommended note for step by step instructions on how to do this.



Wednesday Jan 02, 2013

JMS Step 8 - How to Read from an AQ JMS (Advanced Queueing JMS) from a BPEL Process

JMS Step 8 - How to Read from an AQ JMS (Advanced Queueing JMS) from a BPEL Process

Welcome to the last post in the series of JMS articles on using JMS queues from within SOA. The previous posts were:

  1. JMS Step 1 - How to Create a Simple JMS Queue in Weblogic Server 11g
  2. JMS Step 2 - Using the QueueSend.java Sample Program to Send a Message to a JMS Queue
  3. JMS Step 3 - Using the QueueReceive.java Sample Program to Read a Message from a JMS Queue
  4. JMS Step 4 - How to Create an 11g BPEL Process Which Writes a Message Based on an XML Schema to a JMS Queue
  5. JMS Step 5 - How to Create an 11g BPEL Process Which Reads a Message Based on an XML Schema from a JMS Queue
  6. JMS Step 6 - How to Set Up an AQ JMS (Advanced Queueing JMS) for SOA Purposes
  7. JMS Step 7 - How to Write to an AQ JMS (Advanced Queueing JMS) Queue from a BPEL Process

This example demonstrates how to read a simple message from an Oracle AQ via the WebLogic AQ JMS functionality from a BPEL process and a JMS adapter. It is part of a step-by-step series of samples. If you have not yet reviewed the previous posts, please do so first, as this one references objects created there.

1. Recap and Prerequisites

In the last two examples we created an Oracle Advanced Queue (AQ) and some related JMS objects in WebLogic Server, which allow us to access it via JMS. Here are the objects which were created, with their names and JNDI names:

Database Objects

Name            Type
AQJMSUSER       Database User
MyQueueTable    Advanced Queue (AQ) Table
UserQueue       Advanced Queue

WebLogic Server Objects

Object Name                           Type                                    JNDI Name
aqjmsuserDataSource                   Data Source                             jdbc/aqjmsuserDataSource
AqJmsModule                           JMS System Module
AqJmsForeignServer                    JMS Foreign Server
AqJmsForeignServerConnectionFactory   JMS Foreign Server Connection Factory   AqJmsForeignServerConnectionFactory
AqJmsForeignDestination               AQ JMS Foreign Destination              queue/USERQUEUE
eis/aqjms/UserQueue                   Connection Pool                         eis/aqjms/UserQueue

In the example JMS Step 7 - How To Write To an AQ JMS Queue From a BPEL Process we wrote a simple message to that queue. In this example, we will create a composite with a BPEL process which reads the same message from the AQ JMS queue using a JMS adapter.

2. Create a BPEL Composite with a JMS Adapter Partner Link

This step requires that you have a valid Application Server Connection defined in JDeveloper, pointing to the application server on which you created the JMS Queue and Connection Factory. You can create this connection in JDeveloper under the Application Server Navigator. Give it any name and be sure to test the connection before completing it.

This sample will read a simple XML message from the AQ JMS queue via the JMS adapter, based on the following XSD file, which consists of a single string element. A message based on this XSD was written to the queue in the previous example.

stringPayload.xsd

<?xml version="1.0" encoding="windows-1252" ?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
               xmlns="http://www.example.org"
               targetNamespace="http://www.example.org"
               elementFormDefault="qualified">
 <xsd:element name="exampleElement" type="xsd:string">
 </xsd:element>
</xsd:schema>
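A message that conforms to this schema is just a single qualified element. As an illustration, this is how such a payload could be built with Python's ElementTree (the text content is arbitrary):

```python
import xml.etree.ElementTree as ET

# The schema above defines one qualified string element in the
# http://www.example.org namespace.
NS = "http://www.example.org"
elem = ET.Element("{%s}exampleElement" % NS)
elem.text = "Hello from AQ JMS"
payload = ET.tostring(elem)
print(payload)
```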

The following steps are all executed in JDeveloper. The SOA project will be created inside a JDeveloper Application. If you do not already have an application to contain the project, you can create a new one via File > New > General > Generic Application. Give the application any name, for example JMSTests, and, when prompted for a project name and type, call the project JmsAdapterReadAqJms and select SOA as the project technology type. If you already have an application, continue below.

Create a SOA Project

Create a new project and select SOA Tier > SOA Project as its type. Name it JmsAdapterReadAqJms . When prompted for the composite type, choose Empty Composite.

Create a JMS Adapter Partner Link

In the composite editor, drag a JMS adapter over from the Component Palette to the left-hand swim lane, under Exposed Services.

This will start the JMS Adapter Configuration Wizard. Use the following entries:

Service Name: JmsAdapterRead
Oracle Enterprise Messaging Service (OEMS): Oracle Advanced Queueing
AppServer Connection: Use an existing application server connection pointing to the WebLogic server on which the connection factory created earlier is located. You can use the “+” button to create a connection directly from the wizard, if you do not already have one.

Adapter Interface > Interface: Define from operation and schema (specified later)

Operation Type: Consume Message
Operation Name: Consume_message

Consume Operation Parameters

Destination Name: Wait for the list to populate. (Only foreign servers are listed here, because Oracle Advanced Queuing was selected earlier, in step 3.) Select the foreign server destination created earlier, AqJmsForeignDestination (queue). This will automatically populate the Destination Name field with the name of the foreign destination, queue/USERQUEUE.

JNDI Name: The JNDI name to use for the JMS connection. This is the JNDI name of the connection pool created in the WebLogic Server. JDeveloper does not verify the value entered here. If you enter a wrong value, the JMS adapter won’t find the queue and you will get an error message at runtime. In our example, this is the value eis/aqjms/UserQueue .

Messages

URL: We will use the XSD file created in one of the previous examples (the JmsAdapterWriteSchema or JmsAdapterWriteAqJms project) to define the format of the incoming message payload and, at the same time, demonstrate how to import an existing XSD file into a JDeveloper project.

Press the magnifying glass icon to search for schema files. In the Type Chooser, press the Import Schema File button.


In the Import Schema File dialog, press the magnifying glass next to URL to browse for the schema. Navigate to the xsd directory of the JmsAdapterWriteSchema or JmsAdapterWriteAqJms project and select the stringPayload.xsd file.

Check the “Copy to Project” checkbox, press OK and confirm the following Localize Files popup.

Now that the XSD file has been copied to the local project, it can be selected from the project’s schema files. Expand Project Schema Files > stringPayload.xsd and select exampleElement : string .

Press Next and Finish, which will complete the JMS Adapter configuration.
Save the project.
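The wizard records these choices in a JCA file in the project (typically named something like JmsAdapterRead_jms.jca). A rough sketch of its contents, assuming the names used in this example (exact property and class names may differ between adapter versions):

```xml
<adapter-config name="JmsAdapterRead" adapter="JMS Adapter"
                xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata">
  <!-- JNDI location of the connection pool configured in WebLogic Server -->
  <connection-factory location="eis/aqjms/UserQueue"/>
  <endpoint-activation portType="Consume_message_ptt" operation="Consume_message">
    <activation-spec className="oracle.tip.adapter.jms.inbound.JmsConsumeActivationSpec">
      <!-- JNDI name of the foreign destination -->
      <property name="DestinationName" value="queue/USERQUEUE"/>
      <property name="PayloadType" value="TextMessage"/>
      <property name="UseMessageListener" value="false"/>
    </activation-spec>
  </endpoint-activation>
</adapter-config>
```

If no messages are picked up at runtime, this file is a good first place to check that DestinationName and the connection-factory location match the JNDI names configured in WebLogic.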

Create a BPEL Component

Drag a BPEL Process from the Component Palette (Service Components) to the Components section of the composite designer. Name it JmsAdapterReadAqJms  and select Template: Define Service Later and press OK.

Wire the JMS Adapter to the BPEL Component

Now wire the JMS adapter to the BPEL process, by dragging the arrow from the adapter to the BPEL process. A Transaction Properties popup will be displayed. Set the delivery mode to async.persist.

This completes the steps at the composite level.


3. Complete the BPEL Process Design

Invoke the BPEL Flow via the JMS Adapter

Open the BPEL component by double-clicking it in the design view of the composite.xml, or open it from the project navigator by selecting the JmsAdapterReadAqJms.bpel file. This will display the BPEL process in the design view. You should see the JmsAdapterRead partner link in the left-hand swim lane.

Drag a Receive activity onto the BPEL flow diagram, then drag a wire (left-hand yellow arrow) from it to the JMS adapter. This will open the Receive activity editor. Auto-generate the variable by pressing the green “+” button and check the “Create Instance” checkbox. This will result in a BPEL instance being created when a new JMS message is received.

At this point the composite can be deployed and it will pick up any messages from the AQ JMS queue. This is very rudimentary, but is sufficient for our demonstration purposes as we will see in the next step.

As with the previous examples, you can extend the BPEL process to do something useful with the message, such as pass it to another web service, write it to a file using a file adapter or to a database via a database adapter. Also see JMS Step 5 - How To Create an 11g BPEL Process Which Reads a Message Based on an XML Schema from a JMS Queue  for an example of how to add a Java Embedding activity to the process to print the message to standard output.


4. Test the Composite

Execute an instance of the previous example JmsAdapterWriteAqJms, which will write a message to the AQ queue UserQueue, if you have not yet done so. That example also explains how to view and monitor the queue from SQL*Plus or JDeveloper. Query the queue to confirm that a message is present.

Now compile and deploy the composite JmsAdapterReadAqJms. It will immediately begin dequeuing messages from the AQ. Requery the queue to confirm that the message has been dequeued. Then, in Enterprise Manager 11g Fusion Middleware Control (EM), navigate to SOA > soa-infra (soa_server1) > default (or wherever you deployed your composite) and click on JmsAdapterReadAqJms [1.0]. You should see an instance ID listed under Recent Instances. Select it to view its flow trace, then select the JmsAdapterReadAqJms BPEL component.

This should display the Audit Trail, including the successful Receive activity. Click on “View XML Document” to see the dequeued message.

This concludes this example and the SOA/JMS series.

Please make use of the comments section for your feedback and questions. If there is enough interest, I will plan to do a series of webcasts to go over and demonstrate the samples shown here.


Thank you

John-Brown Evans
Senior Principal Support Engineer
Oracle Technology Proactive Support Delivery

Wednesday Dec 19, 2012

JMS Step 7 - How to Write to an AQ JMS (Advanced Queueing JMS) Queue from a BPEL Process


This post continues the series of JMS articles which demonstrate how to use JMS queues in a SOA context. The previous posts were:

  1. JMS Step 1 - How to Create a Simple JMS Queue in Weblogic Server 11g
  2. JMS Step 2 - Using the QueueSend.java Sample Program to Send a Message to a JMS Queue
  3. JMS Step 3 - Using the QueueReceive.java Sample Program to Read a Message from a JMS Queue
  4. JMS Step 4 - How to Create an 11g BPEL Process Which Writes a Message Based on an XML Schema to a JMS Queue
  5. JMS Step 5 - How to Create an 11g BPEL Process Which Reads a Message Based on an XML Schema from a JMS Queue
  6. JMS Step 6 - How to Set Up an AQ JMS (Advanced Queueing JMS) for SOA Purposes

This example demonstrates how to write a simple message to an Oracle AQ via the WebLogic AQ JMS functionality, from a BPEL process using a JMS adapter. If you have not yet reviewed the previous posts, please do so first, especially the JMS Step 6 post, as this one references objects created there.

1. Recap and Prerequisites

In the previous example, we created an Oracle Advanced Queue (AQ) and some related JMS objects in WebLogic Server to be able to access it via JMS. Here are the objects which were created and their names and JNDI names:

Database Objects

Name          Type
AQJMSUSER     Database User
MyQueueTable  Advanced Queue (AQ) Table
UserQueue     Advanced Queue

WebLogic Server Objects

Object Name                           Type                                    JNDI Name
aqjmsuserDataSource                   Data Source                             jdbc/aqjmsuserDataSource
AqJmsModule                           JMS System Module
AqJmsForeignServer                    JMS Foreign Server
AqJmsForeignServerConnectionFactory   JMS Foreign Server Connection Factory   AqJmsForeignServerConnectionFactory
AqJmsForeignDestination               AQ JMS Foreign Destination              queue/USERQUEUE
eis/aqjms/UserQueue                   Connection Pool                         eis/aqjms/UserQueue

2. Create a BPEL Composite with a JMS Adapter Partner Link

This step requires that you have a valid Application Server Connection defined in JDeveloper, pointing to the application server on which you created the JMS Queue and Connection Factory. You can create this connection in JDeveloper under the Application Server Navigator. Give it any name and be sure to test the connection before completing it.

This sample will write a simple XML message to the AQ JMS queue via the JMS adapter, based on the following XSD file, which consists of a single string element:

stringPayload.xsd

<?xml version="1.0" encoding="windows-1252" ?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
               xmlns="http://www.example.org"
               targetNamespace="http://www.example.org"
               elementFormDefault="qualified">
 <xsd:element name="exampleElement" type="xsd:string">
 </xsd:element>
</xsd:schema>

The following steps are all executed in JDeveloper. The SOA project will be created inside a JDeveloper Application. If you do not already have an application to contain the project, you can create a new one via File > New > General > Generic Application. Give the application any name, for example JMSTests and, when prompted for a project name and type, call the project   JmsAdapterWriteAqJms  and select SOA as the project technology type. If you already have an application, continue below.

Create a SOA Project

Create a new project and select SOA Tier > SOA Project as its type. Name it JmsAdapterWriteAqJms . When prompted for the composite type, choose Composite With BPEL Process.

When prompted for the BPEL Process, name it JmsAdapterWriteAqJms too and choose Synchronous BPEL Process as the template.
This will create a composite with a BPEL process and an exposed SOAP service. Double-click the BPEL process to open and begin editing it. You should see a simple BPEL process with a Receive and Reply activity. As we created a default process without an XML schema, the input and output variables are simple strings.

Create an XSD File

An XSD file is required later to define the message format to be passed to the JMS adapter. In this step, we create a simple XSD file, containing a string variable and add it to the project.

First select the xsd item in the left-hand navigation tree to ensure that the XSD file is created under that item.

Select File > New > General > XML and choose XML Schema.

Call it stringPayload.xsd and, when the editor opens, select the Source view. Then replace the contents with the stringPayload.xsd example above and save the file. You should see it under the xsd item in the navigation tree.

Create a JMS Adapter Partner Link

We will create the JMS adapter as a service at the composite level. If it is not already open, double-click the composite.xml file in the navigator to open it.

From the Component Palette, drag a JMS adapter over onto the right-hand swim lane, under External References.

This will start the JMS Adapter Configuration Wizard. Use the following entries:

Service Name: JmsAdapterWrite

Oracle Enterprise Messaging Service (OEMS): Oracle Advanced Queueing

AppServer Connection: Use an existing application server connection pointing to the WebLogic server on which the connection factory created earlier is located. You can use the “+” button to create a connection directly from the wizard, if you do not already have one.

Adapter Interface > Interface: Define from operation and schema (specified later)

Operation Type: Produce Message
Operation Name: Produce_message

Produce Operation Parameters

Destination Name: Wait for the list to populate. (Only foreign servers are listed here, because Oracle Advanced Queuing was selected earlier, in step 3.) Select the foreign server destination created earlier, AqJmsForeignDestination (queue). This will automatically populate the Destination Name field with the name of the foreign destination, queue/USERQUEUE.

JNDI Name: The JNDI name to use for the JMS connection. This is the JNDI name of the connection pool created in the WebLogic Server. JDeveloper does not verify the value entered here. If you enter a wrong value, the JMS adapter won’t find the queue and you will get an error message at runtime. In our example, this is the value eis/aqjms/UserQueue.

Messages

URL: We will use the XSD file we created earlier, stringPayload.xsd, to define the message format for the JMS adapter. Press the magnifying glass icon to search for schema files. Expand Project Schema Files > stringPayload.xsd and select exampleElement : string.

Press Next and Finish, which will complete the JMS Adapter configuration.
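The wizard stores these outbound settings in a JCA file in the project (typically named something like JmsAdapterWrite_jms.jca). A rough sketch of its contents, assuming this example's names (treat the property and class names as indicative, not authoritative; they can vary by adapter version):

```xml
<adapter-config name="JmsAdapterWrite" adapter="JMS Adapter"
                xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata">
  <!-- JNDI location of the connection pool configured in WebLogic Server -->
  <connection-factory location="eis/aqjms/UserQueue"/>
  <endpoint-interaction portType="Produce_message_ptt" operation="Produce_message">
    <interaction-spec className="oracle.tip.adapter.jms.outbound.JmsProduceInteractionSpec">
      <!-- JNDI name of the foreign destination -->
      <property name="DestinationName" value="queue/USERQUEUE"/>
      <property name="PayloadType" value="TextMessage"/>
      <property name="DeliveryMode" value="Persistent"/>
    </interaction-spec>
  </endpoint-interaction>
</adapter-config>
```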

Wire the BPEL Component to the JMS Adapter

In this step, we link the BPEL process/component to the JMS adapter. From the composite.xml editor, drag the right-arrow icon from the BPEL process to the JMS adapter’s in-arrow.

  This completes the steps at the composite level.

3. Complete the BPEL Process Design

Invoke the JMS Adapter

Open the BPEL component by double-clicking it in the design view of the composite.xml. This will display the BPEL process in the design view. You should see the JmsAdapterWrite partner link under one of the two swim lanes. We want it in the right-hand swim lane. If JDeveloper displays it in the left-hand lane, right-click it and choose Display > Move To Opposite Swim Lane.

An Invoke activity is required in order to invoke the JMS adapter. Drag an Invoke activity between the Receive and Reply activities. Drag the right-hand arrow from the Invoke activity to the JMS adapter partner link. This will open the Invoke editor. The correct default values are entered automatically and are fine for our purposes. We only need to define the input variable to use for the JMS adapter. By pressing the green “+” symbol, a variable of the correct type can be auto-generated, for example with the name Invoke1_Produce_Message_InputVariable.

Press OK after creating the variable.

Assign Variables

Drag an Assign activity between the Receive and Invoke activities. We will simply copy the input variable to the JMS adapter’s input variable and, so that the process has an output to display, also copy it to the process’s output variable.

Double-click the Assign activity and create two Copy rules:

for the first, drag Variables > inputVariable > payload > client:process > client:input_string to Invoke1_Produce_Message_InputVariable > body > ns2:exampleElement

for the second, drag the same input variable to outputVariable > payload > client:processResponse > client:result

This will create two copy rules.
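In the .bpel source, the assign generated by these two rules will look roughly like the following sketch (the client and ns2 namespace prefixes and the variable names are the wizard defaults assumed in this example):

```xml
<assign name="Assign1">
  <!-- Rule 1: copy the process input string to the JMS adapter payload -->
  <copy>
    <from variable="inputVariable" part="payload"
          query="/client:process/client:input_string"/>
    <to variable="Invoke1_Produce_Message_InputVariable" part="body"
        query="/ns2:exampleElement"/>
  </copy>
  <!-- Rule 2: echo the same string back in the process response -->
  <copy>
    <from variable="inputVariable" part="payload"
          query="/client:process/client:input_string"/>
    <to variable="outputVariable" part="payload"
        query="/client:processResponse/client:result"/>
  </copy>
</assign>
```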

Press OK.

This completes the BPEL and Composite design.

4. Compile and Deploy the Composite

Compile the process by pressing the Make or Rebuild icons or by right-clicking the project name in the navigator and selecting Make... or Rebuild...

If the compilation is successful, deploy it to the SOA server connection defined earlier. (Right-click the project name in the navigator, select Deploy to Application Server, choose the application server connection, choose the partition on the server (usually default) and press Finish.) You should see the message

----  Deployment finished.  ----


in the Deployment frame, if the deployment was successful.

5. Test the Composite

Execute a Test Instance

In a browser, log in to the Enterprise Manager 11g Fusion Middleware Control (EM) for your SOA installation. Navigate to SOA > soa-infra (soa_server1) > default (or wherever you deployed your composite) and click on  JmsAdapterWriteAqJms [1.0] , then press the Test button. Enter any string into the text input field, for example “Test message from JmsAdapterWriteAqJms” then press Test Web Service.

If the instance is successful, you should see the same text you entered in the Response payload frame.

Monitor the Advanced Queue

The test message will be written to the advanced queue created at the top of this sample. To confirm it, log in to the database as AQJMSUSER and query the MYQUEUETABLE database table. For example, from a shell window with SQL*Plus:

sqlplus aqjmsuser/aqjmsuser

SQL> SELECT user_data FROM myqueuetable;

which will display the message contents.
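The USER_DATA column holds the whole JMS message object. If the payload type is the default SYS.AQ$_JMS_TEXT_MESSAGE, you can select just the text attribute directly; a sketch (note the table alias, which Oracle requires when dereferencing object attributes):

```sql
-- Extract only the text payload from each queued JMS text message
SELECT qt.user_data.text_vc FROM myqueuetable qt;
```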

Similarly, you can use the JDeveloper Database Navigator to view the contents. Use a database connection to the AQJMSUSER and in the navigator, expand Queues Tables and select MYQUEUETABLE. Select the Data tab and scroll to the USER_DATA column to view its contents.

This concludes this example.

The following post will be the last one in this series. In it, we will learn how to read the message we just wrote using a BPEL process and AQ JMS.

Best regards
John-Brown Evans
Oracle Technology Proactive Support Delivery

About

This is the official blog of the SOA Proactive Support Team. Here we will provide information on our activities, publications, product related information and more. Additionally we look forward to your feedback to improve what we do.
