Friday Jan 23, 2015

Now Available and Free to Customers: Oracle OpenWorld 2014 Top Sessions

Check out all available middleware-related Oracle OpenWorld sessions. Featured sessions include:

· Processing Large Files with Oracle Managed File Transfer and Oracle SOA Suite

Dave Berry, from the Oracle SOA group, explores various use cases for solving the age-old problem of processing large files, including using standard Oracle Managed File Transfer compression to reduce the upload time for the end user; using Oracle Managed File Transfer to pass a directory to an Oracle SOA Suite application to process files individually; and, finally, using an Oracle SOA Suite application to debatch very large files delivered from Oracle Managed File Transfer.


· SOA Suite Cloud Service: Tame the Cloud Integration Beast

Oracle's world-class service bus will be available on Oracle Cloud. In this presentation, Oracle Development Director Rajan Modi and Oracle Product Manager Scott Haaland describe how Oracle Service Bus is automatically installed and configured by Oracle's cloud operations infrastructure as soon as you subscribe to the service, and demonstrate how to use Oracle Service Bus in the cloud. You will also find out how you can quickly provision an Oracle Service Bus infrastructure and get to market faster with your Oracle Service Bus implementation projects. Now you can easily integrate your cloud applications with a world-class service bus in the cloud.


· Service Integration Product Strategy: Oracle SOA Suite 12c, the Cloud, and API Management

In this headliner Oracle integration session, senior SOA product managers Vikas Anand, Simone Geib, and Peter Belknap share major enhancements recently released in Oracle SOA Suite 12c as well as exciting new integration initiatives currently under way at Oracle. If you have been wondering how to simplify integration as integration channels expand beyond on-premises to include the cloud, mobile, and the Internet of Things, then hear from Oracle product management about what's new and what's next. Attend this session to get the integration big picture and join in the collaborative Q&A discussion.


· Web Services and SOA Integration Options for Oracle E-Business Suite

Rajesh Ghosh, Director, ATG Development, presents this deep dive into a subset of the web services and SOA-related integration options available to Oracle E-Business Suite systems integrators. He offers a technical look at Oracle E-Business Suite Integrated SOA Gateway, Oracle SOA Suite, Oracle Application Adapters for Data Integration for Oracle E-Business Suite, and other web services options for integrating Oracle E-Business Suite with other applications. Systems integrators and developers will get an overview of the latest integration capabilities and technologies available out of the box with Oracle E-Business Suite and possibly a sneak preview of upcoming functionality and features.


· Introducing Oracle Integration Cloud Service: Simplify Integration to Cloud and Mobile

Axel Allgeier, Software Development VP, Vikas Anand, Product Management Senior Director, and Herb Stiel, Product Development VP, describe the core features of Oracle Integration Cloud Service, a complete iPaaS solution for dramatically simplifying and accelerating cloud-to-cloud and cloud-to-on-premises integrations.


· Oracle Fusion Middleware Upgrade: Best Practices and Strategy

Learn about the latest features and improvements in Oracle Fusion Middleware from Michael Rubino (Senior Director of Software Development) and Renga Rengarajan (Director of Product Management and Architecture), who describe the upgrade process from Oracle Fusion Middleware 11g to 12c as well as from Oracle Fusion Middleware 10g to 11g. In this session, you'll learn how to use Oracle Fusion Middleware Upgrade Assistant and Oracle Fusion Middleware Patch Set Assistant.

Wednesday Jan 21, 2015

Latest and Recommended Bundle Patches for SOA 11.1.1.7 and SOA 12.1.3

The latest and recommended Bundle Patches for SOA 11.1.1.7 and SOA 12.1.3 are as follows:

SOA Version | Bundle Patch Name                   | Bundle Patch Number | Note
11.1.1.7    | PS6 Bundle Patch 6 (11.1.1.7.6)     | Patch 19953598      |
12.1.3      | 12.1.3 Bundle Patch 1 (12.1.3.0.1)  | Patch 19707784      | See below

For 12.1.3: Please review Document 1962904.1 before applying this Bundle Patch. The patch readme suggests applying Patch 20333237 before applying the Bundle Patch; that patch number is incorrect, and the correct number is 20163149.

For more information, please refer to Note 1485949.1, SOA 11g: Bundle Patch Reference.

Monday Jan 12, 2015

Malware sites offering Oracle 'patches'

Warning

It has come to our attention that there are non-Oracle sites offering Oracle 'fixes' for genuine Oracle error messages.

You probably already don't need to be told, however:

Please do not download these fixes, as:

  • They are not authorized by Oracle in any way, and
  • They are very likely to be dangerous to your system

If you do encounter one of these sites, please create an SR and we will rectify the situation.

Friday Jan 09, 2015

New SOA 12c Oracle Cloud Adapter for RightNow

Oracle Cloud Adapter for Oracle RightNow Cloud Service has been released for Oracle SOA 12c (12.1.3). The adapter supports integration with RightNow Cloud Service via the Connect Web Services for SOAP API. It enables performing the following operations on the RightNow instance:

1. CRUD (create/get/update/destroy)

2. Query (ROQL)

3. Batch of CRUD/Query Operations

The adapter is certified to work with the Oracle SOA Suite and Oracle BPM Suite components including SOA composite applications and Oracle Service Bus.

You can find more information at Oracle Fusion Middleware 12c 12.1.3 Cloud Adapter for RightNow.

Wednesday Oct 15, 2014

SOA 12c Upgrade Videos

Our Curriculum Development team has created a series of short videos related to SOA 12c upgrade. This is a fast and easy way to get familiar with the 12c upgrade process and the steps involved.

Videos can be found in our SOA 12c document library at http://docs.oracle.com/middleware/1213/cross/upgrade_videos.htm

Monday Feb 24, 2014

Introduction to bpel.rs and bpel.sps Diagnostic Dumps in PS6

There are a couple of new BPEL Diagnostic Dumps that haven't been discussed previously and offer some interesting value. These dumps are bpel.rs and bpel.sps. Here I'll describe what they collect and how to use them. Note that these dumps are available starting in SOA Suite 11.1.1.7 (PS6).


As with all diagnostic dumps, these can be executed from WLST and this is probably the most convenient way to do so.

Steps:
  1. Navigate to <MIDDLEWARE_HOME>/oracle_common/common/bin and execute 'wlst.sh' (or .cmd).
  2. Connect to a server that is running SOA Suite with 'connect('user','pwd','t3://host:port')'
  3. Execute the listDumps command to view the available SOA dumps, 'listDumps(appName='soa-infra')'. You should see bpel.sps and bpel.rs in the list.



The data collected by these dumps may seem familiar, especially if you've used DMS, but they do offer a unique capability in that you can specify a duration over which the data is collected. This differs from other SOA dumps which provide aggregate data from when the component was deployed or DMS which provides aggregate data from when the server was started. First I'll describe the data they collect.


bpel.sps provides aggregate performance statistics for synchronous BPEL processes, such as min, max, and average processing time. By default the results are provided in a table format, organized by composite process.


bpel.rs provides request-level statistics (min, max, and avg), broken down by steps in the BPEL engine. If you have a composite with a BPEL process and you collect this dump over a duration when there is load, you'll get XML output that lists each step the BPEL engine took to process each request, with statistics for every step. There is only one entry per process/step, and a 'count' attribute indicates how many times that step was run. Since it lists every step in the process, this is a convenient way to see the performance of every activity in the process.


The most unique aspect of these dumps is the ability to specify a time range in which to collect the data. This capability is coupled with the BPEL property statsLastN in a mutually exclusive manner that can be confusing, so I'll try to explain it.


If statsLastN is configured for the BPEL engine then you cannot use the duration arguments with these dumps. statsLastN is configured in Fusion Middleware Control under BPEL Properties and then 'More Properties' and specifies the number of BPEL instances to keep statistics on for each deployed process. When enabled, these dumps will provide only the statistics for those instances. So if you have statsLastN set to '10', when you run either of these dumps you will see the statistics for the 10 most recent instances.


If statsLastN is disabled then you must provide the duration argument and optionally the buffer size. The command looks like this:

executeDump(name='bpel.sps', appName='soa-infra', outputFile='/home/oracle/tmp/BPEL_SPS.txt', args={'duration':'10', 'buffer':'1000'})

The duration is in seconds, and the buffer specifies how many entries to store.

Once started in this way, the dump will run for 10 seconds and then provide the output. If there was no load, the statistics will be empty; if statsLastN is enabled, you will receive an error.
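To keep the two modes straight, here's a tiny plain-Python helper (a sketch, not WLST; the function name is mine) that builds the args dict you would pass to executeDump:

```python
def bpel_dump_args(stats_last_n_enabled, duration=None, buffer=None):
    """Build the args dict for bpel.sps / bpel.rs, enforcing the
    statsLastN vs. duration exclusivity described above."""
    if stats_last_n_enabled:
        if duration is not None:
            raise ValueError("duration cannot be used while statsLastN is set")
        return {}  # the dump returns stats for the last N instances
    if duration is None:
        raise ValueError("duration is required when statsLastN is disabled")
    args = {"duration": str(duration)}
    if buffer is not None:
        args["buffer"] = str(buffer)
    return args

print(bpel_dump_args(False, duration=10, buffer=1000))
```

The result matches the args shown in the executeDump command above.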


The output of bpel.sps is self-explanatory, as it's just a simple table, but bpel.rs is more complicated, so I wanted to offer a sample:


<stats key="TestHarnessProject:L1SyncBPEL:main:receiveInput:105" min="0" max="1" average="0.2" count="30">
<stats key="monitor-send-activity-data" min="0" max="0" average="0.0" count="60">
</stats>
<stats key="sensor-send-variable-data" min="0" max="0" average="0.0" count="30">
</stats>
<stats key="sensor-send-activity-data" min="0" max="0" average="0.0" count="60">
</stats>
</stats>
<stats key="TestHarnessProject:L1SyncBPEL:main:replyOutput:387" min="0" max="1" average="0.33" count="30">
<stats key="monitor-send-activity-data" min="0" max="0" average="0.0" count="60">
</stats>
<stats key="sensor-send-activity-data" min="0" max="0" average="0.0" count="60">
</stats>
</stats>
<stats key="TestHarnessProject:L2SyncBPEL:main:If1:Sequence1:L2SyncBPEL_FileBranch_Assign:99" min="0" max="5" average="0.93" count="30">
<stats key="sensor-send-variable-data" min="0" max="0" average="0.0" count="120">
</stats>
<stats key="sensor-send-activity-data" min="0" max="1" average="0.01" count="60">
</stats>
<stats key="monitor-send-activity-data" min="0" max="0" average="0.0" count="60">
</stats>
</stats>
<stats key="TestHarnessProject:L2SyncBPEL:main:If1:If1:93" min="0" max="1" average="0.06" count="30">
</stats>
<stats key="TestHarnessProject:L1SyncBPEL:main:If1:elseif:Sequence4:L1SyncBPEL_L2SyncBPELBranch_Embedded:243" min="5001" max="5006" average="5002.46" count="30">
<stats key="sensor-send-activity-data" min="0" max="0" average="0.0" count="60">
</stats>
<stats key="monitor-send-activity-data" min="0" max="0" average="0.0" count="60">
</stats>
</stats>
<stats key="TestHarnessProject:L1SyncBPEL:main:If1:elseif:Sequence4:L1SyncBPEL_L2SyncBPELBranch_Assign2:267" min="1" max="2" average="1.06" count="30">
<stats key="monitor-send-activity-data" min="0" max="0" average="0.0" count="60">
</stats>
<stats key="sensor-send-variable-data" min="0" max="0" average="0.0" count="120">
</stats>
<stats key="sensor-send-activity-data" min="0" max="1" average="0.01" count="60">
</stats>
</stats>
<stats key="TestHarnessProject:L1SyncBPEL:main:L1SyncBPEL_Assign1:106" min="0" max="1" average="0.33" count="30">
<stats key="sensor-send-activity-data" min="0" max="1" average="0.01" count="60">
</stats>
<stats key="sensor-send-variable-data" min="0" max="0" average="0.0" count="30">
</stats>
<stats key="monitor-send-activity-data" min="0" max="0" average="0.0" count="60">
</stats>
</stats>


Here you can see the individual entries for the 'TestHarnessProject' BPEL activities. Every time these activities are run, the stats get updated. If you have a lot of BPEL processes with a lot of activities, this output will get very long, so it's good to know the names of what you're interested in, whether it's the project or even the specific activities.
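When the output gets long, it can also help to parse it and rank the top-level steps by average time. A minimal plain-Python sketch (the project and step keys below are made up; note that the dump is a sequence of <stats> elements with no single root, so it needs a wrapper before parsing):

```python
import xml.etree.ElementTree as ET

def slowest_steps(dump_text, top=3):
    """Rank top-level BPEL steps in a bpel.rs dump by average time."""
    # The dump has no single root element, so wrap it first
    root = ET.fromstring("<root>" + dump_text + "</root>")
    rows = [(s.get("key"), float(s.get("average")), int(s.get("count")))
            for s in root.findall("stats")]
    return sorted(rows, key=lambda r: r[1], reverse=True)[:top]

# Fabricated sample in the same shape as the dump output above
sample = """
<stats key="Proj:BPEL:main:receiveInput:105" min="0" max="1" average="0.2" count="30">
</stats>
<stats key="Proj:BPEL:main:Embedded:243" min="5001" max="5006" average="5002.46" count="30">
</stats>
"""
for key, avg, count in slowest_steps(sample):
    print(key, avg, count)
```

This prints the embedded activity first, since its average time dominates.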


As always I encourage you to try this out and have a look at the output. The ability to collect current data in this way is very useful to diagnose sporadic performance issues and see how your applications are performing under various load conditions.

Friday Dec 27, 2013

Basic MQ 7.5 Configuration for SOA Suite 11g (MQ Adapter)

I recently went through an exercise to install, configure and interact with MQ 7.5 through a composite and thought I would share my notes. This is not meant to be a full tutorial on MQ configuration with SOA but more of a cheat sheet to avoid wasting a lot of time like I did. I also make no claim to be an MQ admin so if you see something that's wrong or would benefit from clarification, please add a comment.


MQ Steps:
  1. Download MQ 7.5 from the IBM site (90 day trial license, Windows)

  2. Ran setup, which was straightforward. There were several questions concerning the domain controller, but I indicated that I don't have any machines running Windows 2000

  3. Started MQ Explorer and created a Queue Manager. Specified Transmission and DeadLetter queues along with exposing an external channel

  4. Created my queues: Transmission, DeadLetter, LocalQueue1, LocalQueue2

  5. Created a 'Server-connection' channel. Note that the Channel may have a status of 'Inactive' and a red arrow pointing down. I spent a ton of time trying to get the arrow to be green and pointing up with a status of 'Active' but in the end it didn't seem to matter.

  6. Created a new user on the Windows machine that I wanted to connect to MQ as remotely

  7. Added the user to the Channel 'Specific Profiles' Authority Records and gave them all of the authorizations. Right click on the Channel -> Object Authorities -> Manage Authority Records -> Specific Profiles. There should be a default Specific Profile you can edit.

  8. Note: If you want to disable the security for the Channel you can go into the Authority Records and delete the default profile

  9. Using the command 'setmqaut -m <Queue Manager Name> -t qmgr -p <user@pcname> +all' I granted full MQ privs to the new user (command is in <MQ_HOME>/bin)



SOA Steps:
  1. Added the MQ jar files to <DOMAIN_HOME>/lib: com.ibm.mq.commonservices.jar, com.ibm.mq.headers.jar, com.ibm.mq.jar, com.ibm.mq.jmqi.jar, com.ibm.mq.pcf.jar (all can be obtained from the MQ installation)

  2. Created a new composite application with input and output MQ Adapter instances

  3. The adapters both used the default 'eis/MQ/MQAdapter' JNDI from the default connection pool, but be careful: the default JNDI in the Adapter wizard is 'eis/MQ/MQSeriesAdapter'. If you don't change it or create a new connection pool, the adapter will fail.

  4. Other values in the adapter wizard were defaults or obvious (queue name and my schema)

  5. In the WLS Console I went into the MQ Adapter deployment -> Configuration -> Connection Pool -> Default Pool to set some properties

  6. Set channelName, hostName, password, queueManagerName, userID. Everything else I left as default.

  7. Deployed and ran the composite, verifying that the message ended up in the right MQ queue



I ran into some errors along the way:

MQJE001: Completion Code '2', Reason '2035': This was due to insufficient permissions for the user I was trying to connect as. To resolve it, I added the permissions using the 'setmqaut' command mentioned above. To make it simple I just granted full admin.

java.util.MissingResourceException: The documentation notes that you can get this if you don't have the correct MQ jar files in <DOMAIN_HOME>/lib. In my case it was due to a missing or wrong channel name configured in the connection pool properties.

Admittedly much of this is documented across various resources but I couldn't find anything that provided it all together. Now when I need to do this again in a year and have forgotten everything I'll have this blog post to come back to.

If everything works, you'll see the messages appear in the queue through MQ Explorer, where you can then inspect the payload, etc.

Wednesday Nov 27, 2013

How to get Work Manager Information in a WLS or OSB Thread Dump

To analyze issues related to performance or workload, it helps to know which work manager is running a given thread. In Oracle Service Bus, a dispatch policy defined for a proxy or a business service maps to a work manager definition in WLS, so while analyzing a thread dump you want to see which work manager is running each thread.

Starting with WebLogic Server version 10.3.6, work manager information is shown in a thread dump if work manager / self tuning debugging is enabled. This debug flag can be enabled in the WLS console:

Go to http://<admin-server>:<admin-port>/console - Environment - <your server> - Debug tab:


Enable the DebugSelfTuning debug flag in the work manager section.

After that, work manager information is automatically added to a thread dump. This is a sample stack trace which has work manager information included:

"[ACTIVE] ExecuteThread: '14' for queue: 'weblogic.kernel.Default (self-tuning)' for workmanager: XBus Kernel@null@MyCustomWorkManager" id=123 idx=0x1d4 tid=21936 prio=5 alive, waiting, native_blocked, daemon
-- Waiting for notification on: java/lang/Object@0xe574a240[fat lock]
at jrockit/vm/Threads.waitForNotifySignal(JLjava/lang/Object;)Z(Native Method)
at java/lang/Object.wait(J)V(Native Method)
at java/lang/Object.wait(Object.java:485)
at com/bea/wli/sb/pipeline/PipelineContextImpl$SynchronousListener.waitForResponse(PipelineContextImpl.java:1620)
^-- Lock released while waiting: java/lang/Object@0xe574a240[fat lock]
at com/bea/wli/sb/pipeline/PipelineContextImpl.dispatchSync(PipelineContextImpl.java:562)
at stages/transform/runtime/WsCalloutRuntimeStep$WsCalloutDispatcher.dispatch(WsCalloutRuntimeStep.java:1391)
at stages/transform/runtime/WsCalloutRuntimeStep.processMessage(WsCalloutRuntimeStep.java:236)
at com/bea/wli/sb/pipeline/debug/DebuggerRuntimeStep.processMessage(DebuggerRuntimeStep.java:74)
at com/bea/wli/sb/stages/StageMetadataImpl$WrapperRuntimeStep.processMessage(StageMetadataImpl.java:346)
at com/bea/wli/sb/stages/impl/SequenceRuntimeStep.processMessage(SequenceRuntimeStep.java:33)
at com/bea/wli/sb/pipeline/PipelineStage.processMessage(PipelineStage.java:84)
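When scanning a large dump for this information, a small regex helps pull the work manager out of each thread header. A plain-Python sketch (the helper name is mine; the header format matches the sample above):

```python
import re

def work_manager(header_line):
    """Extract the @-separated workmanager identifier from a thread header.

    Returns e.g. ['XBus Kernel', 'null', 'MyCustomWorkManager'], where the
    last element is the work manager name, or None if no match.
    """
    m = re.search(r'for workmanager: ([^"]+)"', header_line)
    return m.group(1).split("@") if m else None

header = ('"[ACTIVE] ExecuteThread: \'14\' for queue: '
          "'weblogic.kernel.Default (self-tuning)' for workmanager: "
          'XBus Kernel@null@MyCustomWorkManager" id=123 idx=0x1d4')
print(work_manager(header))
```

Applied to every line starting with a double quote, this gives a quick per-thread work manager listing.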

 

Monday Nov 11, 2013

Scripted SOA Diagnostic Dumps for PS6 (11.1.1.7)

When you upgrade to SOA Suite PS6 (11.1.1.7), you acquire a new set of Diagnostic Dumps in addition to what was available in PS5. With more than a dozen to choose from, and no one wanting to run them one at a time, this blog post provides a sample script to collect them all quickly and, hopefully, easily. There are several ways that this collection could be scripted; this is just one example.




What is Included:
  • wlst.properties: Ant Properties
  • build.xml
  • soa_diagnostic_script.py: Python Script


What is Collected:
  • 5 contextual thread dumps at 5 second intervals
  • Diagnostic log entries from the server
  • WLS Image which includes the domain configuration and WLS runtime data
  • Most of the SOA Diagnostic Dumps including those for BPEL runtime, Adapters and composite information from MDS


Instructions:
  1. Download the package and extract it to a location of your choosing
  2. Update the properties file 'wlst.properties' to match your environment
  3. Run 'ant' (must be on the path)
  4. Collect the zip package containing the files (by default it will be in the script.output location)


Properties Reference:
  • oracle_common.common.bin: Location of oracle_common/common/bin
  • script.home: Location where you extracted the script and supporting files
  • script.output: Location where you want the collections written
  • username: User name for server connection
  • pwd: Password to connect to the server
  • url: T3 URL for server connection, '<host>:<port>'
  • dump_interval: Interval in seconds between thread dumps
  • log_interval: Duration in minutes that you want to go back for diagnostic log information
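For reference, a filled-in wlst.properties might look like this (all values below are illustrative placeholders for a hypothetical environment):

```properties
# Illustrative values only -- adjust to your environment
oracle_common.common.bin=/u01/app/oracle/middleware/oracle_common/common/bin
script.home=/home/oracle/soa_diag
script.output=/home/oracle/soa_diag/output
username=weblogic
pwd=welcome1
url=soahost1:8001
dump_interval=5
log_interval=60
```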
Script Package

Dynamic Monitoring Service (DMS) Configuration Dumping and CPU Utilization


There was recently a report of CPU spikes on a system that were occurring at precise 3-hour intervals. Research revealed that the spikes were the result of the Dynamic Monitoring Service generating a metrics dump and writing it under the server 'logs' folder for every WLS server in the domain. This blog provides some information on what this is for and how to control it.


The Dynamic Monitoring Service is a facility in FMW (JRF, to be more precise) that collects runtime data on the components deployed to WebLogic. Each component is responsible for how much or how little it uses the service, and SOA collects a fair amount of information. To view what is collected on any running server, go to http://host:port/dms/Spy and log in with admin credentials.


DMS is essentially always running and collecting this information at runtime. To protect against loss of this data, it also runs automatic backups, by default at the 3-hour interval mentioned above. Most of the management options for DMS are exposed through WLST, but these settings are not, so we must open the dms_config.xml file, which can be found at DOMAIN_HOME/config/fmwconfig/servers/<server_name>/dms_config.xml.


The contents are fairly short and at the bottom you will find the following entry:

<dumpConfiguration>
    <dump intervalSeconds="10800" maxSizeMBytes="75" enabled="true"/>
</dumpConfiguration>

The interval of 10800 seconds corresponds to the 3 hours and the maximum size is 75MB. The file is written as an archive to DOMAIN_HOME/servers/<server_name>/logs/metrics. This archive contains the dump in XML format.


You can disable the dumps altogether by simply setting the 'enabled' value to 'false', or of course you could modify the other parameters to suit your needs. Disabling the dumps will NOT impact DMS collection or display at runtime. It will only eliminate these periodic backups.
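If you manage many servers, the edit is easy to script. A hedged Python sketch, operating only on the fragment shown above (the full dms_config.xml may carry an XML namespace, which would require namespace-qualified tag lookups):

```python
import xml.etree.ElementTree as ET

def update_dump_config(xml_text, enabled=None, interval_seconds=None):
    """Rewrite the <dump> attributes in a dumpConfiguration fragment.

    Sketch only: works on the fragment shown above; the real file may
    need namespace handling.
    """
    root = ET.fromstring(xml_text)
    dump = root.find(".//dump") if root.tag != "dump" else root
    if enabled is not None:
        dump.set("enabled", "true" if enabled else "false")
    if interval_seconds is not None:
        dump.set("intervalSeconds", str(interval_seconds))
    return ET.tostring(root, encoding="unicode")

fragment = """<dumpConfiguration>
    <dump intervalSeconds="10800" maxSizeMBytes="75" enabled="true"/>
</dumpConfiguration>"""
print(update_dump_config(fragment, enabled=False))
```

Run it against a copy of the file first and diff the result before putting it in place.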


Wednesday Oct 16, 2013

Transaction Boundaries and Rollbacks in Oracle SOA Suite

A new eCourse/video is available in the Oracle Learning Library: "Transaction Boundaries and Rollbacks in Oracle SOA Suite".

The course covers:

  • Definitions of transaction, XA, rollback, and transaction boundary
  • BPEL transaction boundaries from a fault propagation point of view
  • The bpel.config.transaction and bpel.config.oneWayDeliveryPolicy parameters for configuring both synchronous and asynchronous BPEL processes
  • Transaction behavior in Mediator
  • Rollback scenarios based on the type of fault
  • Rollback using bpelx:rollback within a <throw> activity

The video is accessible here.

Thursday Jul 25, 2013

Using the New RDA 8 with SOA Suite 11g

RDA 8 was released on July 23rd and this blog post is a brief summary of the new, smaller profiles available for SOA Suite and Service Bus 11g.

Install RDA 8

You can download the package from here.

Extract the package to a location of your choosing. It is recommended that if you want to install RDA 8 in the same location as a previous RDA installation, first delete the existing 'rda' directory. You will lose your previous configuration and this is expected when moving from RDA 4.x to 8. When subsequently updating RDA 8.x it is expected that you will be able to preserve your configuration by saving the 'output.cfg' file.

Configure and Run RDA 8

We have added 2 new profiles in RDA 8, one for SOA Suite and one for Service Bus. The purpose of these profiles is to simplify configuration, shorten the collection time and reduce the size of the resultant package.

There are 2 options for each profile, 'offline' (default) and 'online'. The 'online' collections include a few items that require a connection to the running server and are generally only needed in special circumstances. The SOA Suite online collection adds thread dumps along with soa-infra MDS, composite, and WSDL configuration information. The Service Bus 'online' collection adds thread dumps and Service Bus-specific Diagnostic Dumps available starting in PS6 (cache contents, JMS and MQ async message tables).

It is recommended that the ORACLE_HOME and DOMAIN_HOME environment variables be set, as this will speed up and simplify the profile configuration. This can be done by running source <DOMAIN_HOME>/bin/setDomainEnv.sh (or setDomainEnv.cmd on Windows).

Steps for SOA Suite:
  1. From a command prompt enter the command: rda.sh (cmd) -CRP -p FM11g_SoaMin
  2. If the environment variables are set appropriately you can hit 'enter' at every prompt and the collection will run automatically. Otherwise continue with the steps.
  3. Accept the default value of the first prompt asking about the network domain
  4. Enter ORACLE_HOME as the Middleware Home location
  5. Confirm or enter a new location for the domain. By default it is '<ORACLE_HOME>/user_projects/domains'
  6. Choose whether you want an 'offline' or 'online' collection. The default is 'offline'.
  7. If you have more than one domain in the domains location you will next be asked to choose which domain / domains to analyze.
  8. Select the domain servers to include. The default is all of them.
  9. Choose whether you want to run the OCM collection. For SOA this can be set to 'n' but the default is 'y'

Steps for Service Bus: (Identical except for the name of the profile)
  1. From a command prompt in the RDA installation location enter the command: rda.sh (cmd) -CRP -p FM11g_OsbMin
  2. If the environment variables are set appropriately you can hit 'enter' at every prompt and the collection will run automatically. Otherwise continue with the steps.
  3. Accept the default value of the first prompt asking about the network domain
  4. Enter ORACLE_HOME as the Middleware Home location
  5. Confirm or enter a new location for the domain. By default it is '<ORACLE_HOME>/user_projects/domains'
  6. Choose whether you want an 'offline' or 'online' collection. The default is 'offline'.
  7. If you have more than one domain in the domains location you will next be asked to choose which domain / domains to analyze.
  8. Select the domain servers to include. The default is all of them.
  9. Choose whether you want to run the OCM collection. For SOA this can be set to 'n' but the default is 'y'

The collected files are written to RDA_HOME/output and the zipped package is written to RDA_HOME. If you are interested in viewing the collection you can go to /output and drag the file RDA__start.htm to a browser.


Friday May 17, 2013

Introduction and Troubleshooting of SOA 11g Database Adapter


SOA 11g Adapters

Oracle SOA Suite 11g Adapters allow Middleware service engines (BPEL, BPM, OSB, etc.) to communicate with backend systems like E-Business Suite, Siebel, SAP, databases, messaging systems (MQSeries and Oracle Advanced Queuing), Tuxedo, CICS, and more.


SOA provides different types of adapters: Technology, Legacy, Packaged Application and Others. It also allows the creation of custom adapters.



SOA 11g Database Adapter

The Database Adapter enables service engines to communicate with database endpoints: Oracle, or any other relational database that follows the ANSI SQL standard and provides JDBC drivers. Some of the supported databases are:

* Oracle 8i and above
* IBM DB/2
* Informix
* Clarion
* Clipper
* Cloudscape
* DBASE
* Dialog
* Essbase
* FOCUS Data Access
* Great Plains
* Microsoft SQL Server
* MUMPS (Digital Standard MUMPS)
* PROGRESS
* Red Brick
* RMS
* SAS Transport Format
* Sybase
* Teradata
* Unisys DMS 1100/2200
* UniVerse
* Navision Financials (ODBC 3.x)
* Nucleus
* Paradox
* Pointbase

The database adapter supports inbound and outbound interactions. It is based on standards like J2EE Connector Architecture (JCA), Extensible Markup Language (XML), XML Schema Definition (XSD), and Web Service Definition Language (WSDL).

The adapter is deployed as a RAR file into WebLogic. The figure below shows how to check the deployment. It can also be deployed in any application server that supports these standards, such as WebSphere and JBoss.



Other features associated with the Database Adapter:

  • Uses TopLink to map database tables and data into XML.
  • Exposes DML operations (Merge, Select, Insert, and Update) as web services. It also supports stored procedures and pure SQL.
  • Supports polling strategies to avoid duplicate reads (Physical Delete, Logical Delete, Sequencing Tables and Files).
  • Supports transactions to keep the database in a healthy state; changes to the database are rolled back in case of an error.
  • Streams large payloads, so the payload is not stored in memory.
  • Schema validation.
  • High availability, supporting Active-Active or Active-Passive clusters.
  • Performance tuning.

To integrate the Database Adapter with BPEL, create a SOA composite in JDeveloper and drag and drop the adapter onto the composite's Services or References region. This creates an inbound or outbound interaction, respectively.



When the adapter component is added to the composite, the configuration wizard opens. Through the wizard we define the connection to the database, the type of operation (insert, update, poll, etc.), performance parameters, retry logic, and so on.

Once the configuration is done, JDeveloper creates a series of SOA artifacts. These files are used by the composite to communicate with the adapter instance during runtime. Some of the artifacts are:

  • <serviceName>.wsdl: This is an abstract WSDL, which defines the service end point in terms of the name of the operations and the input and output XML elements.
  • <serviceName>_table.xsd: This contains the XML file schema for these input and output XML elements. Both these files form the interface to the rest of the SOA project.
  • <serviceName>_or-mappings.xml: It is a TopLink specific file, which is used to describe the mapping between a relational schema and the XML schema. It is used at run time.
  • <serviceName>_db.jca: This contains the internal implementation details of the abstract WSDL. It has two main sections, location and operations. Location is the JNDI name of an adapter instance, that is, eis/DB/SOADemo. Operations describe the action to take against that end point, such as INSERT, UPDATE, SELECT, and POLL.
  • <serviceName>.properties: It is created when tables are imported, and information about them is saved. Based on the properties in the db.jca file and the linked or-mappings.xml file, the <serviceName>.properties file generates the correct SQL to execute, parses the input XML, and builds an output XML file matching the XSD file.
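For illustration, the location and operations sections of a db.jca file have roughly the following shape (a sketch from memory, not generated output; names like 'insertEmployee' are hypothetical, and the exact elements and attributes vary by version):

```xml
<!-- Illustrative sketch of a generated <serviceName>_db.jca file -->
<adapter-config name="insertEmployee" adapter="Database Adapter">
  <!-- location: the JNDI name of the adapter connection instance -->
  <connection-factory location="eis/DB/SOADemo"/>
  <!-- operations: the action to take against that end point -->
  <endpoint-interaction operation="insert">
    <interaction-spec className="oracle.tip.adapter.db.DBWriteInteractionSpec">
      <property name="DescriptorName" value="insertEmployee.Employee"/>
    </interaction-spec>
  </endpoint-interaction>
</adapter-config>
```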

Troubleshooting

The basic step to troubleshoot the adapter is to set the oracle.soa.adapter logger level to TRACE:32 (FINEST) in the Fusion Middleware Control console.

Once this is done, reproduce the issue and check the SOA managed server log file: MW_HOME/user_projects/domains/<domain_name>/servers/<soa-server>/logs/soa-diagnostic.log

Look for JCABinding and BINDING.JCA-xxxx strings. You should see messages like these:



If a BINDING.JCA error occurred, go to the My Oracle Support knowledge base and search for it. Remember, this is the same knowledge base used by Oracle support engineers when solving Service Requests.
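Before searching the knowledge base, it can help to pull the distinct error codes out of the log programmatically. A plain-Python sketch (the log lines and error codes below are fabricated for illustration):

```python
import re

BINDING_ERROR = re.compile(r"BINDING\.JCA-\d+")

def jca_error_codes(log_lines):
    """Collect the distinct BINDING.JCA-xxxx codes seen in log lines."""
    found = set()
    for line in log_lines:
        found.update(BINDING_ERROR.findall(line))
    return sorted(found)

# Fabricated sample lines standing in for soa-diagnostic.log content
log = [
    "[soa_server1] [ERROR] JCABinding ... BINDING.JCA-11111 something failed",
    "[soa_server1] [ERROR] JCABinding ... BINDING.JCA-22222 something else",
    "[soa_server1] [ERROR] JCABinding ... BINDING.JCA-11111 something failed",
]
print(jca_error_codes(log))
```

Feed it the real log with `open(path)` and you get a deduplicated list of codes to search for.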



Wednesday Apr 03, 2013

Diagnostics Enhancements in SOA Suite 11g PS6 (11.1.1.7)

What's new with Diagnostics in PS6?



Interval Sampling

Purpose: To collect Diagnostic Dumps at specified intervals for as long as they need to be collected.
Activation: By default there are samplings configured for the Diagnostic Dumps 'jvm.threads' and 'jvm.classhistogram', but they will not begin collecting samples until an Incident is generated on the server. New samples can be started and managed through WLST.

NOTE: To enable dump sampling on a healthy server that has not yet generated a DFW Incident, follow these steps:
  1. Log in to the EM Console
  2. Right-click the domain name in the left menu and select 'System MBean Browser'
  3. Navigate to the MBean: 'Application Defined MBeans' -> 'oracle.dfw' -> 'Domain: <domain>' -> 'oracle.dfw.jmx.DiagnosticsConfigMbean'
  4. Select the 'DiagnosticsConfig' MBean
  5. Change the value of the 'DumpSamplingIdleWhenHealthy' attribute to 'false'
  6. Click 'Apply'. No restart is necessary for the change to take effect


Collection: The samples are loaded into memory and maintained there until the sampling session is stopped. Collecting the current sample archive requires only a simple WLST command.

Example Commands:
Start and connect to a running server with WLST:
  1. Start WLST with '/oracle_common/common/bin/wlst.sh'
  2. Connect to the running server with 'connect('<user>','<pwd>','t3://<host>:<port>')'

List the currently running dump samples:

wls:/soasingle/serverConfig> listDumpSamples()

(default output)
Name:JVMThreadDump
Dump Name:jvm.threads
Application Name:
Sampling Interval:5
Rotation Count:10
Dump Implicitly:true
Append Samples:true
Dump Arguments:timing=true, context=true


Name:JavaClassHistogram
Dump Name:jvm.classhistogram
Application Name:
Sampling Interval:1800
Rotation Count:5
Dump Implicitly:false
Append Samples:true
Dump Arguments:


Collect the current memory buffer for the 'JVMThreadDump' sample:

wls:/soasingle/serverConfig> getSamplingArchives(sampleName='JVMThreadDump',outputFile='<file>')

wrote 194751 bytes to <file>

The archive is written to the specified location as a zip. In the case of JVMThreadDumps the archive contains a .dmp file containing the actual thread dumps and a text file listing the sampling parameters and the dumps that were taken.


Start a new sampling session:

wls:/soasingle/serverConfig> addDumpSample(sampleName='ThreadsSample',diagnosticDumpName='jvm.threads',samplingInterval=20,rotationCount=20, toAppend=true,args={'context' : 'true'})

ThreadsSample is added

You can then confirm the sampling session with the 'listDumpSamples()' command.


Kill a sampling session:

wls:/soasingle/serverConfig> removeDumpSample(sampleName='ThreadsSample')

Removed ThreadsSample

You can confirm that the sampling session is gone by again running the 'listDumpSamples()' command.




Contextual SOA Thread Dump Translation


In PS6 we now have the ability to generate thread dumps from DFW that will tell us which composite and component a particular active SOA thread is running. This is best illustrated through an example.

Execute the jvm.threads Diagnostic Dump from WLST against an active SOA Server:

NOTE: You MUST pass the 'context' argument in order to have the SOA context information included in the dump.

wls:/soasingle/serverConfig> executeDump(name='jvm.threads',outputFile='<file>',args={'context' : 'true'})

(no output in the command window)

Go to the location where you wrote the file and open it. The file looks like a typical thread dump, but scroll past the stack traces and you'll find some CPU statistics for each thread and, below that, the SOA context information. Here we list a partial stack for an active SOA thread, followed by the corresponding context entry:

"[ACTIVE] ExecuteThread: '4' for queue: 'weblogic.kernel.Default (self-tuning)'"
id=69
idx=0x100 tid=27191 prio=5 alive, in native, daemon
    at jrockit/net/SocketNativeIO.readBytesPinned(Ljava/io/FileDescriptor;[BIII)I(Native Method)
    at jrockit/net/SocketNativeIO.socketRead(SocketNativeIO.java:32)
    at java/net/SocketInputStream.socketRead0(Ljava/io/FileDescriptor;[BIII)I(SocketInputStream.java)
    at java/net/SocketInputStream.read(SocketInputStream.java:129)
    at oracle/net/nt/MetricsEnabledInputStream.read(TcpNTAdapter.java:730)
    at oracle/net/ns/Packet.receive(Packet.java:302)
    at oracle/net/ns/DataPacket.receive(DataPacket.java:108)
    at oracle/net/ns/NetInputStream.getNextPacket(NetInputStream.java:317)
    at oracle/net/ns/NetInputStream.read(NetInputStream.java:262)
    ....


===== THREAD CONTEXT INFORMATION =====
id       ECID                             Context Values
-----------------------------------------------------------------------------------------------------------------------------
id=2059  11d1def534ea1be0:39fa34d1:....0  dbRID=0:6
id=2095  11d1def534ea1be0:39fa34d1:....0  dbRID=0:6
id=60    11d1def534ea1be0:39fa34d1:....0  dbRID=0:6
id=69    11d1def534ea1be0:39fa34d1:....0  WEBSERVICE_PORT.name=execute_pt
                                          dbRID=0:9
                                          composite_name=DiagProject
                                          component_instance_id=DF4389F07C6611E2BFBECB6C185E5342
                                          component_name=TestProject_BPELProcess1
                                          J2EE_MODULE.name=fabric
                                          WEBSERVICE_NAMESPACE.name=http://xmlns.oracle.com/TestApp/DiagProject/TestProject_Mediator1
                                          activity_name=AQ_Java_Embedding1:BxExe3:405
                                          J2EE_APP.name=soa-infra
                                          WEBSERVICE.name=TestProject_Mediator1_ep
                                          composite_instance_id=182858


The entries of primary interest are the following:
  1. Thread ID: This is the ID of the thread from the thread dump entry. We list them all but only provide the context information for SOA threads
  2. composite_name: The name of the composite that is running in that thread
  3. component_name: The name of the component inside the composite that is running in the thread
  4. activity_name: The name of the BPEL activity that is currently running
  5. composite_instance_id: The instance id of the composite that can be looked up in EM
There are other entries as well, but these are the ones that we think will be of most benefit.

NOTE: In the initial release you may see instances where Mediator is listed as the running component when it ought to be BPEL. In these instances the 'activity_name' should still be accurate and list the running BPEL activity.
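Because the context entries are simple key=value pairs, they are easy to post-process. Here is a minimal Python sketch, based on the sample output above, that turns one thread's context entry into a dictionary (the skipping rules are assumptions drawn from that sample):

```python
def parse_context_entry(lines):
    """Parse the 'key=value' context lines for one thread into a dict.

    The 'id=NN' header line identifying the thread is skipped; every
    other line containing '=' is treated as a key/value pair.
    """
    ctx = {}
    for line in lines:
        line = line.strip()
        if "=" in line and not line.startswith("id="):
            key, _, value = line.partition("=")
            ctx[key] = value
    return ctx
```

With this in hand you can, for example, group active threads by composite_name to see which composite dominates the server.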




BPEL Diagnostic Dumps


There are 3 new Diagnostic Dumps for BPEL in PS6:
  • bpel.dispatcher: Dumps the current state of the BPEL thread pools including Engine, Invoke and System.
  • bpel.adt: Dumps the average time that messages for asynchronous BPEL processes wait in the DLV_MESSAGE table before being consumed by an Engine thread
  • bpel.apt: Dumps the average processing time for all BPEL components in every deployed composite

Examples:

List the available SOA Diagnostic Dumps after starting WLST with '/oracle_common/common/bin/wlst.sh' and connecting to a running SOA server.

wls:/soasingle/serverConfig> listDumps(appName='soa-infra')

adf.ADFConfigDiagnosticDump
bpel.adt
bpel.apt
bpel.dispatcher
mediator.resequencer
soa.adapter.connpool
soa.adapter.ra
soa.adapter.stats
soa.composite
soa.composite.trail
soa.config
soa.db
soa.edn
soa.env
soa.wsdl


(To list the dumps that are part of the system default)

wls:/soasingle/serverConfig> listDumps()

dfw.samplingArchive
dms.ecidctx
dms.metrics
http.requests
jvm.classhistogram
jvm.flightRecording
jvm.threads
odl.activeLogConfig
odl.logs
odl.quicktrace
wls.image



Execute bpel.dispatcher:

wls:/soasingle/serverConfig> executeDump(name='bpel.dispatcher',appName='soa-infra',outputFile='<file>')

(There is no console output)

The output is a text file in the specified location.

The report first lists the BPEL thread pools along with their configured sizes. It then breaks them down by statistics such as the number of active threads, processed messages, errored messages, etc. Keep in mind that synchronous requests are handled by the default WLS Work Manager and therefore are not reflected here; these pools handle asynchronous requests (Invoke) and callbacks (Engine).


Execute bpel.adt:

wls:/soasingle/serverConfig> executeDump(name='bpel.adt',appName='soa-infra',outputFile='<file>')

(There is no console output)

The output is a text file in the specified location. No sample is available at the moment; the report looks identical to the one from bpel.apt, listing the asynchronous BPEL instances and their average delay in processing messages from the DLV_MESSAGE table.


Execute bpel.apt:

wls:/soasingle/serverConfig> executeDump(name='bpel.apt',appName='soa-infra',outputFile='<file>')

(There is no console output)

The output is a text file in the specified location.

The report lists the deployed BPEL components and their avg processing times in seconds.




Adapter Diagnostic Dumps


In PS6 there are 3 new Adapter Diagnostic Dumps:
  • soa.adapter.connpool: Dumps the current adapter connection pool statistics
  • soa.adapter.ra: Dumps the adapter Connection Factory configuration parameters
  • soa.adapter.stats: Dumps the current DMS statistics for all deployed adapter instances


Execute soa.adapter.connpool:
wls:/soasingle/serverConfig> executeDump(name='soa.adapter.connpool',appName='soa-infra',outputFile='<file>')

(There is no console output)

The output is a text file in the specified location.

The report groups the connection pool statistics by the adapter instance so you'll see a header including the instance name followed by the list of statistics.


Execute soa.adapter.ra:
wls:/soasingle/serverConfig> executeDump(name='soa.adapter.ra',appName='soa-infra',outputFile='<file>')

(There is no console output)

The output is a text file in the specified location.

The report groups the Connection Factory configurations by deployed adapter so again you'll see the header with the Adapter type followed by the configuration parameters.


Execute soa.adapter.stats:
wls:/soasingle/serverConfig> executeDump(name='soa.adapter.stats',appName='soa-infra',outputFile='<file>')

(There is no console output)

The output is a text file in the specified location.

The report lists statistics such as min time, max time, and avg time, along with the number of processed and errored instances, grouped by adapter instance.
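Once you have parsed the per-instance averages out of a report like this, flagging outliers is straightforward. A small hypothetical helper (the statistics dictionary and threshold below are illustrative; the dump itself is plain text that you would parse first):

```python
def slow_adapters(stats, threshold_ms=1000.0):
    """Given {instance_name: avg_ms}, return instances whose average
    processing time exceeds the threshold, slowest first."""
    flagged = [(name, avg) for name, avg in stats.items() if avg > threshold_ms]
    return sorted(flagged, key=lambda pair: pair[1], reverse=True)
```

For instance, with an AQ adapter averaging 421 ms (as in the example later in this post) and a threshold of 1000 ms, the AQ instance would not be flagged, pointing the investigation away from the adapter layer.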




Service Bus Diagnostic Dumps


In PS6 there are 3 new Service Bus Diagnostic Dumps:
  • OSB.derived-cache: Dumps the current statistics and contents of all Service Bus derived caches
  • OSB.jms-async-table: Dumps the current contents of the JMS async table which stores message correlation IDs, timestamps, etc
  • OSB.mq-async-table: Dumps the current contents of the MQ async table which stores similar information to the JMS async table


Execute OSB.derived-cache:
wls:/soasingle/serverConfig> executeDump(name='OSB.derived-cache',appName='OSB',outputFile='<file>')

(There is no console output)

The output is a text file in the specified location and the specific caches dumped are:

  • ArchiveClassLoader: Cache of dependency-aware archive class-loaders
  • ArchiveSummary: Cache of archive summaries
  • CodecFactory: Cache of codec factories
  • EffectiveWSDL: Cache of the EffectiveWSDL objects that are derived from the service or wsdl resources of Proxy or Business services
  • Flow_Info: Cache of flow info objects
  • LightweightEffectiveWSDL: Cache of the EffectiveWSDL objects that are derived from the service or wsdl resources of Proxy or Business services
  • MflExecutor: Cache of the MFL executors
  • RouterRuntime: Cache of the compiled router runtimes for proxy services
  • RuntimeEffectiveWSDL: Cache of the Session valid EffectiveWSDL objects that are derived from the service or wsdl resources of Proxy or Business services
  • RuntimeEffectiveWSPolicy: Cache of WS Policies for Proxy or Business services
  • SchemaTypeSystem: Type system cache for MFL, XSD, and WSDL resources
  • ServiceAlertsStatisticInfo: Cache of service alert statistics for Proxy or Business services
  • ServiceInfo: Cache of compiled service info for Proxy or Business services and WSDLs
  • Wsdl_Info: Cache of WSDL Info objects
  • WsPolicyMetadata: Cache of compiled WS Policy Metadata
  • XMLSchema_Info: XML Schema info cache for xml schema objects
  • XqueryExecutor: Cache of Xquery executors
  • XsltExecutor: Cache of XSLT executors
  • alsb.transports.ejb.bindingtype: Cache of EJB Binding Info for EJB Business Services
  • alsb.transports.jejb.business.bindingtype: Cache of JEJB Binding Info
  • alsb.transports.jejb.proxy.bindingtype: Cache of JEJB Binding Info



Execute OSB.jms-async-table:
wls:/soasingle/serverConfig> executeDump(name='OSB.jms-async-table',appName='OSB',outputFile='<file>')

(There is no console output)

The output is a text file in the specified location.


Execute OSB.mq-async-table:
wls:/soasingle/serverConfig> executeDump(name='OSB.mq-async-table',appName='OSB',outputFile='<file>')

(There is no console output)

The output is a text file in the specified location.




Mediator Diagnostic Dump


There is one new Diagnostic Dump for Mediator, which dumps the content of the resequencer table to help in troubleshooting resequencer issues. To execute:

wls:/soasingle/serverConfig> executeDump(name='mediator.resequencer',appName='soa-infra',outputFile='<file>')

(There is no console output)

The output is a text file in the specified location.


Troubleshooting Example with the new Features


Most of the new Diagnostic Dumps report performance related information so we will use a performance example. Our composite called 'DiagProject' is failing to process messages at the expected rate. What might be the problem?

We can start at a number of places but here is a list of Diagnostic Dumps to collect up front:

wls:/soasingle/serverConfig> executeDump(name='bpel.dispatcher',appName='soa-infra',outputFile='<file>')

wls:/soasingle/serverConfig> executeDump(name='bpel.apt',appName='soa-infra',outputFile='<file>')

wls:/soasingle/serverConfig> executeDump(name='soa.adapter.connpool',appName='soa-infra',outputFile='<file>')

wls:/soasingle/serverConfig> executeDump(name='soa.adapter.stats',appName='soa-infra',outputFile='<file>')

wls:/soasingle/serverConfig> executeDump(name='jvm.threads',outputFile='<file>',args={'context' : 'true'})

So what do we have?
  • bpel.dispatcher shows that all of the BPEL thread pools are fine and running without a backlog. This means that the bottleneck is not a lack of available BPEL threads
  • bpel.apt clearly shows that average execution time of the bpel component is over 5 seconds which is much longer than expected
  • soa.adapter.connpool shows that only one adapter, AQ, is running and its connection pool is healthy
  • soa.adapter.stats again shows that the only adapter running is AQ and its performance is quite good with an average processing time of 421ms
  • jvm.threads may actually be the first thing you want to look at. Viewing the context information at the bottom, it's clear that every active thread is running the BPELProcess1 component, indicating that it is taking by far the most time to complete. Looking a little closer at the 'activity_name' values, we can see that many of them are running an embedded activity, which is indeed where the delay is (Thread.sleep())

So clearly the bottleneck is within the BPEL process itself and on further review you find that someone left a 5 second Wait activity in the process. Contrived to be sure but hopefully still somewhat helpful.


Tuesday Mar 12, 2013

Using the WLS Harvester and Console Extension with SOA Suite 11g

Overview

There are occasions when we want to see how metrics trend over a period of time, controlling both which metrics are collected and for how long. The combination of the WLDF (WebLogic Diagnostic Framework) Harvester and the WebLogic Console Extension offers this capability, and the purpose of this blog post is to describe the components and walk through a configuration all the way to data export.

What is the Harvester?
The Harvester is a component of WLDF that collects a specified metric or set of metrics at a designated interval. You configure the collection through the WLS Console; once configured, collection runs in the background at run time and the results are persisted to the file system (by default) in binary format. From there they can be exported at any time to an XML file, as described in the section 'Exporting the Metrics'.

What is the WLS Console Extension?
The Console Extension is an add-on to the WLS Console that is packaged with the WLS installation. After installation it is accessible from the Console itself, and its purpose is to provide real-time reporting on monitored system metrics, including those that are manually configured in WLDF. Below we'll see how to configure a view and what the display of the data looks like.



Installing the Console Extension

As mentioned, the Console Extension is included with the default WebLogic Server installation. We're going to copy the jar from the installation location to the domain:
  1. Find the file 'diagnostics-console-extension.jar' in 'WL_HOME\server\lib\console-ext'
  2. Copy the file to 'DOMAIN_HOME/console-ext'
  3. Restart the Admin Server
After restart login to the WLS Console and look for the 'WLDF Console Extension' tab at the top left:



                          
(screenshot)





Now, it's a bit strange, but when you click on the tab it displays a page instructing you to enter the following URL in your browser: 'http://<host>:<port>/console/dashboard'. Simply open a new tab in the browser and enter the URL. The landing page looks like this:


                          
(screenshot)





By default you can expand the 'Built-in Views' on the left and select one. We select 'JVM Runtime Heap' and then click on the green 'Start' icon to start displaying collected metrics on the heap. After a few minutes our page looks like this:


                          
(screenshot)





You can see at the bottom of the chart that we're tracking 2 metrics, 'Heap Size Current' and 'Heap Free Current'. The sampling and display periods are configurable and we'll see below how to create our own chart. To stop the monitor simply click the 'Stop' icon at the top left and if you have more than one monitoring session you can click the icon for 'Stop All'.





Configuring the Collected Metrics

  1. Login to the WebLogic Console ('http://<host>:<port>')
  2. In the left menu expand 'Diagnostics'
  3. Select 'Diagnostic Modules'
  4. Select the Diagnostic Module which is 'Module-FMWDFW' by default
  5. Select the 'Collected Metrics' tab which brings you here:



                          
(screenshot)





First, notice that you can set the sampling period on this page. The period affects both the configured Collected Metrics modules and any configured WLDF Watches, which can be seen under the 'Watches and Notifications' tab. Here we want to create a new 'Collected Metrics' module, so click the 'New' button, which takes you to this screen:


                          
(screenshot)





Accept the default value 'ServerRuntime' and click 'Next' for this screen:


                          
(screenshot)





Here we're going to select the runtime mbean that we're interested in monitoring. For this example I've chosen 'soainfra_message_processing', which you can find by simply scrolling down the list. Click 'Next' and you land here:


                          
(screenshot)





This screen shows the available attributes on the mbean that we can monitor. The attributes are cropped in the selection list, but you can see the full names by mousing over them. Here we're going to select 'requestProcessingTime_avg', 'requestProcessingTime_maxTime', and 'requestProcessingTime_minTime'. It should look like this:


                          
(screenshot)





Click 'Next' for the following screen:


                          
(screenshot)





Here we have multiple selections for the actual mbean instances we want to monitor. Again, they are cropped in the selection list, but a mouseover reveals the full names. We select the one that contains 'bpel.type=soainfra_message_processing', as follows:


                          
(screenshot)





Click 'Finish' and you're done. You end up back at the landing page for 'Collected Metrics' and you now have an 'enabled' module that is ready for activation.


                          
(screenshot)





Here you just click 'Activate Changes' and you're collecting metrics.




Viewing the Collected Metrics

Steps:
  • Go to the WLDF Console landing page and make sure that the 'View List' tab is selected
  • On the left select 'My Views' so that it is highlighted
  • We're now going to create a new view so just click on the 'New View' icon (page with a plus sign) above 'Built-in Views'
  • Fill in the name of the view below 'My Views'
The screen should now look like this:


                          
(screenshot)





You can see that there is only a chart title on the right. Now click on the 'Metric Browser' tab at the top where we'll populate the table. The page will start off with a lot of entries on the left but we'll be filtering them.


                          
(screenshot)





We need to make some selections on the left:
  • Servers: Select the relevant server that you want to collect the metrics from. Here we have a single server domain
  • Select 'Collected Metrics Only'
Your left menu will now look like this:


NOTE: If you are doing this on a SOA server that has NOT processed any composite instances since the last restart, you may not see your Collected Metrics module appear. Simply run an instance of the composite to completion and try again. The reason is that the underlying DMS metric on which it is based is not created until the first composite instance completes.


                          
(screenshot)





There is an arrow just to the right of the chart name. Click it and select 'New Chart'. We now get a blank chart. From here we can simply drag the three attributes from the bottom left into the chart area and they will be added to the right pane at the bottom. The chart should now look like this:


                          
(screenshot)





We're ready to start collecting some metrics. Click the green 'Start' icon and apply some load to the composite. In my case I modulate the wait time of the BPEL process and end up with a chart that looks like this:


                          
(screenshot)





The beginning of the chart shows data points packed together all at the same level. This is from the background data collection that we configured in 'Collected Metrics' and there was no variable load.




Exporting the Metrics

Monitoring the data in the WLDF Console is great but there may be a need to export the data such that it can be manipulated. In order to do this we need to use WLST.

The metrics are stored in a binary file by default and this file is kept in 'DOMAIN_HOME/servers/<server>/data/store/diagnostics'. The file will have a name like 'WLS_DIAGNOSTICS0000000.DAT'. We need this for the WLST command.
  • To use this particular command the server needs to be stopped, because the running server holds a continuous lock on the DAT file while running
  • Start WLST from 'FMW_HOME/oracle_common/common/bin/wlst.sh'
  • Command: exportDiagnosticData(logicalName="HarvestedDataArchive",exportFileName="<Path>/Harvester.xml", storeDir="<MW_HOME>/user_projects/domains/soasingle/servers/AdminServer/data/store/diagnostics")
  • The data is exported in XML format to the specified location
Explanation of command parameters:
  • logicalName: These are predefined values, and in this case we need to use 'HarvestedDataArchive'. This persistence mechanism is used for other things as well, so there are two additional options that we won't cover.
  • exportFileName: File you want the data exported to
  • storeDir: The location of the DAT file you want to export. WLS does offer customization of this location but that is beyond the scope of this post
The XML output is quite raw and looks like this:



                          
(screenshot)





NOTE: The DAT file is cumulative, so if you start a Collected Metrics module, delete it, and start a new one, the information is appended to the same DAT file. To start with a clean DAT file you'll need to stop the server and delete the existing one first, or use your own tooling to filter the entries of interest from the combined XML.

About

This is the official blog of the SOA Proactive Support Team. Here we will provide information on our activities, publications, product related information and more. Additionally we look forward to your feedback to improve what we do.
