Friday Oct 30, 2015

Elasticity for Dynamic Clusters

Introducing Elasticity for Dynamic Clusters

WebLogic Server 12.1.2 introduced the concept of dynamic clusters, which are clusters where the Managed Server configurations are based on a single, shared template.  This greatly simplifies the configuration of clustered Managed Servers, and allows for dynamically assigning servers to machine resources and greater utilization of resources with minimal configuration.

In WebLogic Server 12.2.1, we build on the dynamic clusters concept to introduce elasticity to dynamic clusters, allowing them to be scaled up or down based on conditions identified by the user.  Scaling a cluster can be performed on demand (interactively by the administrator), at a specific date or time, or based on performance as seen through various server metrics.

In this blog entry, we take a high level look at the different aspects of elastic dynamic clusters in WebLogic, the next piece in the puzzle for on-premise elasticity with WebLogic Server!  In subsequent blog entries, we will provide more detailed examinations of the different ways of achieving elasticity with dynamic clusters.

The WebLogic Server Elasticity Framework

The diagram below shows the different parts to the elasticity framework for WebLogic Server:

The Elastic Services Framework is a set of services residing within the Administration Server of a WebLogic domain, and consists of:

  • A new set of elastic properties on the DynamicServersMBean for dynamic clusters to establish the elastic boundaries and characteristics of the cluster
  • New capabilities in the WebLogic Diagnostics Framework (WLDF) to allow for the creation of automated elastic policies
  • A new "interceptors" framework to allow administrators to interact with scaling events for provisioning and database capacity checks
  • A set of internal services that perform the scaling
  • (Optional) integration with Oracle Traffic Director (OTD) 12c to notify it of changes in cluster membership and allow it to adapt the workload accordingly

Note that while tighter integration with OTD is available in 12.2.1, it is not strictly required: if the OTD server pool is enabled for dynamic discovery, OTD will adapt as necessary to the set of available servers in the cluster.

Configuring Elasticity for Dynamic Clusters

To get started, when you're configuring a new dynamic cluster, or modifying an existing dynamic cluster, you'll want to leverage some new properties surfaced through the DynamicServersMBean for the cluster to set some elastic boundaries and control the elastic behavior of the cluster.

The new properties to be configured include

  • The starting dynamic cluster size
  • The minimum and maximum elastic sizes of the cluster
  • The "cool-off" period required between scaling events

There are several other properties regarding how to manage the shutdown of Managed Servers in the cluster, but the above settings control the boundaries of the cluster (by how many instances it can scale up or down), and how frequently scaling events can occur.  The Elastic Services Framework will allow the dynamic cluster to scale up to the specified maximum number of instances, or down to the minimum you allow.  

The cool-off period is a safety mechanism designed to prevent scaling events from occurring too frequently.  It should allow enough time for a scaling event to complete and for its effects to be felt on the dynamic cluster's performance characteristics.

Needless to say, the values for these settings should be chosen carefully and aligned with your cluster capacity planning!
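As a rough illustration, these boundaries can be set through WLST on the cluster's dynamic servers configuration. This is a sketch only: the MBean path, attribute names, and the cluster name "myCluster" are assumptions to be verified against the DynamicServersMBean reference for your release.

```python
# WLST sketch (assumes an existing dynamic cluster named "myCluster").
# Attribute names should be checked against the DynamicServersMBean reference.
connect('weblogic', 'welcome1', 't3://adminhost:7001')
edit()
startEdit()
cd('/Clusters/myCluster/DynamicServers/myCluster')
cmo.setDynamicClusterSize(4)           # baseline number of running instances
cmo.setMinDynamicClusterSize(2)        # elastic lower boundary
cmo.setMaxDynamicClusterSize(10)       # elastic upper boundary
cmo.setDynamicClusterCooloffPeriodSeconds(900)  # 15-minute cool-off between scaling events
save()
activate()
```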

Scaling Dynamic Clusters

Scaling of a dynamic cluster can be achieved through the following means:

  • On-demand through WebLogic Server Administration Console and WLST 
  • Using an automated calendar-based schedule utilizing WLDF policies and actions
  • Through automated WLDF policies based on performance metrics

On-Demand Scaling

WebLogic administrators have the ability to scale a dynamic cluster up or down on demand when needed:

Manual Scaling using the WebLogic Server Administration Console

In the console case, the administrator simply indicates the total number of desired running servers in the cluster, and the Console will interact with the Elastic Services Framework to scale the cluster up or down accordingly, within the boundaries of the dynamic cluster.
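The WLST path can be sketched along these lines. The scaleUp/scaleDown command signatures shown here are from memory and should be verified against the WLST command reference; the cluster name is hypothetical.

```python
# WLST sketch: scale the dynamic cluster "myCluster" on demand.
connect('weblogic', 'welcome1', 't3://adminhost:7001')
scaleUp('myCluster', 2)    # start 2 additional instances, within the cluster's maximum
# ...and later shrink by 1 instance, within the cluster's minimum:
scaleDown('myCluster', 1)
```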

Automated Scaling

In addition to scaling a dynamic cluster on demand, WebLogic administrators can configure automated policies using the Policies & Actions feature (known in previous releases as the Watch & Notifications Framework) in WLDF.

Typically, automated scaling will consist of creating pairs of WLDF policies: one for scaling up a cluster, and one for scaling it down.  Each scaling policy consists of:

  • (Optionally) A policy (previously known as a "Watch Rule") expression
  • A schedule
  • A scaling action

To create an automated scaling policy, an administrator must

  • Configure a domain-level diagnostic system module and target it to the Administration Server
  • Configure a scale-up or scale-down action for a dynamic cluster within that WLDF module
  • Configure a policy and assign the scaling action

For more information you can consult the documentation for Configuring Policies and Actions.
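As an illustration, the scaling action portion of such a module's descriptor might look roughly like the following. The element names and the cluster name are a sketch from memory, to be verified against the Configuring Policies and Actions documentation.

```xml
<!-- Sketch of a scale-up action in a WLDF module targeted to the Administration Server. -->
<watch-notification>
  <scale-up-action>
    <name>scaleUpAction</name>
    <cluster-name>myCluster</cluster-name>
    <scaling-size>1</scaling-size>  <!-- number of instances to add per scaling event -->
  </scale-up-action>
</watch-notification>
```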

Calendar Based Elastic Policies

In 12.2.1, WLDF introduces the ability for cron-style scheduling of policy evaluations.  Policies that monitor MBeans according to a specific schedule are called "scheduled" policies.  

A calendar-based policy fires unconditionally according to its schedule and executes any associated actions.  When combined with a scaling action, you can create a policy that scales a dynamic cluster up or down at specific scheduled times.

Each scheduled policy has its own schedule (as opposed to earlier releases, where all policies were tied to a single evaluation frequency), configured in calendar time, allowing you to create schedule patterns such as (but not limited to):

  • Recurring interval based patterns (e.g., every 5th minute of the hour, or every 30th second of every minute)
  • Days-of-week or days-of-month (e.g., "every Mon/Wed/Fri at 8 AM", or "every 15th and 30th of every month")
  • Specific days and times within a year  (e.g., "December 26th at 8AM EST")

So, for example, an online retailer could configure a pair of policies around the Christmas holidays:

  • A "Black Friday" policy to scale up the necessary cluster(s) to meet increased shopping demand for the Christmas shopping season
  • Another policy to scale down the cluster(s) on December 25th when the Christmas shopping season is over
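A scheduled policy's calendar is expressed with cron-like fields in the module descriptor. The sketch below shows what a daily 8 AM schedule might look like; the element names, policy name, and action binding are assumptions to be checked against the WLDF descriptor schema.

```xml
<!-- Sketch: a scheduled policy that fires at 8:00 AM and invokes a scaling action. -->
<watch>
  <name>BlackFridayScaleUp</name>
  <schedule>
    <hour>8</hour>      <!-- fire at 8 AM -->
    <minute>0</minute>
    <second>0</second>
  </schedule>
  <notification>scaleUpAction</notification>
</watch>
```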

Performance-based Elastic Policies

In addition to calendar-based scheduling, in 12.2.1 WLDF provides the ability to create scaling policies based on performance conditions within a server ("server-scoped") or cluster ("cluster-scoped").  You can create a policy based on various run-time metrics supported by WebLogic Server.  WLDF also provides a set of pre-packaged, parameterized, out-of-the-box functions called "Smart Rules" to assist in creating performance-based policies.

Cluster-scoped Smart Rules allow you to look at trends in a performance metric across a cluster over a specified window of time and (when combined with scaling actions) scale up or down based on criteria that you specify.  Some examples of the metrics that are exposed through Smart Rules include:

  • Throughput (requests/second)
  • JVM Free heap percentage
  • Process CPU Load
  • Pending user requests
  • Idle threads count
  • Thread pool queue length

Additionally, WLDF provides some "generic" Smart Rules to allow you to create policies based on your own JMX-based metrics.  The full Smart Rule reference can be found here.

And, if a Smart Rule doesn't suit your needs, you can also craft your own policy expressions.  In 12.2.1, WLDF utilizes Java EL 3.0 as the policy expression language, and allows you to craft your own policy expressions based on JavaBean objects and functions (including Smart Rules!) that we provide out of the box.  
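For example, a cluster-scoped condition using a Smart Rule might be expressed along these lines. The Smart Rule name, parameter order, and thresholds here are illustrative only; consult the Smart Rule reference for the exact functions and signatures.

```xml
<!-- Sketch: scale up when free heap across "myCluster", sampled every 30 seconds
     over a 10-minute window, averages below 20% on at least 60% of the servers. -->
<rule-expression>ClusterLowHeapFreePercent("myCluster", "30 seconds", "10 minutes", 20, "60%")</rule-expression>
```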

Provisioning and Safeguards with Elasticity

What if you need to add or remove virtual machines during the scaling process?  In WLS 12.2.1 you can participate in the scaling event utilizing script interceptors.  A script interceptor provides call-out hooks where you can supply custom shell scripts, or other executables, to be called when a scaling event happens on a cluster.  In this manner, you can write a script to interact with 3rd-party virtual machine hypervisors to add virtual machines prior to scaling up, or remove/reassign virtual machines after scaling down. 

WebLogic Server also provides administrators the ability to prevent overloading database capacity on a scale-up event through the data source interceptor feature.  Data source interceptors allow you to set a value for the maximum number of connections allowed on a database, by associating a set of data source URLs and URL patterns with a maximum connections constraint.  When a scale up is requested on a cluster, the data source interceptor computes the new maximum connection requirements for the cluster (with the additional server capacity), and if the scale up could lead to a database overload, it rejects the scale-up request.  While this still requires adequate capacity planning for your database utilization, it allows you to put sanity checks in place at run time to ensure that your database doesn't get overloaded by a cluster scale up.

Integration with Oracle Traffic Director

The elasticity framework also integrates with OTD through the WebLogic Server 12.2.1 life cycle management services.  When a scaling event occurs, the elasticity framework interacts with the life cycle management services to notify OTD of the scaling event so that OTD can update its routing tables accordingly.

In the case of a scale up, for example, OTD is notified of the candidate servers and adjusts its server pool accordingly.

In the case of a scale down, the life cycle management services notify OTD which instances are going away.  OTD then stops sending new requests to the servers being scaled down and routes new traffic to the remaining instances in the cluster, allowing the instances being removed to be shut down gracefully without losing any requests.

In order for OTD integration to be active, you must enable life cycle management services for the domain as documented here.

The Big Picture - Tying It All Together

The elasticity framework in 12.2.1 provides a lot of power and flexibility for managing the capacity of your on-premise dynamic clusters.  As part of your dynamic cluster capacity planning, you can use elasticity to take into account your dynamic cluster's minimum, baseline, and peak capacity needs, and incorporate those settings into the dynamic servers configuration on the cluster.  Utilizing WLDF policies and actions, you can create automated policies to scale your cluster at times of known increased or decreased capacity, or to scale up or down based on cluster performance.

Through the use of script interceptors, you can interact with virtual machine pools to add or remove virtual machines during scaling, or perhaps even move shared VMs between clusters based on need.  You can also utilize the data source interceptor to prevent exceeding the capacity of any databases affected by scale up events.

And, when so configured, the Elasticity Framework can interact with OTD during scaling events to ensure that new and in-flight sessions are managed safely when adding or removing capacity in the dynamic cluster.

In future blogs (and maybe vlogs!) we'll go into some of the details of these features.  This is really just an overview of the new features that are available to help our users implement elasticity with dynamic clusters.  We will follow up in the upcoming weeks and months with more detailed discussions and examples of how to utilize these powerful new features.

In the meantime, you can download a demonstration of policy-based scaling with OTD integration from here, with documentation about how to set it up and run it here.

Feel free to post any questions you have here, or email me directly.  In the meantime, download WebLogic Server 12.2.1 and start poking around! 


Policy Based Scaling demonstration files and documentation

WebLogic Server 12.2.1 Documentation

Configuring Elasticity for Dynamic Clusters in Oracle WebLogic Server

Configuring WLDF Policies and Actions

Dynamic Clusters Documentation

End-To-End Life Cycle Management and Configuring WebLogic Server MT: The Big Picture

Oracle Traffic Director (OTD) 12c

Java EL 3.0 Specification

Wednesday Oct 28, 2015

WebLogic Server 12.2.1 Multi-Tenancy Diagnostics Overview


The WebLogic Server 12.2.1 release includes support for multitenancy, which allows multiple tenants to share a single WebLogic domain. Tenants have access to domain partitions, which provide an isolated slice of the WebLogic domain's configuration and runtime infrastructure.

This blog provides an overview of the diagnostics and monitoring capabilities available to tenants for applications and resources deployed to their respective partitions.

These features are provided by the WebLogic Server Diagnostic Framework (WLDF) component.

The following topics are discussed in the sections below.

Log and Diagnostic Data

Log and diagnostic data from different sources are made available to the partition administrators. They are broadly classified into the following groups:

  1. Shared data - Log and diagnostic data not directly available to partition administrators in raw persisted form; it is available only through the WLDF Accessor component.
  2. Partition scoped data - Logs available to partition administrators in raw form under the partition file system directory.

Note that the WLDF Data Accessor component provides access to both the shared and partition scoped log and diagnostic data available on a WebLogic Server instance for a partition.

The following shared logs and diagnostic data is available to a partition administrator.

  • Server - Log events from server and application components pertaining to the partition, recorded in the server log file.
  • Domain - Log events collected centrally from all server instances in the WebLogic domain pertaining to the partition, in a single log file.
  • DataSource - Data source log events pertaining to the partition.
  • HarvestedData Archive - Metrics data gathered by the WLDF Harvester from MBeans pertaining to the partition.
  • Instrumentation Events Archive - WLDF Instrumentation events generated by applications deployed to the partition.

The following partition scoped log and diagnostic data is available to a partition administrator.

  • HTTP access.log - HTTP access.log from the partition virtual target's web server.
  • JMSServer - JMS server message life-cycle events for JMS server resources defined within a resource group or resource group template scoped to a partition.
  • SAF Agent - SAF agent message life-cycle events for SAF agent resources defined within a resource group or resource group template scoped to a partition.
  • Connector - Log data generated by Java EE resource adapter modules deployed to a resource group or resource group template within a partition.
  • Servlet Context - Servlet context log data generated by Java EE web application modules deployed to a resource group or resource group template within a partition.

WLDF Accessor

The WLDF Accessor provides the RuntimeMBean interface to retrieve diagnostic data over JMX. It also provides a query capability to fetch only a subset of the data.

Please refer to the documentation on WLDF Data Accessor for WebLogic Server for a detailed description of this functionality.

WLDFPartitionRuntimeMBean (a child of PartitionRuntimeMBean) is the root of the WLDF Runtime MBeans. It provides a getter for the WLDFPartitionAccessRuntimeMBean interface, which is the entry point for the WLDF Accessor functionality scoped to a partition. There is an instance of WLDFDataAccessRuntimeMBean for each log instance available to partitions.

Different logs are referred to by their logical names according to a predefined naming scheme.

The logical name patterns themselves are listed in the WLDF Data Accessor documentation; the log types covered are:

Shared logs: Server Log, Domain Log, Harvested Metrics, Instrumentation Events

Partition scoped logs: HTTP Access Log, JMS Server Log, SAF Agent Log, Servlet Context Log, Connector Log

Logging Configuration

WebLogic Server MT supports configuring the level of java.util.logging Loggers used by application components running within a partition. This allows Java EE applications using java.util.logging to configure levels for their respective loggers even though they do not have access to the system-level java.util.logging configuration mechanism. For shared logger instances used by libraries common across partitions, the level configuration is applied to a logger instance while it is doing work on behalf of a particular partition.

This feature is available if the WebLogic System Administrator has started the server with -Djava.util.logging.manager=weblogic.logging.WLLogManager command line system property.

If WebLogic Server was started with the custom log manager as described above, the partition administrator can configure logger levels via the PartitionLogMBean.PlatformLoggerLevels attribute. Please refer to the sample WLST script in the WebLogic Server MT documentation.
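As a rough WLST sketch of the idea: the MBean path, partition name, and logger names below are hypothetical, and the PlatformLoggerLevels attribute is assumed to take a map of logger names to java.util.logging level names.

```python
# WLST sketch: set java.util.logging levels for loggers used by partition applications.
edit()
startEdit()
cd('/Partitions/myPartition/PartitionLog/myPartition')
cmo.setPlatformLoggerLevels({'com.example.myapp': 'FINE', 'com.example.util': 'WARNING'})
save()
activate()
```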

Note that the level configuration specified in the PartitionLogMBean.PlatformLoggerLevels attribute is applicable only to the owning partition. A logger instance with the same name may be used by another partition; each logger's effective level at runtime will be defined by the respective partition's PartitionLogMBean.PlatformLoggerLevels configuration.

Server Debug 

For certain troubleshooting scenarios you may need to enable debug output from WebLogic Server subsystems specific to your partition. Server debug output is useful for debugging internal server code when it is doing work on behalf of a partition. This needs to be done carefully, in collaboration with the WebLogic system administrator and Oracle Support. The WebLogic system administrator must first enable the ServerDebugMBean.PartitionDebugLoggingEnabled attribute and will advise you which debug flags to enable. These flags are boolean attributes defined on the ServerDebugMBean configuration interface. The specific debug flags to be enabled for a partition are configured via the PartitionLogMBean.EnabledServerDebugAttributes attribute, which contains an array of String values naming the specific debug outputs to be enabled for the partition. The debug output thus produced is recorded in the server log, from where it can be retrieved via the WLDF Accessor and provided to Oracle Support for further analysis. Note that once troubleshooting is done, the debug flags should be disabled, as server debug incurs a performance overhead.

Please refer to the sample WLST script in the WebLogic Server MT documentation on how to enable partition specific server debug.

Diagnostic System Module for Partitions

The Diagnostic System Module provides Harvester and Policies and Actions components that can be defined within a resource group or resource group template deployed to a partition.


The WLDF Harvester provides the capability to poll MBean metric values periodically and archive the data in the harvested data archive for later diagnosis and analysis. All WebLogic Server runtime MBeans visible to the partition, including the PartitionRuntimeMBean and its child MBeans, as well as custom MBeans created by applications deployed to the partition, are available for harvesting. The harvester configuration defines the sampling period, the MBean types and instance specifications, and the respective MBean attributes that need to be collected and persisted.

Note that the archived harvested metrics data is available from the WLDF Accessor component as described earlier.

The following is an example of harvester configuration persisted in the Diagnostic System Resource XML descriptor.

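The original example is not reproduced here, but a representative sketch could look like the following. The element names are from memory and should be verified against the WLDF descriptor schema; the harvested type and attribute are illustrative.

```xml
<!-- Sketch: harvest one ServerRuntimeMBean attribute every 30 seconds. -->
<harvester>
  <sample-period>30000</sample-period>  <!-- sampling period in milliseconds -->
  <harvested-type>
    <name>weblogic.management.runtime.ServerRuntimeMBean</name>
    <harvested-attribute>OpenSocketsCurrentCount</harvested-attribute>
  </harvested-type>
</harvester>
```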

For further details refer to the WLDF Harvester documentation.

Policies and Actions

Policies are rules, defined in Java Expression Language (EL), for conditions that need to be monitored. WLDF provides a rich set of actions that can be attached to policies and that are triggered when the rule condition is satisfied.

The following types of rule-based policies can be defined:

  • Harvester - Based on WebLogic Runtime MBean or Application owned custom MBean metrics.
  • Log events - Log messages in the server and domain logs.
  • Instrumentation Events - Events generated from Java EE application instrumented code using WLDF Instrumentation.

The following snippets show the configuration of the policies using the EL language.

    <rule-expression>wls.partition.query("com.bea:Type=WebAppComponentRuntime,*", "OpenSessionsCurrentCount").stream().anyMatch(x -> x >= 1)</rule-expression>
    <rule-expression>log.severityString == 'Error'</rule-expression>
    <rule-expression>instrumentationEvent.eventType == 'TraceAction'</rule-expression>
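Put together, a policy and its action binding might be configured along the following lines. This is a sketch only: the watch, action, and endpoint names are illustrative, and the element names should be checked against the WLDF descriptor schema.

```xml
<!-- Sketch: a harvester-based policy bound to a REST action. -->
<watch-notification>
  <watch>
    <name>HighOpenSessionsPolicy</name>
    <rule-type>Harvester</rule-type>
    <rule-expression>wls.partition.query("com.bea:Type=WebAppComponentRuntime,*", "OpenSessionsCurrentCount").stream().anyMatch(x -> x >= 1)</rule-expression>
    <notification>myRestAction</notification>
  </watch>
  <rest-notification>
    <name>myRestAction</name>
    <endpoint-url>http://diagnostics.example.com/alerts</endpoint-url>
  </rest-notification>
</watch-notification>
```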

The following types of actions are supported for partitions:

  • JMS
  • SMTP
  • JMX
  • REST
  • Diagnostic Image

For further details refer to the Configuring Policies and Actions documentation.

Instrumentation for Partition Applications

WLDF provides a byte code instrumentation mechanism for Java EE applications deployed within a partition scope. The Instrumentation configuration for the application is specified in the META-INF/weblogic-diagnostics.xml descriptor file.  

This feature is available only if the WebLogic System Administrator has enabled server level instrumentation. Also it is not available for applications that share class loaders across partitions.

The following shows an example WLDF Instrumentation descriptor.

    <pointcut>execution( * example.util.MyUtil * (...))</pointcut>
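In context, a complete (but minimal) weblogic-diagnostics.xml might look roughly like the sketch below. The namespace URI, monitor name, and action are assumptions to be verified against the WLDF Instrumentation documentation.

```xml
<!-- Sketch: META-INF/weblogic-diagnostics.xml enabling a custom monitor. -->
<wldf-resource xmlns="http://xmlns.oracle.com/weblogic/weblogic-diagnostics">
  <instrumentation>
    <enabled>true</enabled>
    <wldf-instrumentation-monitor>
      <name>MyCustomMonitor</name>
      <enabled>true</enabled>
      <action>TraceAction</action>
      <pointcut>execution( * example.util.MyUtil * (...))</pointcut>
    </wldf-instrumentation-monitor>
  </instrumentation>
</wldf-resource>
```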

For further details refer to the WLDF Instrumentation documentation.

Diagnostic Image

The Diagnostic Image is similar to a core dump which captures the state of the different WebLogic Server subsystems in a single image zip file. WLDF supports the capturing of partition specific diagnostic images.

Diagnostic images can be captured in the following ways:

  • From WLST by the partition administrator.
  • As the configured action for a WLDF policy.
  • By invoking the captureImage() operation on the WLDFPartitionImageRuntimeMBean.

Images are output to the logs/diagnostic_images directory in the partition file system.
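From WLST, a capture can be sketched as follows; the connection details are illustrative, and the command should be verified against the WLST diagnostics command reference.

```python
# WLST sketch: capture a diagnostic image and save the image files locally.
connect('partitionAdmin', 'welcome1', 't3://adminhost:7001')
captureAndSaveDiagnosticImage()
```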

The image for a partition contains diagnostic data from different sources such as:

  • Connector
  • Instrumentation
  • JDBC
  • JNDI
  • JVM
  • Logging
  • RCM
  • Work Manager
  • JTA

For further details refer to the WLDF documentation.

RCM Runtime Metrics

WebLogic Server 12.2.1 introduced the Resource Consumption Management (RCM) feature.  This feature is available only on Oracle JDK 8u40 and above.

To enable RCM, add the following command-line switches at server startup (note that RCM is not enabled by default in the startup scripts):

-XX:+UnlockCommercialFeatures -XX:+ResourceManagement -XX:+UseG1GC

The PartitionResourceMetricsRuntimeMBean, which is a child of the PartitionRuntimeMBean, provides a number of useful metrics for monitoring purposes.

The attribute getters on this MBean cover the following metrics (see the PartitionResourceMetricsRuntimeMBean reference for the exact getter names):

  • Whether RCM metrics data is available for this partition.
  • Total CPU time spent, measured in nanoseconds, in the context of the partition.
  • Total allocated memory in bytes for the partition. This metric value increases monotonically over time.
  • Number of threads currently assigned to the partition.
  • Total and current number of sockets opened in the context of the partition.
  • Total number of bytes read/written from sockets for the partition.
  • Total and current number of files opened in the context of the partition.
  • Total number of file bytes read/written in the context of the partition.
  • Total and current number of file descriptors opened in the context of the partition.
  • A snapshot of the historical data for retained heap memory usage for the partition, returned as a two-dimensional array of [timestamp (long), retainedHeap (long)] tuples over time.
  • A snapshot of the historical data for CPU usage for the partition. The CPU utilization percentage indicates the percentage of CPU utilized by the partition with respect to the CPU available to WebLogic Server. Data is returned as a two-dimensional array of [timestamp (long), cpuUsage (long)] tuples over time.

Please note that the PartitionMBean.RCMHistoricalDataBufferLimit attribute limits the size of the data arrays for Heap and CPU.

Java Flight Recorder

WLDF provides integration with Java Flight Recorder, which enables WebLogic Server events to be included in the JVM recording. WebLogic Server events generated in the context of work being done on behalf of a partition are tagged with the partition ID and partition name. These events and the flight recording data are available only to the WebLogic system administrator.


WLDF provides a rich set of tools to capture and access different types of monitoring data, which are very useful for troubleshooting and diagnosis. This blog provided an introduction to the WLDF surface area for partition administrators. You are encouraged to take a deeper dive, explore these features further, and leverage them in your production environments. More detailed information is available in the WLDF documentation for WebLogic Server and the Partition Monitoring section in the WebLogic Server MT documentation.

Friday Jun 05, 2015

A Gentle Introduction to the WebLogic Diagnostic Framework (WLDF)

The WebLogic Diagnostic Framework (WLDF) is a powerful feature that has been around since WebLogic 9. It provides extremely robust live monitoring and diagnostics for the server, the underlying JVM, deployed applications, and configured resources. Few if any other Java application servers can match the capabilities offered by WLDF. If you are using WebLogic in production and don't know about WLDF, you are doing yourself a serious disservice.

Because of the power and flexibility offered by WLDF, it is not trivial to pick up, and for some it can be daunting. Fortunately, Mike Croft of Oracle partner C2B2 Consulting can help us out. He wrote a very nice series of blog entries as a gentle introduction to WLDF. He provides a high-level overview and discusses watches, notifications, and the Monitoring Dashboard. The definitive way to learn about WLDF is, of course, always the latest WebLogic documentation :-).

Monday Nov 18, 2013

Navigating Through Diagnostic Data Options

In this new post I will talk about diagnostic data and, more importantly, what data and sources of data are needed to troubleshoot and resolve some specific common problems.

WebLogic Server, along with the JVM, offers various ways to collect logs, debug traces, diagnostic images, thread dumps, and much more. However, useful data can easily get buried under lots of noise: unneeded data that won't help resolve a particular problem.

Performance Issues

We will start with performance issues such as long-running requests or server hangs.  The best way to narrow down where the issue lies is to dump the Java threads and parse them with a thread dump analyzer tool such as ThreadLogic.  Looking at the server log files or enabling debugging features would be premature, since the first step is to identify the stuck thread(s) to determine what is being executed and which Java EE resources are involved. The log can also alert about stuck threads once they have been stuck for over the maximum allowed time (see MaxStuckThreadTime), but unless thread dumps are examined, it won't be possible to determine patterns of issues and the threads' relationships around locked objects.

Memory Leaks

Another common performance issue involves memory leaks.  If you encounter java.lang.OutOfMemoryError exceptions, you should capture a heap dump and analyze it with a tool such as VisualVM or the Eclipse MAT.  See my post on heap dump analysis for more details. Heap dumps are snapshots of memory at certain points in time; heap dumps are to the Java process memory (non-native) what thread dumps are to Java process threads.  Enabling -verbose:gc is also an effective way to monitor heap usage at runtime.  The output of the GC stats can be redirected to a file so as not to clog the server logs; this can be done by setting -Xloggc with the HotSpot JVM.  It's recommended to also include -XX:+PrintGCDetails, -XX:+PrintGCDateStamps (for Java 6 Update 4 and later JVMs), and -XX:+PrintGCTimeStamps to log time stamps and detailed GC activity.  The Troubleshooting Guide for HotSpot VM details these flags, among other flags useful for diagnostic purposes.
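As an illustration, a server start command with GC logging enabled might include flags along these lines (the log file path is illustrative, and the remaining server options are elided):

```
java -verbose:gc -Xloggc:/var/log/wls/gc.log \
     -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps \
     ... weblogic.Server
```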

Applications, 3rd parties, and configuration issues

The following chart summarizes what can be collected to debug each of the listed WLS components.  This list is by no means exhaustive; many more debug flags can be used for these components.  The chart contains examples of useful debug options and links to additional resources.

For the following, redirecting stdout and stderr to logging should be enabled.  This can be done via the Administration Console in the Advanced section of the server's Logging tab, or at startup in the Java command line with -Dweblogic.StdoutDebugEnabled=true.  In addition, the severity level of the log file should be set to Debug.

The chart lists, for each WLS component or 3rd-party resource, examples of debug options and flags, where the resulting trace is written, and related MyOracleSupport resources (login required).

  • EJB/Web Services - Flags to log SOAP request and response messages; trace in server log files.  Resources: Troubleshooting EJBs Issues; How to enable Debugging for Weblogic EJB; How To Debug WLS Application Container Problems; Troubleshooting Oracle Web Services 11g.

  • JMS - Flags to log info on load and store events and transaction records into server log files; see Debugging JMS for more JMS debugging options.  Resources: Troubleshooting JMS Info Center.

  • SSL - Trace in server log files.  Resources: Troubleshooting SSL Security.

  • Cluster - -Dweblogic.debug.DebugReplication for high-level replication info; -Dweblogic.debug.DebugClusterAnnouncements to log info on announcement, StateDump and attribute messages sent or received by multicast; trace in server log files.  Resources: Debug RJVM or Intra-Cluster Communication Problems.

  • JTA - weblogic.JTAXA and weblogic.JTA2PC for runtime issues, for example: JAVA_OPTIONS="-Dweblogic.Debug=weblogic.JTAXA,weblogic.JTA2PC"; trace in server log files.  Resources: Investigating Transaction Problems.

  • JDBC - Flags to print information about JDBC methods invoked, with their arguments, return values, and thrown exceptions; see JDBC Debugging Scopes under Monitoring JDBC Resources; trace in server log files.  Resources: Investigating JDBC Issues; JDBC Debugging Scopes.

  • Oracle JDBC Driver (*) - -Doracle.jdbc.Trace=true to enable JDBC logging; trace in the file defined in oracle.jdbc.LogFile.  Resources: Enable Oracle JDBC Driver Debug in WLS.

  • Deployment (Classloader) - Options that can be set to true to report which classes are being loaded; trace in server log files.  Resources: Deployment Problems - Enabling Debug.

  • Proxy plug-in - Debug="ALL" in the proxy configuration file (iisproxy.ini, httpd.conf or obj.conf); trace in the file set in WLLogFile inside the proxy configuration file.  Resources: Common Diagnostic Process for Proxy Plug-In Problems.

(*) Enabling JDBC driver debug flags is verbose, so it's recommended to set the logging properties to appropriate levels such as FINE, SEVERE, FINEST, INFO, etc., based on the debugging requirement.


Diagnosing and troubleshooting also require a good understanding of the WebLogic domain environment, its configuration, patching level, and JVM parameters, as well as easy navigation through the various logs and configuration files.  For this, RDA is a great diagnostic tool because it collects all this data comprehensively.  One of the numerous benefits of RDA is that it doesn't require any complex setup and can collect data from each distinct log file, XML repository, and script.  RDA reports are therefore of great assistance when working directly with Oracle Support, because they allow the engineer assigned to a service request to quickly understand the environment in which a specific issue is occurring.  This generally helps expedite the resolution of the SR.


The WebLogic Diagnostic Framework can be used to collect metrics, set up watches and notifications, and define instrumentation.  With WLS 12.1.2, you can set the level (Low, Medium or High) of the built-in modules that gather metrics, or disable them completely.  WLDF collects runtime statistics on JDBC, JTA, the JVM, and more.  You can then use the Monitoring Dashboard to view and navigate through the collected data.  WLDF is not an alternative to debug options: debug options activate tracing that is coded as part of WLS, whereas WLDF collects and monitors runtime MBean values and can trigger notifications based on defined rules.  They are used for different purposes.

The WLDF built-in configurations (Low, Medium, and High) can be cloned into a new configuration and used as templates for creating custom WLDF configurations.  The Low volume built-in is enabled by default in production mode.  One new feature in WLS 12.1.2 is runtime control, which gives the ability to activate or deactivate diagnostic system modules dynamically at runtime without making any domain configuration change.


JFR recording data can also be captured through WLDF diagnostic images based on WLDF watch rules so that JVM runtime system information can be analyzed along with recording of WLS components diagnostic data.  Once extracted from the diagnostic image capture, the JFR file can be analyzed using Java Mission Control.


WLST diagnostic commands can be used to extract data from the diagnostic archive (event and harvested metric data) in either XML or CSV form, depending on the function you use.  You can also dump the diagnostic data from a harvester to a local file.

WLST diagnostic functions also allow you to capture an image and to download image files from a remote system over WLST.
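A sketch of such an online WLST session is shown below. The command names are from the WLST diagnostics command group; their exact signatures, and the logical name and file names used here, should be verified against the WLST command reference.

```python
# WLST sketch: export harvested metric data from the diagnostic archive as CSV,
# then capture a diagnostic image and save the image files locally.
connect('weblogic', 'welcome1', 't3://adminhost:7001')
exportDiagnosticDataFromServer(logicalName='HarvestedDataArchive',
                               exportFileName='metrics.csv')
captureAndSaveDiagnosticImage()
```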

One major difference between the diagnostic framework and the debug options is that debugs are enabled so that an issue can be reproduced and traced, whereas the diagnostic image capture is used as a server-level dump for post-failure analysis, in a similar way that heap dumps are used after memory-leak failures.

