Wednesday Nov 11, 2015

ONS Configuration in WLS

Fast Application Notification (FAN) is used to provide notification events about services and nodes as they are added and removed from Oracle Real Application Clusters (RAC) and Oracle Data Guard environments. The Oracle Notification Service (ONS) is used as the transport for FAN. You can find all of the details in the FAN documentation.

Configuring ONS has been available for Active GridLink (AGL) since WebLogic Server (WLS) 10.3.6. In recent releases, auto-ONS has been added so that explicit ONS configuration is no longer required, and relying on auto-ONS is generally recommended. However, there are some cases where it is necessary to configure ONS explicitly. One reason is to specify a wallet file and password (this cannot be done with auto-ONS). Another is to explicitly specify the ONS topology or modify the number of connections.

ONS configuration has been enhanced in WLS 12.2.1. The OnsNodeList value must be configured either with a single node list or a property node list (new in WLS 12.2.1), but not both.  If the WLS OnsNodeList contains an equals sign (=), it is assumed to be a property node list and not a single node list. 
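The detection rule above (an equals sign means a property node list) can be sketched in plain Python; the helper name is mine for illustration, not a WLS API:

```python
def is_property_node_list(ons_node_list):
    # WLS 12.2.1 treats an OnsNodeList value containing '=' as a
    # property node list; otherwise it is a single node list.
    return '=' in ons_node_list

# hypothetical values for illustration
print(is_property_node_list('rac1:6200,rac2:6200'))          # False (single node list)
print(is_property_node_list('nodes.1=rac1:6200,rac2:6200'))  # True (property node list)
```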

Single Node List:
A comma-separated list of host:port pairs, where each pair is an ONS daemon listen address and port separated by a colon.
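As a sketch of the format (rac1 and rac2 are placeholder host names, not values from this post), a single node list for two ONS daemons on port 6200 can be parsed like this:

```python
def parse_single_node_list(value):
    # Split a single node list ('host:port,host:port,...') into (host, port) pairs.
    pairs = []
    for entry in value.split(','):
        host, port = entry.rsplit(':', 1)
        pairs.append((host, int(port)))
    return pairs

print(parse_single_node_list('rac1:6200,rac2:6200'))
# [('rac1', 6200), ('rac2', 6200)]
```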



Property Node List:

This string is composed of multiple records, with each record consisting of a key=value pair and terminated by a newline ('\n') character.  The following keys can be specified.

  • nodes.<id> A list of nodes representing a unique topology of remote ONS servers. <id> specifies a unique identifier for the node list; duplicate entries are ignored. The list of nodes configured in any list must not include any nodes configured in any other list for the same client, or duplicate notifications will be sent and delivered. The list format is a comma-separated list of host:port pairs, where each pair is an ONS daemon listen address and port separated by a colon.

  • maxconnections.<id> Specifies the maximum number of concurrent connections maintained with the ONS servers. <id> specifies the node list to which this parameter applies. The default is 3.

  • active.<id> If true, the list is active and connections will automatically be established to the configured number of ONS servers. If false, the list is inactive and will only be used as a failover list in the event that no connections for an active list can be established. An inactive list can only serve as a failover for one active list at a time, and once a single connection is re-established on the active list, the failover list will revert to being inactive. Note that only notifications published by the client after a list has failed over will be sent to the failover list. <id> specifies the node list to which this parameter applies. The default is true.

  • remotetimeout The timeout period, in milliseconds, for a connection to each remote server. If the remote server has not responded within this timeout period, the connection will be closed. The default is 30 seconds (30000 milliseconds).

Note that although walletfile and walletpassword are supported in the string, WLS has separate configuration elements for these values, OnsWalletFile and OnsWalletPasswordEncrypted.

The following example is equivalent to the above single node list:
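As a sketch, assuming a single node list of rac1:6200,rac2:6200 (placeholder host names), an equivalent property node list would be:

```
nodes.1=rac1:6200,rac2:6200
```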


If the datasource is configured to connect to two clusters and receive FAN events from both, for example in the RAC with Data Guard situation, then two ONS node groups are needed. For example:
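A sketch of such a configuration, with one node group per cluster (all host names are placeholders):

```
nodes.1=rac1a:6200,rac1b:6200
nodes.2=rac2a:6200,rac2b:6200
maxconnections.1=3
maxconnections.2=3
```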


Remember that the URL needs a separate ADDRESS_LIST for each cluster and set LOAD_BALANCE=ON per ADDRESS to expand SCAN names.
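As an illustrative sketch only (the SCAN names, port, and service name below are placeholders, not values from this post), such a URL might look like:

```
jdbc:oracle:thin:@(DESCRIPTION=
  (ADDRESS_LIST=(LOAD_BALANCE=ON)(ADDRESS=(PROTOCOL=TCP)(HOST=cluster1-scan)(PORT=1521)))
  (ADDRESS_LIST=(LOAD_BALANCE=ON)(ADDRESS=(PROTOCOL=TCP)(HOST=cluster2-scan)(PORT=1521)))
  (CONNECT_DATA=(SERVICE_NAME=myservice)))
```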

When using the administration console to configure an Active GridLink datasource, it is not possible to specify a property node list during the creation flow.  Instead, it is necessary to modify the ONS Node value on the ONS tab after creation.  The following figure shows a property node list with two groups for two RAC clusters.

The following figure shows the Monitoring page for ONS statistics.  Note that there are two entries, one for each host and port pair.

The following figure shows the Monitoring page after testing the rac2 ONS node group.

You can also use WLST to create the ONS parameter. Multiple lines in the ONS value need to be separated by embedded newlines. This is an example of the key steps for creating an AGL datasource; dsName, dbUrl, dbUser, and onsNodeList are assumed to hold site-specific values.

cd('/JDBCSystemResources/' + dsName + '/JDBCResource/' + dsName)
cd('/JDBCSystemResources/' + dsName + '/JDBCResource/' + dsName + '/JDBCDataSourceParams/' + dsName )
set('JNDINames',jarray.array([String('jdbc/' + dsName )], String))
cd('/JDBCSystemResources/' + dsName + '/JDBCResource/' + dsName + '/JDBCDriverParams/' + dsName )
cmo.setDriverName( 'oracle.jdbc.OracleDriver' )
cmo.setUrl(dbUrl) # AGL URL with one ADDRESS_LIST per cluster
cd('/JDBCSystemResources/' + dsName + '/JDBCResource/' + dsName + '/JDBCConnectionPoolParams/' + dsName )
cmo.setTestTableName('SQL ISVALID')
cd('/JDBCSystemResources/' + dsName + '/JDBCResource/' + dsName + '/JDBCDriverParams/' + dsName + '/Properties/' + dsName )
cmo.createProperty('user')
cd('/JDBCSystemResources/' + dsName + '/JDBCResource/' + dsName + '/JDBCDriverParams/' + dsName + '/Properties/' + dsName + '/Properties/user')
cmo.setValue(dbUser)
cd('/JDBCSystemResources/' + dsName + '/JDBCResource/' + dsName + '/JDBCDataSourceParams/' + dsName )
cmo.setGlobalTransactionsProtocol('None') # assumed; set the protocol appropriate for your application
cd('/JDBCSystemResources/' + dsName + '/JDBCResource/' + dsName + '/JDBCOracleParams/' + dsName)
cmo.setFanEnabled(true)
cmo.setOnsNodeList(onsNodeList) # single node list or property node list string
cd('/SystemResources/' + dsName )
set('Targets',jarray.array([ObjectName('com.bea:Name=' + 'myserver' + ',Type=Server')], ObjectName))
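The embedded-newline format for the ONS value can be sketched in plain Python (the group contents below are hypothetical):

```python
# compose a property node list as a single string with embedded newlines
records = [
    'nodes.1=rac1a:6200,rac1b:6200',  # placeholder hosts
    'nodes.2=rac2a:6200,rac2b:6200',
    'maxconnections.1=3',
]
ons_node_list = '\n'.join(records)
print(ons_node_list.count('\n'))  # 2
```

The resulting string is what would be passed as the ONS node list value.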

If you can go with automatically configured ONS, that's desirable. But if you need to configure ONS explicitly, WLS 12.2.1 gives you a lot more power to specify exactly what you need.

Wednesday Nov 04, 2015

Local Transaction Leak Profiling for WLS 12.2.1 Datasource

This is the third article in this series on profiling enhancements in WLS 12.2.1 (but maybe not the least, since this error appears to happen quite often).

A common application error that is difficult to diagnose occurs when an application leaves a local transaction open on a connection and the connection is returned to the connection pool. This error can manifest as XAException/XAER_PROTO errors, or as unintentional local transaction commits or rollbacks of database updates. The current workaround of internally committing or rolling back the local transaction when a connection is released adds significant overhead, only masks errors that may be surfaced to the application, and still leaves the possibility of data inconsistency.

The Oracle JDBC thin driver supports a proprietary method to obtain the local transaction state of a connection. A new profiling option generates a log entry when a local transaction is detected on a connection at the time it is released to the connection pool. The log record includes the call stack and details about the thread releasing the connection.

To enable local transaction leak profiling, the datasource connection pool ProfileType attribute bitmask must include the value (0x000200).

This is a WLST script to set the values.

# java weblogic.WLST
import sys, socket, os
hostname = socket.gethostname()
datasource = 'mydatasource' # name of the datasource to modify
# Edit the configuration to set the profile type
cd('/JDBCSystemResources/' + datasource + '/JDBCResource/' + datasource +
'/JDBCConnectionPoolParams/' + datasource )
cmo.setProfileType(0x000200) # turn on local transaction leak profiling

Note that you can "or" multiple profile options together when setting the profile type. 
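For instance, combining the connection leak (0x000004), local transaction leak (0x000200), and closed usage (0x000400) bits described in this series gives a single bitmask; the constant names below are mine for readability, only the hex values come from the posts:

```python
PROFILE_CONNECTION_LEAK = 0x000004
PROFILE_LOCALTX_LEAK = 0x000200
PROFILE_CLOSED_USAGE = 0x000400

# "or" the individual profile options together into one ProfileType value
profile_type = (PROFILE_CONNECTION_LEAK
                | PROFILE_LOCALTX_LEAK
                | PROFILE_CLOSED_USAGE)
print(hex(profile_type))  # 0x604
```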

In the administrative console on the Diagnostics Tab, this may be enabled using the Profile Connection Local Transaction Leak checkbox. 

The local transaction leak profile record contains two stack traces, one of the reserving thread and one of the thread at the time the connection was closed. An example log record is shown below.

####<mydatasource> <WEBLOGIC.JDBC.CONN.LOCALTX_LEAK> <Thu Apr 09 15:30:11 EDT 2015> <java.lang.Exception
at weblogic.jdbc.common.internal.ConnectionEnv.setup(
at weblogic.common.resourcepool.ResourcePoolImpl.reserveResource(
at weblogic.common.resourcepool.ResourcePoolImpl.reserveResource(
at weblogic.jdbc.common.internal.ConnectionPool.reserve(
at weblogic.jdbc.common.internal.ConnectionPool.reserve(
at weblogic.jdbc.common.internal.ConnectionPoolManager.reserve(
at weblogic.jdbc.common.internal.RmiDataSource.getPoolConnection(
at weblogic.jdbc.common.internal.RmiDataSource.getConnectionInternal(
at weblogic.jdbc.common.internal.RmiDataSource.getConnection(
at weblogic.jdbc.common.internal.RmiDataSource.getConnection(
> <java.lang.Exception
at weblogic.jdbc.common.internal.ConnectionPool.release(
at weblogic.jdbc.common.internal.ConnectionPoolManager.release(
at weblogic.jdbc.wrapper.PoolConnection.doClose(
at weblogic.jdbc.wrapper.PoolConnection.close(
> <[partition-id: 0] [partition-name: DOMAIN] >

Once you look at the record, you can see where in the application the close is done and you should complete the transaction appropriately before doing the close.

Closed JDBC Object Profiling for WLS 12.2.1 Datasource

Accessing a closed JDBC object is a common application error that can be difficult to debug. To help diagnose such conditions there is a new profiling option to generate a diagnostic log message when a JDBC object (Connection, Statement or ResultSet) is accessed after the close() method has been invoked. The log message will include the stack trace of the thread that invoked the close() method.

To enable closed JDBC object profiling, the datasource ProfileType attribute bitmask must have the value 0x000400 set.

This is a WLST script to set the value.

# java weblogic.WLST
import sys, socket, os
hostname = socket.gethostname()
datasource = 'mydatasource' # name of the datasource to modify
# Edit the configuration to set the profile type
cd('/JDBCSystemResources/' + datasource + '/JDBCResource/' + datasource +
'/JDBCConnectionPoolParams/' + datasource )
cmo.setProfileType(0x000400) # turn on closed usage profiling

In the administrative console on the Diagnostics Tab, this may be enabled using the Profile Closed Usage checkbox. 

The closed usage log record contains two stack traces, one of the thread that initially closed the object and another of the thread that attempted to access the closed object. An example record is shown below.

####<mydatasource> <WEBLOGIC.JDBC.CLOSED_USAGE> <Thu Apr 09 15:19:04 EDT 2015> <java.lang.Throwable: Thread[[ACTIVE] ExecuteThread: '2' for queue: 'weblogic.kernel.Default (self-tuning)',5,Pooled Threads]
at weblogic.jdbc.common.internal.ProfileClosedUsage.saveWhereClosed(
at weblogic.jdbc.wrapper.PoolConnection.doClose(
at weblogic.jdbc.wrapper.PoolConnection.close(
> <java.lang.Throwable: Thread[[ACTIVE] ExecuteThread: '2' for queue: 'weblogic.kernel.Default (self-tuning)',5,Pooled Threads]
at weblogic.jdbc.common.internal.ProfileClosedUsage.addClosedUsageProfilingRecord(
at weblogic.jdbc.wrapper.PoolConnection.checkConnection(
at weblogic.jdbc.wrapper.Connection.preInvocationHandler(
at weblogic.jdbc.wrapper.Connection.createStatement(
> <[partition-id: 0] [partition-name: DOMAIN] >

When this profiling option is enabled, exceptions indicating that an object is already closed will also include a nested SQLException indicating where the close was done, as shown in the example below.

java.sql.SQLException: Connection has already been closed.
at weblogic.jdbc.wrapper.PoolConnection.checkConnection(
at weblogic.jdbc.wrapper.Connection.preInvocationHandler(
at weblogic.jdbc.wrapper.Connection.createStatement(
at Application.doit(
Caused by: java.sql.SQLException: Where closed: Thread[[ACTIVE] ExecuteThread: ...
at weblogic.jdbc.common.internal.ProfileClosedUsage.saveWhereClosed(
at weblogic.jdbc.wrapper.PoolConnection.doClose(
at weblogic.jdbc.wrapper.PoolConnection.close(
at Application.doit(

This is very helpful when you get an error indicating that a connection has already been closed and you can't figure out where that happened. Note that there is overhead in getting the stack trace, so you wouldn't normally run with this enabled all the time in production (and it is not enabled by default), but it's worth the overhead when you need to resolve a problem.

Register for Oracle WebLogic Multitenant Webcast

On November 18, 2015 at 10 AM Pacific Time, Oracle will deliver a webcast on Oracle WebLogic Multitenant: The World’s First Cloud-Native, Enterprise Java Platform.  

Although the title focuses on multitenancy, we will cover additional new capabilities in Oracle WebLogic Server and Oracle Coherence 12.2.1.   The webcast will include live chat, demos, and commentary from customers and partners on their planned deployments and benefits, along with product expert deep dives.    

Please register here and take advantage of the opportunity to learn more about using partitions as lightweight microcontainers, consolidating with multitenancy to reduce TCO, leveraging multi-datacenter high availability architectures, and maximizing developer and DevOps productivity for your public and private cloud platforms.  See you in two weeks!

Connection Leak Profiling for WLS 12.2.1 Datasource

This is the first of a series of three articles that describes enhancements to datasource profiling in WLS 12.2.1. These enhancements were requested by customers and Oracle support. I think they will be very useful in tracking down problems in the application.

The pre-12.2.1 connection leak diagnostic profiling option requires that the connection pool “Inactive Connection Timeout Seconds” attribute be set to a positive value in order to determine how long before an idle reserved connection is considered leaked. Once identified as being leaked, a connection is reclaimed and information about the reserving thread is written out to the diagnostics log. For applications that hold connections for long periods of time, false positives can result in application errors that complicate debugging. To address this concern and improve usability, two enhancements to connection leak profiling are available:

1. Connection leak profile records will be produced for all reserved connections when the connection pool reaches max capacity and a reserve request results in a PoolLimitSQLException error.

2. An optional Connection Leak Timeout Seconds attribute will be added to the datasource descriptor for use in determining when a connection is considered “leaked”. When an idle connection exceeds the timeout value a leak profile log message is written and the connection is left intact.

The existing connection leak profiling value (0x000004) must be set on the datasource connection pool ProfileType attribute bitmask to enable connection leak detection. The ProfileConnectionLeakTimeoutSeconds attribute may then be used in place of InactiveConnectionTimeoutSeconds for identifying potential connection leaks.

This is a WLST script to set the values.

# java weblogic.WLST
import sys, socket, os
hostname = socket.gethostname()
datasource = 'mydatasource' # name of the datasource to modify
# Edit the configuration to set the leak timeout
cd('/JDBCSystemResources/' + datasource + '/JDBCResource/' + datasource +
'/JDBCConnectionPoolParams/' + datasource )
cmo.setProfileConnectionLeakTimeoutSeconds(120) # set the connection leak timeout
cmo.setProfileType(0x000004) # turn on connection leak profiling

This is what the console page looks like after it is set.  Note the profile type and timeout value are set on the Diagnostics tab for the datasource.

The existing leak detection diagnostic profiling log record format is used for leaks triggered by either the ProfileConnectionLeakTimeoutSeconds attribute or when pool capacity is exceeded. In either case a log record is generated only once for each reserved connection. If a connection is subsequently released to the pool, re-reserved, and leaked again, a new record will be generated. An example resource leak diagnostic log record is shown below. The output can be reviewed in the console or by looking at the datasource profile output text file.

####<mydatasource> <WEBLOGIC.JDBC.CONN.LEAK> <Thu Apr 09 14:00:22 EDT 2015> <java.lang.Exception
at weblogic.jdbc.common.internal.ConnectionEnv.setup(
at weblogic.common.resourcepool.ResourcePoolImpl.reserveResource(
at weblogic.common.resourcepool.ResourcePoolImpl.reserveResource(
at weblogic.jdbc.common.internal.ConnectionPool.reserve(
at weblogic.jdbc.common.internal.ConnectionPool.reserve(
at weblogic.jdbc.common.internal.ConnectionPoolManager.reserve(
at weblogic.jdbc.common.internal.RmiDataSource.getPoolConnection(
at weblogic.jdbc.common.internal.RmiDataSource.getConnectionInternal(
at weblogic.jdbc.common.internal.RmiDataSource.getConnection(
at weblogic.jdbc.common.internal.RmiDataSource.getConnection(
> <autoCommit=true,enabled=true,isXA=false,isJTS=false,vendorID=100,connUsed=false,doInit=false,'null',destroyed=false,poolname=mydatasource,appname=null,moduleName=null,
currentUser=...,currentThread=Thread[[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)',5,Pooled Threads],lastUser=null,currentError=null,currentErrorTimestamp=null,JDBC4Runtime=true,supportStatementPoolable=true,needRestoreClientInfo=false,defaultClientInfo={},
supportIsValid=true> <[partition-id: 0] [partition-name: DOMAIN] >

For applications that may have connection leaks but also have some valid long-running operations, you will now be able to scan through a list of connections that may be problems without interfering with normal application execution.

Tuesday Nov 03, 2015

Using Eclipse with WebLogic Server 12.2.1

With the installation of WebLogic Server 12.2.1 now including the Eclipse Network Installer, which enables developers to  download and install Eclipse including the specific features of interest, getting up and running with Eclipse and WebLogic Server has never been easier.

The Eclipse Network Installer presents developers with a guided interface to enable the custom installation of an Eclipse environment through the selection of an Eclipse version to be installed and which of the available capabilities are required - such as Java EE 7, Maven, Coherence, WebLogic, WLST, Cloud and Database tools amongst others. It will then download the selected components and install them directly on the developer's machine.

Eclipse and the Oracle Enterprise Pack for Eclipse plugins continue to provide extensive support for WebLogic Server, enabling it to be used throughout the software lifecycle: from development and test cycles with its Java EE dialogs, assistants and deployment plugins, through to automation of configuration and provisioning of environments with the authoring, debugging and running of scripts using the WLST Script Editor and MBean palette.

The YouTube video WebLogic Server 12.2.1 - Developing with Eclipse provides a short demonstration on how to install Eclipse and the OEPE components using the new Network Installer that is bundled within the WebLogic Server installations.  It then shows the configuring of a new WebLogic Server 12.2.1 server target within Eclipse and finishes with importing a Maven project that contains a Java EE 7 example application that utilizes the new Batch API that is deployed to the server and called from a browser to run.

Monday Nov 02, 2015

Getting Started with the WebLogic Server 12.2.1 Developer Distribution

The new WebLogic Server 12.2.1 release continues down the path of providing an installation that is smaller to download and able to be installed with a single operation, providing a quicker approach for developers to get started with the product.

New with the WebLogic Server 12.2.1 release is the use of the quick installer technology, which packages the product into an executable jar file that silently installs the product into a target directory. Through the use of the quick installer, the installed product can now be patched using the standard Oracle patching utility (opatch), enabling developers to download and apply any patches as needed and providing a high degree of consistency with downstream testing and production environments.

Despite its smaller distribution size, the developer distribution delivers a full-featured WebLogic Server including the rich administration console, the comprehensive scripting environment with WLST, the Configuration Wizard and Domain Builders, the Maven plugins and artifacts and of course all the new WebLogic Server features such as Java EE 7 support, MultiTenancy, Elastic Dynamic Clusters and more.

For a quick look at using the new developer distribution, creating a domain and accessing the administration console, check out the YouTube video: Getting Started with the Developer Distribution.

WebLogic Scripting Tool (WLST) updates in 12.2.1

A number of updates have been implemented in Oracle WebLogic Server and Oracle Fusion Middleware 12.2.1 to simplify the usage of the WebLogic Scripting Tool (WLST), especially when multiple Oracle Fusion Middleware products are being used.    In his blog, Robert Patrick describes what we have done to unify the usage of WLST across the Oracle Fusion Middleware 12.2.1 product line.    This information will be very helpful to WLST users who want to better understand what was implemented in 12.2.1 and any implications for your environments.   

Wednesday Oct 28, 2015

WebLogic Server Multitenant Info Sources

In Will Lyons’s blog entry, he introduced the idea of WebLogic Server Multitenant, and there have been a few other blog entries related to WebLogic Server Multitenant since then. Besides these blogs and the product documentation, there are a couple of other things to take a look at:

I just posted a video on YouTube that includes a high-level introduction to WebLogic Server Multitenant. This video is a little bit longer than my other videos, but there are a lot of good things to talk about in WebLogic Server Multitenant.

We also have a datasheet, which includes a fair amount of detail regarding the value and usefulness of WebLogic Server Multitenant.

I’m at OpenWorld this week where we are seeing a lot of interest in these new features. One aspect of value that seems to keep coming up is the value of running on a shared platform. There are cases where every time a new environment is added, it needs to be certified against security requirements or standard operating procedures. By sharing a platform, those certifications only need to be done once for the environment. New applications deployed in pluggable partitions would not need a ground-up certification. This can mean faster roll-out times/faster time to market and reduced costs.

 That’s all for now. Keep your eye on this blog. More info coming soon!

Tuesday Oct 27, 2015

Data Source System Property Enhancement in WLS 12.2.1

The earlier blog at Setting V$SESSION for a WLS Datasource described using system properties to set driver connection properties, which in turn automatically set values on the Oracle database session. Some comments from readers indicated that there were some limitations to this mechanism.

- There are some values that can’t be set on the command line because they aren’t available until the application server starts. The most obvious value is the process identifier.

- Values set on the command line imply that they are valid for all environments in the server, which is fine for values like the program name but not appropriate for datasource-specific values or the new partition name that is available with the WLS Multitenancy feature.

- In a recent application that I was working with, it was desirable to connect to the server hosting the datasource that was connected to the session so that we could run a graceful shutdown. In this case, additional information was needed to generate the URL.

All of these cases are handled with the enhanced system properties feature.

The original feature supported setting driver properties using the value of system properties. The new feature is overloaded on top of the old feature to avoid introducing yet another set of driver properties in the graphical user interfaces and WLST scripts. It is enabled by specifying one or more of the supported variables listed below in the string value. If one or more of these variables is included in the system property, it is substituted with the corresponding value. If a value for the variable is not found, no substitution is performed. If none of these variables are found in the system property, then the value is taken as a system property name.


The following values can be substituted:

  • First half (up to @) of ManagementFactory.getRuntimeMXBean().getName()
  • Second half of ManagementFactory.getRuntimeMXBean().getName()
  • Java system property
  • System property
  • Data source name from the JDBC descriptor (it does not contain the partition name)
  • Partition name or DOMAIN
  • WebLogic Server server listen port
  • WebLogic Server server SSL listen port
  • WebLogic Server server name
  • WebLogic Server domain name
A sample property is shown in the following example:

<sys-prop-value>WebLogic ${servername} Partition ${partition}</sys-prop-value>
In this example, v$session.program running on myserver is set to “WebLogic myserver Partition DOMAIN”.
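The substitution in this example can be sketched in plain Python; the helper below is mine for illustration (not the WLS implementation), and only ${servername} and ${partition} come from the sample above:

```python
def substitute(template, values):
    # Replace ${name} variables; unknown variables are left in place.
    for name, value in values.items():
        template = template.replace('${' + name + '}', value)
    return template

result = substitute('WebLogic ${servername} Partition ${partition}',
                    {'servername': 'myserver', 'partition': 'DOMAIN'})
print(result)  # WebLogic myserver Partition DOMAIN
```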

The biggest limitation of this feature is the character limit on the associated columns in v$session. If you exceed the limit, connection creation will fail.

Combining this enhancement with the Oracle v$session values makes this a powerful feature for tracking information about the source of connections.

See the Blog announcing Oracle WebLogic Server 12.2.1 for more details on Multitenancy and other new features.

Monday Oct 26, 2015

Announcing Oracle WebLogic Server 12.2.1

Oracle WebLogic Server 12.2.1 (12cR2) is now generally available.  The release was highlighted in Larry Ellison's Oracle OpenWorld keynote Sunday night, and was formally announced today in Inderjeet Singh's Oracle OpenWorld General Session and in a press release.  Oracle WebLogic Server 12.2.1 is available for download on the Oracle Technology Network (OTN) at the Oracle Fusion Middleware Download page, and the Oracle WebLogic Server Download page.   The product documentation is posted along with all documentation for Oracle Fusion Middleware 12.2.1, which has also been made available.   

Oracle WebLogic Server 12.2.1 is the biggest WebLogic Server product release in many years.    We intend to provide extensive detail about its new features in videos and blogs posted here over the coming weeks.   Consider this note an initial summary on some of the major new feature areas:

  • Multitenancy
  • Continuous Availability
  • Developer Productivity and Portability to the Cloud


WebLogic Server multitenancy enables users to consolidate applications and reduce cost of ownership, while maintaining application isolation and increasing flexibility and agility.   Using multitenancy, different applications (or tenants) can share the same server JVM (or cluster), and the same WebLogic domain, while maintaining isolation from each other by running in separate domain partitions.  

A domain partition is an isolated subset of a WebLogic Server domain, and its servers.  Domain partitions act like microcontainers, encapsulating applications and the resources (datasources, JMS servers, etc) they depend on.  Partitions are isolated from each other, so that applications in one partition do not disrupt applications running in other partitions in the same server or domain. An amazing set of features provide these degrees of isolation.   We will elaborate on them here over time.

Though partitions are isolated, they share many resources - the physical system they run on or the VM, the operating system, the JVM, and WebLogic Server libraries.  Because they share resources, they use fewer resources.   Customers can consolidate applications from separate domains into fewer domains, running in fewer JVMs in fewer physical systems.  Consolidation means fewer entities to manage, reduced resource consumption, and lower cost of ownership.

Partitions are easy to use.  Applications can be deployed to partitions without changes, and we will be providing tools to enable users to migrate existing applications to partitions.   Partitions can be exported from one domain and imported into other domains, simplifying migration of applications across development, test and production environments, and across private and public clouds. Partitions increase application portability and agility, giving development, DevOps, test and production teams more flexibility in how they develop, release and manage production applications.   

WebLogic Server's new multitenancy features are integrated with innovations introduced across the Oracle JDK, Oracle Coherence, Oracle Traffic Director, and Oracle Fusion Middleware and are closely aligned with Oracle Database Pluggable Databases.   Over time you will see multitenancy being leveraged more broadly in the Oracle Cloud and Oracle Fusion Middleware. Multitenancy represents a significant new innovation for WebLogic Server and for Oracle.

Continuous Availability

Oracle WebLogic Server 12.2.1 introduces new features to minimize planned and unplanned downtime in multi data center configurations.    Many WebLogic Server customers have implemented disaster recovery architectures to provide application availability and business continuity.   WebLogic Server's new continuous availability features will make it easier to create, optimize and manage such configurations, while increasing application availability.  They include the following:

  • Zero-downtime patching provides an orchestration framework that controls the application of patches across clusters to maintain application availability during patching. 
  • Live partition migration enables migration of partitions across clusters within the same domain, effectively moving applications from one cluster to another, again while maintaining application availability during the process. 
  • Oracle Traffic Director provides high performance, high availability load balancing, and enables optimized traffic routing and load balancing during patching and partition migration operations. 
  • Cross-site transaction recovery enables automated transaction recovery across active-active configurations when a site fails. 
  • Oracle Coherence 12.2.1 federated caching allows users to replicate data grid updates across clusters and sites to maintain data grid consistency and availability in multi data center configurations.
  • Oracle Site Guard enables the automation and reliable orchestration of failover and failback operations associated with site failures and site switchovers.

These features, combined with existing availability features in WebLogic Server and Coherence, give users powerful capabilities to meet requirements for business continuity, making WebLogic Server and Coherence the application infrastructure of choice for highly available enterprise environments.   We intend to augment these capabilities over time for use in Oracle Cloud and Oracle Fusion Middleware.

Developer Productivity and Portability to the Cloud

Oracle WebLogic Server 12.2.1 enables developers to be more productive, and enables portability of applications to the Cloud. To empower the individual developer, Oracle WebLogic Server 12.2.1 supports Java SE 8 and the full Java Enterprise Edition 7 (Java EE 7) platform, including new APIs for developer innovation. We're providing a new lightweight Quick Installer distribution for developers which can be easily patched for consistency with test and production systems. We've improved deployment performance, and updated IDE support provided in Oracle Enterprise Pack for Eclipse, Oracle JDeveloper and Oracle NetBeans. Improvements for WebLogic Server developers are paired with dramatic new innovations for building Oracle Coherence applications using standard Java SE 8 lambdas and streams.

Team development and DevOps tools complement the features provided above, providing portability between on-premise environments and the Oracle Cloud.   For example, Eclipse- and Maven-based development and build environments can easily be pushed to the Developer Cloud Service in the Oracle Cloud, to enable team application development in a shared, continuous integration environment. Applications developed in this manner can be deployed to traditional WebLogic Server domains, or to WebLogic Server partitions, and soon to Oracle WebLogic Server 12.2.1 running in the Java Cloud Service.   New cloud management and portability features, such as comprehensive REST management APIs, automated elasticity for dynamic clusters, partition management, and continued Docker certification provide new tools for flexible deployment and management control of applications in production both on premise and in the Oracle Cloud.  

All these and many other features make Oracle WebLogic Server 12.2.1 a compelling release with technical innovation and business value for customers building Java applications. Please download the product, review the documentation and evaluate it for yourself. And be sure to check back here for more information from our team.

Saturday Oct 24, 2015

Using Orachk for Application Continuity Coverage Analysis

As I described in the blog Part 2 - 12c Database and WLS - Application continuity, Application Continuity (AC) is a great feature for avoiding errors to the user with minimal changes to the application and configuration. In the blog Using Orachk to Clean Up Concrete Classes for Application Continuity, I described one of the many uses of the Oracle utility program, focusing on checking for Oracle concrete class usage that needs to be removed to run with AC.

The download page for Orachk is at

Starting in version, the Oracle concrete class checking has been enhanced to recursively expand .ear, .war, and .rar files in addition to .jar files. You no longer need to explode these archives into a directory for checking. This is a big simplification for customers using Java EE applications. Just specify the root directory for your application when setting the command line option -appjar dirname or the environment variable RAT_AC_JARDIR. The orachk utility will do the analysis.

This article will focus on a second analysis that can be done using Orachk: determining whether your application workload will be covered by AC. There are three values that control the AC checking (called acchk in orachk) for coverage analysis. Two of them are the same as for concrete class checking. The third is different but can be combined with them to get both checks done in one run. The values can be set either on the command line or via shell environment variables (or mixed). They are the following.

Command Line Argument (each also settable via a shell environment variable)

-asmhome jarfilename

This must point to a version of asm-all-5.0.3.jar that you download from

-javahome JDK8dirname

This must point to the JAVA_HOME directory for a JDK8 installation.

-apptrc dirname

To analyze trace files for AC coverage, specify a directory name that contains one or more database server trace files. The trace directory is generally

This check works with 12c database server trace files only, since AC was introduced in that release.

When scanning the trace directory for trace files, an optional value limits the analysis to the most recent number of days specified. There may be thousands of files, and this parameter drops files older than the specified number of days.

In addition, you need to turn on a specific tracing flag on the database server to see RDBMS-side program interfaces for AC.

You can turn it on programmatically for a single session using a java.sql.Connection by running something like

try (Statement statement = conn.createStatement()) {
    statement.executeUpdate("alter session set events 'trace[progint_appcont_rdbms]'");
}

More likely, you will want to turn it on for all sessions by running

alter system set event='trace[progint_appcont_rdbms]' scope = spfile ;

Understanding the analysis requires some understanding of how AC is used. First, it’s only available when using a replay driver, e.g., oracle.jdbc.replay.DataSourceImpl. Second, it’s necessary to identify request boundaries to the driver so that operations can be tracked and potentially replayed if necessary. The boundaries are defined by casting the connection to oracle.jdbc.replay.ReplayableConnection and calling beginRequest, which enables replay, and endRequest, which disables replay.

1. If you are using UCP or WLS, the boundaries are handled automatically when you get and close a connection.

2. If you aren’t using one of these connection pools that are tightly bound to the Oracle replay driver, you will need to do the calls directly in the application.

3. If you are using a UCP or WLS pool but you get a connection and hold onto it instead of regularly returning it to the connection pool, you will need to handle the intermediate request boundaries. This is error prone and not recommended.

4. If you call commit on the connection, replay is disabled by default. If you set the service SESSION_STATE_CONSISTENCY to STATIC mode instead of the default DYNAMIC mode, then a commit does not disable replay. See the first link above for further discussion on this topic. If you are using the default, you should close the connection immediately after the commit. Otherwise, the subsequent operations are not covered by replay for the remainder of the request.

5. It is also possible for the application to explicitly disable replay in the current request by calling disableReplay() on the connection.

6. There are also some operations that cannot be replayed and calling one will disable replay in the current request.
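As an illustration only, the rules above can be sketched as a small state model in Python (the real logic lives inside the Oracle replay driver; the class and method names here are invented):

```python
# Toy model of when replay protection is on, per the rules above.
# Illustrative only -- this is not Oracle code or an Oracle API.
class ReplayState:
    def __init__(self, session_state_consistency="DYNAMIC"):
        self.consistency = session_state_consistency
        self.enabled = False  # replay is off outside a request

    def begin_request(self):   # pool reserve, or explicit beginRequest
        self.enabled = True

    def end_request(self):     # pool release, or explicit endRequest
        self.enabled = False

    def commit(self):
        # A commit disables replay unless the service runs with
        # SESSION_STATE_CONSISTENCY set to STATIC.
        if self.consistency != "STATIC":
            self.enabled = False

    def disable_replay(self):  # explicit disableReplay() or restricted call
        self.enabled = False

s = ReplayState()
s.begin_request()
s.commit()
print(s.enabled)  # -> False: work after commit in DYNAMIC mode is unprotected
```

This is why the text recommends closing the connection immediately after a commit when using the default DYNAMIC mode: everything after the commit in the same request is unprotected.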

The following is a summary of the coverage analysis.

- If a round-trip is made to the database server after replay is enabled and not disabled, it is counted as a protected call.

- If a round-trip is made to the database server when replay has been disabled or replay is inactive (not in a request, or it is a restricted call, or the disable API was called), it is counted as an unprotected call until the next endRequest or beginRequest.

- Calls that are ignored for the purpose of replay are ignored in the statistics.

At the end of processing a trace file, it computes

(protected * 100) / (protected + unprotected)

to determine PASS (>= 75), WARNING (25 <= value < 75), and FAIL (< 25).
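A small Python sketch of this scoring, using the formula and thresholds above (the function names are my own, not orachk internals):

```python
def coverage_score(protected: int, unprotected: int) -> int:
    """Percentage of round-trips protected by replay, as integer math."""
    total = protected + unprotected
    if total == 0:
        return 100  # assumption: no scored calls means nothing at risk
    return (protected * 100) // total

def coverage_verdict(protected: int, unprotected: int) -> str:
    score = coverage_score(protected, unprotected)
    if score >= 75:
        return "PASS"
    if score >= 25:
        return "WARNING"
    return "FAIL"

# Matches the sample report entries below: 2 protected, 1 unprotected.
print(coverage_score(2, 1), coverage_verdict(2, 1))  # -> 66 WARNING
```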

Running orachk produces a directory named orachk_<uname>_<date>_<time>. If you want to see all of the details, look for a file named o_coverage_classes*.out under the outfiles subdirectory. It has the information for all of the trace files.

The program generates an html file that is listed in the program output. It drops output related to trace files that PASS (but they might not be 100%). If you PASS but you don’t get 100%, it’s possible that an operation won’t be replayed.

The output includes the database service name, the module name (from v$session.program, which can be set on the client side using the connection property oracle.jdbc.v$session.program), the ACTION and CLIENT_ID (which can be set using setClientInfo with "OCSID.ACTION" and "OCSID.CLIENTID” respectively).

The following is an actual table generated by orachk.

Outage Type



Coverage checks

TotalRequest = 25
PASS = 20
FAIL = 0


[WARNING] Trace file name = orcl1_ora_10046.trc Row number = 738
SERVICE NAME = ( MODULE NAME = (ac_1_bt) ACTION NAME = (qryOrdTotal_SP@alterSess_OrdTot) CLIENT ID = (clthost-1199-Default-3-jdbc000386)
Coverage(%) = 66 ProtectedCalls = 2 UnProtectedCalls = 1


[WARNING] Trace file name = orcl1_ora_10046.trc Row number = 31878
SERVICE NAME = ( MODULE NAME = (ac_1_bt) ACTION NAME = (qryOrder3@qryOrder3) CLIENT ID = (clthost-1199-Default-2-jdbc000183)
Coverage(%) = 66 ProtectedCalls = 2 UnProtectedCalls = 1


[WARNING] Trace file name = orcl1_ora_10046.trc Row number = 33240
SERVICE NAME = ( MODULE NAME = (ac_1_bt) ACTION NAME = (addProduct@getNewProdId) CLIENT ID = (clthost-1199-Default-2-jdbc000183)
Coverage(%) = 66 ProtectedCalls = 2 UnProtectedCalls = 1


[WARNING] Trace file name = orcl1_ora_10046.trc Row number = 37963
SERVICE NAME = ( MODULE NAME = (ac_1_bt) ACTION NAME = (updCustCredLimit@updCustCredLim) CLIENT ID = (clthost-1199-Default-2-jdbc000183-CLOSED)
Coverage(%) = 66 ProtectedCalls = 2 UnProtectedCalls = 1


[WARNING] Trace file name = orcl1_ora_32404.trc Row number = 289
SERVICE NAME = (orcl_pdb1) MODULE NAME = (JDBC Thin Client) ACTION NAME = null CLIENT ID = null
Coverage(%) = 40 ProtectedCalls = 2 UnProtectedCalls = 3


Report containing checks that passed: /home/username/orachk/orachk_dbhost_102415_200912/reports/acchk_scorecard_pass.html

If you are not at 100% for all of your trace files, you need to figure out why. Make sure you return connections to the pool, especially after a commit. To figure out exactly what calls are disabling replay or what operations are done after commit, you should turn on replay debugging at the driver side. This is done by running with the debug driver (e.g., ojdbc7_g.jar), setting the command line option -Doracle.jdbc.Trace=true, and including the following line in the properties file: oracle.jdbc.internal.replay.level=FINEST.

The orachk utility can help you get your application up and running using Application Continuity.

- Get rid of Oracle concrete classes.

- Analyze the database operations in the application to see if any of them are not protected by replay.

Tuesday Sep 29, 2015

Multi Data Source Configuration for Database Outages

Planned Database Maintenance with WebLogic Multi Data Source (MDS)

This article discusses how to handle planned maintenance on the database server when it is accessed by a WebLogic Multi Data Source (MDS) such that no service interruption occurs.

To ensure there is no service interruption, there must be multiple database instances available so the database can be updated in a rolling fashion. Oracle technologies to accomplish this include a RAC cluster and GoldenGate, or a combination of these products (note that DataGuard cannot be used for planned maintenance without service interruption). Each database instance is configured as a member generic data source, as described in the product documentation. This approach assumes that the application is returning connections to the pool on a regular basis.

Process Overview

1. On mid-tier systems - Shut down all member data sources associated with the RAC instance that will be shut down for maintenance. It's important that not all data sources in each MDS list be shut down, so that connections can be reserved on the other member(s). Wait for the data source shutdown to complete. See

2. At this point, it may be desirable to do some work on the database side to reduce remaining connections not associated with a WLS data source. For the Oracle database server, this might include stopping (or relocating) the application services at the instances that will be shut down for maintenance, stopping the listener, and/or issuing a transactional disconnect for the services on the database instance. See for more information that is included in the Active GridLink description.

3. Shut down the instance with the immediate option using your preferred tools.

4. Do the planned maintenance.

5. Start up the database instance using your preferred tools.

6. Start up the services when the database instances are ready for application use.

7. On mid-tier systems - Start the member data sources. See

Shutting down the data source

Shutting down the data source involves first suspending the data source and then releasing the associated resources, including the connections. When a member data source in an MDS is marked as suspended, the MDS will not try to get connections from the suspended pool but will go to the next member data source in the MDS to reserve connections. It's important that not all data sources in each MDS list be shut down at the same time. If all members are shut down or fail, then access to the MDS will fail and the application will see failures.

When you gracefully suspend a data source, which happens as the first step of shut down:

- the data source is immediately marked as suspended at the beginning of the operation so that no further connections will be created on the data source

- idle (not reserved) connections are not closed but are marked as disabled.

- after a timeout period for the suspend operation, all remaining connections in the pool will be marked as suspended and “java.sql.SQLRecoverableException: Connection has been administratively disabled. Try later.” will be thrown for any operations on the connection, indicating that the data source is suspended. These connections remain in the pool and are not closed. We won't know until the data source is resumed if they are good or not. In this case, we know that the database will be shut down and the connections in the pool will not be good if the data source is resumed. Instead, we are doing a data source shutdown which will close all of the disabled connections.

The timeout period is 60 seconds by default. This can be changed by configuring or dynamically setting Inactive Connection Timeout Seconds to a non-zero value (note that this value is overloaded with another feature when connection leak profiling is enabled). There is no upper limit on the inactive timeout. Note that the processing actually checks for in-use (reserved) resources every tenth of a second, so if the timeout value is set to 2 hours but all connections are returned a second after the suspend starts, the operation completes at that point.

Note that this operation runs synchronously; there is no asynchronous version of the MBean operation available. It was designed to run in a short amount of time, but testing shows that there is no problem setting it for much longer. It should be possible to use threads in Jython if you want to run multiple operations in one script as opposed to lots of scripts (a Jython programmer is needed).

This procedure works for MDS configured with either Load-Balancing or Failover.

This is what a WLST script looks like to edit the configuration to increase the suspend timeout and then use the runtime MBean to shutdown a data source. It would need to be integrated into the existing framework for all WLS servers/data sources.

java weblogic.WLST

import sys, socket, os
hostname = socket.gethostname()
datasource = 'myDS'    # example data source name
svr = 'myserver'       # example server name
connect('weblogic', 'password', 't3://' + hostname + ':7001')

# Edit the configuration to set the suspend timeout

edit()
startEdit()
cd('/JDBCSystemResources/' + datasource + '/JDBCResource/' + datasource + '/JDBCConnectionPoolParams/' + datasource )
cmo.setInactiveConnectionTimeoutSeconds(21600) # set the suspend timeout
save()
activate()

# Shutdown the data source

domainRuntime()
cd('/JDBCServiceRuntime/' + svr + '/JDBCDataSourceRuntimeMBeans/' + datasource )
cmo.shutdown()



Note that if MDS is using a database service, you cannot stop or relocate the service before suspending or shutting down the MDS. If you do, MDS may attempt to create a connection to the now missing service, and it will react as though the database is down and kill all connections, not allowing for a graceful shutdown. Since MDS suspend ensures that no new connections are created at the associated instance (and the MDS only creates connections on this instance, never another instance even if the service is relocated), stopping the service is not necessary for MDS graceful shutdown. Also, since MDS suspend disables all operations on the remaining pooled connections, no further progress will be made on any sessions that remain in the MDS pool (their transactions won't complete).

There is one known problem with this approach related to XA affinity that is enforced by the MDS algorithms. When an XA branch is created on a RAC instance, all additional branches are created on the same instance. While RAC supports XA across instances, there are some significant limitations that applications run into before the prepare, so MDS enforces that all operations be on the same instance. As soon as the graceful suspend operation starts, the member data source is marked as suspended so no further connections are allocated there. If an application using global transactions tries to start another branch on the suspending data source, it will fail to get a connection and the transaction will fail. In this case of an XA transaction spanning multiple WLS servers, the suspend is not graceful. This is not a problem for Emulate 2PC or 1PC, which use a single connection for all work, or for LLR.

If there is a reason to separate the suspending of the data source, at which point all connections are disabled, from the releasing of the resources, it is possible to run suspend followed by forceShutdown. A forced shutdown must be used to avoid going through the waiting period a second time. This separation is not recommended.

To get a graceful shutdown of the data source when shutting down the database, the data source must be involved. This process of shutting down the data source followed by shutdown of the database requires coordination between the mid-tier and the database server processing. Processing is simplified by using Active GridLink instead of MDS; see the AGL blog included above.

When using the Oracle database, it is recommended that an application service be configured for each database so that it can be configured to have HA features. By using an application service, it is possible to start up the database without the data source starting to use it until the administrator is ready to make it available and the application service is explicitly started.

Thursday Sep 10, 2015

Active GridLink Configuration for Database Outages

This article discusses designing and deploying an Active GridLink (AGL) data source to handle database down times with an Oracle Database RAC environment. 

AGL Configuration for Database Outages

It is assumed that an Active GridLink data source is configured as described in "Using Active GridLink Data Sources," with the following.

- FAN enabled.  FAN provides rapid notification about state changes for database services, instances, the databases themselves, and the nodes that form the cluster.  It allows for draining of work during planned maintenance with no errors whatsoever returned to applications.

- Either auto-ONS or an explicit ONS configuration.

- A dynamic database service.  Do not connect using the database service or PDB service – these are for administration only and are not supported for FAN.

- Testing connections.  Depending on the outage, applications may receive stale connections when connections are borrowed before a down event is processed.  This can occur, for example, on a clean instance down when sockets are closed coincident with incoming connection requests. To prevent the application from receiving any errors, connection checks should be enabled at the connection pool.  This requires setting test-connections-on-reserve to true and setting the test-table (the recommended value for Oracle is “SQL ISVALID”).

- Optimize SCAN usage.  As an optimization to force re-ordering of the SCAN IP addresses returned from DNS for a SCAN address, set the URL setting LOAD_BALANCE=TRUE for the ADDRESS_LIST in the database driver and later. (Before that, use the connection property oracle.jdbc.thinForceDNSLoadBalancing=true.)
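As a sketch, the connection-testing settings described above map to a fragment of the JDBC data source descriptor like the following (element values are examples; verify the element names against your WLS version's descriptor schema):

```xml
<jdbc-connection-pool-params>
  <!-- test each connection as it is reserved from the pool -->
  <test-connections-on-reserve>true</test-connections-on-reserve>
  <!-- recommended test for Oracle: uses the driver's isValid check -->
  <test-table-name>SQL ISVALID</test-table-name>
</jdbc-connection-pool-params>
```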

Planned Outage Operations

For a planned downtime, the goals are to achieve:

- Transparent scheduled maintenance: Make the scheduled maintenance process at the database servers transparent to applications.

- Session Draining: When an instance is brought down for maintenance at the database server, draining ensures that all work using instances at that node completes and that idle sessions are removed. Sessions are drained without impacting in-flight work.

The goal is to manage scheduled maintenance with no application interruption while maintenance is underway at the database server. For maintenance purposes (e.g., software and hardware upgrades, repairs, changes, migrations within and across systems), the services used are shut down gracefully, one or several at a time, without disrupting the operations and availability of the WLS applications. Upon a FAN DOWN event, AGL drains sessions away from the instance(s) targeted for maintenance. It is necessary to stop non-singleton services running on the target database instance (assuming that they are still available on the remaining running instances) or relocate singleton services from the target instance to another instance. Once the services have drained, the instance is stopped with no errors whatsoever to applications.

The following is a high-level overview of how planned maintenance occurs.

- Detect the "DOWN" event triggered by the DBA on instances targeted for maintenance

- Drain sessions away from that (those) instance(s)

- Perform scheduled maintenance at the database servers

- Resume operations on the upgraded node(s)

Unlike Multi Data Source where operations need to be coordinated on both the database server and the mid tier, Active GridLink co-operates with the database so that all of these operations are managed from the database server, simplifying the process.  The following table lists the steps that are executed on the database server and the corresponding reactions at the mid tier.

Database Server Steps and Mid-Tier Reactions

1. Stop the non-singleton service without '-force', or relocate the singleton service. (Omitting the -service option operates on all services on the instance.)

$ srvctl stop service -db <db_name> -service <service_name> -instance <instance_name>

$ srvctl relocate service -db <db_name> -service <service_name> -oldinst <oldinst> -newinst <newinst>

Mid-tier reaction: The FAN Planned Down (reason=USER) event for the service informs the connection pool that the service is no longer available for use and connections should be drained. Idle connections on the stopped service are released immediately. In-use connections are released when returned (logically closed) by the application. New connections are reserved on other instance(s) and databases offering the services. This FAN action drains the sessions from the instance without disrupting the application.

2. Disable the stopped service to ensure it is not automatically started again. Disabling the service is optional; it is recommended for maintenance actions where the service must not restart automatically until the action has completed.

$ srvctl disable service -db <db_name> -service <service_name> -instance <instance_name>

Mid-tier reaction: No new connections are associated with the stopped/disabled service at the mid-tier.

3. Allow sessions to drain. The amount of time depends on the application. There may be long-running queries, and batch programs may not be written to periodically return connections and get new ones; it is recommended that batch be drained in advance of the maintenance.

4. Check for long-running sessions. Terminate these using a transactional disconnect, then wait for the sessions to drain. You can run the query again to check if any sessions remain.

SQL> select count(*) from ( select 1 from v$session where service_name in upper('<service_name>') union all select 1 from v$transaction where status = 'ACTIVE' );
SQL> exec

Mid-tier reaction: The connection on the mid-tier will get an error. If using Application Continuity, it's possible to hide the error from the application by automatically replaying the operations on a new connection on another instance. Otherwise, the application will get a SQLException.

5. Repeat the steps above for all services targeted for planned maintenance.

6. Stop the database instance using the immediate option.

$ srvctl stop instance -db <db_name> -instance <instance_name> -stopoption immediate

Mid-tier reaction: No impact on the mid-tier until the database and service are re-started.

7. Optionally disable the instance so that it will not automatically start again during maintenance. This step is for maintenance operations where the services cannot resume during the maintenance.

$ srvctl disable instance -db <db_name> -instance <instance_name>

8. Perform the scheduled maintenance work: patches, repairs, and changes.

9. Enable and start the instance.

$ srvctl enable instance -db <db_name> -instance <instance_name>
$ srvctl start instance -db <db_name> -instance <instance_name>

10. Enable and start the service. Check that the service is up and running.

$ srvctl enable service -db <db_name> -service <service_name> -instance <instance_name>
$ srvctl start service -db <db_name> -service <service_name> -instance <instance_name>

Mid-tier reaction: The FAN UP event for the service informs the connection pool that a new instance is available for use, allowing sessions to be created on this instance at the next request submission. Automatic rebalancing of sessions starts.

The following figure shows the distribution of connections for a service across two RAC instances before and after Planned Downtime.  Notice that the connection workload moves from fifty-fifty across both instances to hundred-zero.  In other words, RAC_INST_1 can be taken down for maintenance without any impact on the business operation.

Unplanned Outages

The configuration is the same for planned and unplanned outages.

There are several differences when an unplanned outage occurs.

  • A component at the database server may fail, making all services unavailable on the instances running at that node.  There is no stop or disable of the services because they have failed.
  • The FAN unplanned DOWN event (reason=FAILURE) is delivered to the mid-tier.
  • For an unplanned event, all sessions are closed immediately preventing the application from hanging on TCP/IP timeouts.  Existing connections on other instances remain usable, and new connections are opened to these instances as needed.
  • There is no graceful draining of connections.  For those applications using services that are configured to use Application Continuity, active sessions are restored on a surviving instance and recovered by replaying the operations, masking the outage from applications.  If not protected by Application Continuity, any sessions in active communication with the instance will receive a SQLException.

Tuesday Sep 01, 2015

Active GridLink URLs

Active GridLink (AGL) is the data source type that provides connectivity between WebLogic Server and an Oracle Database service, which may include one or more Oracle RAC clusters or DataGuard sites. As the supported topologies grow to include features like Global Database Services (GDS), and as new features are added to Oracle networking and database support, the URL used to access all of this has also gotten more complex. There are lots of examples in the documentation. This is a short article that summarizes patterns for defining the URL string for use with AGL.

It should be obvious but let me start by saying AGL only works with the Oracle Thin Driver.

AGL data sources only support long format JDBC URLs. The supported long format pattern is basically the following (there are lots of additional properties, some of which are described below).
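For illustration, a minimal long-format URL of this shape looks like the following; the host, port, and service name are placeholders, and real URLs typically add the properties discussed below:

```
jdbc:oracle:thin:@(DESCRIPTION=
  (ADDRESS_LIST=(LOAD_BALANCE=on)
    (ADDRESS=(PROTOCOL=TCP)(HOST=scan-address)(PORT=1521)))
  (CONNECT_DATA=(SERVICE_NAME=myservice)))
```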


If not using SCAN, then the ADDRESS_LIST would have one or more ADDRESS attributes with HOST/PORT pairs. It's recommended to use SCAN if possible and it's recommended to use VIP addresses to avoid TCP/IP hangs.

Easy Connect (short) format URLs are not supported for AGL data sources. The following is an example of an Easy Connect URL pattern that is not supported for use with AGL data sources:
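For example, a short-format URL of this shape (host, port, and service name are placeholders) would not be supported:

```
jdbc:oracle:thin:@//hostname:1521/myservice
```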


General recommendations for the URL are as follows.

- Use a single DESCRIPTION; a DESCRIPTION_LIST can cause connection delays, so avoid it.

- Use one ADDRESS_LIST per RAC cluster or DataGuard database.

- Put RETRY_COUNT, RETRY_DELAY, CONNECT_TIMEOUT at the DESCRIPTION level so that all ADDRESS_LIST entries use the same value. 

- RETRY_DELAY specifies the delay, in seconds, between the connection retries.  It is new in the release.

- RETRY_COUNT is used to specify the number of times an ADDRESS list is traversed before the connection attempt is terminated. The default value is 0.  When using SCAN listeners with FAILOVER = on, setting the RETRY_COUNT parameter to 2 means the three SCAN IP addresses are traversed three times each, such that there are nine connect attempts (3 * 3).

- CONNECT_TIMEOUT is used to specify the overall time used to complete the Oracle Net connect.  Set CONNECT_TIMEOUT=90 or higher to prevent logon storms.   Through the JDBC driver, CONNECT_TIMEOUT is also used for the TCP/IP connection timeout for each address in the URL.  This second usage is preferred to be shorter and eventually a separate TRANSPORT_CONNECT_TIMEOUT will be introduced.  Do not set the driver property on the datasource because it is overridden by the URL property.

- The service name should be a configured application service, not a PDB or administration service.

- Specify LOAD_BALANCE=on per address list to balance the SCAN addresses.
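To make the RETRY_COUNT arithmetic above concrete, here is a tiny sketch (plain Python, not an Oracle API):

```python
def max_connect_attempts(num_addresses: int, retry_count: int) -> int:
    # The address list is traversed once, then retried RETRY_COUNT more
    # times, so each address can be tried retry_count + 1 times.
    return num_addresses * (retry_count + 1)

# Three SCAN addresses with RETRY_COUNT=2 give nine connect attempts.
print(max_connect_attempts(3, 2))  # -> 9
```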


The official blog for Oracle WebLogic Server fans and followers!
