Wednesday Nov 04, 2015

Connection Leak Profiling for WLS 12.2.1 Datasource

This is the first of a series of three articles that describe enhancements to datasource profiling in WLS 12.2.1. These enhancements were requested by customers and Oracle support. I think they will be very useful in tracking down problems in applications.

The pre-12.2.1 connection leak diagnostic profiling option requires that the connection pool “Inactive Connection Timeout Seconds” attribute be set to a positive value in order to determine how long before an idle reserved connection is considered leaked. Once identified as being leaked, a connection is reclaimed and information about the reserving thread is written out to the diagnostics log. For applications that hold connections for long periods of time, false positives can result in application errors that complicate debugging. To address this concern and improve usability, two enhancements to connection leak profiling are available:

1. Connection leak profile records will be produced for all reserved connections when the connection pool reaches max capacity and a reserve request results in a PoolLimitSQLException error.

2. An optional Connection Leak Timeout Seconds attribute will be added to the datasource descriptor for use in determining when a connection is considered “leaked”. When an idle connection exceeds the timeout value a leak profile log message is written and the connection is left intact.

The existing connection leak profiling value (0x000004) must be set in the datasource connection pool ProfileType attribute bitmask to enable connection leak detection. The ProfileConnectionLeakTimeoutSeconds attribute may then be used in place of InactiveConnectionTimeoutSeconds for identifying potential connection leaks.
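Since ProfileType is a bitmask, the leak-profiling bit can be combined with other profiling bits by OR-ing values together. A minimal sketch (only the 0x000004 leak bit comes from this post; the second constant is purely illustrative, not a documented WebLogic value):

```python
# Connection leak profiling bit (from this post).
PROFILE_TYPE_CONN_LEAK = 0x000004

# Hypothetical second profiling bit, for illustration only; see the
# WebLogic documentation for the real values of the other options.
PROFILE_TYPE_OTHER = 0x000001

# Combine options by OR-ing the bits, as with any bitmask attribute.
profile_type = PROFILE_TYPE_CONN_LEAK | PROFILE_TYPE_OTHER

# Test whether leak profiling is enabled in a given mask.
leak_enabled = bool(profile_type & PROFILE_TYPE_CONN_LEAK)
```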

This is a WLST script to set the values (the connection credentials and datasource name below are placeholders):

# java weblogic.WLST
import sys, socket, os
hostname = socket.gethostname()
datasource = 'mydatasource'  # placeholder: name of the datasource to profile
connect('weblogic', 'password', 't3://' + hostname + ':7001')
# Edit the configuration to set the leak timeout and enable profiling
edit()
startEdit()
cd('/JDBCSystemResources/' + datasource + '/JDBCResource/' + datasource +
'/JDBCConnectionPoolParams/' + datasource )
cmo.setProfileConnectionLeakTimeoutSeconds(120) # set the connection leak timeout
cmo.setProfileType(0x000004) # turn on connection leak profiling
save()
activate()

This is what the console page looks like after it is set.  Note the profile type and timeout value are set on the Diagnostics tab for the datasource.

The existing leak detection diagnostic profiling log record format is used for leaks triggered either by the ProfileConnectionLeakTimeoutSeconds attribute or when pool capacity is exceeded. In either case, a log record is generated only once for each reserved connection. If a connection is subsequently released to the pool, re-reserved and leaked again, a new record will be generated. An example resource leak diagnostic log record is shown below. The output can be reviewed in the console or by looking at the datasource profile output text file.

####<mydatasource> <WEBLOGIC.JDBC.CONN.LEAK> <Thu Apr 09 14:00:22 EDT 2015> <java.lang.Exception
at weblogic.jdbc.common.internal.ConnectionEnv.setup(
at weblogic.common.resourcepool.ResourcePoolImpl.reserveResource(
at weblogic.common.resourcepool.ResourcePoolImpl.reserveResource(
at weblogic.jdbc.common.internal.ConnectionPool.reserve(
at weblogic.jdbc.common.internal.ConnectionPool.reserve(
at weblogic.jdbc.common.internal.ConnectionPoolManager.reserve(
at weblogic.jdbc.common.internal.RmiDataSource.getPoolConnection(
at weblogic.jdbc.common.internal.RmiDataSource.getConnectionInternal(
at weblogic.jdbc.common.internal.RmiDataSource.getConnection(
at weblogic.jdbc.common.internal.RmiDataSource.getConnection(
> <autoCommit=true,enabled=true,isXA=false,isJTS=false,vendorID=100,connUsed=false,doInit=false,'null',destroyed=false,poolname=mydatasource,appname=null,moduleName=null,
currentUser=...,currentThread=Thread[[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)',5,Pooled Threads],lastUser=null,currentError=null,currentErrorTimestamp=null,JDBC4Runtime=true,supportStatementPoolable=true,needRestoreClientInfo=false,defaultClientInfo={},
supportIsValid=true> <[partition-id: 0] [partition-name: DOMAIN] >

For applications that may have connection leaks but also have some valid long-running operations, you can now scan through a list of connections that may be problematic without interfering with normal application execution.

Tuesday Nov 03, 2015

Using Eclipse with WebLogic Server 12.2.1

With the installation of WebLogic Server 12.2.1 now including the Eclipse Network Installer, which enables developers to download and install Eclipse with the specific features of interest, getting up and running with Eclipse and WebLogic Server has never been easier.

The Eclipse Network Installer presents developers with a guided interface to enable the custom installation of an Eclipse environment through the selection of an Eclipse version to be installed and which of the available capabilities are required - such as Java EE 7, Maven, Coherence, WebLogic, WLST, Cloud and Database tools amongst others. It will then download the selected components and install them directly on the developer's machine.

Eclipse and the Oracle Enterprise Pack for Eclipse plugins continue to provide extensive support for WebLogic Server, enabling it to be used throughout the software lifecycle: from development and test cycles with its Java EE dialogs, assistants and deployment plugins, through to automation of configuration and provisioning of environments with the authoring, debugging and running of scripts using the WLST Script Editor and MBean palette.

The YouTube video WebLogic Server 12.2.1 - Developing with Eclipse provides a short demonstration on how to install Eclipse and the OEPE components using the new Network Installer that is bundled within the WebLogic Server installations. It then shows the configuring of a new WebLogic Server 12.2.1 server target within Eclipse, and finishes by importing a Maven project containing a Java EE 7 example application that uses the new Batch API; the application is deployed to the server and invoked from a browser.

Monday Nov 02, 2015

Getting Started with the WebLogic Server 12.2.1 Developer Distribution

The new WebLogic Server 12.2.1 release continues down the path of providing an installation that is smaller to download and able to be installed with a single operation, providing a quicker approach for developers to get started with the product.

New with the WebLogic Server 12.2.1 release is the use of the quick installer technology, which packages the product into an executable jar file that silently installs the product into a target directory. Through the use of the quick installer, the installed product can now be patched using the standard Oracle patching utility - opatch - enabling developers to download and apply any patches as needed and to also enable a high degree of consistency with downstream testing and production environments.

Despite it's smaller distribution size the developer distribution delivers a full featured WebLogic Server including the rich administration console, the comprehensive scripting environment with WLST, the Configuration Wizard and Domain Builders, the Maven plugins and artifacts and of course all the new WebLogic Server features such as Java EE 7 support, MultiTenancy, Elastic Dynamic Clusters and more.

For a quick look at using the new developer distribution, creating a domain and accessing the administration console, check out the YouTube video: Getting Started with the Developer Distribution.

WebLogic Scripting Tool (WLST) updates in 12.2.1

A number of updates have been implemented in Oracle WebLogic Server and Oracle Fusion Middleware 12.2.1 to simplify the usage of the WebLogic Scripting Tool (WLST), especially when multiple Oracle Fusion Middleware products are being used.    In his blog, Robert Patrick describes what we have done to unify the usage of WLST across the Oracle Fusion Middleware 12.2.1 product line.    This information will be very helpful to WLST users who want to better understand what was implemented in 12.2.1 and any implications for your environments.   

Wednesday Oct 28, 2015

WebLogic Server Multitenant Info Sources

In Will Lyons’s blog entry, he introduced the idea of WebLogic Server Multitenant, and there have been a few other blog entries related to WebLogic Server Multitenant since then. Besides these blogs and the product documentation, there are a couple of other things to take a look at:

I just posted a video on YouTube that includes a high-level introduction to WebLogic Server Multitenant. This video is a little bit longer than my other videos, but there are a lot of good things to talk about in WebLogic Server Multitenant.

We also have a datasheet, which includes a fair amount of detail regarding the value and usefulness of WebLogic Server Multitenant.

I’m at OpenWorld this week where we are seeing a lot of interest in these new features. One aspect of value that seems to keep coming up is the value of running on a shared platform. There are cases where every time a new environment is added, it needs to be certified against security requirements or standard operating procedures. By sharing a platform, those certifications only need to be done once for the environment. New applications deployed in pluggable partitions would not need a ground-up certification. This can mean faster roll-out times/faster time to market and reduced costs.

 That’s all for now. Keep your eye on this blog. More info coming soon!

Tuesday Oct 27, 2015

Data Source System Property Enhancement in WLS 12.2.1

The earlier blog at Setting V$SESSION for a WLS Datasource described using system properties to set driver connection properties, which in turn automatically set values on the Oracle database session. Some comments from readers indicated that there were some limitations to this mechanism.

- There are some values that can’t be set on the command line because they aren’t available until the application server starts. The most obvious value is the process identifier.

- Values set on the command line apply to all environments in the server, which is fine for values like the program name but not appropriate for datasource-specific values or the new partition name that is available with the WLS Multitenancy feature.

- In a recent application that I was working with, it was desirable to connect to the server hosting the datasource that was connected to the session so that we could run a graceful shutdown. In this case, additional information was needed to generate the URL.

All of these cases are handled with the enhanced system properties feature.

The original feature supported setting driver properties using the value of system properties. The new feature is overloaded on top of the old one to avoid introducing yet another set of driver properties in the graphical user interfaces and WLST scripts. It is enabled by including one or more of the supported variables listed below in the string value. If one or more of these variables is included in the system property, each is substituted with the corresponding value. If a value for a variable is not found, no substitution is performed. If none of these variables is found in the system property, then the value is taken as a system property name.


The supported variables correspond to the following values:

- First half (up to @) of ManagementFactory.getRuntimeMXBean().getName(), i.e., the process identifier
- Second half of ManagementFactory.getRuntimeMXBean().getName(), i.e., the machine name
- Java system property
- System property
- Data source name from the JDBC descriptor (it does not contain the partition name)
- Partition name or DOMAIN
- WebLogic Server listen port
- WebLogic Server SSL listen port
- WebLogic Server server name
- WebLogic Server domain name

A sample property is shown in the following example:

<sys-prop-value>WebLogic ${servername} Partition ${partition}</sys-prop-value>
In this example, v$session.program running on myserver is set to “WebLogic myserver Partition DOMAIN”.
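The substitution rules can be sketched as follows; this is a simplified illustrative model, not WebLogic's actual implementation:

```python
import re

_VAR = re.compile(r'\$\{([^}]+)\}')

def resolve_property(value, variables, system_properties):
    """Model of the documented rules: if the string contains ${...}
    tokens, substitute each known variable and leave unknown tokens
    untouched; otherwise treat the whole value as a system property
    name (returned unchanged here if the property is not set)."""
    if _VAR.search(value):
        return _VAR.sub(
            lambda m: variables.get(m.group(1), m.group(0)), value)
    return system_properties.get(value, value)
```

With variables {"servername": "myserver", "partition": "DOMAIN"}, resolving "WebLogic ${servername} Partition ${partition}" reproduces the value shown in the example above.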

The biggest limitation of this feature is the character limit on the associated columns in v$session. If you exceed the limit, connection creation will fail.

Combining this enhancement with the Oracle v$session values makes it a powerful feature for tracking information about the source of connections.

See the blog announcing Oracle WebLogic Server 12.2.1 for more details on Multitenancy and other new features.

Monday Oct 26, 2015

Announcing Oracle WebLogic Server 12.2.1

Oracle WebLogic Server 12.2.1 (12cR2) is now generally available.  The release was highlighted in Larry Ellison's Oracle OpenWorld keynote Sunday night, and was formally announced today in Inderjeet Singh's Oracle OpenWorld General Session and in a press release.  Oracle WebLogic Server 12.2.1 is available for download on the Oracle Technology Network (OTN) at the Oracle Fusion Middleware Download page, and the Oracle WebLogic Server Download page.   The product documentation is posted along with all documentation for Oracle Fusion Middleware 12.2.1, which has also been made available.   

Oracle WebLogic Server 12.2.1 is the biggest WebLogic Server product release in many years.    We intend to provide extensive detail about its new features in videos and blogs posted here over the coming weeks.   Consider this note an initial summary on some of the major new feature areas:

  • Multitenancy
  • Continuous Availability
  • Developer Productivity and Portability to the Cloud


WebLogic Server multitenancy enables users to consolidate applications and reduce cost of ownership, while maintaining application isolation and increasing flexibility and agility.   Using multitenancy, different applications (or tenants) can share the same server JVM (or cluster), and the same WebLogic domain, while maintaining isolation from each other by running in separate domain partitions.  

A domain partition is an isolated subset of a WebLogic Server domain, and its servers.  Domain partitions act like microcontainers, encapsulating applications and the resources (datasources, JMS servers, etc) they depend on.  Partitions are isolated from each other, so that applications in one partition do not disrupt applications running in other partitions in the same server or domain. An amazing set of features provide these degrees of isolation.   We will elaborate on them here over time.

Though partitions are isolated, they share many resources - the physical system they run on or the VM, the operating system, the JVM, and WebLogic Server libraries.  Because they share resources, they use fewer resources.   Customers can consolidate applications from separate domains into fewer domains, running in fewer JVMs in fewer physical systems.  Consolidation means fewer entities to manage, reduced resource consumption, and lower cost of ownership.

Partitions are easy to use.  Applications can be deployed to partitions without changes, and we will be providing tools to enable users to migrate existing applications to partitions.   Partitions can be exported from one domain and imported into other domains, simplifying migration of applications across development, test and production environments, and across private and public clouds. Partitions increase application portability and agility, giving development, DevOps, test and production teams more flexibility in how they develop, release and manage production applications.   

WebLogic Server's new multitenancy features are integrated with innovations introduced across the Oracle JDK, Oracle Coherence, Oracle Traffic Director, and Oracle Fusion Middleware and are closely aligned with Oracle Database Pluggable Databases.   Over time you will see multitenancy being leveraged more broadly in the Oracle Cloud and Oracle Fusion Middleware. Multitenancy represents a significant new innovation for WebLogic Server and for Oracle.

Continuous Availability

Oracle WebLogic Server 12.2.1 introduces new features to minimize planned and unplanned downtime in multi data center configurations.    Many WebLogic Server customers have implemented disaster recovery architectures to provide application availability and business continuity.   WebLogic Server's new continuous availability features will make it easier to create, optimize and manage such configurations, while increasing application availability.  They include the following:

  • Zero-downtime patching provides an orchestration framework that controls the application of patches across clusters to maintain application availability during patching. 
  • Live partition migration enables migration of partitions across clusters within the same domain, effectively moving applications from one cluster to another, again while maintaining application availability during the process. 
  • Oracle Traffic Director provides high performance, high availability load balancing, and enables optimized traffic routing and load balancing during patching and partition migration operations. 
  • Cross-site transaction recovery enables automated transaction recovery across active-active configurations when a site fails. 
  • Oracle Coherence 12.2.1 federated caching allows users to replicate data grid updates across clusters and sites to maintain data grid consistency and availability in multi data center configurations.
  • Oracle Site Guard enables the automation and reliable orchestration of failover and failback operations associated with site failures and site switchovers.

These features, combined with existing availability features in WebLogic Server and Coherence, give users powerful capabilities to meet requirements for business continuity, making WebLogic Server and Coherence the application infrastructure of choice for highly available enterprise environments.   We intend to augment these capabilities over time for use in Oracle Cloud and Oracle Fusion Middleware.

Developer Productivity and Portability to the Cloud

Oracle WebLogic Server 12.2.1 enables developers to be more productive, and enables portability of applications to the Cloud. To empower the individual developer, Oracle WebLogic Server 12.2.1 supports Java SE 8 and the full Java Enterprise Edition 7 (Java EE 7) platform, including new APIs for developer innovation. We're providing a new lightweight Quick Installer distribution for developers which can be easily patched for consistency with test and production systems. We've improved deployment performance, and updated IDE support provided in Oracle Enterprise Pack for Eclipse, Oracle JDeveloper and Oracle NetBeans. Improvements for WebLogic Server developers are paired with dramatic new innovations for building Oracle Coherence applications using standard Java SE 8 lambdas and streams.

Team development and DevOps tools complement the features provided above, providing portability between on-premise environments and the Oracle Cloud.   For example, Eclipse- and Maven-based development and build environments can easily be pushed to the Developer Cloud Service in the Oracle Cloud, to enable team application development in a shared, continuous integration environment. Applications developed in this manner can be deployed to traditional WebLogic Server domains, or to WebLogic Server partitions, and soon to Oracle WebLogic Server 12.2.1 running in the Java Cloud Service.   New cloud management and portability features, such as comprehensive REST management APIs, automated elasticity for dynamic clusters, partition management, and continued Docker certification provide new tools for flexible deployment and management control of applications in production both on premise and in the Oracle Cloud.  

All these and many other features make Oracle WebLogic Server 12.2.1 a compelling release with technical innovation and business value for customers building Java applications. Please download the product, review the documentation and evaluate it for yourself. And be sure to check back here for more information from our team.

Saturday Oct 24, 2015

Using Orachk for Application Continuity Coverage Analysis

As I described in the blog Part 2 - 12c Database and WLS - Application continuity, Application Continuity (AC) is a great feature for avoiding errors to the user with minimal changes to the application and configuration. In the blog Using Orachk to Clean Up Concrete Classes for Application Continuity, I described one of the many uses of the Orachk utility, focusing on checking for Oracle concrete class usage that needs to be removed to run with AC.

Orachk is available for download from My Oracle Support.

Starting in a recent version, the Oracle concrete class checking has been enhanced to recursively expand .ear, .war, and .rar files in addition to .jar files. You no longer need to explode these archives into a directory for checking. This is a big simplification for customers using Java EE applications. Just specify the root directory for your application using the command line option -appjar dirname or the environment variable RAT_AC_JARDIR. The orachk utility will do the analysis.

This article focuses on a second analysis that can be done using Orachk: seeing whether your application workload will be covered by AC. There are three values that control the AC checking (called acchk in orachk) for coverage analysis. Two of them are the same as for concrete class checking. The third is different but can be combined with it to get both checks done in one run. The values can be set either on the command line or via shell environment variables (or a mix). They are the following.

-asmhome jarfilename

This must point to a version of asm-all-5.0.3.jar, which you download separately.

-javahome JDK8dirname

This must point to the JAVA_HOME directory for a JDK8 installation.

-apptrc dirname

To analyze trace files for AC coverage, specify a directory name that contains one or more database server trace files. This check works with 12c database servers only, since AC was introduced in that release.

An additional optional value limits the analysis to the specified most recent number of days when scanning the trace directory for trace files. There may be thousands of files, and this parameter drops files older than the specified number of days.

In addition, you need to turn on a specific tracing flag on the database server to see RDBMS-side program interfaces for AC.

You can turn it on programmatically for a single session using a java.sql.Connection by running something like

Statement statement = conn.createStatement();
statement.executeUpdate("alter session set events 'trace [progint_appcont_rdbms]' ");

More likely, you will want to turn it on for all sessions by running

alter system set event='trace[progint_appcont_rdbms]' scope = spfile ;

Understanding the analysis requires some understanding of how AC is used. First, it’s only available when using a replay driver, e.g., oracle.jdbc.replay.DataSourceImpl. Second, it’s necessary to identify request boundaries to the driver so that operations can be tracked and potentially replayed if necessary. The boundaries are defined by casting the connection to oracle.jdbc.replay.ReplayableConnection and calling beginRequest, which enables replay, and endRequest, which disables replay.

1. If you are using UCP or WLS, the boundaries are handled automatically when you get and close a connection.

2. If you aren’t using one of these connection pools that are tightly bound to the Oracle replay driver, you will need to do the calls directly in the application.

3. If you are using a UCP or WLS pool but you get a connection and hold onto it instead of regularly returning it to the connection pool, you will need to handle the intermediate request boundaries. This is error prone and not recommended.

4. If you call commit on the connection, replay is disabled by default. If you set the service SESSION_STATE_CONSISTENCY to STATIC mode instead of the default DYNAMIC mode, then a commit does not disable replay. See the first link above for further discussion on this topic. If you are using the default, you should close the connection immediately after the commit. Otherwise, the subsequent operations are not covered by replay for the remainder of the request.

5. It is also possible for the application to explicitly disable replay in the current request by calling disableReplay() on the connection.

6. There are also some operations that cannot be replayed and calling one will disable replay in the current request.
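Rules 1 through 6 can be captured in a toy state model (for illustration only, assuming the default DYNAMIC session-state consistency; this is not the Oracle driver's code):

```python
class ReplayModel:
    """Tracks whether a round-trip would count as protected."""

    def __init__(self):
        self.enabled = False

    def begin_request(self):   # pool reserve (UCP/WLS) or explicit call
        self.enabled = True

    def end_request(self):     # pool release or explicit call
        self.enabled = False

    def commit(self):          # rule 4: commit disables replay in DYNAMIC mode
        self.enabled = False

    def disable_replay(self):  # rule 5: explicit disable (also rule 6 operations)
        self.enabled = False

    def round_trip(self):
        return "protected" if self.enabled else "unprotected"
```

Calling commit() mid-request leaves subsequent round-trips unprotected until the next end_request()/begin_request(), which is exactly the unprotected tail described in rule 4.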

The following is a summary of the coverage analysis.

- If a round-trip is made to the database server after replay is enabled and not disabled, it is counted as a protected call.

- If a round-trip is made to the database server when replay has been disabled or replay is inactive (not in a request, or it is a restricted call, or the disable API was called), it is counted as an unprotected call until the next endRequest or beginRequest.

- Calls that are ignored for the purpose of replay are ignored in the statistics.

At the end of processing a trace file, it computes

(protected * 100) / (protected + unprotected)

to determine PASS (>= 75), WARNING (25 <= value < 75) and FAIL (< 25).
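As a worked example, the integer form of this computation and the resulting classification can be written as:

```python
def coverage(protected, unprotected):
    # Integer coverage percentage, as reported per trace-file row.
    return (protected * 100) // (protected + unprotected)

def classify(protected, unprotected):
    pct = coverage(protected, unprotected)
    if pct >= 75:
        return pct, "PASS"
    if pct >= 25:
        return pct, "WARNING"
    return pct, "FAIL"
```

classify(2, 1) yields (66, 'WARNING') and classify(2, 3) yields (40, 'WARNING'), matching the per-trace-file coverage percentages in the sample report rows below.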

Running orachk produces a directory named orachk_<uname>_<date>_<time>. If you want to see all of the details, look for a file named o_coverage_classes*.out under the outfiles subdirectory. It has the information for all of the trace files.

The program generates an html file that is listed in the program output. It drops output related to trace files that PASS (but they might not be 100%). If you PASS but you don’t get 100%, it’s possible that an operation won’t be replayed.

The output includes the database service name, the module name (from v$session.program, which can be set on the client side using the connection property oracle.jdbc.v$session.program), the ACTION and CLIENT_ID (which can be set using setClientInfo with "OCSID.ACTION" and "OCSID.CLIENTID" respectively).

The following is an actual table generated by orachk.

Outage Type: Coverage checks

TotalRequest = 25
PASS = 20
FAIL = 0


[WARNING] Trace file name = orcl1_ora_10046.trc Row number = 738
SERVICE NAME = ( MODULE NAME = (ac_1_bt) ACTION NAME = (qryOrdTotal_SP@alterSess_OrdTot) CLIENT ID = (clthost-1199-Default-3-jdbc000386)
Coverage(%) = 66 ProtectedCalls = 2 UnProtectedCalls = 1


[WARNING] Trace file name = orcl1_ora_10046.trc Row number = 31878
SERVICE NAME = ( MODULE NAME = (ac_1_bt) ACTION NAME = (qryOrder3@qryOrder3) CLIENT ID = (clthost-1199-Default-2-jdbc000183)
Coverage(%) = 66 ProtectedCalls = 2 UnProtectedCalls = 1


[WARNING] Trace file name = orcl1_ora_10046.trc Row number = 33240
SERVICE NAME = ( MODULE NAME = (ac_1_bt) ACTION NAME = (addProduct@getNewProdId) CLIENT ID = (clthost-1199-Default-2-jdbc000183)
Coverage(%) = 66 ProtectedCalls = 2 UnProtectedCalls = 1


[WARNING] Trace file name = orcl1_ora_10046.trc Row number = 37963
SERVICE NAME = ( MODULE NAME = (ac_1_bt) ACTION NAME = (updCustCredLimit@updCustCredLim) CLIENT ID = (clthost-1199-Default-2-jdbc000183-CLOSED)
Coverage(%) = 66 ProtectedCalls = 2 UnProtectedCalls = 1


[WARNING] Trace file name = orcl1_ora_32404.trc Row number = 289
SERVICE NAME = (orcl_pdb1) MODULE NAME = (JDBC Thin Client) ACTION NAME = null CLIENT ID = null
Coverage(%) = 40 ProtectedCalls = 2 UnProtectedCalls = 3


Report containing checks that passed: /home/username/orachk/orachk_dbhost_102415_200912/reports/acchk_scorecard_pass.html

If you are not at 100% for all of your trace files, you need to figure out why. Make sure you return connections to the pool, especially after a commit. To figure out exactly which calls are disabling replay or which operations are done after a commit, you should turn on replay debugging on the driver side. This is done by running with the debug driver (e.g., ojdbc7_g.jar), setting the command line option -Doracle.jdbc.Trace=true, and including the following line in the properties file: oracle.jdbc.internal.replay.level=FINEST.

The orachk utility can help you get your application up and running using Application Continuity.

- Get rid of Oracle concrete classes.

- Analyze the database operations in the application to see if any of them are not protected by replay.

Tuesday Sep 29, 2015

Multi Data Source Configuration for Database Outages

Planned Database Maintenance with WebLogic Multi Data Source (MDS)

This article discusses how to handle planned maintenance on the database server, when it is accessed by a WebLogic Multi Data Source (MDS), in a way that avoids any service interruption.

To ensure there is no service interruption, there must be multiple database instances available so the database can be updated in a rolling fashion. Oracle technologies to accomplish this include RAC clusters and GoldenGate, or a combination of these products (note that Data Guard cannot be used for planned maintenance without service interruption). Each database instance is configured as a member generic data source, as described in the product documentation. This approach assumes that the application is returning connections to the pool on a regular basis.

Process Overview

1. On mid-tier systems - Shut down all member data sources associated with the RAC instance that will be shut down for maintenance. It's important that not all data sources in each MDS list be shut down, so that connections can be reserved on the other member(s). Wait for the data source shutdown to complete.

2. At this point, it may be desirable to do some work on the database side to reduce remaining connections not associated with WLS data sources. For the Oracle database server, this might include stopping (or relocating) the application services at the instances that will be shut down for maintenance, stopping the listener, and/or issuing a transactional disconnect for the services on the database instance. See the Active GridLink description for more information.

3. Shut down the instance (e.g., shutdown immediate) using your preferred tools.

4. Do the planned maintenance.

5. Start up the database instance using your preferred tools.

6. Start up the services when the database instances are ready for application use.

7. On mid-tier systems - Start the member data sources.

Shutting down the data source

Shutting down the data source involves first suspending the data source and then releasing the associated resources, including the connections. When a member data source in an MDS is marked as suspended, the MDS will not try to get connections from the suspended pool but will go to the next member data source in the MDS to reserve connections. It's important that not all data sources in each MDS list are shut down at the same time. If all members are shut down or fail, then access to the MDS will fail and the application will see failures.

When you gracefully suspend a data source, which happens as the first step of shut down:

- the data source is immediately marked as suspended at the beginning of the operation so that no further connections will be created on the data source

- idle (not reserved) connections are not closed but are marked as disabled.

- after a timeout period for the suspend operation, all remaining connections in the pool will be marked as suspended and “java.sql.SQLRecoverableException: Connection has been administratively disabled. Try later.” will be thrown for any operations on the connection, indicating that the data source is suspended. These connections remain in the pool and are not closed. We won't know until the data source is resumed if they are good or not. In this case, we know that the database will be shut down and the connections in the pool will not be good if the data source is resumed. Instead, we are doing a data source shutdown which will close all of the disabled connections.

The timeout period is 60 seconds by default. This can be changed by configuring or dynamically setting Inactive Connection Timeout Seconds to a non-zero value (note that this value is overloaded with another feature when connection leak profiling is enabled). There is no upper limit on the inactive timeout. Note that the processing actually checks for in-use (reserved) resources every tenth of a second, so if the timeout value is set to two hours but all connections are returned after one second, the operation completes about one second later.
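The drain wait can be pictured as a simple polling loop. This is a sketch of the idea only, not the actual WLS code:

```python
import time

def wait_for_drain(reserved_count, timeout_seconds, poll=0.1):
    """Poll until no connections are reserved or the timeout expires.
    reserved_count is a callable returning the number of in-use
    connections.  Because the check runs roughly every tenth of a
    second, the wait ends shortly after the last connection is
    returned, even if the configured timeout is hours long."""
    deadline = time.time() + timeout_seconds
    while time.time() < deadline:
        if reserved_count() == 0:
            return True   # drained before the timeout
        time.sleep(poll)
    return False          # timeout expired with connections still in use

# Example: the pool is already drained, so this returns immediately.
print(wait_for_drain(lambda: 0, 21600))  # True
```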

Note that this operation runs synchronously; there is no asynchronous version of the MBean operation available. It was designed to run in a short amount of time, but testing shows that there is no problem with setting the timeout much longer. It should be possible to use threads in jython if you want to run multiple operations in one script instead of many separate scripts (a jython programmer is needed).
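The threading idea can be sketched as follows. Here shutdown_datasource is a placeholder; in a real WLST script it would cd() to the JDBCDataSourceRuntimeMBean and invoke the shutdown operation:

```python
import threading

def shutdown_datasource(name, results):
    # Placeholder for the synchronous shutdown() mbean call on one
    # data source; each call can take up to the suspend timeout.
    results[name] = "shut down"

results = {}
threads = [threading.Thread(target=shutdown_datasource, args=(name, results))
           for name in ("ds1", "ds2", "ds3")]
for t in threads:
    t.start()
for t in threads:
    t.join()   # each synchronous shutdown runs in its own thread
print(results)
```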

This procedure works for MDS configured with either Load-Balancing or Failover.

This is what a WLST script looks like: it edits the configuration to increase the suspend timeout and then uses the runtime MBean to shut down a data source. It would need to be integrated into the existing framework for all WLS servers/data sources.

# java weblogic.WLST
import sys, socket, os
hostname = socket.gethostname()
# datasource and svr are assumed to be set to the data source and server names
# Edit the configuration to set the suspend timeout
cd('/JDBCSystemResources/' + datasource + '/JDBCResource/' + datasource + '/JDBCConnectionPoolParams/' + datasource )
cmo.setInactiveConnectionTimeoutSeconds(21600) # set the suspend timeout to 6 hours
# Shut down the data source via the runtime MBean
cd('/JDBCServiceRuntime/' + svr + '/JDBCDataSourceRuntimeMBeans/' + datasource )
cmo.shutdown()

Note that if an MDS is using a database service, you cannot stop or relocate the service before suspending or shutting down the MDS. If you do, the MDS may attempt to create a connection to the now-missing service, and it will react as though the database is down and kill all connections, not allowing for a graceful shutdown. Since MDS suspend ensures that no new connections are created at the associated instance (and the MDS only creates connections on this instance, never another instance, even if the service is relocated), stopping the service is not necessary for MDS graceful shutdown. Also, since MDS suspend stops all operations on the connections, no further progress will be made on any sessions that remain in the MDS pool (their transactions won't complete).

There is one known problem with this approach, related to the XA affinity that is enforced by the MDS algorithms. When an XA branch is created on a RAC instance, all additional branches are created on the same instance. While RAC supports XA across instances, there are some significant limitations that applications run into before the prepare phase, so MDS enforces that all operations be on the same instance. As soon as the graceful suspend operation starts, the member data source is marked as suspended, so no further connections are allocated there. If an application using global transactions tries to start another branch on the suspending data source, it will fail to get a connection and the transaction will fail. In this case of an XA transaction spanning multiple WLS servers, the suspend is not graceful. This is not a problem for Emulate 1PC or 2PC, which use a single connection for all work, or for LLR.

If there is a reason to separate the suspending of the data source, at which point all connections are disabled, from the releasing of the resources, it is possible to run suspend followed by forceShutdown. A forced shutdown must be used to avoid going through the waiting period a second time. This separation is not recommended.

To get a graceful shutdown of the data source when shutting down the database, the data source must be involved. This process of shutting down the data source followed by shutdown of the database requires coordination between the mid-tier and the database server processing. Processing is simplified by using Active GridLink instead of MDS; see the AGL blog included above.

When using the Oracle database, it is recommended that an application service be configured for each database so that it can be configured with HA features. By using an application service, it is possible to start up the database without the data source starting to use it until the administrator is ready to make it available and explicitly starts the application service.

Thursday Sep 10, 2015

Active GridLink Configuration for Database Outages

This article discusses designing and deploying an Active GridLink (AGL) data source to handle database down times with an Oracle Database RAC environment. 

AGL Configuration for Database Outages

It is assumed that an Active GridLink data source is configured as described in "Using Active GridLink Data Sources", with the following.

- FAN enabled.  FAN provides rapid notification about state changes for database services, instances, the databases themselves, and the nodes that form the cluster.  It allows for draining of work during planned maintenance with no errors whatsoever returned to applications.

- Either auto-ONS or an explicit ONS configuration.

- A dynamic database service. Do not connect using the default database service or PDB service; these are for administration only and are not supported for FAN.

- Testing connections.  Depending on the outage, applications may receive stale connections when connections are borrowed before a down event is processed.  This can occur, for example, on a clean instance down when sockets are closed coincident with incoming connection requests. To prevent the application from receiving any errors, connection checks should be enabled at the connection pool.  This requires setting test-connections-on-reserve to true and setting the test-table (the recommended value for Oracle is “SQL ISVALID”).

- Optimize SCAN usage. To force re-ordering of the SCAN IP addresses returned from DNS for a SCAN address, set LOAD_BALANCE=TRUE for the ADDRESS_LIST in the URL (supported in recent database driver versions; in earlier versions, use the connection property oracle.jdbc.thinForceDNSLoadBalancing=true).

Planned Outage Operations

For a planned downtime, the goals are to achieve:

- Transparent scheduled maintenance: Make the scheduled maintenance process at the database servers transparent to applications.

- Session Draining: When an instance is brought down for maintenance at the database server, draining ensures that all work using instances at that node completes and that idle sessions are removed. Sessions are drained without impacting in-flight work.

The goal is to manage scheduled maintenance with no application interruption while maintenance is underway at the database server. For maintenance purposes (e.g., software and hardware upgrades, repairs, changes, migrations within and across systems), the services used are shut down gracefully, one or several at a time, without disrupting the operations and availability of the WLS applications. Upon a FAN DOWN event, AGL drains sessions away from the instance(s) targeted for maintenance. It is necessary to stop non-singleton services running on the target database instance (assuming that they are still available on the remaining running instances) or relocate singleton services from the target instance to another instance. Once the services have drained, the instance is stopped with no errors whatsoever to applications.

The following is a high-level overview of how planned maintenance occurs.

- Detect the "DOWN" event triggered by the DBA on instances targeted for maintenance
- Drain sessions away from that (those) instance(s)
- Perform scheduled maintenance at the database servers
- Resume operations on the upgraded node(s)

Unlike Multi Data Source, where operations need to be coordinated on both the database server and the mid-tier, Active GridLink cooperates with the database so that all of these operations are managed from the database server, simplifying the process. The following lists the steps that are executed on the database server and the corresponding reactions at the mid-tier.

Database Server Steps and Mid-Tier Reactions

Database server: Stop the non-singleton service without '-force', or relocate the singleton service. Omitting the -service option operates on all services on the instance.

$ srvctl stop service -db <db_name> -service <service_name> -instance <instance_name>

$ srvctl relocate service -db <db_name> -service <service_name> -oldinst <oldinst> -newinst <newinst>

Mid-tier reaction: The FAN Planned Down (reason=USER) event for the service informs the connection pool that the service is no longer available for use and connections should be drained. Idle connections on the stopped service are released immediately. In-use connections are released when returned (logically closed) by the application. New connections are reserved on other instance(s) and databases offering the service. This FAN action drains the sessions from the instance without disrupting the application.

Database server: Disable the stopped service to ensure it is not automatically started again. Disabling the service is optional; it is recommended for maintenance actions where the service must not restart automatically until the action has completed.

$ srvctl disable service -db <db_name> -service <service_name> -instance <instance_name>

Mid-tier reaction: No new connections are associated with the stopped/disabled service at the mid-tier.

Database server: Allow sessions to drain. The amount of time depends on the application. There may be long-running queries. Batch programs may not be written to periodically return connections and get new ones; it is recommended that batch be drained in advance of the maintenance. Check for long-running sessions and terminate them using a transactional disconnect, then wait for the sessions to drain. You can run the query again to check whether any sessions remain.

SQL> select count(*) from
     ( select 1 from v$session where service_name in upper('<service_name>')
       union all
       select 1 from v$transaction where status = 'ACTIVE' );
SQL> exec

Mid-tier reaction: The connection on the mid-tier will get an error. If using Application Continuity, it is possible to hide the error from the application by automatically replaying the operations on a new connection on another instance. Otherwise, the application will get a SQLException.

Repeat the steps above for all services targeted for planned maintenance.

Database server: Stop the database instance using the immediate option.

$ srvctl stop instance -db <db_name> -instance <instance_name> -stopoption immediate

Mid-tier reaction: No impact on the mid-tier until the database and service are restarted.

Database server: Optionally disable the instance so that it will not automatically start again during maintenance. This step is for maintenance operations where the services cannot resume during the maintenance.

$ srvctl disable instance -db <db_name> -instance <instance_name>

Database server: Perform the scheduled maintenance work - patches, repairs, and changes.

Database server: Enable and start the instance.

$ srvctl enable instance -db <db_name> -instance <instance_name>
$ srvctl start instance -db <db_name> -instance <instance_name>

Database server: Enable and start the service. Check that the service is up and running.

$ srvctl enable service -db <db_name> -service <service_name> -instance <instance_name>
$ srvctl start service -db <db_name> -service <service_name> -instance <instance_name>

Mid-tier reaction: The FAN UP event for the service informs the connection pool that a new instance is available for use, allowing sessions to be created on this instance at the next request submission. Automatic rebalancing of sessions starts.
The following figure shows the distribution of connections for a service across two RAC instances before and after Planned Downtime.  Notice that the connection workload moves from fifty-fifty across both instances to hundred-zero.  In other words, RAC_INST_1 can be taken down for maintenance without any impact on the business operation.

Unplanned Outages

The configuration is the same for planned and unplanned outages.

There are several differences when an unplanned outage occurs.

  • A component at the database server may fail, making all services unavailable on the instances running at that node. There is no stop or disable of the services because they have failed.
  • The FAN unplanned DOWN event (reason=FAILURE) is delivered to the mid-tier.
  • For an unplanned event, all sessions are closed immediately preventing the application from hanging on TCP/IP timeouts.  Existing connections on other instances remain usable, and new connections are opened to these instances as needed.
  • There is no graceful draining of connections.  For those applications using services that are configured to use Application Continuity, active sessions are restored on a surviving instance and recovered by replaying the operations, masking the outage from applications.  If not protected by Application Continuity, any sessions in active communication with the instance will receive a SQLException.

Tuesday Sep 01, 2015

Active GridLink URLs

Active GridLink (AGL) is the data source type that provides connectivity between WebLogic Server and an Oracle Database service, which may include one or more Oracle RAC clusters or Data Guard sites. As the supported topologies grow to include additional features like Global Data Services (GDS), and as new features are added to Oracle networking and database support, the URL used to access these services has become more complex. There are lots of examples in the documentation. This is a short article that summarizes patterns for defining the URL string for use with AGL.

It should be obvious but let me start by saying AGL only works with the Oracle Thin Driver.

AGL data sources only support long format JDBC URLs. The supported long format pattern is basically the following (there are lots of additional properties, some of which are described below).
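A representative long-format URL, with placeholder SCAN host, port, and service name, might look like the following:

jdbc:oracle:thin:@(DESCRIPTION=
  (ADDRESS_LIST=(LOAD_BALANCE=on)
    (ADDRESS=(PROTOCOL=TCP)(HOST=scan-address)(PORT=1521)))
  (CONNECT_DATA=(SERVICE_NAME=myservice)))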


If not using SCAN, then the ADDRESS_LIST would have one or more ADDRESS entries with HOST/PORT pairs. It is recommended to use SCAN if possible, and to use VIP addresses to avoid TCP/IP hangs.

Easy Connect (short) format URLs are not supported for AGL data sources. The following is an example of an Easy Connect URL pattern that is not supported for use with AGL data sources:
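For example, a short-format URL of this shape (host and service name are placeholders) would not be supported:

jdbc:oracle:thin:@//host:1521/myservice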


General recommendations for the URL are as follows.

- Use a single DESCRIPTION.  Avoid a DESCRIPTION_LIST to avoid connection delays.

- Use one ADDRESS_LIST per RAC cluster or DataGuard database.

- Put RETRY_COUNT, RETRY_DELAY, CONNECT_TIMEOUT at the DESCRIPTION level so that all ADDRESS_LIST entries use the same value. 

- RETRY_DELAY specifies the delay, in seconds, between the connection retries.  It is new in the release.

- RETRY_COUNT is used to specify the number of times an ADDRESS list is traversed before the connection attempt is terminated. The default value is 0.  When using SCAN listeners with FAILOVER = on, setting the RETRY_COUNT parameter to 2 means the three SCAN IP addresses are traversed three times each, such that there are nine connect attempts (3 * 3).
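The retry arithmetic works out as follows; a small illustration, not driver code:

```python
def total_connect_attempts(scan_addresses, retry_count):
    """Each traversal tries every address once; RETRY_COUNT adds that
    many extra traversals after the initial one."""
    traversals = 1 + retry_count
    return scan_addresses * traversals

# Three SCAN IPs with RETRY_COUNT=2: each address is tried three times.
print(total_connect_attempts(3, 2))  # 9
```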

- CONNECT_TIMEOUT is used to specify the overall time used to complete the Oracle Net connect.  Set CONNECT_TIMEOUT=90 or higher to prevent logon storms.   Through the JDBC driver, CONNECT_TIMEOUT is also used for the TCP/IP connection timeout for each address in the URL.  This second usage is preferred to be shorter and eventually a separate TRANSPORT_CONNECT_TIMEOUT will be introduced.  Do not set the driver property on the datasource because it is overridden by the URL property.

- The service name should be a configured application service, not a PDB or administration service.

- Specify LOAD_BALANCE=on per address list to balance the SCAN addresses.
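Putting these recommendations together, a sketch of a URL spanning two clusters (all hosts, timeout values, and the service name are placeholders):

jdbc:oracle:thin:@(DESCRIPTION=
  (CONNECT_TIMEOUT=90)(RETRY_COUNT=20)(RETRY_DELAY=3)
  (ADDRESS_LIST=(LOAD_BALANCE=on)
    (ADDRESS=(PROTOCOL=TCP)(HOST=primary-scan)(PORT=1521)))
  (ADDRESS_LIST=(LOAD_BALANCE=on)
    (ADDRESS=(PROTOCOL=TCP)(HOST=standby-scan)(PORT=1521)))
  (CONNECT_DATA=(SERVICE_NAME=myservice)))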

Using Orachk to Clean Up Concrete Classes for Application Continuity

As I described in the blog Part 2 - 12c Database and WLS - Application continuity, Application Continuity (AC) is a great feature for avoiding errors to the user with minimal changes to the application and configuration. Getting rid of any references to Oracle concrete classes is the first step.

Oracle has a utility program that you can download from MOS to validate various hardware, operating system, and software attributes associated with the Oracle database and more (it's growing). The program name is orachk. Starting in the latest version, there are some checks available for applications running with AC.

There is enough documentation about getting started with orachk so I’ll just say to download and unzip the file. The AC checking is part of a larger framework that will have additional analysis in future versions. This article focuses on the analysis for Oracle concrete classes in the application code.

AC is unable to replay transactions that use oracle.sql deprecated concrete classes of the form ARRAY, BFILE, BLOB, CLOB, NCLOB, OPAQUE, REF, or STRUCT as a variable type, a cast, the return type of a method, or calling a constructor. See New Jdbc Interfaces for Oracle types (Doc ID 1364193.1) for further information about concrete classes. They must be modified for AC to work with the application. See Using API Extensions for Oracle JDBC Types for many examples of using the newer Oracle JDBC types in place of the older Oracle concrete types.

There are three values that control the AC checking (called acchk in orachk) for Oracle concrete classes. They can be set either on the command line or via a shell environment variable (or mixed). They are the following.

- -asmhome jarfilename: This must point to a version of asm-all-5.0.3.jar that you download from

- -javahome JDK8dirname: This must point to the JAVA_HOME directory for a JDK8 installation.

- -appjar dirname: To analyze the application code for references to Oracle concrete classes like oracle.sql.BLOB, this must point to the parent directory name for the code. The program will analyze .class files, and recursively .jar files and directories. If you have J2EE .ear or .war files, you must recursively explode these into a directory structure with .class files exposed.

This test works with software classes compiled for Oracle JDBC 11 or 12.

When you run the AC checking, the additional checking about database server, etc. is turned off. It would be common to run the concrete class checking on the mid-tier to analyze software that accesses the Oracle driver.

I chose some old QA test classes that I knew had some bad usage of concrete classes and ran the test on a small subset for illustration purposes. The command line was the following.

$ ./orachk -asmhome /tmp/asm-all-5.0.3.jar -javahome /tmp/jdk1.8.0_40 -appjar /tmp/appdir

This is a screen shot of the report details. There is additional information reported about the machine, OS, database, timings, etc.

From this test run, I can see that my one test class has five references to STRUCT that need to be changed to java.sql.Struct or oracle.jdbc.OracleStruct.

Note that WLS programmers have been using wrapper interfaces for over a decade to allow for wrapping Oracle concrete classes, and this AC analysis doesn't pick that up (there are five such interfaces that correspond to concrete classes). These should be removed as well. For example, trying to run the following Oracle extension API with the WLS wrapper

OracleThinBlob blob = (OracleThinBlob) rs.getBlob(2);
os = blob.getBinaryOutputStream();

on a Blob column using the normal driver works but using the replay driver yields

java.lang.ClassCastException: weblogic.jdbc.wrapper.Blob_oracle_jdbc_proxy_oracle
$1OracleBlob$$$Proxy cannot be cast to

It must be changed to use the standard JDBC API:

java.sql.Blob blob = rs.getBlob(2);
os = blob.setBinaryStream(1);

So it's time to remove references to the deprecated Oracle and WebLogic classes and preferably migrate to the standard JDBC APIs, or at least the new Oracle interfaces. This will clean up the code and get it ready to take advantage of Application Continuity in the Oracle database.

The ORAchk download page is at .

Thursday Aug 13, 2015

WebLogic Server 12.1.3 Developer Zip Update 3

WebLogic Server 12.1.3 Developer Zip Update 3 has just been posted to OTN for download. 

Update 3 contains the same set of bug fixes as the WebLogic Server Patch Set Update, providing developers who use the developer zip distribution with a version that corresponds to the latest version of WebLogic Server 12.1.3 used for production.

Download page:

The README file for Update 3 provides details of the updates:

Tuesday Jun 30, 2015

Oracle Cloud Application Foundation Innovation Awards Now Open for Nominations!

Is your organization using Oracle Cloud Application Foundation, which includes Oracle WebLogic Server, Oracle Coherence, and Oracle Tuxedo, to deliver unique business value? The Innovation Awards honor our customers and partners for their cutting-edge solutions. Winners are selected based on the uniqueness of their business case, business benefits, level of impact relative to the size of the organization, complexity and magnitude of implementation, and the originality of architecture.

The 2015 awards will be presented during Oracle OpenWorld 2015, October 26-29, in San Francisco.

Submit your nomination for WebLogic/Coherence/Tuxedo by July 31!

Award winners receive:

  • Oracle Fusion Middleware Innovation Award for WebLogic trophy
  • One free pass to Oracle OpenWorld 2015
  • Priority consideration for placement in Profit magazine, Oracle Magazine, or other Oracle publications and press releases
  • Oracle Fusion Middleware Innovation logo for inclusion on your own website and/or press release   

All nominees receive consideration for:

  • Participating in OpenWorld panels and speaking opportunities
  • Featured Customer Success Story on
  • Placement in Profit magazine and/or Oracle Magazine
  • Placement in an Oracle press release or Oracle Fusion Middleware podcast
Nomination deadline: 5:00 p.m. PT July 31, 2015
All nominated solutions should be in production or in active pilot phase

For additional information, please email

Monday Mar 16, 2015

OpenWorld 2014: WebLogic Cloud Approaches

By Ancy Dow, Oracle Tech Cloud Account Strategist [Another OpenWorld session that Ancy found relevant and wanted to share with all of us.]

According to a recent ComputerWorld Survey, nearly 90% of IT executives now want to implement cloud solutions. But what is the best cloud strategy for your organization—private, public, or hybrid? Senior Product Marketing Director Ayalla Goldschmidt and Product Management Vice President Mike Lehman share best practices on choosing a pragmatic cloud approach for your organization’s implementation of WebLogic Server, as customers leveraging WebLogic Server now have unprecedented options when architecting an enterprise cloud strategy.

WebLogic in the Cloud – Oracle’s Investment Strategy

Typically, we see three types of customers really interested in moving to the cloud. First are developers who look to the cloud for faster provisioning and for working with Java in a very lightweight fashion. Second are IT operations-oriented individuals who seek to shift their capital expenditures and instead pay a far lower subscription cost for a cloud provider to take care of everything. Finally, lines-of-business individuals building seasonal or non-mission-critical applications don't want to go through the long development cycle of building out an infrastructure and supporting environments.

To meet all these needs, Oracle’s Cloud strategy is to deliver a flexibility of deployment choices, with unparalleled ease of use. The on-premise private cloud is the most straightforward path to cloud and Oracle has currently invested significantly in this area to bring cloud capabilities into the WebLogic platform. The second investment area is bringing WebLogic to the public cloud through the Java Cloud Service (JCS), with options such as automated patching and tooling. The third and final investment area is partnerships we’ve established with Microsoft Azure, Amazon, Verizon Terremark, and even more vendors coming in the future. 

A Hybrid Cloud Model

Given all these options, should you mix your workloads between private and public? Very sensitive customer or employee data that needs to meet geopolitical boundaries should be kept in an on-premise private cloud. On the opposite end, customers who pursue public cloud entrust security in the hands of the cloud vendors, prioritizing faster response times and extreme agility within competitive environments. With a hybrid approach, customers host mission-critical applications in-house, but also look opportunistically for places in which public cloud could make more sense. With this strategy, customers meet compliance and security requirements where needed, but can also learn and seek to expand their public cloud footprint over time for better resource utilization, cost savings, and flexibility. 

Why Choose Oracle Cloud?

Oracle is uniquely positioned as a cloud provider because of its ease of portability from one cloud solution to the other. From a private cloud perspective, investments center around WebLogic Server, Coherence with in-memory caching, and Enterprise Manager as a set of high availability technology for provisioning and managing customers’ environment lifecycle. Customers can take the latest versions of these tools—WebLogic Server 12c—to get a cloud environment up and running from an operational and developer-friendly perspective. This same exact set of products is available through a self-service, self-managing, public cloud portal with Java Cloud Service. 
The full 45-minute session offers further insights on criteria that can help you create a framework for decision making around private versus public cloud. More information is also available in the Computerworld cloud survey report. Our early adopters have already been able to reduce their implementation time for new applications from months to weeks, and we look forward to making that a possibility for you as well.

With this blog post, we end our series on OpenWorld Cloud Application Foundation sessions. Hope you enjoyed it. 


The official blog for Oracle WebLogic Server fans and followers!
