Thursday Jul 14, 2016

WebLogic Server - Domain to Partition Conversion Tool (DPCT) Updates

The Domain to Partition Conversion Tool (DPCT) provides assistance with the process of migrating an existing WebLogic Server 10.3.6, 12.1.2, 12.1.3, or 12.2.1 domain to a partition in a WebLogic Server 12.2.1 domain.

The DPCT process consists of two independent but related operations:

  • The first operation involves inspecting an existing domain and exporting it into an archive that captures the relevant configuration and binary files.
  • The second task is to use one of several import partition options available with WebLogic Server 12.2.1 to import the contents of the exported domain to create a new partition. The new partition will contain the configuration resources and application deployments from the source domain.

With this release of WebLogic Server, several updates and changes have been made to DPCT to further improve its functionality.

The updated documentation covering the new features, bug fixes and known limitations is here:

Key Updates

a) Distribution of DPCT tooling with the WebLogic Server installation: initially, the DPCT tooling was distributed only as a separate zip file available for download from OTN.

With this release, the DPCT tooling is provided as part of the base product installation as:


This file can be copied from the installation to the servers where the source domain is present and extracted for use.

 The DPCT tooling is also still available for download from OTN:

b) No patch required: previously, use of DPCT required a patch to be applied to the target 12.2.1 installation in order to import an archive generated by the DPCT tooling. This requirement has been resolved.

c) Improved platform support: several small issues relating to the use of DPCT tooling on Windows have been resolved.

d) Improved reporting: a new report file is generated for each domain that is exported, listing the details of the source domain as well as each of the configuration resources and deployments that were captured in the exported archive. Any resources that were unable to be exported are also noted.

e) JSON Overrides file formatting: the generated JSON file that serves as an overrides mechanism to allow target environment customizations to be specified on the import is now formatted correctly to make it clearer and easier to make changes.

f) Additional Resources in JSON Overrides file: in order to better support customization on the target domain additional resources such as JDBC System Resources, SAF Agents, Mail Sessions and JDBC Stores are now expressed as configurable objects in the generated JSON file.

g) Inclusion of new export-domain scripts: the scripts used to run the DPCT tooling have been reworked and included as new (additional) scripts. The new scripts are named export-domain.[cmd|sh], provide clearer help text, and make use of named parameters for providing input values to the script. The previous scripts are provided for backwards compatibility and continue to work, but it is recommended that the new scripts be used where possible.

Usage detail for the export-domain script:

Usage: export-domain -oh {ORACLE_HOME} -domainDir {WL_DOMAIN_HOME}
       [-keyFile {KEYFILE}] [-toolJarFile {TOOL_JAR}] [-appNames {APP_NAMES}]
       [-includeAppBits {INCLUDE_APP_BITS}] [-wlh {WL_HOME}]
             {ORACLE_HOME} : the MW_HOME where WebLogic Server is installed
             {WL_DOMAIN_HOME} : the source WebLogic domain path
             {KEYFILE} : an optional user-provided file containing a clear-text passphrase used to encrypt exported attributes written to the archive; default: None
             {TOOL_JAR} : file path to the tool jar file; optional if the jar is in the same directory location as the script
             {APP_NAMES} : an optional list of application names to export
             {WL_HOME} : an optional parameter giving the path of the WebLogic Server installation for version 10.3.6. Used only when the WebLogic Server 10.3.6 release is installed under a directory other than {ORACLE_HOME}/wlserver_10.3

Enhanced Cluster Topology and JMS Support

In addition to the items listed above, some restructuring of the export and import operation has enabled DPCT to better support a number of key WebLogic Server areas. 

When inspecting the source domain and generating the export archive, DPCT now enables the targeting of the resources and deployments to appropriate Servers and Clusters in the target domain. For every Server and Cluster in the source domain, there will be a corresponding resource-group object created in the generated JSON file, with each resource-group targeted to a dedicated Virtual Target, which in turn can be targeted to a Server or Cluster on the target domain.

All application deployments and resources targeted to a particular WebLogic Server instance or cluster in the source domain correspond to a resource group in the target domain.

This change also supports the situation where the target domain has differently named Cluster and Server resources than the source domain, by allowing the target to be specified in the JSON overrides file so that it can be mapped appropriately to the new environment.
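The per-server/per-cluster mapping described above can be sketched as a small transformation (an illustrative model only; the function and field names are hypothetical, not the actual DPCT JSON schema):

```python
# Sketch: one resource-group per source Server/Cluster, each tied to a
# dedicated virtual target that can be re-pointed via an overrides map.
def plan_resource_groups(source_targets, target_overrides=None):
    """source_targets: names of Servers/Clusters in the source domain.
    target_overrides: optional {source_name: target_server_or_cluster}."""
    target_overrides = target_overrides or {}
    plan = []
    for name in source_targets:
        plan.append({
            "resource-group": name + "-rg",
            "virtual-target": name + "-vt",
            # default: keep the source name; override for renamed targets
            "target": target_overrides.get(name, name),
        })
    return plan

plan = plan_resource_groups(["AdminServer", "cluster1"],
                            {"cluster1": "new_cluster"})
for entry in plan:
    print(entry["resource-group"], "->", entry["target"])
```

Passing an overrides map here mirrors editing the generated JSON overrides file so that a virtual target points at a differently named Server or Cluster in the new environment.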

A number of the previous limitations around the exporting of JMS configurations for both single server and cluster topologies have been addressed, enabling common JMS use cases to be supported with DPCT migrations. The documentation contains the list of existing known limitations.

Wednesday Jun 29, 2016

Connection Initialization Callback on WLS Datasource

WebLogic Server is now available. You can see the blog article announcing it at Oracle WebLogic Server is Now Available.

One of the WLS datasource features that appeared quite a while ago but not mentioned much is the ability to define a callback that is called during connection initialization.  The original intent of this callback was to provide a mechanism that is used with the Application Continuity (AC) feature.  It allows for the application to ensure that the same initialization of the connection can be done when it is reserved and also later on if the connection is replayed.  For the latter case, the original connection has some type of "recoverable" error and is closed, a new connection is reserved under the covers, and all of the operations that were done on the original connection are replayed on the new connection.  The callback allows for the connection to be re-initialized with whatever state is needed by the application.

The concept of having a callback to allow for the application to initialize all connections without scattering this processing all over the application software wherever getConnection() is called is very useful, even without replay being involved.  In fact, since the callback can be configured in the datasource descriptor, which I recommend, there is no change to the application except to write the callback itself.  
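As a conceptual model (not WLS internals; all names here are invented), the idea looks like this: the pool funnels every reserve, and every replayed connection, through the one registered callback, so initialization code lives in one place rather than at each getConnection() call site.

```python
class Pool:
    """Toy pool: runs the registered init callback on reserve and on replay."""
    def __init__(self, init_callback=None):
        self.init_callback = init_callback
        self.next_id = 0

    def _new_connection(self):
        self.next_id += 1
        conn = {"id": self.next_id, "state": None}
        if self.init_callback:
            self.init_callback(conn)   # same init path for every connection
        return conn

    def reserve(self):
        return self._new_connection()

    def replay(self, failed_conn, operations):
        # The failed connection is closed; a new one is reserved under the
        # covers and re-initialized before the operations are replayed.
        conn = self._new_connection()
        for op in operations:
            op(conn)
        return conn

def init(conn):
    conn["state"] = "session settings applied"

pool = Pool(init)
c1 = pool.reserve()
c2 = pool.replay(c1, [])
print(c1["state"], c2["state"])
```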

Here's the history of support for this feature, assuming that the connection initialization callback is configured.

WLS 10.3.6 - It is only called on an Active GridLink datasource when running with the replay driver (replay was only supported with AGL).

WLS  12.1.1, 12.1.2, and 12.1.3 - It is called if used with the replay driver and any datasource type (replay support was added to GENERIC datasources).

WLS 12.2.1 - It is called with any Oracle driver and any datasource type. 

WLS - It is called with any driver and any datasource type.  Why limit the goodness to just the Oracle driver?

The callback can be configured in the application by registering it on the datasource in the Java code. You need to ensure that you only do this once per datasource.  I think it's much easier to register it in the datasource configuration.   
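If you do register the callback in application code, the once-per-datasource rule can be enforced with a small guard, sketched here in Python with hypothetical names:

```python
# Guard to make programmatic callback registration idempotent per datasource.
_registered = {}

def register_callback_once(datasource_name, callback):
    """Register callback for a datasource; ignore duplicate attempts."""
    if datasource_name in _registered:
        return False   # already registered once for this datasource
    _registered[datasource_name] = callback
    return True

print(register_callback_once("ds1", lambda conn: None))  # True
print(register_callback_once("ds1", lambda conn: None))  # False
```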

Here's a sample callback.

package demo;

import oracle.ucp.jdbc.ConnectionInitializationCallback;

public class MyConnectionInitializationCallback implements
    ConnectionInitializationCallback {
  public MyConnectionInitializationCallback() {
  }
  public void initialize(java.sql.Connection connection)
      throws java.sql.SQLException {
    // Re-set the state for the connection, if necessary
  }
}
This is a simple Jython script using as many defaults as possible to just show registering the callback.

import sys, socket
hostname = socket.gethostname()
dsname = 'JDBC Data Source-0'   # name of the datasource being created
jdbcSR = create(dsname, 'JDBCSystemResource')
jdbcResource = jdbcSR.getJDBCResource()
dsParams = jdbcResource.getJDBCDataSourceParams()
driverParams = jdbcResource.getJDBCDriverParams()
driverProperties = driverParams.getProperties()
userprop = driverProperties.createProperty('user')
oracleParams = jdbcResource.getJDBCOracleParams()
oracleParams.setConnectionInitializationCallback('demo.MyConnectionInitializationCallback')  # register the callback

 Here are a few observations.  First, to register the callback using the configuration, the class must be in your classpath.  It will need to be in the server classpath anyway to run but it needs to get there earlier for configuration.  Second, because of the history of this feature, it's contained in the Oracle parameters instead of the Connection parameters; there isn't much we can do about that.  In the WLS administration console, the entry can be seen and configured in the Advanced parameters of the Connection Pool tab as shown in the following figure (in addition to the Oracle tab).  Finally, note that the interface is a Universal Connection Pool (UCP) interface so that this callback can be shared with your UCP application (all driver types are supported starting in Database

This feature is documented in the Application continuity section of the Administration Guide.   See .

You might be disappointed that I didn't actually do anything in the callback.  I'll use this callback again in my next blog to show how it's used in another new WLS feature.

Tuesday Jun 28, 2016

WebLogic Server Continuous Availability in

We have made enhancements to the Continuous Availability Offering in WebLogic in the areas of Zero Downtime Patching, Cross Site Transaction Recovery, Coherence Federated Caching and Coherence Persistence. We have also enhanced the documentation to provide design considerations for the multi-data center Maximum Availability Architectures (MAA) that are supported for WebLogic Server Continuous Availability.

Zero Downtime Patching Enhancements

Enhancements in Zero Downtime Patching support updating applications running in a multitenant partition without affecting other partitions that run in the same cluster. Coherence applications can now be updated while maintaining high availability of the Coherence data during the rollout process. We have also removed the dependency on NodeManager to upgrade the WebLogic Administration Server.

  • Multitenancy support
  • Application updates can use partition shutdown instead of server shutdowns.
  • Can update an application in a partition on a server without affecting other partitions.
  • Can update an application referenced by a ResourceGroupTemplate.
  • Coherence support - User can supply minimum safety mode for rollout to Coherence cluster.
  • Removed Administration Server dependency on NodeManager – The Administration Server no longer needs to be started by NodeManager.

Cross-Site Transaction Recovery

We introduced a “Site Leasing” mechanism to do auto recovery when there is a site failure or mid-tier failure. With site leasing we provide a more robust mechanism to failover and failback transaction recovery without imposing dependencies on the TLog which affect the health of the Servers hosting the Transaction Manager.

Every server in a site will update their lease. When the lease expires for all servers running in a cluster in Site 1, servers running in a cluster in a remote site assume ownership of the TLogs, and recover the transactions while still continuing their transaction work. To learn more, please read Active-Active XA Transaction Recovery.
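The lease-expiry rule can be modeled in a few lines (a deliberately simplified sketch with invented names; real WLS site leasing involves lease tables and much more):

```python
def site_expired(leases, now, timeout):
    """A site is considered failed only when every server's lease has expired."""
    return all(now - last_renewal > timeout for last_renewal in leases.values())

def tlog_recovery_owner(site1_leases, now, timeout=30.0):
    # Servers in the remote site assume ownership of Site 1's TLogs only
    # after ALL of Site 1's server leases have expired.
    return "site2" if site_expired(site1_leases, now, timeout) else "site1"

now = 1000.0
healthy = {"ms1": now - 5, "ms2": now - 10}    # leases recently renewed
failed  = {"ms1": now - 60, "ms2": now - 45}   # all leases expired
print(tlog_recovery_owner(healthy, now))  # site1
print(tlog_recovery_owner(failed, now))   # site2
```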

Coherence Federated Caching and Coherence Persistence Administration Enhancements

We have enhanced the WebLogic Server Administration Console to make it easier to configure Coherence Federated Caching and Coherence Persistence.

  • Coherence Federated Caching - Added the ability to setup Federation with basic active/active and active/passive configurations using the Administration Console and eliminated the need to use configuration files.
  • Coherence Persistence - Added a persistence tab in the Administration Console that provides the ability to configure Persistence related settings that apply to all services.


In WebLogic Server we have enhanced the document Continuous Availability for Oracle WebLogic Server to include a new chapter, “Design Considerations for Continuous Availability.” See

This new chapter provides design considerations and best practices for the components of your multi-data center environments. In addition to the general best practices recommended for all continuous availability MAA architectures, we provide specific advice for each of the Continuous Availability supported topologies, and describe how the features can be used in these topologies to provide maximum high availability and disaster recovery.

Tuesday Jun 21, 2016

Oracle WebLogic Server is Now Available

Last October, we delivered Oracle WebLogic Server 12.2.1 as part of the overall Oracle Fusion Middleware 12.2.1 Release.   As noted previously on this blog, WebLogic Server 12.2.1 delivers compelling new feature capabilities in the areas of Multitenancy, Continuous Availability, and Developer Productivity and Portability to Cloud.  

Today, we are releasing WebLogic Server, which is the first patch set release for WebLogic Server and Fusion Middleware 12.2.1.   New WebLogic Server installers are now posted on the Oracle Technology Network and Oracle Software Delivery Cloud, and new documentation has been made available.  WebLogic Server contains all the new features in WebLogic Server 12.2.1, and also includes an integrated, cumulative set of fixes and a small number of targeted, non-disruptive enhancements.   

For customers who have just begun evaluating WebLogic Server 12cR2, or are planning evaluation and adoption, we recommend that you adopt WebLogic Server so that you can benefit from the maintenance and enhancements that have been included.   For customers who are already running in production on WebLogic Server 12.2.1, you can continue to do so, though we will encourage adoption of WebLogic Server 12.2.1 patch sets.

The enhancements are primarily in the following areas:

  • Multitenancy - Improvements to Resource Consumption Management, partition security management, REST management, and Fusion Middleware Control, all targeted at multitenancy manageability and usability.
  • Continuous Availability - New documented best practices for multi data center deployments, and product improvements to Zero Downtime Patching capabilities.
  • Developer Productivity and Portability to the Cloud - The Domain to Partition Conversion Tool (D-PCT), which enables you to convert an existing domain to a WebLogic Server 12.2.1 partition, has been integrated into this release with improved functionality.   So it's now easier to migrate domains and applications to WebLogic Server partitions, including partitions running in the Oracle Java Cloud Service. 
We will provide additional updates on the capabilities described above, but everything is ready for you to get started using WebLogic Server today.   Try it out and give us your feedback!

Sunday May 01, 2016

Using SQLXML Data Type with Application Continuity

When I first wrote an article about changing Oracle concrete classes to interfaces to work with Application Continuity (AC), I left out one type. oracle.sql.OPAQUE is replaced with oracle.jdbc.OracleOpaque. There isn’t a lot that you can do with this opaque type. While the original class had many conversion methods, the new Oracle type interfaces have only methods that are considered significant or not available with standard JDBC APIs. The new interface has only a method to get the value as an Object and two meta-information methods to get the metadata and type name. Unlike the other Oracle type interfaces (oracle.jdbc.OracleStruct extends java.sql.Struct and oracle.jdbc.OracleArray extends java.sql.Array), oracle.jdbc.OracleOpaque does not extend a JDBC interface.

There is one related very common use case that needs to be changed to work with AC. Early uses of SQLXML made use of the following XDB API.

SQLXML sqlXml = oracle.xdb.XMLType.createXML(

oracle.xdb.XMLType extends oracle.sql.OPAQUE and its use will disable AC replay. This must be replaced with the standard JDBC API

SQLXML sqlXml = resultSet.getSQLXML("issue");

If you try to do a “new oracle.xdb.XMLType(connection, string)” when running with the replay datasource, you will get a ClassCastException. Since XMLType doesn’t work with the replay datasource and the oracle.xdb package uses XMLType extensively, this package is no longer usable for AC replay.

The APIs for SQLXML are documented in the javadoc, which shows APIs to work with DOM, SAX, StAX, XSLT, and XPath.

Take a look at the sample program at

The sample uses StAX to store the information and DOM to get it. By default, it uses the replay datasource and it does not use XDB.

You can run with replay debugging by doing something like the following. Create a file named /tmp/config.txt that has the following text.

handlers = java.util.logging.FileHandler
java.util.logging.FileHandler.pattern = /tmp/replay.log
java.util.logging.FileHandler.formatter = java.util.logging.SimpleFormatter
oracle.jdbc.internal.replay.level = FINEST

Change your WLS CLASSPATH (or one with the Oracle client jar files) to put ojdbc7_g.jar at the front (to replace ojdbc7.jar) and add the current directory.

Compile the program (after renaming .txt to .java) and run it using

java -Djava.util.logging.config.file=/tmp/config.txt XmlSample

The output replay log is in /tmp/replay.log. With the defaults in the sample program, you won’t see replay disabled in the log. If you change the program to set useXdb to true, you will see that replay is disabled. The log will have “DISABLE REPLAY in preForMethodWithConcreteClass(getOPAQUE)” and “Entering disableReplayInternal”.

This sample can be used to test other sequences of operations to see if they are safe for replay.

Alternatively, you can use orachk to do a static analysis of the class. See for more information. If you run orachk on this program, you will get this failure.

FAILED - [XmlSample][[MethodCall] desc= (Ljava/lang/String;)Loracle/sql/OPAQUE; method name=getOPAQUE, lineno=105]

Thursday Apr 28, 2016

Testing WLS and ONS Configuration


Oracle Notification Service (ONS) is installed and configured as part of the Oracle Clusterware installation. All nodes participating in the cluster are automatically registered with the ONS during Oracle Clusterware installation. The configuration file is located on each node in $ORACLE_HOME/opmn/conf/ons.config. See the Oracle documentation for further information. This article focuses on the client side.

Oracle RAC Fast Application Notification (FAN) events are available starting in database 11.2. This is the minimum database release required for WLS Active GridLink. FAN events are notifications sent by a cluster running Oracle RAC to inform the subscribers about the configuration changes within the cluster. The supported FAN events are service up, service down, node down, and load balancing advisories (LBA).

fanWatcher Program

You can optionally test your ONS configuration independent of running WLS. This tests the connection from the ONS client to the ONS server but not configuration of your RAC services. See for details to get, compile, and run the fanWatcher program. I’m assuming that you have WLS 10.3.6 or later installed and you have your CLASSPATH set appropriately. You would run the test program using something like

java fanWatcher "nodes=rac1:6200,rac2:6200" database/event/service

If you are using the database client jar files, you can handle more complex configurations with multiple clusters, for example DataGuard, with something like

java fanWatcher "nodes.1=site1.rac1:6200,site1.rac2:6200
nodes.2=site2.rac1:6200,site2.rac2:6200" database/event/service

Note that a newline is used to separate multiple node lists. You can also test with a wallet file and password, if the ONS server is configured to use SSL communications.
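That layout (numbered `nodes.N` lists separated by a newline) can be composed and parsed with a short helper, sketched here with hypothetical function names:

```python
def build_node_list(clusters):
    """clusters: list of lists of host:port strings, one list per cluster.
    Produces nodes.1=..., nodes.2=... entries separated by newlines."""
    return "\n".join(
        "nodes.%d=%s" % (i + 1, ",".join(nodes))
        for i, nodes in enumerate(clusters))

def parse_node_list(config):
    """Inverse helper: split the config back into {key: [host:port, ...]}."""
    result = {}
    for line in config.splitlines():
        key, _, value = line.partition("=")
        result[key] = value.split(",")
    return result

cfg = build_node_list([["site1.rac1:6200", "site1.rac2:6200"],
                       ["site2.rac1:6200", "site2.rac2:6200"]])
print(cfg)
```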

Once this program is running, you should minimally see occasional LBA notifications. If you start or stop a service, you should see an associated event.

Auto ONS

It’s possible to run without specifying the ONS information using a feature called auto-ONS. The auto-ONS feature cannot be used if you are running with

- an 11g driver or 11g database. Auto-ONS depends on protocol flowing between the driver and the database server and this feature was added in 12c.

- pre-WLS 12.1.3. Auto-ONS is supported starting in WLS 12.1.3.

- an Oracle wallet with SSL communications. Configuration of the wallet requires also configuring the ONS information.

- complicated ONS topology. In general, auto-ONS can figure out what you need but there are cases where you need to specify it precisely. In WLS 12.2.1, an extension of the ONS configuration allows for specifying the exact topology using a property node list. See for more information.

If you have some configurations that use an 11g driver or database and some that run with 12c driver/database, you may want to just specify the ONS information all of the time instead of using the auto-ONS simplification. The fanWatcher link above indicates how to test fanWatcher using auto-ONS.
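Those restrictions can be summarized as a single predicate (a sketch of the decision logic only; the parameter names are invented):

```python
def can_use_auto_ons(driver_major, db_major, wls_version,
                     uses_wallet, complex_topology):
    """auto-ONS needs a 12c driver and database, WLS 12.1.3+,
    no SSL wallet, and a topology simple enough to auto-discover."""
    if driver_major < 12 or db_major < 12:
        return False   # 11g driver/database: the protocol was added in 12c
    if wls_version < (12, 1, 3):
        return False   # auto-ONS is supported starting in WLS 12.1.3
    if uses_wallet:
        return False   # wallet config requires explicit ONS information
    if complex_topology:
        return False   # specify the topology precisely instead
    return True

print(can_use_auto_ons(12, 12, (12, 2, 1), False, False))  # True
print(can_use_auto_ons(11, 12, (12, 2, 1), False, False))  # False
```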

WLS ONS Configuration and Testing

The next step is to ensure that you have end-to-end configuration running. That includes the database service for which events will be generated to the AGL datasource that processes the events for the corresponding service.

On the server side, the database service must be configured with RCLB enabled. RCLB is enabled for a service if the service GOAL (NOT CLB_GOAL) is set to either SERVICE_TIME or THROUGHPUT. See the Oracle documentation for further information on using srvctl to set this when creating the service.

On the WLS side, the key pieces are the URL and the ONS configuration.

The URL is configured using a long format with this service name specified. The URL can use an Oracle Single Client Access Name (SCAN) address, for example,


or multiple non-SCAN addresses with LOAD_BALANCE=on, for example,


Defining the URL is a complex topic - see the Oracle documentation for more information.

As described above, the ONS configuration can be implicit using auto-ONS or explicit. The trade-offs and restrictions are also described above. The format of the explicit ONS information is described at

If you create the datasource using the administration console with explicit ONS configuration, there is a button to click on to test the ONS configuration. This tests doing a simple handshake with the ONS server.

Of course, the first real test of your ONS configuration with WLS is deploying the datasource, either when starting the server or when targeting the datasource on a running server.

In the administration console, you can look at the AGL runtime monitoring page for ONS, especially if using auto-ONS, to see the ONS configuration.

You can look at the page for instances and check the affinity flag and instance weight attributes that are updated on LBA events. If you stop a service using something like

srvctl stop service -db beadev -i beadev2 -s otrade

that should also show up on this page with the weight and capacity going to 0.

If you look at the server log (for example servers/myserver/logs/myserver.log) you should see a message tracking the outage like the following.

…. <Info> <JDBC> … <Data source JDBC Data Source-0 for service otrade received a service down event for instance [beadev2].>

If you want to see more information like the LBA events, you can enable the JDBCRAC debugging using –Dweblogic.debug.DebugJDBCRAC=true. For example,

... <JDBCRAC> ... lbaEventOccurred() event=service=otrade, database=beadev, event=VERSION=1.0 database=beadev service=otrade { {instance=beadev1 percent=50 flag=GOOD aff=FALSE}{instance=beadev2 percent=50 flag=UNKNOWN aff=FALSE} }

There will be a lot of debug output with this setting so it is not recommended for production.
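Since the per-instance blocks in that debug output have a regular shape, a quick parser can pull out the percentages and flags (illustrative only; the exact log format may vary across releases):

```python
import re

def parse_lba_instances(event):
    """Extract {instance: {percent, flag, aff}} from an LBA debug line."""
    pattern = r"\{instance=(\S+) percent=(\d+) flag=(\S+) aff=(\S+)\}"
    return {
        name: {"percent": int(pct), "flag": flag, "aff": aff == "TRUE"}
        for name, pct, flag, aff in re.findall(pattern, event)
    }

# Sample line modeled on the JDBCRAC debug output shown above.
event = ("lbaEventOccurred() event=service=otrade, database=beadev, "
         "event=VERSION=1.0 database=beadev service=otrade "
         "{ {instance=beadev1 percent=50 flag=GOOD aff=FALSE}"
         "{instance=beadev2 percent=50 flag=UNKNOWN aff=FALSE} }")
print(parse_lba_instances(event)["beadev1"])
```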

Wednesday Apr 27, 2016

Migrating from Generic Data Source to Active GridLink

Earlier, I wrote an article about how to migrate from a Multi Data source (MDS) for RAC connectivity to Active GridLink (AGL). This is needed to move from the older datasource technology to the newer technology, both supporting Oracle RAC. The information is now in the public documentation set at

There are also many customers that are growing up from a standalone database to an Oracle RAC cluster. In this case, it’s a migration from a GENERIC datasource to an AGL datasource. This migration is pretty simple.

No changes should be required to your applications.  A standard application looks up the datasource in JNDI and uses it to get connections.  The JNDI name won’t change.

The only changes necessary should be to your configuration and the necessary information is generally provided by your database administrator.   The information needed is the new URL and optionally the configuration of Oracle Notification Service (ONS) on the RAC cluster. The latter is only needed if you are running with

- an 11g driver or 11g database. Auto-ONS depends on protocol flowing between the driver and the database server and this feature was added in 12c.

- pre-WLS 12.1.3. Auto-ONS is supported starting in WLS 12.1.3.

- an Oracle wallet with SSL communications. Configuration of the wallet requires also configuring the ONS information.

- complicated ONS topology. In general, auto-ONS can figure out what you need but there are cases where you need to specify it precisely. In WLS 12.2.1, an extension of the ONS configuration allows for specifying the exact topology using a property node list. See for more information.

The URL and ONS attributes are configurable but not dynamic. That means that the datasource will need to be shutdown and restarted after the change. The simplest way to do this is to untarget the datasource, make the changes, and then re-target the datasource.

The recommended approach to migrate from a GENERIC to AGL datasource is to use WLST. The URL must be changed in the JDBCDriverParams object. The new JDBCOracleParams object (it generally doesn’t exist for a GENERIC datasource) needs to have FAN enabled set to true and optionally set the ONS information.

The following is a sample WLST script with the new values hard-coded. You could parameterize it and make it more flexible in handling multiple datasources. If you are using an Oracle wallet for ONS, that needs to be added to the JDBCOracleParams object as well.

# java weblogic.WLST
import sys, socket, os
hostname = socket.gethostname()
datasource="JDBC Data Source-0"
connect("weblogic","welcome1","t3://" + hostname + ":7001")  # sample credentials
edit()
startEdit()
cd("/JDBCSystemResources/" + datasource )
targets = get("Targets")   # remember the current targets before untargeting
set("Targets",jarray.array([], ObjectName))
save()
activate()
startEdit()
cd("/JDBCSystemResources/" + datasource + "/JDBCResource/" +
   datasource + "/JDBCDriverParams/" + datasource )
set("Url","jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(" +
   "ADDRESS=(PROTOCOL=TCP)(HOST=dbhost)(PORT=1521)))" +
   "(CONNECT_DATA=(SERVICE_NAME=otrade)))")   # sample RAC service name
cd("/JDBCSystemResources/" + datasource + "/JDBCResource/" +
   datasource + "/JDBCOracleParams/" + datasource )
set("FanEnabled", "true")
# Optionally set the ONS information explicitly, e.g.
#set("OnsNodeList", "rac1:6200,rac2:6200")
# The following is for WLS 12.1.2 and 12.1.3 if not setting FanEnabled true, which is not recommended
#set("ActiveGridlink", "true")
# The following is for WLS 12.2.1 and later
#cd("/JDBCSystemResources/" + datasource + "/JDBCResource/" +
#   datasource )
#set("DatasourceType", "AGL")
cd("/JDBCSystemResources/" + datasource )
set("Targets", targets)
save()
activate()

In WLS 12.1.2 and 12.1.3, there is an explicit ActiveGridlink flag that can be used to identify an AGL datasource, even if FanEnabled is not set to true (which is not recommended) and auto-ONS is used (12.1.2 is the first release in which auto-ONS is supported). In the script above, uncomment the line to set it only if FanEnabled and OnsNodeList are not set.

Starting in WLS 12.2.1, there is an explicit datasource type at the JDBCResource level. If that is set to GENERIC, you must re-set it using set("DatasourceType", "AGL"). In the script above, uncomment the lines to set it.  In this case, the ActiveGridlink flag is not necessary.

In the administrative console, the database type is read-only and there is no mechanism to change the database type. You can try to get around this by setting the URL, FAN Enabled box, and ONS information. However, in 12.2.1 there is no way to re-set the Datasource Type in the console and that value overrides all others.

Wednesday Feb 17, 2016

WebLogic Server 12.2.1: Elastic Cluster Scaling

WebLogic Server 12.2.1 added support for the elastic scaling of dynamic clusters:

Elasticity allows you to configure elastic scaling for a dynamic cluster based on either of the following:

  • Manually adding or removing a running dynamic server instance from an active dynamic cluster. This is called on-demand scaling. You can perform on-demand scaling using the Fusion Middleware component of Enterprise Manager, the WebLogic Server Administration Console, or the WebLogic Scripting Tool (WLST).

  • Establishing policies that set the conditions under which a dynamic cluster should be scaled up or down and actions that define the scaling operations themselves. When the conditions defined in the scaling policy occur, the corresponding scaling action is triggered automatically.
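The policy/action pairing can be pictured as a simple decision function (conceptual only; real elastic policies are configured through WLDF policies and actions, not application code, and the names below are invented):

```python
def scaling_decision(metric, scale_up_at, scale_down_at,
                     current, min_size, max_size):
    """Return the new dynamic-cluster size for a metric and policy thresholds."""
    if metric > scale_up_at and current < max_size:
        return current + 1   # scale-up action triggered
    if metric < scale_down_at and current > min_size:
        return current - 1   # scale-down action triggered
    return current           # policy condition not met: no action

print(scaling_decision(metric=90, scale_up_at=75, scale_down_at=25,
                       current=2, min_size=1, max_size=4))  # 3
```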

To see this in action, a set of video demonstrations has been added to our YouTube channel showing the use of the various elastic scaling options available.

WebLogic Server 12.2.1 Elastic Cluster Scaling with WLST
WebLogic Server 12.2.1 Elastic Cluster Scaling with WebLogic Console
WebLogic Server 12.2.1 Automated Elastic Cluster Scaling

Sunday Feb 14, 2016

Now Available: Domain to Partition Conversion Tool (DPCT)

We are pleased to announce that a new utility has just been published to help with the process of converting existing WebLogic Server domains into WebLogic Server 12.2.1 partitions. 

The Domain to Partition Conversion Tool (DPCT) provides a utility that inspects a specified source domain and produces an archive containing the resources, deployed applications and other settings.  This can then be used with the importPartition operation provided in WebLogic Server 12.2.1 to create a new partition that represents the original source domain.  An external overrides file is generated (in JSON format) that can be modified to adjust the targets and names used for the relevant artifacts when they are created in the partition.

DPCT supports WebLogic Server 10.3.6, 12.1.1, 12.1.2 and 12.1.3 source domains and  makes the conversion to WebLogic Server 12.2.1 partitions a straightforward process.

DPCT is available for download from OTN:

 ** Note: there is also a corresponding patch (opatch) posted alongside the DPCT download that needs to be downloaded and applied to the target installation of WebLogic Server 12.2.1 to support the import operation **

 The README contains more details and examples of using the tool:

A video demonstration of using DPCT to convert a WebLogic Server 12.1.3 domain with a deployed application into a WebLogic Server 12.2.1 partition is also available on our YouTube channel:

Tuesday Dec 08, 2015

WLS JNDI Multitenancy

  The most important feature introduced in WebLogic Server 12.2.1 is multitenancy. Before WLS 12.2.1, one WLS domain was used by a single tenant. Starting in WLS 12.2.1, a WLS domain can be divided into multiple partitions so that tenants can use different partitions of one WLS domain. Multiple tenants can then share one WLS domain without influencing each other, so isolation of resources between partitions is key. Since JNDI is a common way to access these resources, the main goal for JNDI in WLS 12.2.1 is to isolate JNDI resources.

  Before WLS 12.2.1, there was only one global JNDI tree per WLS domain. A single global tree cannot easily support multiple partitions, because each partition requires its own isolated namespace. For example, two partitions might bind different resources under the same JNDI name, which would result in a NameAlreadyBoundException. To isolate JNDI resources across partitions, every partition has its own global JNDI tree starting in WLS 12.2.1. A tenant can therefore operate on a JNDI resource in one partition without name conflicts with resources in other partitions. The application-scoped JNDI tree is visible only within an application, so it is naturally isolated; there is no change for the application-scoped JNDI tree in WLS 12.2.1. Let us see how to access JNDI resources in a partition.

Access JNDI resource in partition

  To access JNDI resources in a partition, we need to add partition information to the provider URL property when creating the InitialContext.

  Runtime environment:

    Managed servers:        ms1, ms2

    Cluster:                contains managed servers ms1 and ms2

    Virtual targets:        VT1, targeted to managed server ms1; VT2, targeted to the cluster

    Partitions:             partition1 has available target VT1; partition2 has available target VT2

  We need to add partition1 information to the properties when creating the InitialContext in order to access JNDI resources in partition1.

    Hashtable<String, String> env = new Hashtable<>();
    env.put(Context.PROVIDER_URL, "t3://ms1:7001/partition1");  
    env.put(Context.SECURITY_PRINCIPAL, "weblogic");
    env.put(Context.SECURITY_CREDENTIALS, "welcome1");
    Context ctx = new InitialContext(env);
    Object c = ctx.lookup("jdbc/ds1");

  Partition2 runs in a cluster, so we can use the cluster address format in the properties when creating the InitialContext.

    Hashtable<String, String> env = new Hashtable<>();
    env.put(Context.PROVIDER_URL, "t3://ms1:7001,ms2:7003/partition2");
    env.put(Context.SECURITY_PRINCIPAL, "weblogic");
    env.put(Context.SECURITY_CREDENTIALS, "welcome1");
    Context ctx = new InitialContext(env);
    Object c = ctx.lookup("jdbc/ds1");

  In WebLogic, we can create a Foreign JNDI provider to link to JNDI resources in another server. In WLS 12.2.1, we can also use a Foreign JNDI provider to link to JNDI resources in a specified partition by adding partition information to the configuration. This partition information, including the provider URL, user, and password, is used to create the JNDI context. The following is an example of a Foreign JNDI provider configuration in partition1 that links to partition2.
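
A sketch of what such a Foreign JNDI provider might look like as a config.xml-style fragment is shown below. All names and values here are illustrative assumptions (reusing the partition2 URL and credentials from the earlier code samples), not a configuration taken from a real domain:

```xml
<!-- Hypothetical Foreign JNDI provider in partition1, linking to
     partition2. Names and values are illustrative assumptions. -->
<foreign-jndi-provider>
  <name>partition2-provider</name>
  <initial-context-factory>weblogic.jndi.WLInitialContextFactory</initial-context-factory>
  <provider-url>t3://ms1:7001,ms2:7003/partition2</provider-url>
  <user>weblogic</user>
  <password-encrypted>{AES}...</password-encrypted>
  <foreign-jndi-link>
    <name>ds1-link</name>
    <local-jndi-name>partition2/jdbc/ds1</local-jndi-name>
    <remote-jndi-name>jdbc/ds1</remote-jndi-name>
  </foreign-jndi-link>
</foreign-jndi-provider>
```

With a link like this, code running in partition1 could look up partition2's data source under the local name partition2/jdbc/ds1.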


Stickiness of JNDI Context

  When a JNDI context is created, it is associated with a specific partition. All subsequent JNDI operations are then performed against the JNDI tree of the associated partition, not that of the current partition. This association persists even if the context is used by a different thread than the one that created it. If the provider URL property is set in the environment when the JNDI context is created, the partition specified in the provider URL is associated; otherwise, the JNDI context is associated with the current partition.

Life cycle of Partition JNDI service

  Before WLS 12.2.1, the JNDI service life cycle was the same as that of the WebLogic server. In WLS 12.2.1, every partition owns its own global JNDI tree, so the JNDI service life cycle matches that of the partition: as soon as the partition starts up, its JNDI service is available, and when the partition shuts down, its JNDI service is destroyed.

Monitoring FAN Events

fanWatcher is a sample program that prints Oracle Notification Service (ONS) Fast Application Notification (FAN) event information. These events provide information about load balancing and about service and instance up and down events. This information is automatically processed by WebLogic Server Active GridLink and UCP on the mid-tier. For more information about FAN events, see this link. The program described here is an enhancement of the earlier program described in that white paper. It can be modified as desired to monitor events and help diagnose configuration problems. The code is available at this link (rename it from .txt to .java).

To run this Java application, you need to be set up to run a JDK and you need ons.jar and ojdbcN.jar in the CLASSPATH. The CLASSPATH is set differently depending on whether you are running on the database server or on the mid-tier with WebLogic Server or UCP. Make sure to use the correct path separator for CLASSPATH on your platform (';' for Windows, ':' otherwise).

The general format for the command line is

java fanWatcher config_type [eventtype … ]

Event Type Subscription

The event type sets up the subscriber to only return limited events. You can run without specifying the event type to see what types of events are returned. When you specify an event name on the command line, the program sets up the subscriber to have a simple match on the event. If the specified pattern occurs anywhere in a notification's header, then the comparison statement evaluates true. The most basic pattern match is an empty string (not null), which matches all notifications. The pattern is enclosed in double quotes (required) and prefixed with “%” to be case insensitive.

Event processing is more complete than shown in this sample. The subscription string is generally composed of one or more comparison statements, each logically related to another with the boolean operators '|' for an OR relationship or '&' for an AND relationship. Parentheses are used to group these comparison statements, and the '!' operator placed before an opening parenthesis negates the evaluated value within.

Each individual comparison statement must be enclosed within double quotes ('"'), and can take one of two basic forms: "pattern" or "name=value". A "pattern" is a simple string match of the notification header: if the specified "pattern" occurs anywhere in a notification's header, then the comparison statement evaluates true. The most basic pattern match is an empty string (not NULL) which matches all notifications.

The "name=value" format compares the ONS notification header or property name with the name against the specified value, and if the values match, then the comparison statement evaluates true. If the specified header or property name does not exist in the notification the comparison statement evaluates false. A comparison statement will be interpreted as a case insensitive when a percent character ('%') is placed before the opening quote. Note that for "name=value" comparison statements, only the value is treated as case insensitive with this option: the name lookup will always be case sensitive. A comparison statement will be interpreted as a regular expression when a dollar sign character ('$') is placed before the opening quote. Standard POSIX regular expressions are supported. To specify a regular expression that is also case insensitive, place the dollar sign and percent sign together and in that order ("$%") before the opening quote.

A special case subscription string composed of only the exclamation point character ('!') signifies that the subscription will not match any notifications.
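
The single-statement matching rules above can be illustrated with a small, self-contained Java sketch. This is a toy illustration of the described semantics, not the actual ONS matcher, and the '$' regular-expression form is omitted for brevity:

```java
import java.util.Map;

// Toy illustration of the subscription comparison-statement semantics
// described above: the "pattern" and "name=value" forms, with the '%'
// prefix modeled by the caseInsensitive flag. Not the real ONS matcher.
public class ComparisonSketch {

    public static boolean matches(String stmt, String headerText,
                                  Map<String, String> header,
                                  boolean caseInsensitive) {
        int eq = stmt.indexOf('=');
        if (eq >= 0) {
            // "name=value": the name lookup is always case sensitive;
            // only the value comparison honors the '%' option.
            String have = header.get(stmt.substring(0, eq));
            if (have == null) return false;   // missing name -> false
            String want = stmt.substring(eq + 1);
            return caseInsensitive ? have.equalsIgnoreCase(want)
                                   : have.equals(want);
        }
        // "pattern": true if it occurs anywhere in the header text;
        // the empty string (not null) matches all notifications.
        if (caseInsensitive) {
            return headerText.toLowerCase().contains(stmt.toLowerCase());
        }
        return headerText.contains(stmt);
    }

    public static void main(String[] args) {
        Map<String, String> hdr = Map.of("eventType", "database/event/service");
        String text = "eventType=database/event/service";
        System.out.println(matches("", text, hdr, false));    // empty pattern matches everything
        System.out.println(matches("EVENTTYPE=database/event/service", text, hdr, true));
        System.out.println(matches("eventType=DATABASE/EVENT/SERVICE", text, hdr, true));
    }
}
```

Note how the second call fails even with the case-insensitive flag, because the name side of "name=value" is always matched case sensitively.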

You might want to modify the event to select on a specific service by using something like

%"eventType=database/event/servicemetrics/<serviceName>"

Running with Database Server 10.2 or later

This approach runs on the database server and connects directly to the local ONS daemon available in the Grid Infrastructure cluster. The fanWatcher utility must be run as a user with privilege to access $CRS_HOME/opmn/conf/ons.config, which is used by the ONS daemon at startup and read by this program. The configuration type on the command line is set to "crs".

# CRS_HOME should be set for your Grid infrastructure
echo $CRS_HOME
java -Doracle.ons.oraclehome=$CRS_HOME fanWatcher crs

Running with WLS 10.3.6 or later using an explicit node list

There are two ways to run in a client environment: with an explicit node list, or using auto-ONS. You need the ojdbcN.jar and ons.jar that are available when configured for WLS. If you are set up to run with UCP directly, these should also be in your CLASSPATH.

The first approach works with the Oracle driver and Database 11 and later (SCAN support came in later Oracle versions, including the jar files that shipped with WLS 10.3.6).

# Set the WLS environment using wlserver*/server/bin/setWLSEnv
CLASSPATH="$CLASSPATH:." # add local directory for sample program
java fanWatcher "nodes=rac1:6200,rac2:6200" database/event/service

The node list is a string of one or more values of the form name=value separated by a newline character (\n).

There are two supported formats for the node list.

The first format is available for all versions of ONS. The following names may be specified.

nodes – This is required. The format is one or more host:port pairs separated by a comma.

walletfile – Oracle wallet file used for SSL communication with the ONS server.

walletpassword – Password to open the Oracle wallet file.
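
As an illustration, a first-format node list could be assembled like this (host names, ports, and the wallet path are placeholders):

```java
// Assemble a first-format ONS node list: newline-separated name=value
// pairs, with "nodes" required and the wallet entries optional.
// Host:port values and the wallet path are illustrative placeholders.
public class OnsNodeListSketch {

    public static String firstFormat(String nodes, String walletFile,
                                     String walletPassword) {
        StringBuilder sb = new StringBuilder("nodes=").append(nodes);
        if (walletFile != null) {
            sb.append("\nwalletfile=").append(walletFile);
        }
        if (walletPassword != null) {
            sb.append("\nwalletpassword=").append(walletPassword);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // Matches the node list used on the fanWatcher command line above
        System.out.println(firstFormat("rac1:6200,rac2:6200", null, null));
    }
}
```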

The second format is available starting in later database versions. It supports more complicated topologies with multiple clusters and node lists. It has the following names.

nodes.id – A list of nodes representing a unique topology of remote ONS servers. id specifies a unique identifier for the node list; duplicate entries are ignored. The list of nodes configured in any list must not include any nodes configured in any other list for the same client, or duplicate notifications will be sent and delivered. The list format is a comma-separated list of ONS daemon listen address and port pairs separated by a colon.

maxconnections.id – The maximum number of concurrent connections maintained with the ONS servers. id specifies the node list to which this parameter applies. The default is 3.

active.id – If true, the list is active and connections are automatically established to the configured number of ONS servers. If false, the list is inactive and is only used as a failover list in the event that no connection in an active list can be established. An inactive list can only serve as a failover for one active list at a time, and once a single connection is re-established on the active list, the failover list reverts to being inactive. Note that only notifications published by the client after a list has failed over are sent to the failover list. id specifies the node list to which this parameter applies. The default is true.

remotetimeout —The timeout period, in milliseconds, for a connection to each remote server. If the remote server has not responded within this timeout period, the connection is closed. The default is 30 seconds.

The walletfile and walletpassword may also be specified (note that there is one walletfile for all ONS servers). The nodes attribute cannot be combined with the second-format (id-suffixed) attributes.

Running with WLS using auto-ONS

Auto-ONS is available starting in Database 12.1; before that, no information is available. Auto-ONS only works with RAC configurations; it does not work with an Oracle Restart environment. Since the first version of WLS that ships with Database 12.1 jar files is WLS 12.1.3, this approach will only work with upgraded database jar files on earlier versions of WLS. Auto-ONS works by getting a connection to the database to query the ONS information from the server. For this program to work, a user, password, and URL are therefore required. For the sample program, the values are assumed to be in the environment (to avoid putting them on the command line). If you want, you can change the program to prompt for them or hard-code the values into the Java code.

# Set the WLS environment using wlserver*/server/bin/setWLSEnv
# Set the credentials in the environment. If you don't like doing this,
# hard-code them into the java program
export password url user
java fanWatcher autoons

fanWatcher Output

The output looks like the following. You can modify the program to change the output as desired. In this short output capture, there is a metric event and an event caused by stopping the service on one of the instances.

** Event Header **
Notification Type: database/event/servicemetrics/otrade
Delivery Time: Fri Dec 04 20:08:10 EST 2015
Creation Time: Fri Dec 04 20:08:10 EST 2015
Generating Node: rac1
Event payload:
VERSION=1.0 database=dev service=otrade { {instance=inst2 percent=50 flag=UNKNOWN aff=FALSE}{instance=inst1 percent=50 flag=UNKNOWN aff=FALSE} } timestamp=2015-12-04 17:08:03

** Event Header **
Notification Type: database/event/service
Delivery Time: Fri Dec 04 20:08:20 EST 2015
Creation Time: Fri Dec 04 20:08:20 EST 2015
Generating Node: rac1
Event payload:
VERSION=1.0 event_type=SERVICEMEMBER service=otrade instance=inst2 database=dev db_domain= host=rac2 status=down reason=USER timestamp=2015-12-04 17:
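
As a hedged sketch (not part of fanWatcher itself), the top-level name=value pairs in a payload like the ones above can be pulled apart with a few lines of Java:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Parse the top-level name=value pairs of a FAN event payload line.
// Illustrative sketch only; it does not descend into the nested
// {...} instance groups, and multi-word values (like the timestamp)
// would need extra handling.
public class PayloadSketch {

    public static Map<String, String> parse(String payload) {
        Map<String, String> out = new LinkedHashMap<>();
        for (String tok : payload.split("\\s+")) {
            int eq = tok.indexOf('=');
            if (eq > 0) {
                out.put(tok.substring(0, eq), tok.substring(eq + 1));
            }
        }
        return out;
    }

    public static void main(String[] args) {
        String payload = "VERSION=1.0 event_type=SERVICEMEMBER service=otrade "
                       + "instance=inst2 database=dev host=rac2 status=down reason=USER";
        Map<String, String> m = parse(payload);
        System.out.println(m.get("service") + " on " + m.get("instance")
                + " is " + m.get("status"));   // otrade on inst2 is down
    }
}
```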

Monday Dec 07, 2015

Three Easy Steps to a Patched Domain Using ZDT Patching and OPatchAuto

Now that you’ve seen how easy it is to Update WebLogic by rolling out a new patched OracleHome to your managed servers, let’s go one step further and see how we can automate the preparation and distribution parts of that operation as well.

ZDT Patching has integrated with a great new tool in 12.2.1 called OPatchAuto. OPatchAuto is a single interface that allows you to apply patches to an OracleHome, distribute the patched OracleHome to all the nodes you want to update, and start the OracleHome rollout, in just three steps.

1. The first step is to create a patched OracleHome archive (.jar) based on combining an OracleHome in your production environment with the desired patch or patchSetUpdate. This operation will make a copy of that OracleHome so it will not affect the production environment. It will then apply the specified patches to the copy of the OracleHome and create the archive from it. This is the archive that the rollout will use when the time comes, but first it needs to be copied to all of the targeted nodes.

The OPatchAuto command for the first step looks like this:

${ORACLE_HOME}/OPatch/auto/core/bin/ apply /pathTo/PatchHome -create-image -image-location /pathTo/image.jar -oop -oh /pathTo/OracleHome

PatchHome is a directory or file containing the patch or patchSetUpdate to apply.

image-location is where to put the resulting image file

-oop means “out-of-place” and tells OPatchAuto to copy the source OracleHome before applying the patches

2.  The second step is to copy the patched OracleHome archive created in step one to all of the targeted nodes. One cool thing about this step is that since OPatchAuto is integrated with ZDT Patching, you can give OPatchAuto the same target you would use with ZDT Patching, and it will ask ZDT Patching to calculate the nodes automatically. Here’s an example of what this command might look like:

${ORACLE_HOME}/OPatch/auto/core/bin/ apply -plan wls-push-image -image-location /pathTo/image.jar -wls-admin-host ${ADMINHOSTNAME}:7001 -wls-target Cluster1 -remote-image-location /pathTo/image.jar -wallet ${WALLET_DIR}

image-location is the jar file created in the first step

wls-target can be a domain name, cluster name, or list of clusters

Note that if you do not already have a wallet for ssh authorization to the remote hosts, you may need to configure one using

3.  The last step is using OPatchAuto to invoke the ZDT Patching OracleHome rollout. You could switch to WLST at this point and start it as described in the previous post, but OPatchAuto will monitor the progress of the rollout and give you some helpful feedback as well. The command to start the rollout through OPatchAuto looks like this:

${ORACLE_HOME}/OPatch/auto/core/bin/ apply -plan wls-zdt-rollout -image-location /pathTo/image.jar -wls-admin-host ${ADMINHOSTNAME}:7001 -wls-target Cluster1 -backup-home /pathTo/home-backup -remote-image-location /pathTo/image.jar -wallet ${WALLET_DIR}

image-location is the jar file created in the first step

backup-home is the location on each remote node to store the backup of the original OracleHome

image-location and remote-image-location are both specified so that if a node is encountered that is missing the image, it can be copied automatically. This is also why the wallet is specified here again

One more great thing to consider when looking at automating the entire process is how easy it would be to use these same commands to distribute and rollout the same patched OracleHome archive to a test environment for verification. Once verification is passed, a minor change to the same two commands will push the exact same (verified) OracleHome archive out to a production environment.

For more information about updating OracleHome with Zero Downtime Patching and OPatchAuto, view the documentation.

Thursday Nov 19, 2015

WLS Replay Statistics

Starting in the Oracle thin driver, the replay driver has statistics related to replay. This is useful to understand how many connections are being replayed. It should be completely transparent to the application so you won’t know if connection replays are occurring unless you check.

The statistics are available on a per connection basis or on a datasource basis. However, connections on a WLS datasource don’t share a driver-level datasource object so the latter isn’t useful in WLS. WLS 12.2.1 provides another mechanism to get the statistics at the datasource level.

The following code sample shows how to print out the available statistics for an individual connection using the oracle.jdbc.replay.ReplayableConnection interface, which exposes the method to get an oracle.jdbc.replay.ReplayStatistics object. See the ReplayStatistics Javadoc for a description of the statistics values.

if (conn instanceof ReplayableConnection) {
  ReplayableConnection rc = (ReplayableConnection) conn;
  ReplayStatistics rs = rc.getReplayStatistics();
  System.out.println("Individual Statistics");
  // Print a few of the available counters
  System.out.println("TotalRequests=" + rs.getTotalRequests());
  System.out.println("TotalCompletedRequests=" + rs.getTotalCompletedRequests());
}

Besides a getReplayStatistics() method, there is also a clearReplayStatistics() method.

To provide a consolidated view of all of the connections associated with a WLS datasource, the information is available via a new operation on the associated runtime MBean. You need to look up the WLS MBean server, get the JDBC service, then search for the datasource name in the list of JDBC datasource runtime MBeans, and get the JDBCReplayStatisticsRuntimeMBean. This value will be null if the datasource is not using a replay driver, if the driver is an earlier version, or if it's not a Generic or AGL datasource. To use the replay information, you need to first call the refreshStatistics() operation, which sets the MBean values by aggregating the values for all connections on the datasource. Then you can call the operations on the MBean to get the statistics values, as in the following sample code. Note that there is also a clearStatistics() operation to clear the statistics on all connections on the datasource. The following code shows an example of how to print the aggregated statistics from the data source.

public void printReplayStats(String dsName) throws Exception {
  MBeanServer server = getMBeanServer();
  ObjectName[] dsRTs = getJdbcDataSourceRuntimeMBeans(server);
  for (ObjectName dsRT : dsRTs) {
    String name = (String)server.getAttribute(dsRT, "Name");
    if (name.equals(dsName)) {
      ObjectName mb = (ObjectName)server.getAttribute(dsRT,
        "JDBCReplayStatisticsRuntimeMBean");
      // Aggregate the per-connection values before reading them
      server.invoke(mb, "refreshStatistics", null, null);
      MBeanAttributeInfo[] attributes = server.getMBeanInfo(mb).getAttributes();
      for (int i = 0; i < attributes.length; i++) {
        if (attributes[i].getType().equals("java.lang.Long")) {
          System.out.println(attributes[i].getName() + "=" +
            (Long)server.getAttribute(mb, attributes[i].getName()));
        }
      }
    }
  }
}

MBeanServer getMBeanServer() throws Exception {
  InitialContext ctx = new InitialContext();
  return (MBeanServer)ctx.lookup("java:comp/env/jmx/runtime");
}

ObjectName[] getJdbcDataSourceRuntimeMBeans(MBeanServer server)
    throws Exception {
  ObjectName service = new ObjectName(
    "com.bea:Name=RuntimeService,Type=weblogic.management.mbeanservers.runtime.RuntimeServiceMBean");
  ObjectName serverRT = (ObjectName)server.getAttribute(service, "ServerRuntime");
  ObjectName jdbcRT = (ObjectName)server.getAttribute(serverRT, "JDBCServiceRuntime");
  return (ObjectName[])server.getAttribute(jdbcRT, "JDBCDataSourceRuntimeMBeans");
}

Now run an application that gets a connection, does some work, kills the session, replays, then gets a second connection and does the same thing. Each connection successfully replays once. That means that the individual statistics show a single replay and the aggregated statistics will show two replays. Here is what the output might look like.

Individual Statistics


Looking carefully at the numbers, you can see that the individual count was done before the connections were closed (TotalCompletedRequests=0) and the roll-up was done after both connections were closed. 

You can also use WLST to get the statistics values for the datasource. The statistics are not visible in the administration console or FMWC in WLS 12.2.1.

Sunday Nov 15, 2015

Deploying Java EE 7 Applications to Partitions from Eclipse

The new WebLogic Server 12.2.1 Multi-tenant feature enables partitions to be created in a domain that are isolated from one another and can be managed independently. From a development perspective, this isolation opens up some interesting opportunities - for instance, it enables a single domain to be shared by multiple developers working on the same application, without them needing to worry about collisions of URLs or cross accessing of resources.

The lifecycle of a partition can be managed independently of others, so starting and stopping the partition to start and stop applications can be done with no impact on other users of the shared domain. A partition can be exported (unplugged) from a domain, including all of its resources and deployed application bits, and imported (plugged) into a completely different domain to restore the exact same partition in the new location. This enables complete, working applications to be shared and moved between different environments in a very straightforward manner.

As an illustration of this concept of using partitions within a development environment, the YouTube video - WebLogic Server 12.2.1 - Deploying Java EE 7 Application to Partitions - takes the Java EE 7 CargoTracker application and deploys it to different targets from Eclipse.

  • In the first instance, CargoTracker is deployed to a known WebLogic Server target using the well known "Run as Server" approach, with which Eclipse will start the configured server and deploy the application to the base domain.
  • Following that, using a partition that has been created on the same domain called "test", the same application code-base is built and deployed to the partition using maven and the weblogic-maven-plugin. The application is accessed in its partition using its Virtual Target mapping and shown to be working as expected.
  • To finish off the demonstration the index page of the CargoTracker application is modified to mimic a development change and deployed to another partition called "uat" - where it is accessed and seen that the page change is active.
  • At this point, all three instances of the same application are running independently on the same server and are accessible at the same time, essentially showing how a single domain can independently host multiple instances of the same application as it is being developed.

Thursday Nov 12, 2015

WLS UCP Datasource

WebLogic Server (WLS) 12.2.1 introduces a new datasource type that uses the Oracle Universal Connection Pool (UCP) as an alternative connection pool.  The UCP datasource allows for configuration, deployment, and monitoring of the UCP connection pool as part of the WLS domain.  It is certified with the Oracle Thin driver (simple, XA, and replay drivers). 

The goal of this article is not to reproduce the product documentation but to summarize the feature and provide some additional information and screen shots for configuring the datasource.

A UCP data source is defined using a jdbc-data-source descriptor as a system resource.  With respect to multi-tenancy, these system resources can be defined at the domain, partition, resource group template, or resource group level. 

The configuration for a UCP data source is pretty simple with the standard datasource parameters. You can name it, give it a URL, user, password and JNDI name. Most of the detailed configuration and tuning comes in the form of UCP connection properties. The administrator can configure values for any setters supported by oracle.ucp.jdbc.PoolDataSourceImpl except LogWriter (see oracle.ucp.jdbc.PoolDataSourceImpl) by just removing the "set" prefix from the method name (the names are case insensitive). For example, to call setInitialPoolSize(5), specify the connection property initialPoolSize=5.


Table 8-2 in the documentation lists all of the UCP attributes that are currently supported, based on the UCP jar that ships with WLS 12.2.1.

There is some built-in validation of the (common sense) combinations of driver and connection factory:


(Table of supported Driver and Factory (ConnectionFactoryClassName) combinations; the default is oracle.ucp.jdbc.PoolDataSourceImpl.)
To simplify the configuration, if the "driver-name" is not specified, it will default to oracle.ucp.jdbc.PoolDataSourceImpl  and the ConnectionFactoryClassName connection property defaults to the corresponding entry from the above table.

Example 8.1 in the product documentation gives a complete example of creating a UCP data source using WLST.   WLST usage is very common for application configuration these days.

Monitoring is available via a dedicated runtime MBean. This MBean extends JDBCDataSourceRuntimeMBean so that it can be returned with the list of other JDBC MBeans from the JDBC service for tools like the administration console or your WLST script. For a UCP data source, the state and the following attributes are set: CurrCapacity, ActiveConnectionsCurrentCount, NumAvailable, ReserveRequestCount, ActiveConnectionsAverageCount, CurrCapacityHighCount, ConnectionsTotalCount, NumUnavailable, and WaitingForConnectionSuccessTotal.

The administration console and FMWC make it easy to create, update, and monitor UCP datasources.

The following images are from the administration console. For the creation path, there is a drop-down that lists the data source types; UCP is one of the choices. The resulting data source descriptor has datasource-type set to "UCP".

The first step is to specify the JDBC Data Source Properties that determine the identity of the data source. They include the datasource names, the scope (Global or Multi Tenant Partition, Resource Group, or Resource Group Template) and the JNDI names.

The next page handles the user name and password, URL, and additional connection properties. Additional connection properties are used to configure the UCP connection pool. There are two ways to provide the connection properties for a UCP data source in the console. On the Connection Properties page, all of the available connection properties for the UCP driver are displayed so that you only need to enter the property value.  On the next page for Test Database Connection, you can enter a propertyName=value directly into the Properties text box.  Any values entered on the previous Connection Properties page will already appear in the text box.  This page can be used to test the specified values including the connection properties.