Thursday Jul 14, 2016

WebLogic Server - Domain to Partition Conversion Tool (DPCT) Updates

The Domain to Partition Conversion Tool (DPCT) helps migrate an existing WebLogic Server 10.3.6, 12.1.2, 12.1.3, or 12.2.1 domain to a partition in a WebLogic Server 12.2.1 domain.

The DPCT process consists of two independent but related operations:

  • The first operation involves inspecting an existing domain and exporting it into an archive that captures the relevant configuration and binary files.
  • The second operation uses one of several import partition options available with WebLogic Server 12.2.1 to import the contents of the exported archive and create a new partition. The new partition will contain the configuration resources and application deployments from the source domain.

With this release of WebLogic Server, several updates and changes have been made to DPCT to further improve its functionality.

The updated documentation covering the new features, bug fixes and known limitations is here:

Key Updates

a) Distribution of DPCT tooling with WebLogic Server installation: initially, the DPCT tooling was distributed as a separate zip file, available only for download from OTN.

With this release, the DPCT tooling is provided as part of the base product installation as:


This file can be copied from the installation to the servers where the source domain is present and extracted for use.

 The DPCT tooling is also still available for download from OTN:

b) No patch required: previously, using DPCT required a patch to be applied to the target 12.2.1 installation in order to import an archive generated by the DPCT tooling. This requirement has been removed.

c) Improved platform support: several small issues relating to the use of DPCT tooling on Windows have been resolved.

d) Improved reporting: a new report file is generated for each domain that is exported, listing the details of the source domain as well as each of the configuration resources and deployments captured in the exported archive. Any resources that could not be exported are also noted.

e) JSON Overrides file formatting: the generated JSON file that serves as an overrides mechanism to allow target environment customizations to be specified on the import is now formatted correctly to make it clearer and easier to make changes.

f) Additional Resources in JSON Overrides file: in order to better support customization on the target domain additional resources such as JDBC System Resources, SAF Agents, Mail Sessions and JDBC Stores are now expressed as configurable objects in the generated JSON file.

g) Inclusion of new export-domain scripts: the scripts used to run the DPCT tooling have been reworked and included as new (additional) scripts. The new scripts are named export-domain.[cmd|sh], provide clearer help text, and use named parameters for providing input values to the script. The previous scripts are provided for backwards compatibility and continue to work, but it is recommended that the new scripts be used where possible.

Usage detail for the export-domain script:

Usage: -oh {ORACLE_HOME} -domainDir {WL_DOMAIN_HOME}
       [-keyFile {KEYFILE}] [-toolJarFile {TOOL_JAR}] [-appNames {APP_NAMES}]
       [-includeAppBits {INCLUDE_APP_BITS}] [-wlh {WL_HOME}]
             {ORACLE_HOME} : the MW_HOME where WebLogic Server is installed
             {WL_DOMAIN_HOME} : the source WebLogic domain path
             {KEYFILE} : an optional user-provided file containing a clear-text passphrase used to encrypt exported attributes written to the archive; default: None
             {TOOL_JAR} : file path to the tool jar file. Optional if the jar is in the same directory as the export-domain script.
             {APP_NAMES} : an optional list of application names to export
             {WL_HOME} : an optional parameter giving the WebLogic Server home path for version 10.3.6. Used only when the 10.3.6 release of WebLogic Server is installed under a directory other than {ORACLE_HOME}/wlserver_10.3
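
For example (the Oracle home and domain paths below are placeholders for your own locations):

./export-domain.sh -oh /u01/oracle/middleware -domainDir /u01/domains/source_domain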

Enhanced Cluster Topology and JMS Support

In addition to the items listed above, some restructuring of the export and import operation has enabled DPCT to better support a number of key WebLogic Server areas. 

When inspecting the source domain and generating the export archive, DPCT now enables the targeting of the resources and deployments to appropriate Servers and Clusters in the target domain. For every Server and Cluster in the source domain, there will be a corresponding resource-group object created in the generated JSON file, with each resource-group targeted to a dedicated Virtual Target, which in turn can be targeted to a Server or Cluster on the target domain.

All application deployments and resources targeted to a particular WebLogic Server instance or cluster in the source domain correspond to a single resource group in the target domain.

This change also supports the situation where the target domain has differently named Cluster and Server resources than the source domain, by allowing the target to be specified in the JSON overrides file so that it can be mapped appropriately to the new environment.
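
For reference, the following WLST sketch (not something DPCT generates; all names are placeholders, and it assumes a partition named p1 and a cluster named cluster1 already exist in the target domain) shows the underlying configuration model that the JSON overrides ultimately drive: a resource group targeted at a dedicated virtual target, which in turn targets a cluster. Adjust it to your own environment before use.

# WLST online sketch; partition 'p1' and cluster 'cluster1' are assumed to exist already
edit()
startEdit()
domain = getMBean('/')
vt = domain.createVirtualTarget('p1-vt')              # dedicated virtual target for the resource group
vt.setUriPrefix('/p1')                                # hypothetical URI prefix
vt.addTarget(getMBean('/Clusters/cluster1'))          # map the virtual target to a cluster in the new domain
rg = getMBean('/Partitions/p1').createResourceGroup('p1-rg')
rg.addTarget(vt)                                      # target the resource group at the virtual target
save()
activate()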

A number of the previous limitations around the exporting of JMS configurations for both single server and cluster topologies have been addressed, enabling common JMS use cases to be supported with DPCT migrations. The documentation contains the list of existing known limitations.

Wednesday Jun 29, 2016

Connection Initialization Callback on WLS Datasource

WebLogic Server is now available. You can see the blog article announcing it at Oracle WebLogic Server is Now Available.

One of the WLS datasource features that appeared quite a while ago but has not been mentioned much is the ability to define a callback that is called during connection initialization.  The original intent of this callback was to provide a mechanism to be used with the Application Continuity (AC) feature.  It allows the application to ensure that the same initialization of the connection can be done when it is reserved and also later on if the connection is replayed.  In the latter case, the original connection has some type of "recoverable" error and is closed, a new connection is reserved under the covers, and all of the operations that were done on the original connection are replayed on the new connection.  The callback allows the connection to be re-initialized with whatever state is needed by the application.

The concept of having a callback to allow for the application to initialize all connections without scattering this processing all over the application software wherever getConnection() is called is very useful, even without replay being involved.  In fact, since the callback can be configured in the datasource descriptor, which I recommend, there is no change to the application except to write the callback itself.  

Here's the history of support for this feature, assuming that the connection initialization callback is configured.

WLS 10.3.6 - It is only called on an Active GridLink datasource when running with the replay driver (replay was only supported with AGL).

WLS  12.1.1, 12.1.2, and 12.1.3 - It is called if used with the replay driver and any datasource type (replay support was added to GENERIC datasources).

WLS 12.2.1 - It is called with any Oracle driver and any datasource type. 

WLS - It is called with any driver and any datasource type.  Why limit the goodness to just the Oracle driver?

The callback can be configured in the application by registering it on the datasource in the Java code. You need to ensure that you only do this once per datasource.  I think it's much easier to register it in the datasource configuration.   

Here's a sample callback.

package demo;

import oracle.ucp.jdbc.ConnectionInitializationCallback;

public class MyConnectionInitializationCallback implements
  ConnectionInitializationCallback {

  public MyConnectionInitializationCallback() {
  }

  public void initialize(java.sql.Connection connection)
    throws java.sql.SQLException {
    // Re-set the state for the connection, if necessary
  }
}
This is a simple Jython script using as many defaults as possible to just show registering the callback.

# Run inside a WLST edit session (edit(); startEdit()), followed by save() and activate()
import sys, socket
hostname = socket.gethostname()
dsname = "JDBC Data Source-0"   # placeholder datasource name
jdbcSR = create(dsname, 'JDBCSystemResource')
jdbcResource = jdbcSR.getJDBCResource()
dsParams = jdbcResource.getJDBCDataSourceParams()
driverParams = jdbcResource.getJDBCDriverParams()
driverProperties = driverParams.getProperties()
userprop = driverProperties.createProperty('user')
# the remaining datasource settings (URL, driver, user value, JNDI name, targets) are omitted here
oracleParams = jdbcResource.getJDBCOracleParams()
oracleParams.setConnectionInitializationCallback('demo.MyConnectionInitializationCallback')  # register the callback

 Here are a few observations.  First, to register the callback using the configuration, the class must be in your classpath.  It will need to be in the server classpath anyway to run but it needs to get there earlier for configuration.  Second, because of the history of this feature, it's contained in the Oracle parameters instead of the Connection parameters; there isn't much we can do about that.  In the WLS administration console, the entry can be seen and configured in the Advanced parameters of the Connection Pool tab as shown in the following figure (in addition to the Oracle tab).  Finally, note that the interface is a Universal Connection Pool (UCP) interface so that this callback can be shared with your UCP application (all driver types are supported starting in Database

This feature is documented in the Application Continuity section of the Administration Guide.

You might be disappointed that I didn't actually do anything in the callback.  I'll use this callback again in my next blog to show how it's used in another new WLS feature.

Tuesday Jun 28, 2016

WebLogic Server Continuous Availability in

We have made enhancements to the Continuous Availability offering in WebLogic Server in the areas of Zero Downtime Patching, Cross-Site Transaction Recovery, Coherence Federated Caching, and Coherence Persistence. We have also enhanced the documentation to provide design considerations for the multi-data center Maximum Availability Architectures (MAA) that are supported for WebLogic Server Continuous Availability.

Zero Downtime Patching Enhancements

Enhancements in Zero Downtime Patching support updating applications running in a multitenant partition without affecting other partitions that run in the same cluster. Coherence applications can now be updated while maintaining high availability of the Coherence data during the rollout process. We have also removed the dependency on NodeManager to upgrade the WebLogic Administration Server.

  • Multitenancy support
  • Application updates can use partition shutdown instead of server shutdowns.
  • Can update an application in a partition on a server without affecting other partitions.
  • Can update an application referenced by a ResourceGroupTemplate.
  • Coherence support - User can supply minimum safety mode for rollout to Coherence cluster.
  • Removed Administration Server dependency on NodeManager – The Administration Server no longer needs to be started by NodeManager.

Cross-Site Transaction Recovery

We introduced a “site leasing” mechanism to perform automatic recovery when there is a site failure or mid-tier failure. With site leasing, we provide a more robust mechanism to fail over and fail back transaction recovery without imposing dependencies on the TLog that could affect the health of the servers hosting the Transaction Manager.

Every server in a site will update its lease. When the lease expires for all servers running in a cluster in Site 1, servers running in a cluster in a remote site assume ownership of the TLogs and recover the transactions while still continuing their own transaction work. To learn more, please read Active-Active XA Transaction Recovery.

Coherence Federated Caching and Coherence Persistence Administration Enhancements

We have enhanced the WebLogic Server Administration Console to make it easier to configure Coherence Federated Caching and Coherence Persistence.

  • Coherence Federated Caching - Added the ability to setup Federation with basic active/active and active/passive configurations using the Administration Console and eliminated the need to use configuration files.
  • Coherence Persistence - Added a persistence tab in the Administration Console that provides the ability to configure Persistence related settings that apply to all services.


In WebLogic Server we have enhanced the document Continuous Availability for Oracle WebLogic Server to include a new chapter, “Design Considerations for Continuous Availability.”

This new chapter provides design considerations and best practices for the components of your multi-data center environments. In addition to the general best practices recommended for all continuous availability MAA architectures, we provide specific advice for each of the Continuous Availability supported topologies, and describe how the features can be used in these topologies to provide maximum high availability and disaster recovery.

Tuesday Jun 21, 2016

Oracle WebLogic Server is Now Available

Last October, we delivered Oracle WebLogic Server 12.2.1 as part of the overall Oracle Fusion Middleware 12.2.1 Release.   As noted previously on this blog, WebLogic Server 12.2.1 delivers compelling new feature capabilities in the areas of Multitenancy, Continuous Availability, and Developer Productivity and Portability to Cloud.  

Today, we are releasing WebLogic Server, which is the first patch set release for WebLogic Server and Fusion Middleware 12.2.1.   New WebLogic Server installers are now posted on the Oracle Technology Network and Oracle Software Delivery Cloud, and new documentation has been made available.  WebLogic Server contains all the new features in WebLogic Server 12.2.1, and also includes an integrated, cumulative set of fixes and a small number of targeted, non-disruptive enhancements.   

For customers who have just begun evaluating WebLogic Server 12cR2, or are planning evaluation and adoption, we recommend that you adopt WebLogic Server so that you can benefit from the maintenance and enhancements that have been included.   For customers who are already running in production on WebLogic Server 12.2.1, you can continue to do so, though we will encourage adoption of WebLogic Server 12.2.1 patch sets.

The enhancements are primarily in the following areas:

  • Multitenancy - Improvements to Resource Consumption Management, partition security management, REST management, and Fusion Middleware Control, all targeted at multitenancy manageability and usability.
  • Continuous Availability - New documented best practices for multi data center deployments, and product improvements to Zero Downtime Patching capabilities.
  • Developer Productivity and Portability to the Cloud - The Domain to Partition Conversion Tool (D-PCT), which enables you to convert an existing domain to a WebLogic Server 12.2.1 partition, has been integrated into this release with improved functionality.   So it's now easier to migrate domains and applications to WebLogic Server partitions, including partitions running in the Oracle Java Cloud Service. 
We will provide additional updates on the capabilities described above, but everything is ready for you to get started using WebLogic Server today.   Try it out and give us your feedback!

Sunday May 01, 2016

Using SQLXML Data Type with Application Continuity

When I first wrote an article about changing Oracle concrete classes to interfaces to work with Application Continuity (AC), I left out one type: oracle.sql.OPAQUE, which is replaced with oracle.jdbc.OracleOpaque. There isn’t a lot that you can do with this opaque type. While the original class had a lot of conversion methods, the new Oracle type interfaces have only methods that are considered significant or not available with standard JDBC APIs. The new interface only has a method to get the value as an Object and two meta-information methods to get the metadata and type name. Unlike the other Oracle type interfaces (oracle.jdbc.OracleStruct extends java.sql.Struct and oracle.jdbc.OracleArray extends java.sql.Array), oracle.jdbc.OracleOpaque does not extend a JDBC interface.

There is one related very common use case that needs to be changed to work with AC. Early uses of SQLXML made use of the following XDB API.

SQLXML sqlXml = oracle.xdb.XMLType.createXML(oracleResultSet.getOPAQUE("issue")); // arguments shown here are illustrative

oracle.xdb.XMLType extends oracle.sql.OPAQUE and its use will disable AC replay. This must be replaced with the standard JDBC API

SQLXML sqlXml = resultSet.getSQLXML("issue");

If you try to do a “new oracle.xdb.XMLType(connection, string)” when running with the replay datasource, you will get a ClassCastException. Since XMLType doesn’t work with the replay datasource and the oracle.xdb package uses XMLType extensively, this package is no longer usable for AC replay.

The APIs for SQLXML are documented in the Java SE javadoc, which shows APIs to work with DOM, SAX, StAX, XSLT, and XPath.

Take a look at the sample program at

The sample uses StAX to store the information and DOM to get it. By default, it uses the replay datasource and it does not use XDB.

You can run with replay debugging by doing something like the following. Create a file named /tmp/config.txt that has the following text.

java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter
handlers = java.util.logging.FileHandler
java.util.logging.FileHandler.pattern = /tmp/replay.log
oracle.jdbc.internal.replay.level = FINEST

Change your WLS CLASSPATH (or one with the Oracle client jar files) to put ojdbc7_g.jar at the front (to replace ojdbc7.jar) and add the current directory.
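
For example, on Linux (the jar path is a placeholder for your own client installation):

export CLASSPATH=/path/to/ojdbc7_g.jar:.:$CLASSPATH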

Compile the program (after renaming .txt to .java) and run it using

java -Djava.util.logging.config.file=/tmp/config.txt XmlSample

The output replay log is in /tmp/replay.log. With the defaults in the sample program, you won’t see replay disabled in the log. If you change the program to set useXdb to true, you will see that replay is disabled. The log will have “DISABLE REPLAY in preForMethodWithConcreteClass(getOPAQUE)” and “Entering disableReplayInternal”.

This sample can be used to test other sequences of operations to see if they are safe for replay.

Alternatively, you can use orachk to do a static analysis of the class. See for more information. If you run orachk on this program, you will get this failure.

FAILED - [XmlSample][[MethodCall] desc= (Ljava/lang/String;)Loracle/sql/OPAQUE; method name=getOPAQUE, lineno=105]

Thursday Apr 28, 2016

Testing WLS and ONS Configuration


Oracle Notification Service (ONS) is installed and configured as part of the Oracle Clusterware installation. All nodes participating in the cluster are automatically registered with the ONS during Oracle Clusterware installation. The configuration file is located on each node in $ORACLE_HOME/opmn/conf/ons.config. See the Oracle documentation for further information. This article focuses on the client side.

Oracle RAC Fast Application Notification (FAN) events are available starting in database 11.2. This is the minimum database release required for WLS Active GridLink. FAN events are notifications sent by a cluster running Oracle RAC to inform the subscribers about the configuration changes within the cluster. The supported FAN events are service up, service down, node down, and load balancing advisories (LBA).

fanWatcher Program

You can optionally test your ONS configuration independent of running WLS. This tests the connection from the ONS client to the ONS server but not configuration of your RAC services. See for details to get, compile, and run the fanWatcher program. I’m assuming that you have WLS 10.3.6 or later installed and you have your CLASSPATH set appropriately. You would run the test program using something like

java fanWatcher "nodes=rac1:6200,rac2:6200" database/event/service

If you are using the database client jar files, you can handle more complex configurations with multiple clusters, for example Data Guard, with something like

java fanWatcher "nodes.1=site1.rac1:6200,site1.rac2:6200
nodes.2=site2.rac1:6200,site2.rac2:6200" database/event/service

Note that a newline is used to separate multiple node lists. You can also test with a wallet file and password, if the ONS server is configured to use SSL communications.

Once this program is running, you should minimally see occasional LBA notifications. If you start or stop a service, you should see an associated event.

Auto ONS

It’s possible to run without specifying the ONS information by using a feature called auto-ONS. The auto-ONS feature cannot be used if you are running with

- an 11g driver or 11g database. Auto-ONS depends on protocol flowing between the driver and the database server and this feature was added in 12c.

- pre-WLS 12.1.3. Auto-ONS is supported starting in WLS 12.1.3.

- an Oracle wallet with SSL communications. Configuration of the wallet requires also configuring the ONS information.

- complicated ONS topology. In general, auto-ONS can figure out what you need but there are cases where you need to specify it precisely. In WLS 12.2.1, an extension of the ONS configuration allows for specifying the exact topology using a property node list. See for more information.

If you have some configurations that use an 11g driver or database and some that run with 12c driver/database, you may want to just specify the ONS information all of the time instead of using the auto-ONS simplification. The fanWatcher link above indicates how to test fanWatcher using auto-ONS.

WLS ONS Configuration and Testing

The next step is to ensure that you have the end-to-end configuration running, from the database service for which events will be generated to the AGL datasource that processes the events for the corresponding service.

On the server side, the database service must be configured with runtime connection load balancing (RCLB) enabled. RCLB is enabled for a service if the service GOAL (NOT CLB_GOAL) is set to either SERVICE_TIME or THROUGHPUT. See the Oracle documentation for further information on using srvctl to set this when creating the service.
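
As a rough illustration (the database and service names are placeholders, and the exact srvctl syntax varies by database release), the goal can be set with something like

srvctl modify service -db beadev -service otrade -rlbgoal SERVICE_TIME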

On the WLS side, the key pieces are the URL and the ONS configuration.

The URL is configured using a long format with the service name specified. The URL can use an Oracle Single Client Access Name (SCAN) address, for example,


or multiple non-SCAN addresses with LOAD_BALANCE=on, for example,


Defining the URL is a complex topic - see the Oracle documentation for more information.
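
For illustration only (host names, port, and service name below are placeholders), a long-format URL with two non-SCAN addresses might look like

jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(LOAD_BALANCE=on)(ADDRESS=(PROTOCOL=TCP)(HOST=rac1)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=rac2)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=otrade)))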

As described above, the ONS configuration can be implicit using auto-ONS or explicit. The trade-offs and restrictions are also described above. The format of the explicit ONS information is described at

If you create the datasource using the administration console with explicit ONS configuration, there is a button to click on to test the ONS configuration. This tests doing a simple handshake with the ONS server.

Of course, the first real test of your ONS configuration with WLS is deploying the datasource, either when starting the server or when targeting the datasource on a running server.

In the administration console, you can look at the AGL runtime monitoring page for ONS, especially if using auto-ONS, to see the ONS configuration.

You can look at the page for instances and check the affinity flag and instance weight attributes that are updated on LBA events. If you stop a service using something like

srvctl stop service -db beadev -i beadev2 -s otrade

that should also show up on this page with the weight and capacity going to 0.

If you look at the server log (for example servers/myserver/logs/myserver.log) you should see a message tracking the outage like the following.

…. <Info> <JDBC> … <Data source JDBC Data Source-0 for service otrade received a service down event for instance [beadev2].>

If you want to see more information like the LBA events, you can enable the JDBCRAC debugging using -Dweblogic.debug.DebugJDBCRAC=true. For example,

... <JDBCRAC> ... lbaEventOccurred() event=service=otrade, database=beadev, event=VERSION=1.0 database=beadev service=otrade { {instance=beadev1 percent=50 flag=GOOD aff=FALSE}{instance=beadev2 percent=50 flag=UNKNOWN aff=FALSE} }

There will be a lot of debug output with this setting so it is not recommended for production.

Wednesday Apr 27, 2016

Migrating from Generic Data Source to Active GridLink

Earlier, I wrote an article about how to migrate from a Multi Data source (MDS) for RAC connectivity to Active GridLink (AGL). This is needed to move from the older datasource technology to the newer technology, both supporting Oracle RAC. The information is now in the public documentation set at

There are also many customers that are growing up from a standalone database to an Oracle RAC cluster. In this case, it’s a migration from a GENERIC datasource to an AGL datasource. This migration is pretty simple.

No changes should be required to your applications.  A standard application looks up the datasource in JNDI and uses it to get connections.  The JNDI name won’t change.

The only changes necessary should be to your configuration and the necessary information is generally provided by your database administrator.   The information needed is the new URL and optionally the configuration of Oracle Notification Service (ONS) on the RAC cluster. The latter is only needed if you are running with

- an 11g driver or 11g database. Auto-ONS depends on protocol flowing between the driver and the database server and this feature was added in 12c.

- pre-WLS 12.1.3. Auto-ONS is supported starting in WLS 12.1.3.

- an Oracle wallet with SSL communications. Configuration of the wallet requires also configuring the ONS information.

- complicated ONS topology. In general, auto-ONS can figure out what you need but there are cases where you need to specify it precisely. In WLS 12.2.1, an extension of the ONS configuration allows for specifying the exact topology using a property node list. See for more information.

The URL and ONS attributes are configurable but not dynamic. That means that the datasource will need to be shutdown and restarted after the change. The simplest way to do this is to untarget the datasource, make the changes, and then re-target the datasource.

The recommended approach to migrate from a GENERIC to AGL datasource is to use WLST. The URL must be changed in the JDBCDriverParams object. The new JDBCOracleParams object (it generally doesn’t exist for a GENERIC datasource) needs to have FAN enabled set to true and optionally set the ONS information.

The following is a sample WLST script with the new values hard-coded. You could parameterize it and make it more flexible in handling multiple datasources. If you are using an Oracle wallet for ONS, that needs to be added to the JDBCOracleParams object as well.

# java weblogic.WLST
import sys, socket, os
import jarray
from javax.management import ObjectName
hostname = socket.gethostname()
datasource="JDBC Data Source-0"
# connect to the administration server and start an edit session (credentials and URL are placeholders)
connect("weblogic", "welcome1", "t3://" + hostname + ":7001")
edit()
startEdit()
# remember the current targets, then untarget the datasource
cd("/JDBCSystemResources/" + datasource )
targets = get("Targets")
set("Targets",jarray.array([], ObjectName))
# set the new long-format URL with the RAC service name (host and service values are placeholders)
cd("/JDBCSystemResources/" + datasource + "/JDBCResource/" +
   datasource + "/JDBCDriverParams/" + datasource )
set("Url","jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(" +
    "ADDRESS=(PROTOCOL=TCP)(HOST=dbhost)(PORT=1521)))" +
    "(CONNECT_DATA=(SERVICE_NAME=otrade)))")
# enable FAN and optionally set the ONS node list
cd("/JDBCSystemResources/" + datasource + "/JDBCResource/" +
   datasource + "/JDBCOracleParams/" + datasource )
set("FanEnabled", "true")
#set("OnsNodeList", "rac1:6200,rac2:6200")
# The following is for WLS 12.1.2 and 12.1.3 if not setting FanEnabled true, which is not recommended
#set("ActiveGridlink", "true")
# The following is for WLS 12.2.1 and later
#cd("/JDBCSystemResources/" + datasource + "/JDBCResource/" +
#   datasource )
#set("DatasourceType", "AGL")
save()
activate()
# re-target the datasource so it redeploys with the new settings
startEdit()
cd("/JDBCSystemResources/" + datasource )
set("Targets", targets)
save()
activate()

In WLS 12.1.2 and 12.1.3, there is an explicit ActiveGridlink flag that can be used to identify an AGL datasource, even if FanEnabled is not set to true (which is not recommended) and auto-ONS is used (12.1.2 is the first release in which auto-ONS is supported). In the script above, uncomment the line to set it only if FanEnabled and OnsNodeList are not set.

Starting in WLS 12.2.1, there is an explicit datasource type at the JDBCResource level. If that is set to GENERIC, you must re-set it using set("DatasourceType", "AGL"). In the script above, uncomment the lines to set it.  In this case, the ActiveGridlink flag is not necessary.

In the administrative console, the database type is read-only and there is no mechanism to change the database type. You can try to get around this by setting the URL, FAN Enabled box, and ONS information. However, in 12.2.1 there is no way to re-set the Datasource Type in the console and that value overrides all others.

Sunday Apr 10, 2016

New WebLogic Server Running on Docker in Multi-Host Environments

Oracle WebLogic Server 12.2.1 is now certified to run on Docker 1.9 containers. As part of this certification, you can create Oracle WebLogic Server 12.2.1 clusters which can span multiple physical hosts. Containers running on multiple hosts are built as an extension of the existing Oracle WebLogic 12.2.1 install images built with Dockerfiles, domain images built with Dockerfiles, and existing Oracle Linux images. To help you with this, we have posted scripts on GitHub as examples for you to get started.

The table below describes the certification provided for WebLogic Server 12.2.1 on Docker 1.9. You can use these combinations of Oracle WebLogic Server, JDK, Linux and Docker versions when building your Docker images.

WLS Version: 12.2.1
JDK Version:
Host OS: Oracle Linux 6, Oracle Linux 7
Docker Version: 1.9 or higher

Please read the earlier blog Oracle WebLogic 12.2.1 Running on Docker Containers for details on Oracle WebLogic Server 12.1.3 and Oracle WebLogic 12.2.1 certification on other versions of Docker. We support Oracle WebLogic Server in certified Docker containers running on other Linux host operating systems that have Kernel 4 or larger and that support Docker containers; please read our support statement. For additional details on the most current Oracle WebLogic Server supported configurations, please refer to the Oracle Fusion Middleware Certification Pages.

The scripts that support multi-host environments on GitHub are based on the latest versions of Docker Networking, Swarm, and Docker Compose. Each Docker Machine participates in the Swarm, which is networked by a Docker overlay network. The WebLogic Administration Server container as well as the WebLogic Managed Server containers run on different VMs in the Swarm and are able to communicate with each other.

The Dockerfiles and scripts we have provided enable users to create clustered and non-clustered Oracle WebLogic Server domain configurations, including both development and production, running on a single or multiple host operating systems or VMs. Each server running in the resulting domain configurations runs in its own Docker container and is capable of communicating as required with other servers. When these containers run in a WebLogic cluster, all HA properties of the WebLogic cluster are supported, such as in-memory session replication, HTTP load balancing, and server migration.

Please check the new WebLogic on Docker Multi Host Workshop on GitHub. This workshop takes you step by step through building a WebLogic Server domain on Docker in a multi-host environment. After the WebLogic domain has been started, an Apache plug-in web tier container is started in the Swarm; the Apache plug-in load balances invocations to an application deployed to a WebLogic cluster.

This project takes advantage of the following tools: Docker Machine, Docker Swarm, Docker Overlay Network, Docker Compose, Docker Registry, and Consul. Using the sample Dockerfiles and scripts, you can very easily and quickly set up your environment running on Docker. Try it out and enjoy!

On YouTube we have a video that shows you how to create a WLS domain/cluster in a multi-host environment. For documentation on how to use these Dockerfiles and scripts, see the whitepaper on OTN. We hope you will try running the different configurations of WebLogic Server on Docker containers, and look forward to hearing any feedback you might have.

Wednesday Mar 09, 2016

WebLogic Server Multi-tenancy and Partition Isolation

One advantage of using the multi-tenancy features in WebLogic Server is that you can get higher density. If you target multiple partitions to the same WLS cluster, then those partitions share the hardware and software stack, from the iron up through the Java virtual machine and the WLS infrastructure itself. This is great for efficient use of resources, but what about isolating those partitions from each other?

We've published a short article about the tension between density and isolation. It highlights how to set up partitions and virtual targets for either purpose. Please take a look!

Friday Feb 26, 2016

The next VTS round is fast approaching!

Virtual Technology Summit is a set of free online events covering a wide variety of technical topics (Java, Middleware, Database, IoT, etc.). And there is something for everyone (see full agenda). In the upcoming edition, the following sessions should be particularly interesting for WebLogic users:
  • Developing Java EE 7 Applications with WebLogic Server 12.2.1
  • Multi-Tenancy Fundamental
  • Introduction to WebLogic Server Zero Down-time Patching
There are 3 VTS events to suit your geographic location - Americas (March 8th), APAC (March 15th) and Europe (April 5th).  For schedules and abstracts for all sessions, please see OTN Virtual Technology Summit All Track Agenda and Abstracts and make sure to register today!

Wednesday Feb 17, 2016

WebLogic Server 12.2.1: Elastic Cluster Scaling

WebLogic Server 12.2.1 added support for the elastic scaling of dynamic clusters:

Elasticity allows you to configure elastic scaling for a dynamic cluster based on either of the following:

  • Manually adding or removing a running dynamic server instance from an active dynamic cluster. This is called on-demand scaling. You can perform on-demand scaling using the Fusion Middleware component of Enterprise Manager, the WebLogic Server Administration Console, or the WebLogic Scripting Tool (WLST); see the WLST sketch after this list.

  • Establishing policies that set the conditions under which a dynamic cluster should be scaled up or down and actions that define the scaling operations themselves. When the conditions defined in the scaling policy occur, the corresponding scaling action is triggered automatically.
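
As a rough sketch of the on-demand option in WLST (the admin URL, credentials, and cluster name are placeholders; see the WLST command reference for the full set of optional arguments):

connect('weblogic', 'welcome1', 't3://adminhost:7001')  # placeholder credentials and admin URL
scaleUp('DynamicCluster', 1)    # start one additional dynamic server instance in the cluster
scaleDown('DynamicCluster', 1)  # later, stop one instance again
disconnect()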

To see this in action, a set of video demonstrations has been added to our YouTube channel showing the use of the various elastic scaling options available.

WebLogic Server 12.2.1 Elastic Cluster Scaling with WLST
WebLogic Server 12.2.1 Elastic Cluster Scaling with WebLogic Console
WebLogic Server 12.2.1 Automated Elastic Cluster Scaling

Sunday Feb 14, 2016

Now Available: Domain to Partition Conversion Tool (DPCT)

We are pleased to announce that a new utility has just been published to help with the process of converting existing WebLogic Server domains into WebLogic Server 12.2.1 partitions. 

The Domain to Partition Conversion Tool (DPCT) provides a utility that inspects a specified source domain and produces an archive containing the resources, deployed applications and other settings.  This can then be used with the importPartition operation provided in WebLogic Server 12.2.1 to create a new partition that represents the original source domain.  An external overrides file is generated (in JSON format) that can be modified to adjust the targets and names used for the relevant artifacts when they are created in the partition.

DPCT supports WebLogic Server 10.3.6, 12.1.1, 12.1.2 and 12.1.3 source domains and  makes the conversion to WebLogic Server 12.2.1 partitions a straightforward process.

DPCT is available for download from OTN:

 ** Note: there is also a corresponding patch (opatch) posted alongside the DPCT download that needs to be downloaded and applied to the target installation of WebLogic Server 12.2.1 to support the import operation **

 The README contains more details and examples of using the tool:

A video demonstration of using DPCT to convert a WebLogic Server 12.1.3 domain with a deployed application into a WebLogic Server 12.2.1 partition is also available on our YouTube channel:

Tuesday Feb 09, 2016

Cargo Tracker Java EE 7 Blue Prints Now Running on WebLogic 12.2.1

The Cargo Tracker project exists to serve as a blueprint for developing well-designed Java EE 7 applications, principally utilizing the well-known Domain-Driven Design (DDD) architectural paradigm. The project provides a first-hand look at what a realistic Java EE 7 application might look like.

The project was started some time ago and runs on GlassFish 4 and Java SE 7 by default. The project has now been enhanced to run the same code base on WebLogic 12.2.1 with Java SE 8. The code is virtually unchanged, and the minor configuration differences between GlassFish and WebLogic are handled through Maven profiles. The instructions for running the project on WebLogic are available here. Feel free to give it a spin to get a feel for the Java EE 7 development experience with WebLogic 12.2.1.

Tuesday Jan 12, 2016

ZDT Technical Topic: How are Those Sessions Kept Alive Anyway?

By now you have probably read documentation or previous blog posts about how Zero Downtime Patching provides a convenient, automated method of updating a WebLogic domain in a rolling fashion. By automating the process, Zero Downtime Patching greatly saves time and eliminates potential human errors from this repetitive procedure. In addition, there are also some special features around replicated HTTP sessions that make sure end users do not lose their sessions at any point during the rollout process. Let's explore the technical details around maintaining session state during Zero Downtime Patching.

One of the key aspects of the WLS replicated session persistence contract is that the session may be maintained within the cluster even in the rare situation where a server crashes. However, the session persistence contract cannot guarantee that sessions will be maintained when more than a single server goes down in a short time period. This is because the session has a single copy replicated to some secondary server within the cluster. The session is only replicated when the client makes a request to update the session, so that the client’s cookie can store a reference to the secondary server. Thus, if the primary server were to go down, and then the secondary server were to go down before the session could be updated by a subsequent client request, the session would be lost. The rolling nature of Zero Downtime Patching fits this pattern, and thus must take extra care to avoid losing sessions. Administrators may have already observed that it is very easy to lose sessions by restarting one server at a time through the cluster.

Before we go into technical details on how Zero Downtime Patching prevents the issue of losing sessions, it is important to note that the entire methodology relies on Oracle Traffic Director for load balancing, dynamic discovery, health checks, and session failover handling. In addition to this setup, three key features are used by Zero Downtime Patching directly to prevent the loss of sessions:

Session Handling Overview

1. Preemptive Session Replication - Session data is preemptively propagated to another server in the cluster during graceful shutdown when necessary. To get even more detailed on this, let's examine the scenario where the ZDT rollout has shut down the server holding the HTTP session, and the next step is to shut down the server holding the replica. In that case, WebLogic can detect during shutdown that the session will be lost because there is no backup copy within the cluster. So the ZDT rollout can ensure that WebLogic Server replicates that session to another server within the cluster.

The illustration below shows the problematic scenario where the server, s1, holding the primary copy of the session is shut down, followed by the shutdown of the server, s2, holding the secondary or replica copy. The ZDT orchestration signals that s2 should preemptively replicate any single session copies before shutting down. Thus there is always a copy available within the cluster.

Preemptive Session Replication on Shutdown

2. Session State Query Protocol - Due to the way that WebLogic Server relies on the association of an HTTP session with a primary server and a secondary server, it is not sufficient to simply have the session somewhere in the cluster. There is also a need to be able to find the session when the client request lands on an arbitrary server within the cluster. The ZDT rollout enables WebLogic Server instances to query other servers in the cluster for specific sessions if they don’t have their own copy.

Session Fetching via Session State Protocol Query

The diagram above shows that an incoming request to a server without the session can trigger a query and once the session is found within the cluster it can be fetched so that the request can be served on the server, "s4". 

3. Orphaned Session Cleanup - Once we combine the ability to preemptively replicate session instances with the ability to fetch sessions from within the cluster, we must also take a more active approach to cleaning up instances that are fetched. Historically, WebLogic Server hasn’t had to worry much about orphaned sessions. Front-end load balancers and web servers have been required to honor the session’s server affinity. And in the rare case that a request would land on a server that did not contain the primary or secondary, the session would be fetched from the primary or secondary server and then the orphaned copy would be left to be cleaned up upon timeout or at other regular intervals. It was assumed that because the pointer to the session changed, the actual stored reference would never be used again. However, the ZDT rollout repeatedly presents the scenario where a session must be found within the cluster and fetched from the server that holds it. Not only can the number of session instances proliferate, all with various versions of the same session, but the cluster is now queried for the copy and we must not find any stale copies, only the current replica of the session.

Orphaned Session Cleanup

The above illustration shows the cleanup action after s4 has fetched the session data to serve the incoming request.  It launches the cleanup request to s3 to ensure no stale data is left within the cluster.


Now during ZDT patching we can shut down server1 and expect that any lone session copies will be propagated to server2 without the client's knowledge. When the client does send another request, WLS will be able to handle that request and query the cluster to find the session data. The data will be fetched and used on the server handling the request. The orphaned copy will be cleaned up, and the server handling the request will go through the process of choosing its preferred secondary server to store the replica.

For more information about Zero Downtime Patching, view the documentation



Friday Jan 08, 2016

ZDT Rollouts and Singletons

WebLogic Server offers messaging, transaction and other system services to facilitate building enterprise grade applications. Typically, services can be either clustered or singleton. Clustered services are deployed identically to each server in a cluster to provide increased scalability and reliability. The session state of one clustered server is replicated on another server in the cluster. In contrast, singleton services run on only one server in a cluster at any given point of time so as to offer specific quality of service (QOS) but most importantly to preserve data consistency. Singleton services can be JMS-related, JTA-related or user-defined. In highly available (HA) environments, it is important for all services to be up and running even during patch upgrades.

The new WebLogic Zero Downtime Patching (a.k.a. ZDT patching) feature introduces a fully automated rolling upgrade solution to perform upgrades such that deployed applications continue to function and are available for end users even during the upgrade process. ZDT patching supports rolling out Oracle Home and Java Home updates as well as updating applications. Check out these blogs or view the documentation for more information on ZDT.

