Wednesday Nov 18, 2015

Patching Oracle Home Across your Domain with ZDT Patching

Now it’s time for the really good stuff!  In this post, you will see how Zero Downtime (ZDT) Patching can be used to rollout a patched WebLogic OracleHome directory to all your managed servers (and optionally to your AdminServer) without incurring any downtime or loss of session data for your end-users.

This rollout, like the others, is based on a controlled rolling shutdown of nodes, using the Oracle Traffic Director (OTD) load balancer to route user requests around the offline node. The difference with this rollout is what happens while the managed servers are shut down. In this case, the rollout moves the current OracleHome directory to a backup location and replaces it with a patched OracleHome directory that the administrator has prepared, verified, and distributed in advance. (More on the preparation in a moment.)
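Conceptually, the per-node swap looks like this (a plain-Python sketch for illustration only, not the actual rollout implementation; all directory names here are hypothetical):

```python
import os
import shutil
import tempfile

def swap_oracle_home(oracle_home, patched_home, backup_dir):
    # Illustration of the swap ZDT performs per node while its servers
    # are down: move the current home aside, put the patched one in place.
    shutil.move(oracle_home, backup_dir)        # 1. back up the current OracleHome
    shutil.copytree(patched_home, oracle_home)  # 2. install the prepared, patched copy

# Tiny demo with throwaway directories standing in for the real homes.
root = tempfile.mkdtemp()
current = os.path.join(root, "OracleHome")
patched = os.path.join(root, "PatchedOracleHome")
backup = os.path.join(root, "BackupCurrentOracleHome")
os.makedirs(current)
open(os.path.join(current, "v1.txt"), "w").close()   # marker for the old home
os.makedirs(patched)
open(os.path.join(patched, "v2.txt"), "w").close()   # marker for the patched home

swap_oracle_home(current, patched, backup)
print(os.path.exists(os.path.join(current, "v2.txt")))  # patched home is now live
print(os.path.exists(os.path.join(backup, "v1.txt")))   # old home preserved for rollback
```

Because the old home is kept at the backup location, a rollback is essentially the same move in reverse.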

When everything has been prepared, starting the rollout is simply a matter of issuing a WLST command like this one:

rolloutOracleHome("Cluster1", "/pathTo/PatchedOracleHome.jar", "/pathTo/BackupCurrentOracleHome", "FALSE")

The AdminServer will then check that the PatchedOracleHome.jar file exists everywhere that it should, and it will begin the rollout. Note that the “FALSE” flag simply indicates that this is not a domain level rollback operation where we would be required to update the AdminServer last instead of first.

In order to prepare the patched OracleHome directory, as mentioned above, the user can start with a copy of a production OracleHome, usually in a test (non-production) environment, and apply the desired patches in whatever way is already familiar to them. Once this is done, the administrator uses the included CIE tool copyBinary to create a distributable jar archive of the OracleHome. Once the jar archive of the patched OracleHome directory has been created, it can be distributed to all of the nodes that will be updated. Note that it needs to reside on the same path for all nodes. With that, the preparation is complete and the rollout can begin!
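As a rough sketch of what this preparation produces, the snippet below packs a directory into a single archive, standing in for copyBinary (the real tool ships with WebLogic; the directory contents and paths here are made up):

```python
import os
import tempfile
import zipfile

def archive_oracle_home(oracle_home, jar_path):
    # Stand-in for copyBinary: pack the patched OracleHome into one
    # archive that can then be copied to the same path on every node.
    with zipfile.ZipFile(jar_path, "w") as z:
        for dirpath, _, files in os.walk(oracle_home):
            for name in files:
                full = os.path.join(dirpath, name)
                z.write(full, os.path.relpath(full, oracle_home))

# Demo with a throwaway directory standing in for a patched OracleHome.
root = tempfile.mkdtemp()
home = os.path.join(root, "PatchedOracleHome")
os.makedirs(home)
open(os.path.join(home, "inventory.txt"), "w").close()
jar = os.path.join(root, "PatchedOracleHome.jar")
archive_oracle_home(home, jar)
print(zipfile.ZipFile(jar).namelist())
```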

Be sure to check back soon to read about how the preparation phase has been automated as well by integrating ZDT Patching with another new tool called OPatchAuto.

For more information about updating OracleHome with Zero Downtime Patching, view the documentation.

Friday Nov 13, 2015

Oracle WebLogic Server 12.2.1 Running on Docker Containers

Oracle WebLogic Server 12.2.1 is now certified to run on Docker containers. As part of this certification, we are releasing Dockerfiles and supporting scripts on GitHub for creating Oracle WebLogic Server 12.2.1 install images and Oracle WebLogic Server 12.2.1 domain images. These images are built as extensions of the existing Oracle Linux images, and the posted Dockerfiles and scripts serve as examples to help you get started.

Docker is a platform that enables users to build, package, ship and run distributed applications. Docker users package up their applications, and any dependent libraries or files, into a Docker image. Docker images are portable artifacts that can be distributed across Linux environments. Images that have been distributed can be used to instantiate containers where applications can run in isolation from other applications running in other containers on the same host operating system.

The table below describes the certification provided for various WebLogic Server versions. You can use these combinations of Oracle WebLogic Server, JDK, Linux and Docker versions when building your Docker images.

Linux Version          Kernel Version
Oracle Linux 6 UL 6+   UEK Release 3 (3.8.13)
Oracle Linux 7 UL 0+   UEK Release 3 (3.8.13) or RHCK 3 (3.10)
RedHat Linux 7+        RHCK 3 (3.10)

Oracle Linux 6 UL 5+   UEK Release 3 (3.8.13)
Oracle Linux 7 UL 0+   UEK Release 3 (3.8.13) or RHCK 3 (3.10)
RedHat Linux 7+        RHCK 3 (3.10)
We support Oracle WebLogic Server in certified Docker containers running on other Linux host operating systems that have kernel 3.8.13 or later and that support Docker containers; please read our support statement. For additional details on the most current Oracle WebLogic Server supported configurations, please refer to the Oracle Fusion Middleware Certification Pages.

The Dockerfiles and scripts we have provided enable users to create clustered and non-clustered Oracle WebLogic Server domain configurations, for both development and production, running on a single host operating system or on VMs. Each server in the resulting domain configuration runs in its own Docker container and is capable of communicating as required with the other servers.

A topology in line with the "Docker way" for containerized applications and services consists of containers, each designed to run only an administration server containing all resources, shared libraries, and deployments. These Docker containers can all be on a single physical or virtual Linux host, or spread across multiple physical or virtual Linux hosts. The Dockerfiles on GitHub that create an image with a WebLogic Server domain can be used to start these admin server containers.

For documentation on how to use these Dockerfiles and scripts, see the whitepaper on OTN. The Oracle WebLogic Server video and demo presents our certification effort and shows a Demo of WebLogic Server 12.2.1 running on Docker Containers. We hope you will try running the different configurations of WebLogic Server on Docker containers, and look forward to hearing any feedback you might have.

Monday Nov 09, 2015

Update your Java Version Easily with ZDT Patching

Another great feature of ZDT Patching is that it provides a simple way to update the Java version used to run WebLogic. Keeping up to date with Java security patches is an ongoing task of critical importance. Prior to ZDT Patching, there was no easy way to migrate all of your managed servers to a new Java version, but ZDT Patching makes this a simple two-step procedure.

The first step is to install the updated Java version to all of the nodes that you will be updating. This can be done manually or by using any of the normal software distribution tools typically used to manage enterprise software installations. This operation can be done outside a planned maintenance window as it will not affect any running servers. Note that when installing the new Java version, it must not overwrite the existing Java directory, and the location of the new directory must be the same on every node.

The second step is to simply run the Java rollout using a WLST command like this one:

rolloutJavaHome("Cluster1", "/pathTo/jdk1.8.0_66")

In this example, the Admin Server will start the rollout to coordinate the rolling restart of each node in the cluster named “Cluster1”. While the managed servers and NodeManager on a given node are down, the path to the Java executable that they are started with will be updated. The rollout will then start the managed servers and NodeManager from the new Java path.
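The rolling flow can be sketched as a toy simulation (this is not the ZDT implementation; node names and JDK paths are made up):

```python
NEW_JAVA_HOME = "/pathTo/jdk1.8.0_66"  # must already be installed at this path on every node

cluster = [
    {"node": "node1", "java_home": "/pathTo/jdk1.8.0_51", "in_lb": True},
    {"node": "node2", "java_home": "/pathTo/jdk1.8.0_51", "in_lb": True},
    {"node": "node3", "java_home": "/pathTo/jdk1.8.0_51", "in_lb": True},
]

for node in cluster:
    node["in_lb"] = False              # OTD routes requests away from this node
    # ... managed servers and NodeManager on the node are shut down here ...
    node["java_home"] = NEW_JAVA_HOME  # start-up configuration now points at the new JDK
    # ... managed servers and NodeManager are restarted with the new Java path ...
    node["in_lb"] = True               # the node rejoins the load-balancer pool

print(all(n["java_home"] == NEW_JAVA_HOME for n in cluster))
```

Because only one node is out of the pool at any moment, the cluster keeps serving requests throughout the rollout.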

Easy as that!

For more information about upgrading Java with Zero Downtime Patching, view the documentation.

Wednesday Nov 04, 2015

Application MBeans Visibility in Oracle WebLogic Server 12.2.1

Oracle WebLogic Server (WLS) version 12.2.1 supports a feature called Multi-Tenancy (WLS MT). WLS MT introduces the partition, partition administrator, and partition resource concepts. Partition isolation is enforced when accessing resources (e.g., MBeans) in a domain. WLS administrators can see MBeans in the domain and in the partitions, but a partition administrator, as well as other partition roles, can only see the MBeans in their own partition, not those in other partitions.

In this article, I will explore the visibility support for application MBeans to demonstrate partition isolation in WLS MT in 12.2.1. This includes:

  • An overview of application MBean visibility in WLS MT
  • A simple use case that demonstrates which MBeans are registered on a WLS MBeanServer and which MBeans are visible to WLS administrators or partition administrators
  • Links to reference materials for more information

The use case used in this article is based on a domain created in another article, "Create WebLogic Server Domain with Partitions using WLST in 12.2.1". In this article, I will 

  • Briefly show the domain topology
  • Demonstrate how to deploy an application to the domain and partitions
  • Demonstrate how to access the application MBeans via JMX clients using global/domain url or partition specific url
  • Demonstrate how to enable debugging/logging

1. Overview

An application can be deployed to WLS servers per partition, so the application is multiplied across partitions. WLS provides three MBeanServers: the Domain Runtime MBeanServer, the Runtime MBeanServer, and the Edit MBeanServer. Each MBeanServer can be used for all partitions, so WLS needs to ensure that the MBeans registered on each MBeanServer by the application are unique for each partition.

The application MBean visibility in WLS MT can be illustrated by several parts:

  • Partition Isolation
  • Application MBeans Registration
  • Query Application MBeans
  • Access Application MBeans

1.1 Partition Isolation

A WLS administrator can see application MBeans in partitions. But a partition administrator for a partition is not able to see application MBeans from the domain or other partitions.  

1.2 Application MBeans Registration

When an application is deployed to a partition, application MBeans are registered during the application deployment. WLS adds a partition specific key (e.g. Partition=<partition name>) to the MBean Object Names when registering them onto the WLS MBeanServer. This will ensure that MBean object names are unique when registered from a multiplied application.

The figure on the right shows how application MBean ObjectNames differ when registered on the WLS MBeanServer in the domain and in the partitions. It depicts a WLS domain and an application.

WLS domain is configured with two partitions: cokePartition and pepsiPartition.

An application registers one MBean, e.g., testDomain:type=testType, during the application deployment.

The application is deployed to the WLS domain, cokePartition, and pepsiPartition. Since a WLS MBeanServer instance is shared by the domain, cokePartition, and pepsiPartition, there are three application MBeans registered on the same MBeanServer after the three application deployments:

  • An MBean belonging to the domain:     testDomain:type=testType
  • An MBean belonging to cokePartition:  testDomain:Partition=cokePartition,type=testType
  • An MBean belonging to pepsiPartition: testDomain:Partition=pepsiPartition,type=testType

The MBeans that belong to the partitions contain a Partition key property in their ObjectNames.
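The naming convention described above can be sketched in a few lines (plain Python illustrating the scheme, not WLS internals):

```python
def partition_object_name(object_name, partition=None):
    # Sketch of the scheme: WLS appends Partition=<partition name> to the
    # key properties when the MBean is registered in a partition context.
    if partition is None:  # registered in the domain ("DOMAIN") context
        return object_name
    domain, props = object_name.split(":", 1)
    return "%s:Partition=%s,%s" % (domain, partition, props)

print(partition_object_name("testDomain:type=testType"))
print(partition_object_name("testDomain:type=testType", "cokePartition"))
print(partition_object_name("testDomain:type=testType", "pepsiPartition"))
```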

1.3 Query Application MBeans

JMX clients (e.g., WebLogic WLST, JConsole, etc.) connect to a global/domain URL or a partition-specific URL and then query the WebLogic MBeanServer. The query results differ:

  • When connecting to a global/domain URL, the application MBeans that belong to the partitions are visible to those JMX clients.
  • When connecting to a partition specific URL, WLS filters the query results. Only the application MBeans that belong to that partition are returned. MBeans belonging to the domain and other partitions are filtered out.

1.4 Access Application MBeans

JMX clients (e.g., WebLogic WLST, JConsole, etc.) connect to a global/domain or partition-specific URL and perform a JMX operation, e.g., getAttribute(<MBean ObjectName>, <attributeName>). The operation is actually performed on different MBeans:

  • When connecting to a global/domain URL, the getAttribute() is called on the MBean that belongs to the domain. (The MBean without the Partition key property on the MBean ObjectName.)
  • When connecting to a partition specific URL, the getAttribute() is called on the MBean that belongs to that partition. (The MBean with the Partition key property on the MBean ObjectName.)
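Sections 1.3 and 1.4 can be summarized with a toy model (plain Python; the real filtering and dispatch happen inside the WLS MBeanServer):

```python
# All MBeans registered on the shared MBeanServer, mapped to their owner.
registered = {
    "test.domain:type=testType,name=testName": "DOMAIN",
    "test.domain:Partition=coke,type=testType,name=testName": "coke",
    "test.domain:Partition=pepsi,type=testType,name=testName": "pepsi",
}

def query_names(connection_partition=None):
    # 1.3: a global/domain connection sees everything; a partition
    # connection only sees the MBeans that belong to that partition.
    if connection_partition is None:
        return sorted(registered)
    return sorted(n for n, p in registered.items() if p == connection_partition)

def get_attribute(object_name, connection_partition=None):
    # 1.4: the same ObjectName resolves to the MBean owned by whichever
    # context the connection is bound to.
    if connection_partition and "Partition=" not in object_name:
        domain, props = object_name.split(":", 1)
        object_name = "%s:Partition=%s,%s" % (domain, connection_partition, props)
    return registered[object_name]

print(len(query_names()))   # global connection sees all three MBeans
print(query_names("coke"))  # partition connection sees only coke's MBean
print(get_attribute("test.domain:type=testType,name=testName", "coke"))
```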

2. Use case

Now I will demonstrate how MBean visibility works in WebLogic Server MT in 12.2.1 to support partition isolation. 

2.1 Domain with Partitions

In the article "Create WebLogic Server Domain with Partitions using WLST in 12.2.1", a domain with two partitions, coke and pepsi, is created. The same domain is used for the use case in this article. Here is a summary of the domain topology:

  • A domain is configured with one AdminServer named "admin", one partition named "coke", and one partition named "pepsi".
  • The "coke" partition contains one resource group named "coke-rg1", targeted to a virtual target named "coke-vt".
  • The "pepsi" partition contains one resource group named "pepsi-rg1", targeted to a virtual target named "pepsi-vt".

More specifically, each domain/partition has the following configuration values:

                  Name         User Name  Password
  Domain          base_domain  weblogic   welcome1
  Coke Partition  coke         mtadmin1   welcome1
  Pepsi Partition pepsi        mtadmin2   welcome2

Please see the article "Create WebLogic Server Domain with Partitions using WLST in 12.2.1" for details on how to create this domain.

2.2 Application deployment

When the domain is set up and started, an application "helloTenant.ear" is deployed to the domain. It is also deployed to "coke-rg1" in the "coke" partition and to "pepsi-rg1" in the "pepsi" partition. The deployment can be done using different WLS tools, such as the FMW Console or WLST. Below are the WLST commands that deploy an application to the domain and the partitions:


For other WLS deployment tools, please see the "Reference" section.

2.3 Access Application MBeans

During the application deployment, application MBeans are registered on the WebLogic Server MBeanServer. As mentioned in section 1.2 Application MBeans Registration, multiple MBeans are registered even though there is only one application.

There are multiple ways to access application MBeans:

  • WLST
  • JConsole
  • JSR 160 APIs

2.3.1 WLST

The WebLogic Scripting Tool (WLST) is a command-line scripting interface that system administrators and operators use to monitor and manage WebLogic Server instances and domains. To start WLST:


Once WLST is started, a user can connect to the server by providing a connection url. Below you will see the different values of an application MBean attribute returned to the WLS administrator or a partition administrator when different connection urls are provided.

WLS administrator

WLS administrator 'weblogic' connects to the domain using the following connect command:

connect("weblogic", "welcome1", "t3://localhost:7001")

The picture below shows there are 3 MBeans registered on the WebLogic Server MBeanServer, whose domain is "test.domain", and the value of the attribute "PartitionName" on each MBean.

  • test.domain:Partition=coke,type=testType,name=testName
    • belongs to the coke partition. The value of the PartitionName attribute is "coke"
  • test.domain:Partition=pepsi,type=testType,name=testName
    • belongs to the pepsi partition. The value of the PartitionName attribute is "pepsi"
  • test.domain:type=testType,name=testName
    • belongs to the domain. No Partition key property in the ObjectName. The value of the PartitionName attribute is "DOMAIN"

The MBean belonging to a partition contains a Partition key property in its ObjectName. The Partition key property is added internally by WLS when the MBean is registered in a partition context.

Partition administrator for coke

Similarly, the partition administrator 'mtadmin1' for coke can connect to the coke partition. The connection url uses "/coke" which is the uri prefix defined in the virtual target coke-vt. (Check the config/config.xml in the domain.)

connect("mtadmin1", "welcome1", "t3://localhost:7001/coke")

From the picture below, when connecting to the coke partition, there is only one MBean listed:


Even though there is no Partition key property in the ObjectName, this MBean still belongs to the coke partition. The value of the PartitionName attribute is "coke".

Partition administrator for pepsi

Similarly, the partition administrator 'mtadmin2' for pepsi can connect to the pepsi partition. The connection url uses "/pepsi" which is the uri prefix defined in the virtual target pepsi-vt.

connect("mtadmin2", "welcome2", "t3://localhost:7001/pepsi")

From the picture below, when connecting to the pepsi partition, there is only one MBean listed:


Even though there is no Partition key property in the ObjectName, same as the one seen by the partition administrator for coke, this MBean still belongs to the pepsi partition. The value of the PartitionName attribute is "pepsi".

2.3.2 JConsole

The JConsole graphical user interface is a built-in tool in the JDK. It's a monitoring tool that complies with the Java Management Extensions (JMX) specification. By using JConsole you can get an overview of the MBeans registered on the MBeanServer.

To start JConsole, do this:


where <MW_HOME> is the location where WebLogic Server is installed.

Once JConsole is started, the WLS administrator and partition administrators can use it to browse the MBeans, given the credentials and the JMX service URL.

WLS administrator

The WLS administrator "weblogic" provides a JMX service URL to connect to the WLS Runtime MBeanServer like the one below:


When connected by a WLS administrator, an MBean tree in JConsole shows 3 MBeans with the "test.domain" in the ObjectName:

The highlighted ObjectName in the right pane in the picture below is the MBean that belongs to the coke partition. It has the Partition key property: Partition=coke.

The highlighted below is the MBean that belongs to the pepsi partition. It has the Partition key property: Partition=pepsi.

The highlighted below is the MBean that belongs to the domain. It does not have the Partition key property.

The result here is consistent with what we saw in WLST for the WLS administrator.

Partition administrator for coke

The partition administrator "mtadmin1" provides a different JMX service URL to JConsole:


When connected via the partition-specific JMX service url, the partition administrator can only see one MBean:


This MBean belongs to the coke partition and the value of PartitionName is "coke", as shown in the picture below. However, there is no Partition key property in the ObjectName.

Partition administrator for pepsi

The partition administrator "mtadmin2" provides a different JMX service URL to JConsole:


When connected via partition specific JMX service url, the partition administrator "mtadmin2" can only see one MBean:


This MBean belongs to the pepsi partition and the value of the PartitionName is pepsi as shown in the picture below.

2.3.3 JSR 160 APIs

JMX clients can use the JSR 160 APIs to access the MBeans registered on the MBeanServer. For example, the code below gets a JMXConnector by providing a service url and an environment, then reads an MBean attribute:

import java.util.*;
import javax.management.*;
import javax.management.remote.*;

public class TestJMXConnection {
    public static void main(String[] args) throws Exception {
        JMXConnector jmxCon = null;
        try {
            // Connect to the WLS Runtime MBeanServer
            JMXServiceURL serviceUrl = new JMXServiceURL(
                "service:jmx:t3://localhost:7001/jndi/weblogic.management.mbeanservers.runtime");
            System.out.println("Connecting to: " + serviceUrl);
            Hashtable env = new Hashtable();
            env.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES, "weblogic.management.remote");
            env.put(javax.naming.Context.SECURITY_PRINCIPAL, "weblogic");
            env.put(javax.naming.Context.SECURITY_CREDENTIALS, "welcome1");
            jmxCon = JMXConnectorFactory.newJMXConnector(serviceUrl, env);
            jmxCon.connect();
            // Access the MBeans
            MBeanServerConnection con = jmxCon.getMBeanServerConnection();
            ObjectName oname = new ObjectName("test.domain:type=testType,name=testName,*");
            Set<ObjectName> queryResults = con.queryNames(oname, null);
            for (ObjectName theName : queryResults) {
                System.out.print("queryNames(): " + theName);
                String partitionName = (String) con.getAttribute(theName, "PartitionName");
                System.out.println(", Attribute PartitionName: " + partitionName);
            }
        } finally {
            if (jmxCon != null)
                jmxCon.close();
        }
    }
}
To compile and run this code, provide the wljmxclient.jar on the classpath, like:

$JAVA_HOME/bin/java -classpath $MW_HOME/wlserver/server/lib/wljmxclient.jar:. TestJMXConnection

You will get the results below:

Connecting to: service:jmx:t3://localhost:7001/jndi/weblogic.management.mbeanservers.runtime
queryNames(): test.domain:Partition=pepsi,type=testType,name=testName, Attribute PartitionName: pepsi
queryNames(): test.domain:Partition=coke,type=testType,name=testName,  Attribute PartitionName: coke
queryNames(): test.domain:type=testType,name=testName, Attribute PartitionName: DOMAIN

When the code is changed to use the partition administrator "mtadmin1":

JMXServiceURL serviceUrl = new JMXServiceURL(
    "service:jmx:t3://localhost:7001/coke/jndi/weblogic.management.mbeanservers.runtime");
env.put(javax.naming.Context.SECURITY_PRINCIPAL, "mtadmin1");
env.put(javax.naming.Context.SECURITY_CREDENTIALS, "welcome1");

Running the code will return only one MBean:

Connecting to: service:jmx:t3://localhost:7001/coke/jndi/weblogic.management.mbeanservers.runtime
queryNames(): test.domain:type=testType,name=testName,  Attribute PartitionName: coke

Similar results would be seen for the partition administrator for pepsi: if a pepsi-specific JMX service url is provided, only the MBean that belongs to the pepsi partition is returned.

2.4 Enable logging/debugging flags

If an MBean appears to behave incorrectly in WebLogic Server 12.2.1, for example:

  • A partition administrator can see MBeans from the global domain or other partitions when querying the MBeans, or
  • JMX exceptions occur when accessing an MBean

Try the following to triage the errors:

  • If it's a connection problem in JConsole, add -debug on the JConsole command line when starting JConsole.
  • If a partition administrator can see MBeans from the global domain or other partitions when querying the MBeans:
    • When connecting from JMX clients (e.g., WLST, JConsole, JSR 160 APIs), make sure the host name in the service url matches the host name defined in the virtual target in the domain's config/config.xml.
    • Make sure the uri prefix in the service url matches the uri prefix defined in the virtual target in the domain's config/config.xml.
  • If JMX exceptions occur when accessing an MBean:

    • When the MBean belongs to a partition, make sure the partition is started. The application deployment only happens when the partition is started.
    • Enable the debug flags during server startup, like this:
      • -Dweblogic.StdoutDebugEnabled=true -Dweblogic.log.LogSeverity=Debug -Dweblogic.log.LoggerSeverity=Debug -Dweblogic.debug.DebugPartitionJMX=true -Dweblogic.debug.DebugCIC=false
    • Search the server logs for the specific MBean ObjectName you are interested in. Make sure the MBean you are debugging is registered in the correct partition context, and that the MBean operation is called in the correct partition context.

Here are sample debug messages for the MBean "test.domain:type=testType,name=testName" related to the MBean registration, queryNames() invocation, and getAttribute() invocation.

<Oct 21, 2015 11:36:43 PM PDT> <Debug> <PartitionJMX> <BEA-000000> <Calling register MBean test.domain:type=testType,name=testName in partition DOMAIN>
<Oct 21, 2015 11:36:44 PM PDT> <Debug> <PartitionJMX> <BEA-000000> <Calling register MBean test.domain:Partition=coke,type=testType,name=testName in partition coke>
<Oct 21, 2015 11:36:45 PM PDT> <Debug> <PartitionJMX> <BEA-000000> <Calling register MBean test.domain:Partition=pepsi,type=testType,name=testName in partition pepsi>
<Oct 21, 2015 11:36:56 PM PDT> <Debug> <PartitionJMX> <BEA-000000> <queryNames on MBean test.domain:Partition=coke,type=testType,name=testName,* in partition coke>
<Oct 21, 2015 11:36:56 PM PDT> <Debug> <MBeanCIC> <BEA-000000> <getAttribute: MBean: test.domain:Partition=coke,type=testType,name=testName, CIC: (pId = 2d044835-3ca9-4928-915f-6bd1d158f490, pName = coke, appId = helloTenant$coke, appName = helloTenant, appVersion = null, mId = null, compName = null)>


    • To check why the partition context is not right, turn on this debug flag, in addition to the debug flags mentioned above, when starting WLS servers:
      • -Dweblogic.debug.DebugCIC=true. Once this flag is used, many messages are logged into the server log. Search for the messages logged by the DebugCIC logger, like
        ExecuteThread: '<thread id #>' for queue: 'weblogic.kernel.Default (self-tuning)'): Pushed
        and the messages logged by the DebugPartitionJMX logger.

<Oct 21, 2015, 23:59:34 PDT> INVCTXT (24-[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'): Pushed [(pId = 0, pName = DOMAIN, appId = null, appName = null, appVersion = null, mId = null, compName = null)] on top of [(pId = 0, pName = DOMAIN, appId = null, appName = null, appVersion = null, mId = null, compName = null)]. New size is [2]. Pushed by [weblogic.application.ComponentInvocationContextManagerImpl.pushComponentInvocationContext(
<Oct 21, 2015 11:59:34 PM PDT> <Debug> <PartitionJMX> <BEA-000000> <Calling register MBean test.domain:type=testType,name=testName in partition DOMAIN>
<Oct 21, 2015, 23:59:37 PDT> INVCTXT (29-[STANDBY] ExecuteThread: '2' for queue: 'weblogic.kernel.Default (self-tuning)'): Pushed [(pId = 2d044835-3ca9-4928-915f-6bd1d158f490, pName = coke, appId = helloTenant$coke, appName = helloTenant, appVersion = null, mId = null, compName = null)] on top of [(pId = 2d044835-3ca9-4928-915f-6bd1d158f490, pName = coke, appId = null, appName = null, appVersion = null, mId = null, compName = null)]. New size is [3]. Pushed by
<Oct 21, 2015 11:59:37 PM PDT> <Debug> <PartitionJMX> <BEA-000000> <Calling register MBean test.domain:Partition=coke,type=testType,name=testName in partition coke>

3. Conclusion

WebLogic Server 12.2.1 provides a new feature: Multi-Tenancy (MT). With this feature, partition isolation is enforced. Applications can be deployed to the domain and to the partitions. Users in one partition cannot see the resources in other partitions, including MBeans registered by applications. In this article, a use case briefly demonstrated how application MBeans are affected by partition isolation with regard to MBean visibility. For more detailed information, see the "References" section.

4. References

WebLogic Server domain

Config Wizard

WLST command reference  


Managing WebLogic Server with JConsole

JSR 160: Java Management Extensions Remote JMX api

WebLogic Server Security

WebLogic Server Deployment



Create WebLogic Server Domain with Partitions using WLST in 12.2.1

Oracle WebLogic Server 12.2.1 added support for multitenancy (WLS MT). In WLS MT, WLS can be configured with a domain, as well as one or more partitions. A partition contains new elements introduced in WLS MT, like resource groups, resource group templates, virtual targets, etc. Setting up a domain with partitions requires additional steps compared to a traditional WLS domain. For more detailed information about these new WLS MT related concepts, please see Oracle Docs listed in the "References" section. 

Oracle recommends using Fusion Middleware Control (FMWC) to create WebLogic domains via the Restricted JRF template. Oracle also supports creating WebLogic Server domains using WLST. In this article, I will demonstrate how to create a WLS domain with two partitions using WLST. This includes:

  • Displaying domain topology
  • Creating a domain with 2 partitions using WLST
  • Displaying domain config file sample

These tasks are described in the subsequent sections.

1. Domain Topology

In this article, I will create a domain that is configured with:

  • One AdminServer named "admin", one partition named "coke", and one partition named "pepsi".
  • The "coke" partition contains one resource group named "coke-rg1", targeted to a virtual target named "coke-vt".
  • The "pepsi" partition contains one resource group named "pepsi-rg1", targeted to a virtual target named "pepsi-vt".
  • An application "helloTenant.ear" is deployed to the domain, to "coke-rg1" in the "coke" partition, and to "pepsi-rg1" in the "pepsi" partition.

The following picture shows what the domain topology looks like:

Note this domain topology does not contain other MT-related concepts, like resource group templates; they are not covered in this article. For more information about other MT-related concepts, please check the "References" section.

2. Create a domain with partitions

To create a domain with the topology shown in the picture above, several steps are required:

  • Create a traditional WLS domain
  • Start the domain
  • Create a partition in the domain
    • Create a security realm for the partition
    • Create a user for the partition
    • Add the user to the groups in the security realm
    • Create a virtual target
    • Create a partition
      • Create a resource group
      • Set a virtual target as a default target
      • Setup security IDD for the partition
  • Restart the server
  • Start the partition


Each step is illustrated in detail below.

2.1 Create a traditional WLS domain

A traditional WLS domain can be created by using the Config Wizard. Start the Config Wizard via a command script:

sh $MW_Home/oracle_common/common/bin/

Create a domain using all the defaults, specifying the following:

  • Domain name = base_domain
  • User name = weblogic
  • User password = welcome1

2.2 Start the domain

cd $MW_Home/user_projects/domains/base_domain/

2.3 Create a partition: coke in a domain

The steps below require WLST to be started. Use the following command to start WLST:

sh $MW_Home/oracle_common/common/bin/

Note, all of the WLST commands shown below are run after connecting to the Admin server "admin" with the admin user "weblogic" credentials, e.g.,

connect("weblogic", "welcome1", "t3://localhost:7001")

Now, WLST is ready to run commands to set up the partition for coke. The partition for coke has the following values:

  • Partition name = coke
  • Partition user name = mtadmin1
  • Partition password = welcome1

To do that, a security realm and a user are created for the partition as shown below. We explain it step-by-step.

2.3.1 Create a security realm for the partition 

The security realm is created using the standard WLS APIs.

realmName = 'coke_realm'
security = cmo.getSecurityConfiguration()
print 'realm name is ' + realmName
realm = security.createRealm(realmName)
atnp = realm.createAuthenticationProvider(
atna = realm.createAuthenticationProvider(
# IA
ia = realm.createAuthenticationProvider(
# ATZ/Role
# Adjudicator
# Auditor

# Cred Mapper
# Cert Path
# Password Validator
pv = realm.createPasswordValidator('PV',

2.3.2 Add a user and group to the security realm for the partition

Create a user and add the user to a security group Administrators in the realm. In this use case, the username and the password for the coke partition are mtadmin1 and welcome1.

realmName = 'coke_realm'
userName = 'mtadmin1'
groupName = 'Administrators'
print 'add user: realmName ' + realmName
if realmName == 'DEFAULT_REALM':
  realm = cmo.getSecurityConfiguration().getDefaultRealm()
else:
  realm = cmo.getSecurityConfiguration().lookupRealm(realmName)
print "Creating user " + userName + " in realm: " + realm.getName()
atn = realm.lookupAuthenticationProvider('ATNPartition')
if atn.userExists(userName):
  print "User already exists."
else:
  atn.createUser(userName, '${password}', realmName + ' Realm User')
print "Done creating user."
print "Creating group " + groupName + " in realm: " + realm.getName()
if atn.groupExists(groupName):
  print "Group already exists."
else:
  atn.createGroup(groupName, realmName + ' Realm Group')
if atn.isMember(groupName,userName,true) == 0:
  atn.addMemberToGroup(groupName, userName)
else:
  print "User is already member of the group."

2.3.3 Create a virtual target for the partition

This virtual target is targeted to the admin server. The uri prefix is /coke. This is the url prefix used for making JMX connections to WebLogic Server MBeanServer.

vt = cmo.createVirtualTarget("coke-vt")
vt.setUriPrefix("/coke")
as = cmo.lookupServer("admin")
vt.addTarget(as)

2.3.4 Create the partition: coke

The partition name is coke and it is targeted to the coke-vt virtual target.

vt = cmo.lookupVirtualTarget("coke-vt")
p = cmo.createPartition('coke')
p.addAvailableTarget(vt)
p.addDefaultTarget(vt)
realm = cmo.getSecurityConfiguration().lookupRealm("coke_realm")
p.setRealm(realm)

2.3.5 Setup IDD for the partition

Set up primary identity domain (IDD) for the partition.

sec = cmo.getSecurityConfiguration()
realmName = 'coke_realm'
realm = sec.lookupRealm(realmName)
defAtnP = realm.lookupAuthenticationProvider('ATNPartition')
defAtnA = realm.lookupAuthenticationProvider('ATNAdmin')
defAtnP.setIdentityDomain('cokeIDD')
defAtnA.setIdentityDomain('AdminIDD')   # the admin identity domain name is illustrative
# Partition
pcoke = cmo.lookupPartition('coke')
pcoke.setPrimaryIdentityDomain('cokeIDD')
# Default realm
realm = sec.getDefaultRealm()
defAtn = realm.lookupAuthenticationProvider('DefaultAuthenticator')
defAtn.setIdentityDomain('AdminIDD')
sec.setAdministrativeIdentityDomain('AdminIDD')

2.3.6 Restart the Server

Restart WebLogic Server because of the security setting changes.

2.3.7 Start the partition

This is required for a partition to receive requests.

startPartitionWait(cmo.lookupPartition('coke'))  # start the partition (required)

2.4 Create another partition: pepsi in a domain

Repeat the same steps in 2.3 to create another partition: pepsi, but with different values:

  • Partition name = pepsi
  • User name = mtadmin2
  • Password = welcome2
  • Security realm = pepsi_realm
  • IDD name = pepsiIDD
  • Virtual target name = pepsi-vt
  • Resource group name = pepsi-rg1

2.5 Deploy User Application

Now the domain is ready to use. Let's deploy an application ear file. The application, e.g., helloTenant.ear, is deployed three times: to the WebLogic Server domain itself, to the coke partition, and to the pepsi partition.
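A WLST sketch of these three deployments is shown below. This runs only inside a WLST session connected to the admin server (it is not plain Python); the archive path is illustrative, and the partition argument reflects our understanding of the 12.2.1 WLST deployment API.

```python
# WLST (Jython) sketch -- requires a connected WLST session.
# Deploy to the domain itself (no partition):
deploy(appName='helloTenant', path='/apps/helloTenant.ear', targets='Cluster1')
# Deploy the same archive to each partition via the partition argument:
deploy(appName='helloTenant', path='/apps/helloTenant.ear', partition='coke')
deploy(appName='helloTenant', path='/apps/helloTenant.ear', partition='pepsi')
```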


2.6 Domain config file sample

When all of the steps are finished, the domain config file in $DOMAIN_HOME/config/config.xml will contain all of the info needed for the domain and the partitions. Here is a sample snippet related to the coke partition in the config.xml:


  <virtual-target>
    <name>coke-vt</name>
    <uri-prefix>/coke</uri-prefix>
    <target>admin</target>
  </virtual-target>
  <partition>
    <name>coke</name>
    <resource-group>
      <name>default</name>
    </resource-group>
    <default-target>coke-vt</default-target>
    <available-target>coke-vt</available-target>
    <realm>coke_realm</realm>
  </partition>


For the pepsi partition, similar <virtual-target> and <partition> elements for pepsi are added to config.xml.

At this point, the domain with two partitions is created and ready to serve requests. Users can access their applications deployed onto this domain. Check the blog Application MBean Visibility in Oracle WebLogic Server 12.2.1 regarding how to access the application MBeans registered on WebLogic Server MBeanServers in MT in 12.2.1.

3. Debug Flags

In case of errors during domain creation, there are debug flags which can be used to triage the errors:

  • If the error is related to security realm setup, restart the WLS server with these debug flags:
    • -Dweblogic.debug.DebugSecurityAtn=true -Dweblogic.debug.DebugSecurity=true -Dweblogic.debug.DebugSecurityRealm=true
  • If the error is related to a bean config error in a domain, restart the WLS server with these debug flags:
    • -Dweblogic.debug.DebugJMXCore=true -Dweblogic.debug.DebugJMXDomain=true
  • If the error is related to an edit session issue, restart the WLS server with these debug flags:
    • -Dweblogic.debug.DebugConfigurationEdit=true -Dweblogic.debug.DebugDeploymentService=true -Dweblogic.debug.DebugDeploymentServiceInternal=true -Dweblogic.debug.DebugDeploymentServiceTransportHttp=true 
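As a sketch, the debug flags can be composed into the JAVA_OPTIONS environment variable, which the domain start scripts read; the start-script path in the trailing comment is illustrative.

```shell
# Compose the security-realm debug flags into JAVA_OPTIONS before restarting.
JAVA_OPTIONS="-Dweblogic.debug.DebugSecurityAtn=true"
JAVA_OPTIONS="$JAVA_OPTIONS -Dweblogic.debug.DebugSecurity=true"
JAVA_OPTIONS="$JAVA_OPTIONS -Dweblogic.debug.DebugSecurityRealm=true"
export JAVA_OPTIONS
echo "$JAVA_OPTIONS"
# then restart the server, e.g.: $DOMAIN_HOME/bin/startWebLogic.sh
```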

4. Conclusion

An Oracle WebLogic Server domain in 12.2.1 can contain partitions. Creating a domain with partitions needs additional steps compared to creating a traditional WLS domain. This article shows the domain creation using WLST. There are other ways to create domains with partitions, e.g., FMW Control.  For more information on how to create a domain with partitions, please check the "References" section.

5. References

WebLogic Server domain

Domain partitions for multitenancy

Enterprise Manager Fusion Middleware Control (FMWC)

Config Wizard

Creating WebLogic domains using WLST offline

Restricted JRF template

WebLogic Server Security

WebLogic Server Deployment

WebLogic Server Debug Flags


Register for Oracle WebLogic Multitenant Webcast

On November 18, 2015 at 10 AM Pacific Time, Oracle will deliver a webcast on Oracle WebLogic Multitenant: The World’s First Cloud-Native, Enterprise Java Platform.  

Although the title focuses on multitenancy, we will cover additional new capabilities in Oracle WebLogic Server and Oracle Coherence 12.2.1.   The webcast will include live chat, demos, and commentary from customers and partners on their planned deployments and benefits, along with product expert deep dives.    

Please register here and take advantage of the opportunity to learn more about using partitions as lightweight microcontainers, consolidating with multitenancy to reduce TCO, leveraging multi-datacenter high availability architectures, and maximizing developer and DevOps productivity for your public and private cloud platforms.  See you in two weeks!

Tuesday Nov 03, 2015

Using Eclipse with WebLogic Server 12.2.1

With the installation of WebLogic Server 12.2.1 now including the Eclipse Network Installer, which enables developers to  download and install Eclipse including the specific features of interest, getting up and running with Eclipse and WebLogic Server has never been easier.

The Eclipse Network Installer presents developers with a guided interface to enable the custom installation of an Eclipse environment through the selection of an Eclipse version to be installed and which of the available capabilities are required - such as Java EE 7, Maven, Coherence, WebLogic, WLST, Cloud and Database tools, amongst others. It will then download the selected components and install them directly on the developer's machine.

Eclipse and the Oracle Enterprise Pack for Eclipse plugins continue to provide extensive support for WebLogic Server, enabling it to be used throughout the software lifecycle: from development and test cycles with its Java EE dialogs, assistants and deployment plugins, through to automation of configuration and provisioning of environments with the authoring, debugging and running of scripts using the WLST Script Editor and MBean palette.

The YouTube video WebLogic Server 12.2.1 - Developing with Eclipse provides a short demonstration on how to install Eclipse and the OEPE components using the new Network Installer that is bundled within the WebLogic Server installations.  It then shows the configuring of a new WebLogic Server 12.2.1 server target within Eclipse and finishes with importing a Maven project that contains a Java EE 7 example application that utilizes the new Batch API that is deployed to the server and called from a browser to run.

Monday Nov 02, 2015

Getting Started with the WebLogic Server 12.2.1 Developer Distribution

The new WebLogic Server 12.2.1 release continues down the path of providing an installation that is smaller to download and able to be installed with a single operation, providing a quicker approach for developers to get started with the product.

New with the WebLogic Server 12.2.1 release is the use of the quick installer technology which packages the product into an executable jar file, which will silently install the product into a target directory.  Through the use of the quick installer, the installed product can now be patched using the standard Oracle patching utility - opatch - enabling developers to download and apply any patches as needed and to also enable a high degree of consistency with downstream testing and production environments.

Despite its smaller distribution size, the developer distribution delivers a full-featured WebLogic Server including the rich administration console, the comprehensive scripting environment with WLST, the Configuration Wizard and Domain Builders, the Maven plugins and artifacts, and of course all the new WebLogic Server features such as Java EE 7 support, MultiTenancy, Elastic Dynamic Clusters and more.

For a quick look at using the new developer distribution, creating a domain and accessing the administration console, check out the YouTube video: Getting Started with the Developer Distribution.

WebLogic Scripting Tool (WLST) updates in 12.2.1

A number of updates have been implemented in Oracle WebLogic Server and Oracle Fusion Middleware 12.2.1 to simplify the usage of the WebLogic Scripting Tool (WLST), especially when multiple Oracle Fusion Middleware products are being used.    In his blog, Robert Patrick describes what we have done to unify the usage of WLST across the Oracle Fusion Middleware 12.2.1 product line.    This information will be very helpful to WLST users who want to better understand what was implemented in 12.2.1 and any implications for your environments.   

ZDT Patching; A Simple Case – Rolling Restart

To get started understanding ZDT Patching, let’s take a look at it in its simplest form, the rolling restart.  In many ways, this simple use case is the foundation for all of the other types of rollouts – Java Version, Oracle Patches, and Application Updates. Executing the rolling restart requires the coordinated and controlled shutdown of all of the managed servers in a domain or cluster while ensuring that service to the end-user is not interrupted, and none of their session data is lost.

The administrator can start a rolling restart by issuing a WLST command like the one below:

rollingRestart("Cluster1")
In this case, the rolling restart will affect all managed servers in the cluster named “Cluster1”. This is called the target. The target can be a single cluster, a list of clusters, or the name of the domain.

When the command is entered, the WebLogic Admin Server will analyze the topology of the target and dynamically create a workflow (also called a rollout), consisting of every step that needs to be taken in order to gracefully shutdown and restart each managed server in the cluster, while ensuring that all sessions on that managed server are available to the other managed servers. The workflow will also ensure that all of the running apps on a managed server are fully ready to accept requests from the end-users before moving on to the next node. The rolling restart is complete once every managed server in the cluster has been restarted.
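The per-node loop described above can be sketched in plain Python (this is an illustration of the workflow logic, not the actual rollout implementation; all names are hypothetical).

```python
# Minimal sketch of a rolling restart: take each node offline in turn,
# restart its servers, and wait until its applications report ready
# before moving to the next node. Only one node is ever offline at a time.
def rolling_restart(nodes, restart, is_ready, reroute):
    """nodes: ordered node names; restart/is_ready/reroute are callbacks."""
    for node in nodes:
        active = [n for n in nodes if n != node]
        reroute(node, active)        # send the node's traffic elsewhere
        restart(node)                # graceful shutdown and restart
        while not is_ready(node):    # wait for apps to accept requests
            pass
    return True

# Toy usage: record the order in which nodes are restarted.
order = []
rolling_restart(
    ["node1", "node2", "node3"],
    restart=order.append,
    is_ready=lambda n: True,
    reroute=lambda n, active: None,
)
```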

A diagram illustrating this process on a very simple topology is shown below.  In the diagram you can see that a node is taken offline (shown in red) and end-user requests that would have gone to that node are re-routed to active nodes.  Once the servers on the offline node have been restarted and their applications are again ready to receive requests, that node is added back to the pool of active nodes and the rolling restart moves on to the next node.

Animated image illustrating a rolling restart

 Illustration of a Rolling Restart Across a Cluster.

The rolling restart functionality was introduced based on customer feedback.  Some customers have a policy of preemptively restarting their managed servers in order to refresh the memory usage of applications running on top of them. With this feature we are greatly simplifying that tedious and time consuming process, and doing so in a way that doesn’t affect end-users.

For more information about Rolling Restarts with Zero Downtime Patching, view the documentation.

Thursday Oct 29, 2015

Oracle WebLogic Server 12.2.1 Continuous Availability

New in Oracle WebLogic Server 12.2.1: Continuous Availability! Continuous Availability is an end-to-end solution for building multi data center architectures. With Continuous Availability, applications running in multi data center environments can run in Active-Active configurations continuously. When one site fails, the other site recovers the work of the failed site. During upgrades, applications continue to run with zero downtime. What ties it all together is automated data site failover, reducing human error and risk during failover or switchover events.

Reduce Application Downtime

· WebLogic Zero Down Time Patching (ZDT): Automatically orchestrates the rollout of patches and updates, while avoiding downtime and session loss. Reduces risk, cost, and session downtime by automating the rollout process. ZDT automatically retries steps on failure, and rolls back the rollout if the retries fail. Please read the blog Zero Downtime Patching Released! to learn more about this feature.

· WebLogic Multitenant Live Partition Migration: In Multitenant environments, Live Partition Migration is the ability to move running partitions and resource groups from one cluster to another without impacting application users. During upgrades, for load balancing, or ahead of imminent failure, partitions can be migrated with zero impact to applications.

· Coherence Persistence: Persists cache data and metadata to durable storage. In case of failure of one or more Coherence servers, or the entire cluster, the persisted data and metadata can be recovered.

Replicate State for Multi-Datacenter Deployments

· WebLogic Cross Domain XA Recovery: When a WebLogic Server domain fails in one site, or the entire site comes down, transactions can be automatically recovered in a domain on the surviving site. This allows automated transaction recovery in Active-Active Maximum Availability Architectures.

· Coherence Federated Caching: Distributes Coherence updates across distributed geographical sites with conflict resolution. The modes of replication are Active-Active, with data being continuously replicated and providing applications access to their local cached data; Active-Passive, with the passive site serving as a backup of the active site; and Hub-Spoke, where the hub replicates the cache data to distributed spokes.

Operational Support for Site Failover

· Oracle Traffic Director (OTD): Fast, reliable, and scalable software load balancer that routes traffic to application servers and web servers in the network. Oracle Traffic Director is aware of server availability; when a server is added to the cluster, OTD starts routing traffic to that server. OTD itself can be highly available, in either Active-Active or Active-Passive mode.

· Oracle Site Guard: Provides end-to-end disaster recovery automation. Oracle Site Guard automates failover or switchover by starting and stopping site components in a predetermined order, running scripts, and performing post-failover checks. Oracle Site Guard minimizes downtime and human error during failover or switchover.

Continuous Availability provides flexibility by supporting different topologies to meet application needs.

· Active-Active Application Tier with Active-Passive Database Tier

· Active-Passive Application Tier with Active-Passive Database Tier

· Active-Active Stretch Cluster with Active-Passive Database Tier

Continuous Availability provides applications with maximum availability and productivity, data integrity and recovery, local access to data in multi data center environments, real-time access to data updates, automated failover and switchover of sites, and reduced human error and risk during failover and switchover. Protect your applications from downtime with Continuous Availability. If you want to learn more, please read the Continuous Availability documentation or watch the Continuous Availability video.

Wednesday Oct 28, 2015

Zero Downtime Patching Released!

Patching and Updating WebLogic servers just got a whole lot easier!  The release of Zero Downtime Patching marks a huge step forward in Oracle's commitment both to simplifying the maintenance of WebLogic servers, and to our ability to provide continuous availability.

Zero Downtime Patching allows you to rollout distributed patches to multiple clusters or to your entire domain with a single command. All without causing any service outages or loss of session data for the end-user. It takes what was once a tedious and time-consuming task and replaces it with a consistent, efficient, and resilient automated process.

By automating this process, we're able to drastically reduce the amount of human input required (and with it, the opportunity for error), and we're able to verify the input that is given before making any changes. This will have a huge impact on the consistency and reliability of the process, and it will also greatly improve its efficiency.

The process is resilient in that it can retry steps when there are errors, it can pause for problem resolution and resume where it left off, or if desired, it can revert the entire environment back to its original state.
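The retry-then-revert behavior described above can be sketched as a small workflow runner (an illustrative pattern in plain Python, not the actual ZDT implementation; all names are hypothetical).

```python
# Each workflow step is a (do, undo) pair. A step is retried on failure;
# if it still fails, the already-completed steps are undone in reverse order,
# reverting the environment to its original state.
def run_workflow(steps, retries=1):
    """Return True on success, False after a full revert."""
    done = []
    for do, undo in steps:
        for attempt in range(retries + 1):
            try:
                do()
                done.append(undo)
                break
            except Exception:
                if attempt == retries:          # out of retries: revert all
                    for u in reversed(done):
                        u()
                    return False
    return True

# Toy usage: the second step always fails, so the first step is reverted.
def fail():
    raise RuntimeError("simulated patch failure")

log = []
steps = [
    (lambda: log.append("patch node1"), lambda: log.append("revert node1")),
    (fail, lambda: None),
]
assert run_workflow(steps) is False
assert log == ["patch node1", "revert node1"]
```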

As an administrator, you create and verify a patched OracleHome archive with existing and familiar tools, and place the archive on each node that you want to upgrade. Then, a simple command like the one below will handle the rest.

rolloutOracleHome("Cluster1, Cluster2", "/pathTo/patchedOracleHome.jar", "/pathTo/backupOfUnpatchedOracleHome")

The way the process works is that we take advantage of existing clustering technology, combined with an Oracle Traffic Director (OTD) load balancer, to take individual nodes offline one at a time to be updated. We communicate with the load balancer and instruct it to redirect requests to active nodes. We also created some advanced techniques for preserving active sessions, so the end-user will never even know the patching is taking place.

We can leverage this same process for updating the Java version used by servers, and even for doing some upgrades to running applications, all without service downtime for the end-user.

There's a lot of exciting aspects to Zero Downtime (ZDT) Patching that we will be discussing here, so check back often!

For more information about Zero Downtime Patching, view the documentation.

WebLogic Server Multitenant Info Sources

In Will Lyons’s blog entry, he introduced the idea of WebLogic Server Multitenant, and there have been a few other blog entries related to WebLogic Server Multitenant since then. Besides these blogs and the product documentation, there are a couple of other things to take a look at:

I just posted a video on YouTube that gives a high-level introduction to WebLogic Server Multitenant. This video is a little bit longer than my other videos, but there are a lot of good things to talk about in WebLogic Server Multitenant.

We also have a datasheet, which includes a fair amount of detail regarding the value and usefulness of WebLogic Server Multitenant.

I’m at OpenWorld this week where we are seeing a lot of interest in these new features. One aspect of value that seems to keep coming up is the value of running on a shared platform. There are cases where every time a new environment is added, it needs to be certified against security requirements or standard operating procedures. By sharing a platform, those certifications only need to be done once for the environment. New applications deployed in pluggable partitions would not need a ground-up certification. This can mean faster roll-out times/faster time to market and reduced costs.

 That’s all for now. Keep your eye on this blog. More info coming soon!

Tuesday Oct 27, 2015

Resource Consumption Management in WebLogic Server MultiTenant 12.2.1 to Control Resource Usage of Domain Partitions

[This blog post is part of a series of posts that introduce you to new features in the recently announced Oracle WebLogic Server 12.2.1, and introduces an exciting performance isolation feature that is part of it.] 

With the increasing push to "doing more with less" in the enterprise, system administrators and deployers are constantly looking to increase density and improve hardware utilization for their enterprise deployments. The support for micro-containers/pluggable Domain Partitions in WebLogic Server Multitenant helps system administrators collocate their existing siloed business-critical Java EE deployments into a single Multitenant domain.

Say a system administrator creates two Partitions, "Red" and "Blue", in a shared JVM (a WebLogic Multitenant Server instance), and deploys Java EE applications and resources to them. The system administrator would like to avoid the situation where one partition's applications (say, the "Blue" partition's) hog all shared resources in the Server instance's JVM (heap) or the operating system (CPU, file descriptors), negatively affecting the "Red" partition applications' access to these resources.

Runtime Isolation

Therefore, while consolidating existing enterprise workloads into a single Multitenant Server instance, system administrators require the ability to track, manage, monitor, and control usage of shared resources by collocated Domain Partitions so that:


  • One Partition doesn't consume all available resources and starve other collocated partitions. This helps a system administrator plan for, and support, consistent performance for all collocated partitions.
  • Available resources are allocated fairly and efficiently among collocated partitions. This helps a system administrator confidently place complementary workloads in the same environment, while achieving enhanced density and great cost savings.

Control Resource Consumption Management


In Fusion Middleware 12.2.1, Oracle WebLogic Server Multitenant supports establishing resource management policies on the following resources:


  • Heap Retained: track and control the amount of heap retained by a Partition
  • CPU Utilization: track and control the amount of CPU utilization used by a Partition
  • Open File Descriptors: track and control the number of open file descriptors (due to file I/O, sockets, etc.) used by a Partition

Recourse Actions

When a trigger is breached, a system administrator may want to react by automatically taking certain recourse actions in response. The following actions are available out of the box with WebLogic:

  • Notify: inform the administrator that a threshold has been surpassed
  • Slow: reduce the partition's ability to consume resources, predominantly through manipulation of work manager settings; this should cause the system to self-correct in certain situations
  • Fail: reject requests for the resource, i.e., throw an exception (currently supported only for file descriptors)
  • Stop: as an extreme step, initiate the shutdown sequence for the offending partition on the current server instance


The Resource Consumption Management feature in Oracle WebLogic Server Multitenant enables a system administrator to specify resource consumption management policies on resources, and direct WebLogic to automatically take specific recourse actions when the policies are violated. A policy can be created as one of the following two types:

  • Trigger: This is useful when resource usage by Partitions is predictable, and takes the form "when a resource's usage by a Partition crosses a threshold, take a recourse action".

For example, a sample resource consumption policy that a system administrator may establish on the "Blue" Partition to ensure that it doesn't run away with all the heap looks like: when the "Retained Heap" (resource) usage for "Blue" (Partition) crosses "2 GB" (trigger), "stop" (action) the partition.
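The trigger form above can be sketched in plain Python (an illustration of the policy shape, not the WebLogic implementation; names and values are hypothetical).

```python
# A trigger policy: when a partition's usage of a resource crosses a
# threshold, return the configured recourse action (notify/slow/fail/stop).
GB = 1024 ** 3

def evaluate_trigger(usage, threshold, action):
    """Return the recourse action if usage crosses the threshold, else None."""
    return action if usage > threshold else None

# "When Retained Heap usage for Blue crosses 2 GB, stop the partition."
assert evaluate_trigger(usage=3 * GB, threshold=2 * GB, action="stop") == "stop"
assert evaluate_trigger(usage=1 * GB, threshold=2 * GB, action="stop") is None
```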


  • Fair share: Similar to the Work Manager fair-share policy in WebLogic, this policy allows a system administrator to specify "shares" of a bounded-size shared resource for a Partition. WebLogic then ensures that the resource is shared effectively (yet fairly) by competing consumers while honouring the "shares" allocated by the system administrator.

For example, a system administrator who prefers the "Red" partition over "Blue" may set the fair share for the "CPU Utilization" resource in the ratio 60:40 in favour of "Red".

When complementary workloads are deployed to collocated partitions, fair-share policies also help achieve maximal utilization of resources. For instance, when there are no or limited requests for the "Blue" partition, the "Red" partition is allowed to "steal" and use all the available CPU time. When traffic resumes on the "Blue" partition and there is contention for CPU, WebLogic allocates CPU time per the fair-share ratio set by the system administrator. This helps system administrators reuse a single shared infrastructure, saving infrastructure costs, while still retaining control over how those resources are allocated to Partitions.
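The fair-share idea above, where shares only matter under contention and an idle partition's capacity can be "stolen", can be sketched in plain Python (illustrative only; the 60:40 numbers mirror the Red/Blue example).

```python
# Split 100% of CPU by share ratio among partitions that currently demand
# CPU; partitions with no demand contribute their capacity to the busy ones.
def allocate_cpu(shares, demand):
    """shares: partition -> share weight; demand: partition -> pending work."""
    active = {p for p, d in demand.items() if d > 0}
    if not active:
        return {p: 0.0 for p in shares}
    total = sum(shares[p] for p in active)
    return {p: (100.0 * shares[p] / total if p in active else 0.0)
            for p in shares}

shares = {"Red": 60, "Blue": 40}
# Under contention, CPU is split 60:40.
print(allocate_cpu(shares, {"Red": 1, "Blue": 1}))   # {'Red': 60.0, 'Blue': 40.0}
# With Blue idle, Red may use all the available CPU time.
print(allocate_cpu(shares, {"Red": 1, "Blue": 0}))   # {'Red': 100.0, 'Blue': 0.0}
```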


Policy configurations can be defined at the domain level and reused across multiple pluggable Partitions, or they can be defined uniquely for a Partition. Policy configurations are flexible enough to support different combinations of trigger-based and fair-share policies for multiple resources to meet your unique business requirements. Policies can also be dynamically reconfigured without requiring a restart of the Partition.

The picture below shows how a system administrator could configure two resource consumption management policies (a stricter "trial" policy and a more lax "approved" policy) and how they could be assigned to individual Domain Partitions. Heap and CPU usage by the two Domain Partitions is then governed by the policies associated with each of them.

WLS 12.2.1 RCM resource manager sample schematic

Enabling Resource Management

The Resource Consumption Management feature in WebLogic Server 12.2.1 is built on top of the resource management support in Oracle JDK 8u40. WebLogic RCM requires Oracle JDK 8u40 and the G1 garbage collector. In WebLogic Server Multitenant, you need to pass the following additional JVM arguments to enable Resource Management:

-XX:+UnlockCommercialFeatures -XX:+ResourceManagement -XX:+UseG1GC

Track Resource Consumption

Resource consumption metrics are available on a per-partition basis through a monitoring MBean, PartitionResourceMetricsRuntimeMBean. System administrators may use these detailed usage metrics for tracking, sizing, analysis, and monitoring, and for configuring business-specific Watch and Harvester WLDF rules.

Resource Consumption Managers in WebLogic Multitenant help provide the runtime isolation and protection needed for applications running in your shared and consolidated environments.

For More Information

This blog post only scratches the surface of the possibilities with the Resource Consumption Management feature. For more details on this feature, how to configure resource consumption management policies in a consolidated Multitenant domain using the WebLogic Scripting Tool (WLST) and Fusion Middleware Control, and best practices, please refer to the detailed technical document "Resource Consumption Management (RCM) in Oracle WebLogic Server Multitenant (MT) - Flexibility and Control Over Resource Usage in Consolidated Environments".

The WebLogic Multitenant documentation chapter "Configuring Resource Consumption Management" also has more details on using the feature.

This feature is a result of deep integration between the Oracle JDK and WebLogic Server. If you are attending Oracle OpenWorld 2015 in San Francisco, head over to the session titled "Multitenancy in Java: Innovation in the JDK and Oracle WebLogic Server 12.2.1" [CON8633] (Wednesday, Oct 28, 1:45 p.m. | Moscone South 302) to hear us talk about this feature in more detail.

We are also planning a series of videos on using the feature, and we will update this blog entry as they become available.


Monday Oct 26, 2015

Announcing Oracle WebLogic Server 12.2.1

Oracle WebLogic Server 12.2.1 (12cR2) is now generally available.  The release was highlighted in Larry Ellison's Oracle OpenWorld keynote Sunday night, and was formally announced today in Inderjeet Singh's Oracle OpenWorld General Session and in a press release.  Oracle WebLogic Server 12.2.1 is available for download on the Oracle Technology Network (OTN) at the Oracle Fusion Middleware Download page and the Oracle WebLogic Server Download page.  The product documentation is posted along with all documentation for Oracle Fusion Middleware 12.2.1, which has also been made available.

Oracle WebLogic Server 12.2.1 is the biggest WebLogic Server product release in many years.  We intend to provide extensive detail about its new features in videos and blogs posted here over the coming weeks.  Consider this note an initial summary of some of the major new feature areas:

  • Multitenancy
  • Continuous Availability
  • Developer Productivity and Portability to the Cloud


Multitenancy

WebLogic Server multitenancy enables users to consolidate applications and reduce cost of ownership, while maintaining application isolation and increasing flexibility and agility.  Using multitenancy, different applications (or tenants) can share the same server JVM (or cluster), and the same WebLogic domain, while maintaining isolation from each other by running in separate domain partitions.

A domain partition is an isolated subset of a WebLogic Server domain and its servers.  Domain partitions act like microcontainers, encapsulating applications and the resources (datasources, JMS servers, etc.) they depend on.  Partitions are isolated from each other, so that applications in one partition do not disrupt applications running in other partitions in the same server or domain.  An amazing set of features provides these degrees of isolation.  We will elaborate on them here over time.

Though partitions are isolated, they share many resources - the physical system or VM they run on, the operating system, the JVM, and WebLogic Server libraries.  Because they share resources, they use fewer resources.  Customers can consolidate applications from separate domains into fewer domains, running in fewer JVMs on fewer physical systems.  Consolidation means fewer entities to manage, reduced resource consumption, and lower cost of ownership.

Partitions are easy to use.  Applications can be deployed to partitions without changes, and we will be providing tools to enable users to migrate existing applications to partitions.  Partitions can be exported from one domain and imported into other domains, simplifying migration of applications across development, test and production environments, and across private and public clouds.  Partitions increase application portability and agility, giving development, DevOps, test and production teams more flexibility in how they develop, release and manage production applications.

WebLogic Server's new multitenancy features are integrated with innovations introduced across the Oracle JDK, Oracle Coherence, Oracle Traffic Director, and Oracle Fusion Middleware, and are closely aligned with Oracle Database Pluggable Databases.  Over time you will see multitenancy being leveraged more broadly in the Oracle Cloud and Oracle Fusion Middleware.  Multitenancy represents a significant new innovation for WebLogic Server and for Oracle.

Continuous Availability

Oracle WebLogic Server 12.2.1 introduces new features to minimize planned and unplanned downtime in multi data center configurations.  Many WebLogic Server customers have implemented disaster recovery architectures to provide application availability and business continuity.  WebLogic Server's new continuous availability features will make it easier to create, optimize and manage such configurations, while increasing application availability.  They include the following:

  • Zero-downtime patching provides an orchestration framework that controls the application of patches across clusters to maintain application availability during patching.
  • Live partition migration enables migration of partitions across clusters within the same domain, effectively moving applications from one cluster to another, again while maintaining application availability during the process.
  • Oracle Traffic Director provides high performance, high availability load balancing, and enables optimized traffic routing and load balancing during patching and partition migration operations.
  • Cross-site transaction recovery enables automated transaction recovery across active-active configurations when a site fails.
  • Oracle Coherence 12.2.1 federated caching allows users to replicate data grid updates across clusters and sites to maintain data grid consistency and availability in multi data center configurations.
  • Oracle Site Guard enables the automation and reliable orchestration of failover and failback operations associated with site failures and site switchovers.

These features, combined with existing availability features in WebLogic Server and Coherence, give users powerful capabilities to meet requirements for business continuity, making WebLogic Server and Coherence the application infrastructure of choice for highly available enterprise environments.  We intend to augment these capabilities over time for use in Oracle Cloud and Oracle Fusion Middleware.

      Developer Productivity and Portability to the Cloud

      Oracle WebLogic Server 12.2.1 enables developers to be more productive, and enables portability of applications to the Cloud.    To empower the individual developer, Oracle WebLogic Server 12.2.1 supports Java SE 8 and the full Java Enterprise Edition 7 (Java EE 7) platform, including new APIs for developer innovation.   We're providing a new lightweight Quick Installer distribution for developers which can be easily patched for consistency with test and production systems.  We've improved deployment performance, and updated the IDE support provided in Oracle Enterprise Pack for Eclipse, Oracle JDeveloper and Oracle NetBeans.   Improvements for WebLogic Server developers are paired with dramatic new innovations for building Oracle Coherence applications using standard Java SE 8 lambdas and streams.

      Team development and DevOps tools complement the features provided above, providing portability between on-premises environments and the Oracle Cloud.   For example, Eclipse- and Maven-based development and build environments can easily be pushed to the Developer Cloud Service in the Oracle Cloud, to enable team application development in a shared, continuous integration environment. Applications developed in this manner can be deployed to traditional WebLogic Server domains, or to WebLogic Server partitions, and soon to Oracle WebLogic Server 12.2.1 running in the Java Cloud Service.   New cloud management and portability features, such as comprehensive REST management APIs, automated elasticity for dynamic clusters, partition management, and continued Docker certification provide new tools for flexible deployment and management control of applications in production both on premises and in the Oracle Cloud.  
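To give a feel for the REST management APIs mentioned above, here is a minimal sketch of building an authenticated request against the 12.2.1 REST root at /management/weblogic/latest. The host name adminhost, port 7001, credentials, and the serverRuntimes resource path are illustrative assumptions for your own environment; the request is constructed but not actually sent.

```python
# Hypothetical sketch: preparing a GET against the WebLogic Server 12.2.1
# REST management API.  Host, port, and credentials are placeholders.
import base64
import urllib.request

def build_request(host, port, user, password, resource):
    """Build an authenticated GET request for a management resource."""
    url = "http://%s:%d/management/weblogic/latest/%s" % (host, port, resource)
    req = urllib.request.Request(url)
    # WebLogic's REST endpoints accept HTTP Basic authentication.
    token = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    req.add_header("Accept", "application/json")
    return req

# Example: a request that would list the server runtimes in the domain.
req = build_request("adminhost", 7001, "weblogic", "welcome1",
                    "domainRuntime/serverRuntimes")
print(req.full_url)
```

Sending the request with urllib.request.urlopen(req) would return a JSON document describing each running server, which scripts can use for monitoring or automation alongside WLST.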

      All these and many other features make Oracle WebLogic Server 12.2.1 a compelling release with technical innovation and business value for customers building Java applications.   Please download the product, review the documentation and evaluate it for yourself.  And be sure to check back here for more information from our team.
