Monday Nov 16, 2015

WebLogic 12.2.1 Multitenancy Support for Resource Adapters

One of the key features in WLS 12.2.1 is multi-tenancy support; you can learn more about the concept in Tim Quinn's blog: Domain Partitions for Multi-tenancy in WebLogic Server. Besides deploying a resource adapter at the domain level, you can also deploy it to a partition's resource group or to a resource group template. This is done by selecting the resource group scope or resource group template scope while deploying the resource adapter in the console. The following screenshot shows the deployment page in the console. In this example, we have a resource group Partition1-rg in Partition1 and a resource group template TestRGT:

Deploy RA to MT in Console

When you select the 'Global' scope, the resource adapter is deployed at the domain level. If you select 'TestRGT template', the resource adapter is deployed to the resource group template TestRGT; and if Partition1's resource group references TestRGT, the resource adapter is deployed to Partition1. If you select 'Partition1-rg in Partition1', the resource adapter is deployed to Partition1.

You can learn more about multi-tenancy deployment in Hong Zhang's blog: Multi Tenancy Deployment.

If you deploy resource adapters to different partitions, the resources in different partitions will not interfere with each other, because:

  1. A resource adapter's JNDI resources in one partition cannot be looked up from another partition; you can only look up resource adapter resources bound in the same partition.
  2. Resource adapter classes packaged in a resource adapter archive are loaded by different classloaders when they are deployed to different partitions, so you do not need to worry about mistakenly using resource adapter classes loaded by another partition.
  3. If you somehow get a reference to one of the following resource adapter resource objects belonging to another partition, you still cannot use it: you will get an exception when calling some of the methods of that object.
    • javax.resource.spi.work.WorkManager
    • javax.resource.spi.BootstrapContext
    • javax.resource.spi.ConnectionManager
    • javax.validation.Validator
    • javax.validation.ValidatorFactory
    • javax.enterprise.inject.spi.BeanManager
    • javax.resource.spi.ConnectionEventListener

After the resource adapter is deployed, you can access the domain-level resource adapter's runtime MBean through the 'ConnectorServiceRuntime' directory under ServerRuntime, using the WebLogic Scripting Tool (WLST):

View Connector RuntimeMBean in WLST

In the above example, we have a resource adapter named 'jca_ra' deployed at the domain level, so we can see its runtime MBean under ConnectorServiceRuntime/ConnectorService. jms-internal-notran-adp and jms-internal-xa-adp are also listed here; they are WebLogic internal resource adapters.

But how can we monitor resource adapters deployed in a partition? They are under PartitionRuntimes:

View MT Connector RuntimeMBean in WLST

In the above example, we have a resource adapter named 'jca_ra' deployed in Partition1.
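For reference, here is a minimal WLST sketch of the navigation shown in the two screenshots above; the child directory names are taken from this example and may differ by release, so browse with ls() if your tree does not match:

connect('weblogic', 'welcome1', 't3://localhost:7001')
serverRuntime()
# Domain-level resource adapters:
cd('ConnectorServiceRuntime')
ls()
# Resource adapters deployed to Partition1:
cd('/PartitionRuntimes/Partition1/ConnectorServiceRuntime')
ls()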


You can also get a resource adapter's runtime MBean through JMX (see how to access runtime MBeans using JMX):

      JMXServiceURL serviceURL = new JMXServiceURL("t3", hostname, port,          
                             "/jndi/weblogic.management.mbeanservers.domainruntime");
      Hashtable h = new Hashtable();
      h.put(Context.SECURITY_PRINCIPAL, user);
      h.put(Context.SECURITY_CREDENTIALS, passwd);
      h.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES,
                            "weblogic.management.remote");
      h.put("jmx.remote.x.request.waiting.timeout", new Long(10000));
      JMXConnector connector = JMXConnectorFactory.connect(serviceURL, h);
      MBeanServerConnection connection = connector.getMBeanServerConnection();
      Set<ObjectName> names = connection.queryNames(new ObjectName(
                            "*:Type=ConnectorComponentRuntime,Name=jca_ra,*"), null);
      for (ObjectName oname : names) {
          Object o = MBeanServerInvocationHandler.newProxyInstance(connection, oname, 
                               ConnectorComponentRuntimeMBean.class, false);
          System.out.println(o);
      }

Running the above example code in a domain that has a resource adapter named 'jca_ra' deployed to both the domain and Partition1, you will get the following result:

[MBeanServerInvocationHandler]com.bea:Name=jca_ra,ServerRuntime=admin,Location=admin,Type=ConnectorComponentRuntime,ApplicationRuntime=jca_ra
[MBeanServerInvocationHandler]com.bea:Name=jca_ra,ServerRuntime=admin,Location=admin,Type=ConnectorComponentRuntime,ApplicationRuntime=jca_ra,PartitionRuntime=Partition1

You can see that the runtime MBean (ConnectorComponentRuntime) of the resource adapter deployed to Partition1 has a PartitionRuntime key property in its ObjectName. So you can query Partition1's resource adapter runtime MBean with the following code:

connection.queryNames(new ObjectName(
        "*:Type=ConnectorComponentRuntime,Name=jca_ra,PartitionRuntime=Partition1,*"), null);

Monday Nov 09, 2015

Update your Java Version Easily with ZDT Patching

Another great feature of ZDT Patching is that it provides a simple way to update the Java version used to run WebLogic. Keeping up to date with Java security patches is an ongoing task of critical importance. Prior to ZDT Patching, there was no easy way to migrate all of your managed servers to a new Java version, but ZDT Patching makes this a simple two-step procedure.

The first step is to install the updated Java version to all of the nodes that you will be updating. This can be done manually or by using any of the normal software distribution tools typically used to manage enterprise software installations. This operation can be done outside a planned maintenance window as it will not affect any running servers. Note that when installing the new Java version, it must not overwrite the existing Java directory, and the location of the new directory must be the same on every node.

The second step is to simply run the Java rollout using a WLST command like this one:

rolloutJavaHome("Cluster1", "/pathTo/jdk1.8.0_66")

In this example, the Admin Server will start the rollout to coordinate the rolling restart of each node in the cluster named “Cluster1”. While the managed servers and NodeManager on a given node are down, the path to the Java executable that they are started with will be updated. The rollout will then start the managed servers and NodeManager from the new Java path.
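If you want to watch the rollout from the same WLST session, the rollout commands return a workflow progress object that can be polled; here is a hedged sketch (the method name on the progress object is an assumption, so check help('rolloutJavaHome') for the exact API):

connect('weblogic', 'welcome1', 't3://adminhost:7001')
progress = rolloutJavaHome('Cluster1', '/pathTo/jdk1.8.0_66')
# Poll the returned workflow progress object (method name assumed):
print progress.getProgressString()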

Easy as that!

For more information about upgrading Java with Zero Downtime Patching, view the documentation.

Wednesday Nov 04, 2015

Application MBeans Visibility in Oracle WebLogic Server 12.2.1

Oracle WebLogic Server (WLS) version 12.2.1 supports a feature called Multi-Tenancy (WLS MT). WLS MT introduces the partition, partition administrator, and partition resource concepts. Partition isolation is enforced when accessing resources (e.g., MBeans) in a domain. WLS administrators can see MBeans in the domain and the partitions, but a partition administrator, as well as other partition roles, is only allowed to see the MBeans in that partition, not in other partitions.

In this article, I will explore the visibility support on the application MBeans to demonstrate partition isolation in WLS MT in 12.2.1. This includes

  • An overview of application MBean visibility in WLS MT
  • A simple use case that demonstrates which MBeans are registered on a WLS MBeanServer, and which MBeans are visible to WLS administrators or partition administrators
  • Links to reference materials for more information

The use case in this article runs against a domain created in another article, "Create WebLogic Server Domain with Partitions using WLST in 12.2.1". In this article, I will

  • Briefly show the domain topology
  • Demonstrate how to deploy an application to the domain and partitions
  • Demonstrate how to access the application MBeans via JMX clients using global/domain url or partition specific url
  • Demonstrate how to enable debugging/logging

1. Overview

An application can be deployed to WLS servers per partition, so the application is multiplied across partitions. WLS contains three MBeanServers: the Domain Runtime MBeanServer, the Runtime MBeanServer, and the Edit MBeanServer. Each MBeanServer can be used for all partitions. WLS needs to ensure that the MBeans registered on each MBeanServer by the application are unique for each partition.

The application MBean visibility in WLS MT can be illustrated by several parts:

  • Partition Isolation
  • Application MBeans Registration
  • Query Application MBeans
  • Access Application MBeans

1.1 Partition Isolation

A WLS administrator can see application MBeans in partitions. But a partition administrator for a partition is not able to see application MBeans from the domain or other partitions.  

1.2 Application MBeans Registration

When an application is deployed to a partition, application MBeans are registered during the application deployment. WLS adds a partition-specific key (e.g., Partition=<partition name>) to the MBean ObjectNames when registering them on the WLS MBeanServer. This ensures that MBean ObjectNames are unique even when they are registered from a multiplied application.

The figure on the right shows a WLS domain and an application, and how the application's MBean ObjectNames differ when registered on the WLS MBeanServer for the domain and for each partition.

The WLS domain is configured with two partitions: cokePartition and pepsiPartition.

An application registers one MBean, e.g., testDomain:type=testType, during the application deployment.

The application is deployed to the WLS domain, cokePartition, and pepsiPartition. Since a WLS MBeanServer instance is shared by the domain, cokePartition, and pepsiPartition, there are three application MBeans registered on the same MBeanServer after the three application deployments:

  • The MBean that belongs to the domain:         testDomain:type=testType
  • The MBean that belongs to cokePartition:  testDomain:Partition=cokePartition,type=testType
  • The MBean that belongs to pepsiPartition: testDomain:Partition=pepsiPartition,type=testType

The MBeans that belong to the partitions contain a Partition key property in their ObjectNames.

1.3 Query Application MBeans

JMX clients, e.g., WebLogic WLST, JConsole, etc., connect to a global/domain URL or a partition-specific URL, then run a query on the WebLogic MBeanServer. The query results differ:

  • When connecting to a global/domain URL, the application MBeans that belong to the partitions are visible to those JMX clients.
  • When connecting to a partition specific URL, WLS filters the query results. Only the application MBeans that belong to that partition are returned. MBeans belonging to the domain and other partitions are filtered out.

1.4 Access Application MBeans

JMX clients, e.g., WebLogic WLST, JConsole, etc., connect to a URL and invoke a JMX operation, e.g., getAttribute(<MBean ObjectName>, <attributeName>). The operation is actually performed on different MBeans, as the sketch after this list illustrates:

  • When connecting to a global/domain URL, the getAttribute() is called on the MBean that belongs to the domain. (The MBean without the Partition key property on the MBean ObjectName.)
  • When connecting to a partition specific URL, the getAttribute() is called on the MBean that belongs to that partition. (The MBean with the Partition key property on the MBean ObjectName.)
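Here is a minimal WLST/Jython sketch of this contrast, using the use case described later in this article; it assumes WLST's custom() tree and its built-in mbs MBeanServerConnection variable can reach the application MBeans:

from javax.management import ObjectName
oname = ObjectName('test.domain:type=testType,name=testName')

connect('weblogic', 'welcome1', 't3://localhost:7001')       # global/domain URL
custom()
print mbs.getAttribute(oname, 'PartitionName')               # prints "DOMAIN"
disconnect()

connect('mtadmin1', 'welcome1', 't3://localhost:7001/coke')  # partition-specific URL
custom()
print mbs.getAttribute(oname, 'PartitionName')               # prints "coke"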

2. Use case

Now I will demonstrate how MBean visibility works in WebLogic Server MT in 12.2.1 to support partition isolation. 

2.1 Domain with Partitions

In the article "Create WebLogic Server Domain with Partitions using WLST in 12.2.1", a domain with two partitions, coke and pepsi, is created. The same domain is used for the use case in this article. Here is a summary of the domain topology:

  • A domain is configured with one AdminServer named "admin", one partition named "coke", one partition named "pepsi". 
  • The "coke" partition contains one resource group named "coke-rg1", targeting to a target named "coke-vt". 
  • The "pepsi" partition contains one resource group named "pepsi-rg1", targeting to a virtual target named "pepsi-vt". 

More specifically, each domain/partition has the following configuration values:

                   Name          User Name   Password
  Domain           base_domain   weblogic    welcome1
  Coke Partition   coke          mtadmin1    welcome1
  Pepsi Partition  pepsi         mtadmin2    welcome2

Please see details in the article "Create Oracle WebLogic Server Domain with Partitions using WLST in 12.2.1" on how to create this domain.

2.2 Application deployment

When the domain is set up and started, an application, "helloTenant.ear", is deployed to the domain. It is also deployed to "coke-rg1" in the "coke" partition and to "pepsi-rg1" in the "pepsi" partition. The deployment can be done using different WLS tools, like FMW Console, WLST, etc. Below are the WLST commands that deploy the application to the domain and the partitions:

edit()
startEdit()
deploy(appName='helloTenant', target='admin', path='${path-to-the-ear-file}/helloTenant.ear')
deploy(appName='helloTenant-coke',partition='coke',resourceGroup='coke-rg1',path='${path-to-the-ear-file}/helloTenant.ear')
deploy(appName='helloTenant-pepsi',partition='pepsi',resourceGroup='pepsi-rg1',path='${path-to-the-ear-file}/helloTenant.ear')
save()
activate()

For other WLS deployment tools, please see the "Reference" section.

2.3 Access Application MBeans

During the application deployment, application MBeans are registered on the WebLogic Server MBeanServer. As mentioned in section 1.2 Application MBeans Registration, multiple MBeans are registered even though there is only one application.

There are multiple ways to access application MBeans:

  • WLST
  • JConsole
  • JSR 160 APIs

2.3.1 WLST

The WebLogic Scripting Tool (WLST) is a command-line scripting interface that system administrators and operators use to monitor and manage WebLogic Server instances and domains. To start WLST:

$MW_HOME/oracle_common/common/bin/wlst.sh

Once WLST is started, users can connect to the server by providing a connection URL. The sections below show the different values of an application MBean attribute seen by the WLS administrator or a partition administrator when different connection URLs are provided.

2.3.1.1 WLS administrator

WLS administrator 'weblogic' connects to the domain using the following connect command:

connect("weblogic", "welcome1", "t3://localhost:7001")

The picture below shows that there are 3 MBeans registered on the WebLogic Server MBeanServer whose domain is "test.domain", along with the value of the attribute "PartitionName" on each MBean.

  • test.domain:Partition=coke,type=testType,name=testName
    • belongs to the coke partition. The value of the PartitionName attribute is "coke"
  • test.domain:Partition=pepsi,type=testType,name=testName
    • belongs to the pepsi partition. The value of the PartitionName attribute is "pepsi"
  • test.domain:type=testType,name=testName
    • belongs to the domain. No Partition key property in the ObjectName. The value of the PartitionName attribute is "DOMAIN"

An MBean belonging to a partition contains a Partition key property in its ObjectName. The Partition key property is added internally by WLS when the MBean is registered in a partition context.
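The listing in the picture can be reproduced in the same WLST session with a query like the following (a hedged sketch assuming WLST's custom() tree and its built-in mbs MBeanServerConnection variable):

custom()
from javax.management import ObjectName
for name in mbs.queryNames(ObjectName('test.domain:*'), None):
    print name, '->', mbs.getAttribute(name, 'PartitionName')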

2.3.1.2 Partition administrator for coke

Similarly, the partition administrator 'mtadmin1' for coke can connect to the coke partition. The connection URL uses "/coke", which is the uri prefix defined in the virtual target coke-vt. (Check config/config.xml in the domain.)

connect("mtadmin1", "welcome1", "t3://localhost:7001/coke")

From the picture below, when connecting to the coke partition, there is only one MBean listed:

test.domain:type=testType,name=testName

Even though there is no Partition key property in the ObjectName, this MBean still belongs to the coke partition. The value of the PartitionName attribute is "coke".

2.3.1.3 Partition administrator for pepsi

Similarly, the partition administrator 'mtadmin2' for pepsi can connect to the pepsi partition. The connection URL uses "/pepsi", which is the uri prefix defined in the virtual target pepsi-vt.

connect("mtadmin2", "welcome2", "t3://localhost:7001/pepsi")

From the picture below, when connecting to the pepsi partition, there is only one MBean listed:

test.domain:type=testType,name=testName

Even though there is no Partition key property in the ObjectName, same as the one seen by the partition administrator for coke, this MBean still belongs to the pepsi partition. The value of the PartitionName attribute is "pepsi".

2.3.2 JConsole

The JConsole graphical user interface is a built-in tool in the JDK. It is a monitoring tool that complies with the Java Management Extensions (JMX) specification. By using JConsole you can get an overview of the MBeans registered on the MBeanServer.

To start JConsole, do this:

$JAVA_HOME/bin/jconsole \
  -J-Djava.class.path=$JAVA_HOME/lib/jconsole.jar:$JAVA_HOME/lib/tools.jar:$MW_HOME/wlserver/server/lib/wljmxclient.jar \
  -J-Djmx.remote.protocol.provider.pkgs=weblogic.management.remote

where $MW_HOME is the location where WebLogic Server is installed.

Once JConsole is started, WLS administrator and partition administrator can use it to browse the MBeans given the credentials and the JMX service URL.

2.3.2.1 WLS administrator

The WLS administrator "weblogic" provides an JMX service URL to connect to the WLS Runtime MBeanServer like below:

service:jmx:t3://localhost:7001/jndi/weblogic.management.mbeanservers.runtime

When connected as a WLS administrator, the MBean tree in JConsole shows 3 MBeans with "test.domain" in the ObjectName:

The first highlighted ObjectName in the right pane of the picture below is the MBean that belongs to the coke partition. It has the Partition key property: Partition=coke.

The next highlighted ObjectName is the MBean that belongs to the pepsi partition. It has the Partition key property: Partition=pepsi.

The last highlighted ObjectName is the MBean that belongs to the domain. It does not have the Partition key property.

The result here is consistent with what we saw in WLST as the WLS administrator.

2.3.2.2 Partition administrator for coke

The partition administrator "mtadmin1" provides a different JMX service URL to JConsole:

service:jmx:t3://localhost:7001/coke/jndi/weblogic.management.mbeanservers.runtime

When connected via the partition-specific JMX service URL, the partition administrator can only see one MBean:

test.domain:type=testType,name=testName

This MBean belongs to the coke partition and the value of the PartitionName is coke as shown in the picture below. However, there is no Partition key property in the ObjectName.

2.3.2.3 Partition administrator for pepsi

The partition administrator "mtadmin2" provides a different JMX service URL to JConsole:

service:jmx:t3://localhost:7001/pepsi/jndi/weblogic.management.mbeanservers.runtime

When connected via the partition-specific JMX service URL, the partition administrator "mtadmin2" can only see one MBean:

test.domain:type=testType,name=testName

This MBean belongs to the pepsi partition and the value of the PartitionName is pepsi as shown in the picture below.

2.3.3 JSR 160 APIs

JMX clients can use JSR 160 APIs to access the MBeans registered on the MBeanServer. For example, the code below shows how to get a JMXConnector by providing a service URL and an environment, and then read an MBean attribute:

import javax.management.*;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXServiceURL;
import javax.management.remote.JMXConnectorFactory;
import java.util.*;

public class TestJMXConnection {
  public static void main(String[] args) throws Exception {
    JMXConnector jmxCon = null;
    try {
      // Connect to the JMXConnector
      JMXServiceURL serviceUrl = new JMXServiceURL(
        "service:jmx:t3://localhost:7001/jndi/weblogic.management.mbeanservers.runtime");
      System.out.println("Connecting to: " + serviceUrl);
      Hashtable env = new Hashtable();
      env.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES, "weblogic.management.remote");
      env.put(javax.naming.Context.SECURITY_PRINCIPAL, "weblogic");
      env.put(javax.naming.Context.SECURITY_CREDENTIALS, "welcome1");
      jmxCon = JMXConnectorFactory.newJMXConnector(serviceUrl, env);
      jmxCon.connect();

      // Access the MBeans
      MBeanServerConnection con = jmxCon.getMBeanServerConnection();
      ObjectName oname = new ObjectName("test.domain:type=testType,name=testName,*");
      Set<ObjectName> queryResults = con.queryNames(oname, null);
      for (ObjectName theName : queryResults) {
        System.out.print("queryNames(): " + theName);
        String partitionName = (String)con.getAttribute(theName, "PartitionName");
        System.out.println(", Attribute PartitionName: " + partitionName);
      }
    } finally {
      if (jmxCon != null)
        jmxCon.close();
    }
  }
}

To compile and run this code, provide the wljmxclient.jar on the classpath, like:

$JAVA_HOME/bin/java -classpath $MW_HOME/wlserver/server/lib/wljmxclient.jar:. TestJMXConnection

You will get the results below:

Connecting to: service:jmx:t3://localhost:7001/jndi/weblogic.management.mbeanservers.runtime
queryNames(): test.domain:Partition=pepsi,type=testType,name=testName, Attribute PartitionName: pepsi
queryNames(): test.domain:Partition=coke,type=testType,name=testName,  Attribute PartitionName: coke
queryNames(): test.domain:type=testType,name=testName, Attribute PartitionName: DOMAIN

When we change the code to use the partition administrator "mtadmin1",

JMXServiceURL serviceUrl = new JMXServiceURL(
"service:jmx:t3://localhost:7001/coke/jndi/weblogic.management.mbeanservers.runtime");
env.put(javax.naming.Context.SECURITY_PRINCIPAL, "mtadmin1");
env.put(javax.naming.Context.SECURITY_CREDENTIALS, "welcome1");

Running the code will return only one MBean:

Connecting to: service:jmx:t3://localhost:7001/coke/jndi/weblogic.management.mbeanservers.runtime
queryNames(): test.domain:type=testType,name=testName,  Attribute PartitionName: coke

Similar results would be seen for the partition administrator for pepsi: if a pepsi-specific JMX service URL is provided, only the MBean that belongs to the pepsi partition is returned.

2.4 Enable logging/debugging flags

If the MBeans do not appear to behave correctly in WebLogic Server 12.2.1, for example:

  • A partition administrator can see MBeans from the global domain or other partitions when querying the MBeans, or
  • You get JMX exceptions, e.g., javax.management.InstanceNotFoundException, when accessing an MBean

Try the following to triage the errors:

  • If it's a connection problem in JConsole, add -debug on the JConsole command line when starting JConsole.
  • If a partition administrator can see MBeans from the global domain or other partitions when querying the MBeans:
    • When connecting with JMX clients, e.g., WLST, JConsole, or JSR 160 APIs, make sure the host name in the service URL matches the host name defined in the virtual target in config/config.xml in the domain.
    • Make sure the uri prefix in the service URL matches the uri prefix defined in the virtual target in config/config.xml in the domain. (A read-only WLST check for both appears at the end of this section.)
  • If you get JMX exceptions, e.g., javax.management.InstanceNotFoundException, when accessing an MBean:

    • When the MBean belongs to a partition, make sure the partition is started. The application deployment only happens when the partition is started.
    • Enable the debug flags during server startup, like this:
      • -Dweblogic.StdoutDebugEnabled=true -Dweblogic.log.LogSeverity=Debug -Dweblogic.log.LoggerSeverity=Debug -Dweblogic.debug.DebugPartitionJMX=true -Dweblogic.debug.DebugCIC=false
    • Search the server logs for the specific MBean ObjectName you are interested in. Make sure the MBean you are debugging is registered in the correct partition context, and that the MBean operation is called in the correct partition context.

Here are sample debug messages for the MBean "test.domain:type=testType,name=testName" related to the MBean registration, queryNames() invocation, and getAttribute() invocation.

<Oct 21, 2015 11:36:43 PM PDT> <Debug> <PartitionJMX> <BEA-000000> <Calling register MBean test.domain:type=testType,name=testName in partition DOMAIN>
<Oct 21, 2015 11:36:44 PM PDT> <Debug> <PartitionJMX> <BEA-000000> <Calling register MBean test.domain:Partition=coke,type=testType,name=testName in partition coke>
<Oct 21, 2015 11:36:45 PM PDT> <Debug> <PartitionJMX> <BEA-000000> <Calling register MBean test.domain:Partition=pepsi,type=testType,name=testName in partition pepsi>
<Oct 21, 2015 11:36:56 PM PDT> <Debug> <PartitionJMX> <BEA-000000> <queryNames on MBean test.domain:Partition=coke,type=testType,name=testName,* in partition coke>
<Oct 21, 2015 11:36:56 PM PDT> <Debug> <MBeanCIC> <BEA-000000> <getAttribute: MBean: test.domain:Partition=coke,type=testType,name=testName, CIC: (pId = 2d044835-3ca9-4928-915f-6bd1d158f490, pName = coke, appId = helloTenant$coke, appName = helloTenant, appVersion = null, mId = null, compName = null)>

 

    • To check why the partition context is not right, turn on the -Dweblogic.debug.DebugCIC=true flag, in addition to the debug flags mentioned above, when starting WLS servers. Once this flag is used, a lot of messages are logged into the server log. Search for the messages logged by the DebugCIC logger, like

        ExecuteThread: '<thread id #>' for queue: 'weblogic.kernel.Default (self-tuning)'): Pushed

      and for the messages logged by the DebugPartitionJMX logger.

<Oct 21, 2015, 23:59:34 PDT> INVCTXT (24-[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'): Pushed [(pId = 0, pName = DOMAIN, appId = null, appName = null, appVersion = null, mId = null, compName = null)] on top of [(pId = 0, pName = DOMAIN, appId = null, appName = null, appVersion = null, mId = null, compName = null)]. New size is [2]. Pushed by [weblogic.application.ComponentInvocationContextManagerImpl.pushComponentInvocationContext(ComponentInvocationContextManagerImpl.java:173)
...
<Oct 21, 2015 11:59:34 PM PDT> <Debug> <PartitionJMX> <BEA-000000> <Calling register MBean test.domain:type=testType,name=testName in partition DOMAIN>
...
<Oct 21, 2015, 23:59:37 PDT> INVCTXT (29-[STANDBY] ExecuteThread: '2' for queue: 'weblogic.kernel.Default (self-tuning)'): Pushed [(pId = 2d044835-3ca9-4928-915f-6bd1d158f490, pName = coke, appId = helloTenant$coke, appName = helloTenant, appVersion = null, mId = null, compName = null)] on top of [(pId = 2d044835-3ca9-4928-915f-6bd1d158f490, pName = coke, appId = null, appName = null, appVersion = null, mId = null, compName = null)]. New size is [3]. Pushed by
[weblogic.application.ComponentInvocationContextManagerImpl.pushComponentInvocationContext(ComponentInvocationContextManagerImpl.java:173)
...
<Oct 21, 2015 11:59:37 PM PDT> <Debug> <PartitionJMX> <BEA-000000> <Calling register MBean test.domain:Partition=coke,type=testType,name=testName in partition coke>
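For the host name and uri prefix checks mentioned at the start of this section, here is a hedged, read-only WLST sketch (the /VirtualTargets path is an assumption; confirm with ls() at the domain root):

connect('weblogic', 'welcome1', 't3://localhost:7001')
cd('/VirtualTargets/coke-vt')
print cmo.getHostNames(), cmo.getUriPrefix()   # must match the JMX service URL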

3. Conclusion

WebLogic Server 12.2.1 provides a new feature: Multi-Tenancy (MT). With this feature, partition isolation is enforced. Applications can be deployed to the domain and to the partitions. Users in one partition cannot see the resources in other partitions, including MBeans registered by applications. In this article, a use case briefly demonstrates how application MBeans are affected by partition isolation with regard to MBean visibility. For more detailed information, see the "References" section.

4. References

WebLogic Server domain

Config Wizard

WLST command reference  

JConsole

Managing WebLogic Server with JConsole

JSR 160: Java Management Extensions Remote JMX API

WebLogic Server Security

WebLogic Server Deployment


Create WebLogic Server Domain with Partitions using WLST in 12.2.1

Oracle WebLogic Server 12.2.1 added support for multitenancy (WLS MT). In WLS MT, WLS can be configured with a domain as well as one or more partitions. A partition contains new elements introduced in WLS MT, like resource groups, resource group templates, and virtual targets. Setting up a domain with partitions requires additional steps compared to a traditional WLS domain. For more detailed information about these new WLS MT concepts, please see the Oracle docs listed in the "References" section.

Oracle recommends using Fusion Middleware Control (FMWC) to create WebLogic domains via the Restricted JRF template. Oracle also supports creating WebLogic Server domains using WLST. In this article, I will demonstrate how to create a WLS domain with 2 partitions using WLST. This includes:

  • Displaying domain topology
  • Creating a domain with 2 partitions using WLST
  • Displaying domain config file sample

These tasks are described in the subsequent sections.

1. Domain Topology

In this article, I will create a domain that is configured with:

  • One AdminServer named "admin", one partition named "coke", one partition named "pepsi".
  • The "coke" partition contains one resource group named "coke-rg1", targeting to a virtual target named "coke-vt". 
  • The "pepsi" partition contains one resource group named "pepsi-rg1", targeting to a virtual target named "pepsi-vt". 
  • An application "helloTenant.ear" is deployed to the domain, the "coke-rg1" in the "coke" partition and the "pepsi-rg1" in the "pepsi" partition.

The following picture shows what the domain topology looks like:

Note that this domain topology does not contain other MT-related concepts, like resource group templates; they are not covered in this article. For more information about other MT-related concepts, please check the "References" section.

2. Create a domain with partitions

To create a domain with the topology shown in the picture above, several steps are required:

  • Create a traditional WLS domain
  • Start the domain
  • Create a partition in the domain
    • Create a security realm for the partition
    • Create a user for the partition
    • Add the user to the groups in the security realm
    • Create a virtual target
    • Create a partition
      • Create a resource group
      • Set a virtual target as a default target
      • Setup security IDD for the partition
  • Restart the server
  • Start the partition

The sections below illustrate each step in detail.

2.1 Create a traditional WLS domain

A traditional WLS domain can be created by using the Config Wizard. Start the Config Wizard via a command script:

sh $MW_Home/oracle_common/common/bin/config.sh

Create a domain using all the defaults, specifying the following:

  • Domain name = base_domain
  • User name = weblogic
  • User password = welcome1

2.2 Start the domain

cd $MW_Home/user_projects/domains/base_domain/
sh startWebLogic.sh

2.3 Create a partition: coke in a domain

The steps below require WLST to be started. Use the following command to start WLST:

sh $MW_Home/oracle_common/common/bin/wlst.sh

Note that all of the WLST commands shown below are run after connecting to the Admin Server "admin" with the admin user "weblogic" credentials, e.g.,

connect("weblogic", "welcome1", t3://localhost:7001")

Now WLST is ready to run the commands that set up the partition for coke. The partition for coke has the following values:

  • Partition name = coke
  • Partition user name = mtadmin1
  • Partition password = welcome1

To do that, a security realm and a user are created for the partition as shown below. We explain it step-by-step.

2.3.1 Create a security realm for the partition 

The security realm is created using the standard WLS APIs.

edit()
startEdit()
realmName = 'coke_realm'
security = cmo.getSecurityConfiguration()
print 'realm name is ' + realmName
realm = security.createRealm(realmName)
# ATN
atnp = realm.createAuthenticationProvider(
  'ATNPartition','weblogic.security.providers.authentication.DefaultAuthenticator')
atna = realm.createAuthenticationProvider(
  'ATNAdmin','weblogic.security.providers.authentication.DefaultAuthenticator')
# IA
ia = realm.createAuthenticationProvider(
  'IA','weblogic.security.providers.authentication.DefaultIdentityAsserter')
ia.setActiveTypes(['AuthenticatedUser'])
# ATZ/Role
realm.createRoleMapper(
  'Role','weblogic.security.providers.xacml.authorization.XACMLRoleMapper')
realm.createAuthorizer(
  'ATZ','weblogic.security.providers.xacml.authorization.XACMLAuthorizer')
# Adjudicator
realm.createAdjudicator(
  'ADJ','weblogic.security.providers.authorization.DefaultAdjudicator')
# Auditor
realm.createAuditor(
  'AUD','weblogic.security.providers.audit.DefaultAuditor')

# Cred Mapper
realm.createCredentialMapper(
  'CM','weblogic.security.providers.credentials.DefaultCredentialMapper')
# Cert Path
realm.setCertPathBuilder(realm.createCertPathProvider(
  'CP','weblogic.security.providers.pk.WebLogicCertPathProvider'))
# Password Validator
pv = realm.createPasswordValidator('PV',
  'com.bea.security.providers.authentication.passwordvalidator.SystemPasswordValidator')
pv.setMinPasswordLength(8)
pv.setMinNumericOrSpecialCharacters(1)
save()
activate()

2.3.2 Add a user and group to the security realm for the partition

Create a user and add the user to the security group Administrators in the realm. In this use case, the user name and password for the coke partition are mtadmin1 and welcome1. There is no need to start an edit session when looking up the authentication provider to create a user/group.

realmName = 'coke_realm'
userName = 'mtadmin1'
groupName = 'Administrators'
print 'add user: realmName ' + realmName
if realmName == 'DEFAULT_REALM':
  realm = cmo.getSecurityConfiguration().getDefaultRealm()
else:
  realm = cmo.getSecurityConfiguration().lookupRealm(realmName)
print "Creating user " + userName + " in realm: " + realm.getName()
atn = realm.lookupAuthenticationProvider('ATNPartition')
if atn.userExists(userName):
  print "User already exists."
else:
  atn.createUser(userName, '${password}', realmName + ' Realm User')
print "Done creating user. ${password}"
print "Creating group " + groupName + " in realm: " + realm.getName()
if atn.groupExists(groupName):
  print "Group already exists."
else:
  atn.createGroup(groupName, realmName + ' Realm Group')
if atn.isMember(groupName,userName,true) == 0:
  atn.addMemberToGroup(groupName, userName)
else:
  print "User is already member of the group."

2.3.3 Create a virtual target for the partition

This virtual target is targeted to the admin server. The uri prefix is /coke. This is the URL prefix used when making JMX connections to the WebLogic Server MBeanServer.

edit()
startEdit()
vt = cmo.createVirtualTarget("coke-vt")
vt.setHostNames(array(["localhost"],java.lang.String))
vt.setUriPrefix("/coke")
adminServer = cmo.lookupServer("admin")
vt.addTarget(adminServer)
save()
activate()

2.3.4 Create the partition: coke

The partition name is coke and it is targeted to the coke-vt virtual target.

edit()
startEdit()
vt = cmo.lookupVirtualTarget("coke-vt")
p = cmo.createPartition('coke')
p.addAvailableTarget(vt)
p.addDefaultTarget(vt)
rg=p.createResourceGroup('coke-rg1')
rg.addTarget(vt)
realm = cmo.getSecurityConfiguration().lookupRealm("coke_realm")
p.setRealm(realm)
save()
activate()

2.3.5 Setup IDD for the partition

Set up the primary identity domain (IDD) for the partition.

edit()
startEdit()
sec = cmo.getSecurityConfiguration()
sec.setAdministrativeIdentityDomain("AdminIDD")
realmName = 'coke_realm'
realm = cmo.getSecurityConfiguration().lookupRealm(realmName)
# ATN
defAtnP = realm.lookupAuthenticationProvider('ATNPartition')
defAtnP.setIdentityDomain('cokeIDD')
defAtnA = realm.lookupAuthenticationProvider('ATNAdmin')
defAtnA.setIdentityDomain("AdminIDD")
# Partition
pcoke= cmo.lookupPartition('coke')
pcoke.setPrimaryIdentityDomain('cokeIDD')
# Default realm
realm = sec.getDefaultRealm()
defAtn = realm.lookupAuthenticationProvider('DefaultAuthenticator')
defAtn.setIdentityDomain("AdminIDD")
save()
activate()

2.3.6 Restart the Server

Restart WebLogic Server so that the security setting changes take effect.

2.3.7 Start the partition

This is required for a partition to receive requests.

edit()
startEdit()
partitionBean=cmo.lookupPartition('coke')
# start the partition (required)
startPartitionWait(partitionBean)
save()
activate()

2.4 Create another partition: pepsi in a domain

Repeat the same steps as in 2.3 to create another partition, pepsi, but with different values (a sketch of scripting these steps as a reusable function follows this list):

  • Partition name = pepsi
  • User name = mtadmin2
  • Password = welcome2
  • Security realm = pepsi_realm
  • IDD name = pepsiIDD
  • Virtual target name = pepsi-vt
  • Resource group name = pepsi-rg1
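As mentioned above, these steps can be scripted; below is a hedged sketch of wrapping the virtual target, partition, and resource group steps from 2.3.3 and 2.3.4 in a reusable Jython function (the function name and structure are mine, and the realm, user, and IDD steps from 2.3.1, 2.3.2, and 2.3.5 must still be repeated per partition). It assumes a connected WLST session:

def createPartitionWithTarget(pName, vtName, uriPrefix, rgName, realmName):
    edit()
    startEdit()
    # Virtual target (see 2.3.3)
    vt = cmo.createVirtualTarget(vtName)
    vt.setHostNames(array(['localhost'], java.lang.String))
    vt.setUriPrefix(uriPrefix)
    vt.addTarget(cmo.lookupServer('admin'))
    # Partition and resource group (see 2.3.4)
    p = cmo.createPartition(pName)
    p.addAvailableTarget(vt)
    p.addDefaultTarget(vt)
    rg = p.createResourceGroup(rgName)
    rg.addTarget(vt)
    p.setRealm(cmo.getSecurityConfiguration().lookupRealm(realmName))
    save()
    activate()

createPartitionWithTarget('pepsi', 'pepsi-vt', '/pepsi', 'pepsi-rg1', 'pepsi_realm')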

2.5 Deploy User Application

Now the domain is ready to use. Let's deploy an application ear file. The application, helloTenant.ear, is deployed to the WebLogic Server domain, the coke partition, and the pepsi partition.

edit()
startEdit()
deploy(appName='helloTenant', target='admin', path='${path-to-the-ear-file}/helloTenant.ear')
deploy(appName='helloTenant-coke',partition='coke',resourceGroup='coke-rg1',path='${path-to-the-ear-file}/helloTenant.ear')
deploy(appName='helloTenant-pepsi',partition='pepsi',resourceGroup='pepsi-rg1',path='${path-to-the-ear-file}/helloTenant.ear')
save()
activate()

2.6 Domain config file sample

When all of the steps are finished, the domain config file $DOMAIN_HOME/config/config.xml will contain all of the configuration needed for the domain and the partitions. Here is a sample snippet related to the coke partition in the config.xml:

<server>
  <name>admin</name>
  <listen-address>localhost</listen-address>
</server>
<configuration-version>12.2.1.0.0</configuration-version>
<app-deployment>
  <name>helloTenant</name>
  <target>admin</target>
  <module-type>ear</module-type>
  <source-path>${path-to-the-ear-file}/helloTenant.ear</source-path>
  <security-dd-model>DDOnly</security-dd-model>
  <staging-mode xsi:nil="true"></staging-mode>
   <plan-staging-mode xsi:nil="true"></plan-staging-mode>
  <cache-in-app-directory>false</cache-in-app-directory>
</app-deployment>
<virtual-target>
  <name>coke-vt</name>
  <target>admin</target>
  <host-name>localhost</host-name>
  <uri-prefix>/coke</uri-prefix>
  <web-server>
    <web-server-log>
      <number-of-files-limited>false</number-of-files-limited>
    </web-server-log>
  </web-server>
</virtual-target>
<admin-server-name>admin</admin-server-name>
<partition>
  <name>coke</name>
  <resource-group>
    <name>coke-rg1</name>
    <app-deployment>
      <name>helloTenant-coke</name>
      <module-type>ear</module-type>
      <source-path>${path-to-the-ear-file}/helloTenant.ear</source-path>
      <security-dd-model>DDOnly</security-dd-model>
      <staging-mode xsi:nil="true"></staging-mode>
      <plan-staging-mode xsi:nil="true"></plan-staging-mode>
      <cache-in-app-directory>false</cache-in-app-directory>
    </app-deployment>
    <target>coke-vt</target>
    <use-default-target>false</use-default-target>
  </resource-group>
  <default-target>coke-vt</default-target>
  <available-target>coke-vt</available-target>
  <realm>coke_realm</realm>
  <partition-id>2d044835-3ca9-4928-915f-6bd1d158f490</partition-id>
  <primary-identity-domain>cokeIDD</primary-identity-domain>
</partition>

For the pepsi partition, a similar <virtual-target> element and a <partition> element for pepsi are added in the config.xml.

From now on, the domain with 2 partitions is created and ready to serve requests. Users can access their applications deployed onto this domain. Check the blog "Application MBeans Visibility in Oracle WebLogic Server 12.2.1" regarding how to access the application MBeans registered on WebLogic Server MBeanServers in MT in 12.2.1.

3. Debug Flags

In case of errors during domain creation, there are debug flags which can be used to triage the errors:

  • If the error is related to security realm setup, restart the WLS server with these debug flags:
    • -Dweblogic.debug.DebugSecurityAtn=true -Dweblogic.debug.DebugSecurity=true -Dweblogic.debug.DebugSecurityRealm=true
  • If the error is related to a bean config error in a domain, restart the WLS server with these debug flags:
    • -Dweblogic.debug.DebugJMXCore=true -Dweblogic.debug.DebugJMXDomain=true
  • If the error is related to an edit session issue, restart the WLS server with these debug flags:
    • -Dweblogic.debug.DebugConfigurationEdit=true -Dweblogic.debug.DebugDeploymentService=true -Dweblogic.debug.DebugDeploymentServiceInternal=true -Dweblogic.debug.DebugDeploymentServiceTransportHttp=true 

4. Conclusion

An Oracle WebLogic Server domain in 12.2.1 can contain partitions. Creating a domain with partitions needs additional steps compared to creating a traditional WLS domain. This article shows the domain creation using WLST. There are other ways to create domains with partitions, e.g., FMW Control.  For more information on how to create a domain with partitions, please check the "References" section.

5. References

WebLogic Server domain

Domain partitions for multi-tenancy

Enterprise Manager Fusion Middleware Control (FMWC)

Config Wizard

Creating WebLogic domains using WLST offline

Restricted JRF template

WebLogic Server Security

WebLogic Server Deployment

WebLogic Server Debug Flags

Monday Nov 02, 2015

ZDT Patching: A Simple Case – Rolling Restart

To get started understanding ZDT Patching, let's take a look at it in its simplest form: the rolling restart. In many ways, this simple use case is the foundation for all of the other types of rollouts – Java Version, Oracle Patches, and Application Updates. Executing the rolling restart requires the coordinated and controlled shutdown of all of the managed servers in a domain or cluster while ensuring that service to the end user is not interrupted and none of their session data is lost.

The administrator can start a rolling restart by issuing the WLST command below:

rollingRestart("Cluster1")

In this case, the rolling restart will affect all managed servers in the cluster named “Cluster1”. This is called the target. The target can be a single cluster, a list of clusters, or the name of the domain.
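Hedging on the exact syntax for lists of targets, the other target forms would look like this:

rollingRestart("Cluster1,Cluster2")   # a comma-separated list of clusters
rollingRestart("mydomain")            # the name of the domain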

When the command is entered, the WebLogic Admin Server will analyze the topology of the target and dynamically create a workflow (also called a rollout), consisting of every step that needs to be taken in order to gracefully shut down and restart each managed server in the cluster, while ensuring that all sessions on that managed server are available to the other managed servers. The workflow will also ensure that all of the running apps on a managed server are fully ready to accept requests from end users before moving on to the next node. The rolling restart is complete once every managed server in the cluster has been restarted.

A diagram illustrating this process on a very simple topology is shown below.  In the diagram you can see that a node is taken offline (shown in red) and end-user requests that would have gone to that node are re-routed to active nodes.  Once the servers on the offline node have been restarted and their applications are again ready to receive requests, that node is added back to the pool of active nodes and the rolling restart moves on to the next node.

Illustration of a Rolling Restart Across a Cluster (animated image in the original post).

The rolling restart functionality was introduced based on customer feedback. Some customers have a policy of preemptively restarting their managed servers in order to refresh the memory usage of the applications running on top of them. With this feature we are greatly simplifying that tedious and time-consuming process, and doing so in a way that doesn't affect end users.

For more information about Rolling Restarts with Zero Downtime Patching, view the documentation.

Thursday Oct 29, 2015

Dynamic Debug Patches in WebLogic Server 12.2.1

Introduction

Whether we like it or not, we know that no software is perfect. Bugs happen, in spite of the best efforts of developers. Worse, in many circumstances, they show up in unexpected ways. They can also be intermittent and hard to reproduce in some cases. In such cases, there is often not enough information even to understand the nature of the problem if the product is not sufficiently instrumented to reveal the underlying causes. Direct access to a customer's production environment is usually not an option. To get a better understanding of the underlying problem, instrumented debug patches are usually created with the hope that running the applications with debug patches will provide more insight. This can be a trial-and-error method and can take several iterations before hitting upon the actual cause. The folks creating a debug patch (typically Support or Development teams in the software provider organization) and the customers running the application are almost always different groups, often in different companies. Thus, each iteration of creating a debug patch, providing it to the customer, getting it applied in the customer environment, and getting the results back can take substantial time. In turn, this can delay problem resolution.

In addition, there can be other significant issues with deploying such debug patches. Applying patches in a Java EE environment requires bouncing servers and domains or at least redeploying applications. In mission critical deployments, it may not be possible to immediately apply patches. Moreover, when a server is bounced, its state is lost. Thus, vital failure data in memory may be lost. Also, an intermittent failure may not show up for a long time after restarting servers, making quick diagnosis difficult.

Dynamic Debug Patches

In the WebLogic Server 12.2.1 release, a new feature called Dynamic Debug Patches is introduced which aims to simplify the process of capturing diagnostic data for quicker problem resolution. With this feature, debug patches can be dynamically activated without having to restart servers or clusters or redeploy applications in a WebLogic domain. It leverages the JDK's instrumentation feature to hot-swap classes from specified debug patches using run-time WLST commands. With the provided WLST commands (described below), one or more debug patches can be activated within the scope of selected servers, clusters, partitions, and applications. Since no server restart or application redeployment is needed, the associated logistical impediments are a non-issue. For one, since the applications and services continue to run, there is less of a barrier to activating these patches in production environments. Also, there is no loss of state. Thus, the instrumented code in a newly activated debug patch has a better chance of revealing erroneous transient state and providing meaningful diagnostic information.

Prerequisites

Dynamic debug patches are ordinary jar files containing patched classes with additional instrumentation such as debug logging, print statements, etc. Typically, product development or support teams build these patch jars and make them available to system operations teams for their activation in the field. To make them available to the WebLogic Server's dynamic debug patches feature, system administrators need to copy them to a specific directory in a domain. By default, this directory is the debug_patches subdirectory under the domain root. However, it can be changed by reconfiguring the DebugPatchDirectory attribute of the DebugPatchesMBean.
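For example, relocating the directory could look like the following WLST sketch (the MBean path under the domain root is an assumption; browse with ls() to confirm it in your domain):

connect('weblogic', 'weblogic', 't3://localhost:7001')
edit()
startEdit()
cd('DebugPatches/mydomain')                          # path assumed
cmo.setDebugPatchDirectory('/shared/debug_patches')
save()
activate()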

Another requirement is to start the servers in the domain with the debugpatch instrumentation agent, by including the following option in the server's startup command. It is automatically added by the startup scripts created for WebLogic Server 12.2.1 domains.

-javaagent:${WL_HOME}/server/lib/debugpatch-agent.jar

Using Dynamic Debug Patches Feature

We will illustrate the use of this feature by activating and deactivating debug patches on a simple toy application.

The Application

We will use a minimalist toy web application that computes the factorial of an input integer and returns it to the browser.

FactorialServlet.java:

package example;

import java.io.IOException;
import javax.servlet.GenericServlet;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.annotation.WebServlet;

import java.util.Map;
import java.util.HashMap;
import java.util.concurrent.ConcurrentHashMap;

/**
 * A trivial servlet: returns the factorial of an input integer.
 */
@WebServlet(value="/factorial", name="factorial-servlet")
public class FactorialServlet extends GenericServlet {

  public void service(ServletRequest request, ServletResponse response)
      throws ServletException, IOException {
    String n = request.getParameter("n");
    System.out.println("FactorialServlet called for input=" + n);
    int result = Factorial.getInstance().factorial(n);
    response.getWriter().print("factorial(" + n + ") = " + result);
  }
}

The servlet delegates to the Factorial singleton to compute the factorial value. As an optimization, the Factorial class maintains a Map of previously computed values which serves as an illustration of retaining stateful information while activating or deactivating dynamic debug patches.

Factorial.java:

package example;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class Factorial {
  private static final Factorial SINGLETON = new Factorial();
  private Map<String, Integer> map = new ConcurrentHashMap<String, Integer>();

  static Factorial getInstance() {
    return SINGLETON;
  }

  public int factorial(String n) {
    if (n == null) {
      throw new NumberFormatException("Invalid argument: " + n);
    }
    n = n.trim();
    Integer val = map.get(n);
    if (val == null) {
      int i = Integer.parseInt(n);
      if (i < 0)
        throw new NumberFormatException("Invalid argument: " + n);
      int fact = 1;
      while (i > 0) {
        fact *= i;
        i--;
      }
      val = new Integer(fact);
      map.put(n, val);
    }
    return val;
  }
}

Building and Deploying the Application

To build the factorial.war web application, create the FactorialServlet.java and Factorial.java files as above in an empty directory. Build the application war file with the following commands:

mkdir -p WEB-INF/classes
javac -d WEB-INF/classes FactorialServlet.java Factorial.java
jar cvf factorial.war WEB-INF

Deploy the application using WLST (or WebLogic Server Administration Console):

$MW_HOME/oracle_common/common/bin/wlst.sh
Initializing WebLogic Scripting Tool (WLST) ...
Welcome to WebLogic Server Administration Scripting Shell
Type help() for help on available commands
connect(username, password, adminUrl)  # e.g. connect('weblogic', 'weblogic', 't3://localhost:7001')
Connecting to t3://localhost:7001 with userid weblogic ...
Successfully connected to Admin Server "myserver" that belongs to domain "mydomain".
Warning: An insecure protocol was used to connect to the server.
To ensure on-the-wire security, the SSL port or Admin port should be used instead.
deploy('factorial', 'factorial.war', targets='myserver')

Note that in the illustration above, we targeted the application only to the administration server. In the real world it might be targeted to other managed servers or clusters. We will discuss how to activate and deactivate debug patches over multiple managed servers and clusters in a subsequent article.

Invoke the web application from your browser, for example: http://localhost:7001/factorial/factorial?n=4. You should see the result in the browser and a message in the server's stdout window such as:

FactorialServlet called for input=4

The Debug Patch

The application as written does not perform a lot of logging and does not reveal much about its functioning. Perhaps there is a problem and we need more information when it executes. We can create a debug patch from the application code and provide it to the system administrator, who can activate it on the running server/application. Let us modify the above code to add print statements for getting additional information (i.e., the lines with "MYDEBUG" below).

Updated (version 1)  Factorial.java:

class Factorial {
  private static final Factorial SINGLETON = new Factorial();
  private Map<String, Integer> map = new ConcurrentHashMap<String, Integer>();
  static Factorial getInstance() {
    return SINGLETON;
  }
  public int factorial(String n) {
    if (n == null) {
      throw new NumberFormatException("Invalid argument: " + n);
    }
    n = n.trim();
    Integer val = map.get(n);
    if (val == null) {
      int i = Integer.parseInt(n);
      if (i < 0)
        throw new NumberFormatException("Invalid argument: " + n);
      int fact = 1;
      while (i > 0) {
        fact *= i;
        i--;
      }
      val = new Integer(fact);
      System.out.println("MYDEBUG> saving factorial(" + n + ") = " + val);
      map.put(n, val);
    } else {
      System.out.println("MYDEBUG> returning saved factorial(" + n + ") = " + val);
    }
    return val;
  }
}

Build the debug patch jar. Note that this is a plain jar file, that is, not built as an application archive. Also note that we need not compile the entire application (although it would not hurt). The debug patch jar should contain only the classes which have changed (in this case, Factorial.class).

mkdir patch_classes
javac -d patch_classes Factorial.java
jar cvf factorial_debug_01.jar -C patch_classes .

Activating Debug Patches

In most real-world scenarios, the creators (developers) and activators (system administrators) of debug patches would be different people. For the purpose of illustration, we will wear multiple hats here. Assuming that we are using the default configuration for the location of the debug patches directory, create the debug_patches directory under the domain directory if it is not already there. Copy the factorial_debug_01.jar debug patch jar into the debug_patches directory. Connect to the server with WLST as above.

First, let us check which debug patches are available in the domain. This can be done with the listDebugPatches command.

Hint: To see the available diagnostics commands, issue the help('diagnostics') command. To get information on a specific command, issue help(commandName), e.g., help('activateDebugPatch').

wls:/mydomain/serverConfig/> listDebugPatches()         
myserver:
Active Patches:
Available Patches:
    factorial_debug_01.jar
    app2.0_patch01.jar
    app2.0_patch02.jar 

factorial_debug_01.jar is the newly created debug patch. app2.0_patch01.jar and app2.0_patch02.jar were created in the past to investigate issues with some other application. The listing above shows no "active" patches since none have been activated so far.

Now, let us activate the debug patch with the activateDebugPatch command.

tasks=activateDebugPatch('factorial_debug_01.jar', app='factorial', target='myserver')
wls:/mydomain/serverConfig/> print tasks[0].status                                                                 
FINISHED
wls:/mydomain/serverConfig/> listDebugPatches()     
myserver:
Active Patches:
    factorial_debug_01.jar:app=factorial
Available Patches:
    factorial_debug_01.jar
    app2.0_patch01.jar
    app2.0_patch02.jar

The command returns an array of tasks which can be used to monitor the progress and status of the activation. Multiple managed servers and/or clusters can be specified as targets if applicable; corresponding to each applicable target server, there is a task in the returned tasks array. The command can also be used to activate debug patches at the server and middleware level; such patches will typically be created by Oracle Support as needed. The output of the listDebugPatches() command above shows that factorial_debug_01.jar is now activated on the application "factorial".
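Since there is one task per applicable target server, a rollout over several targets can be polled in a loop; here is a small sketch reusing the .status property shown above (the multi-target string form and the target names are assumptions):

tasks = activateDebugPatch('factorial_debug_01.jar', app='factorial', target='managed1,managed2')
for t in tasks:
    print t.status   # one task per applicable target server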

Now, let us send some requests to the application: http://localhost:7001/factorial/factorial?n=4 and http://localhost:7001/factorial/factorial?n=5

Server output:

FactorialServlet called for input=4
MYDEBUG> returning saved factorial(4) = 24
FactorialServlet called for input=5
MYDEBUG> saving factorial(5) = 120

Notice that for input=4, the saved result was returned, since the value was computed and saved in the map by a prior request. Thus, the debug patch was activated without destroying existing state in the application. For input=5, the value was not previously computed and saved, so a different debug message showed up.

Activating Multiple Debug Patches

If needed, multiple patches which potentially overlap can be activated. A patch which is activated later masks the effects of a previously activated patch where they overlap. Say, in the above case, we need more detailed information from the factorial() method as it executes its inner loop. Let us create another debug patch, copy it to the debug_patches directory, and activate it.

Updated (version 2) Factorial.java:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class Factorial {
  private static final Factorial SINGLETON = new Factorial();
  private Map<String, Integer> map = new ConcurrentHashMap<String, Integer>();
  static Factorial getInstance() {
    return SINGLETON;
  }
  public int factorial(String n) {
    if (n == null) {
      throw new NumberFormatException("Invalid argument: " + n);
    }
    n = n.trim();
    Integer val = map.get(n);
    if (val == null) {
      int i = Integer.parseInt(n);
      if (i < 0)
        throw new NumberFormatException("Invalid argument: " + n);
      int fact = 1;
      while (i > 0) {
        System.out.println("MYDEBUG> multiplying by " + i);
        fact *= i;
        i--;
      }
      val = new Integer(fact);
      System.out.println("MYDEBUG> saving factorial(" + n + ") = " + val);
      map.put(n, val);
    } else {
      System.out.println("MYDEBUG> returning saved factorial(" + n + ") = " + val);
    }
    return val;
  }
}

Build factorial_debug_02.jar

javac -d patch_classes Factorial.java
jar cvf factorial_debug_02.jar  -C patch_classes .
cp factorial_debug_02.jar $DOMAIN_DIR/debug_patches

Activate factorial_debug_02.jar

wls:/mydomain/serverConfig/> listDebugPatches()     
myserver:
Active Patches:
    factorial_debug_01.jar:app=factorial
Available Patches:
    factorial_debug_01.jar
    factorial_debug_02.jar
    app2.0_patch01.jar
    app2.0_patch02.jar
wls:/mydomain/serverConfig/> tasks=activateDebugPatch('factorial_debug_02.jar', app='factorial', target='myserver')
wls:/mydomain/serverConfig/> listDebugPatches()                                                                    
myserver:
Active Patches:
    factorial_debug_01.jar:app=factorial
    factorial_debug_02.jar:app=factorial
Available Patches:
    factorial_debug_01.jar
    factorial_debug_02.jar
    app2.0_patch01.jar
    app2.0_patch02.jar

Now, let us send some requests to the application: http://localhost:7001/factorial/factorial?n=5 and http://localhost:7001/factorial/factorial?n=6

FactorialServlet called for input=5
MYDEBUG> returning saved factorial(5) = 120
FactorialServlet called for input=6
MYDEBUG> multiplying by 6
MYDEBUG> multiplying by 5
MYDEBUG> multiplying by 4
MYDEBUG> multiplying by 3
MYDEBUG> multiplying by 2
MYDEBUG> multiplying by 1
MYDEBUG> saving factorial(6) = 720

We see the additional information printed due to code in factorial_debug_02.jar.

Deactivating Debug Patches

When the debug patch is not needed any more, it can be deactivated with the deactivateDebugPatches command. To get help on it, execute help('deactivateDebugPatches').

wls:/mydomain/serverConfig/> tasks=deactivateDebugPatches('factorial_debug_02.jar', app='factorial', target='myserver')            
wls:/mydomain/serverConfig/> listDebugPatches()                                                                        
myserver:
Active Patches:
    factorial_debug_01.jar:app=factorial
Available Patches:
    factorial_debug_01.jar
    factorial_debug_02.jar
    app2.0_patch01.jar
    app2.0_patch02.jar

Now, executing http://localhost:7001/factorial/factorial?n=2 gets us the following output in server's stdout window:

FactorialServlet called for input=2
MYDEBUG> saving factorial(2) = 2

Note that when we had activated factorial_debug_01.jar and factorial_debug_02.jar in that order, the classes in factorial_debug_02.jar masked those in factorial_debug_01.jar. After deactivating factorial_debug_02.jar, the classes in factorial_debug_01.jar got unmasked and became effective again. A comma-separated list of debug patches may be specified with the deactivateDebugPatches command. To deactivate all active debug patches on applicable target servers, the deactivateAllDebugPatches() command may be used.

WLST Commands

The following diagnostic WLST commands are provided to interact with the Dynamic Debug Patches feature. As noted above, help(command-name) shows help for each command.

Command - Description
activateDebugPatch - Activate a debug patch on specified targets.
deactivateAllDebugPatches - De-activate all debug patches from specified targets.
deactivateDebugPatches - De-activate debug patches on specified targets.
listDebugPatches - List activated and available debug patches on specified targets.
listDebugPatchTasks - List debug patch tasks from specified targets.
purgeDebugPatchTasks - Purge debug patch tasks from specified targets.
showDebugPatchInfo - Show details about a debug patch on specified targets.
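
As a quick illustration, the task-oriented commands can be used for housekeeping after a debugging session. This is a hedged sketch; the argument forms are illustrative, so use help('showDebugPatchInfo') and friends for the exact syntax:

listDebugPatchTasks()       # list activation/deactivation tasks on the server
purgeDebugPatchTasks()      # purge completed tasks
showDebugPatchInfo('factorial_debug_01.jar', target='myserver')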

Limitations

The Dynamic Debug Patches feature leverages the JDK's hot-swap capability. The hot-swap capability has a limitation that hot-swapped classes cannot have a different shape than the original classes. This means that the classes which are swapped in cannot add, remove or update constructors, methods, fields, super classes, implemented interfaces, etc. Only changes in method bodies are allowed. It should be noted that debug patches typically only gather additional information and do not attempt to "fix" the problems as such. Minor fixes which would not change the shape of classes may be tried, but that is not the main purpose of this feature. Therefore, we don't expect this to be a big limitation in practice.

One issue, however, is that in some cases the new debug code may need to maintain some state. For example, perhaps we want to collect some data in a map and only dump it out when a threshold is reached. The JDK limitation regarding shape-change creates problems in that case. The Dynamic Debug Patches feature provides a DebugPatchHelper utility class to help address some of those concerns. We will discuss that in a subsequent article. Please visit us back to read about it.

Using Diagnostic Context for Correlation

The WebLogic Diagnostics Framework (WLDF) and Fusion Middleware Diagnostics Monitoring System (DMS) provide correlation information in diagnostic artifacts such as logs and Java Flight Recorder (JFR).

The correlation information flows along with a Request across threads within and between WebLogic server processes, and can also flow across process boundaries to/from other Oracle products (such as from OTD or to the Database). This correlation information is exposed in the form of unique IDs which can be used to identify and correlate the flow of a specific request through the system, and it can also provide details on the ordering of the flow.

The correlation IDs are described as follows:

  • DiagnosticContextID (DCID) and ExecutionContextID (ECID). This is the unique identifier which identifies the Request flowing through the system. While the name of the ID may be different depending on whether you are using WLDF or DMS, it is the same ID. I will be using the term ECID as that is the name used in the broader set of Oracle products.
  • Relationship ID (RID). This ID is used to describe where in the overall flow (or tree) the Request currently is. The ID itself is an ordered set of numbers that describes the location of each task in the tree of tasks. The leading number is usually a zero. A leading number of 1 indicates that it has not been possible to track the location of the sub-task within the overall sub-task tree.

These correlation IDs have been around for quite a long time. What is new in 12.2.1 is that WLDF now picks up some capabilities from DMS (even when DMS is not present):

  1) The RelationshipID (RID) feature from DMS is now supported
  2) The ability to handle correlation information coming in over HTTP
  3) The ability to propagate correlation out over HTTP when using the WebLogic HTTP client
  4) The concept of a non-inheritable Context (not covered in this blog, may be the topic of another blog)

For this blog, we will walk through a simple contrived scenario to show how an administrator can make use of this correlation information to quickly find the data available related to a particular Request flow. This diagram shows the basic scenario:


Each arrow in the diagram shows where a Context propagation could occur; however, in our example propagation occurs only where we have solid blue arrows. The reason for this is that in our example we are using a Browser client which does not supply a Context, so for our example case the first place where a Context is created is when MySimpleServlet is called. Note that a Context could propagate into MySimpleServlet if it is called by a client capable of providing the Context (for example, a DMS-enabled HTTP client, a 12.2.1+ WebLogic HTTP client, or OTD).

In our contrived applications, we have each level querying the value of the ECID/RID using the DiagnosticContextHelper API, and the servlet will report these values. A real application would not do this; it is just for our example purposes so our servlet can display them.

We also have the EJB hard-coded to throw an Exception if the servlet request was supplied with a query string. The application will log warnings when that is detected, and the warning log messages will automatically get the ECID/RID values included in them. The application does not need to do anything special to get them.

The applications used here as well as some basic instructions are attached in blog_example.zip.

First we will show hitting our servlet with a URL that is not expected to fail (http://myhost:7003/MySimpleServlet/MySimpleServlet):




From the screen shot above we can see that all of the application components are reporting the same ECID (f7cf87c6-9ef3-42c8-80fa-e6007c56c21f-0000022f). We can also see that the RID reported by each component here is different and shows the relationship between the components:


Next we will show hitting our servlet with a URL that is expected to fail (http://myhost:7003/MySimpleServlet/MySimpleServlet?fail):

We see that the EJB reported that it failed. In our contrived example app, we can see that the ECID for the entire flow where the failure occurred was "f7cf87c6-9ef3-42c8-80fa-e6007c56c21f-00000231". In a real application the failing component would not display this; an administrator would most likely first see warnings reported in the various server logs, and see the ECID reported with those warnings. Since we know the ECID in this case, we can "grep" for it to show what those warnings would look like and that they have ECID/RID reported in them:

Upon seeing that we had a failure, the admin will capture JFR data from all of the servers involved. In a real scenario, the admin may have noticed the warnings in the logs, or perhaps had a Policy/Action (formerly known as Watch/Notification) configured to automatically notify or capture data. For our simple example, a WLST script is included to capture the JFR data.
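
For reference, a minimal WLST sketch of one way to do such a capture (this is not the attached script; credentials, URL, and the image runtime path are illustrative, and the captured diagnostic image includes the flight recording data when JFR is enabled):

connect('weblogic', 'welcome1', 't3://localhost:7001')
serverRuntime()
# navigate to the WLDF image runtime and capture an image;
# the resulting image zip contains the JFR recording
cd('WLDFRuntime/WLDFRuntime/WLDFImageRuntime/Image')
cmo.captureImage()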


The assumption is that folks here are familiar with JFR and Java Mission Control (JMC), and that they have installed the WebLogic Plugin for JMC (see the video on installing the plugin).

Since we have an ECID in hand already related to the failure (in a real case this would be from the warnings in the logs), we will pull up the JFR data in JMC and go directly to the "ECIDs" tab in the "WebLogic" tab group. This tab initially shows us an unfiltered view from the AdminServer JFR, which includes all ECIDs present in that JFR recording:

Next we will copy/paste the ECID "f7cf87c6-9ef3-42c8-80fa-e6007c56c21f-00000231" into the "Filter Column" for "ECID":


With only the specific ECID displayed, we can select it and see the JFR events that are present in the JFR recording related to that ECID. We can right-click to add the associated events to the "operative set" feature in JMC. Once they are in the "operative set", other views in JMC can be set to show only the operative set as well; see Using WLDF with Java Flight Recorder for more information.

Here we see screen shots showing the same filtered view for the ejbServer and webappServer JFR data:



In our simple contrived case, the failure we forced was entirely within application code. As a result, the JFR data we see here shows us the overall flow for example purposes, but it is not going to give us more insight into the failure in this specific case. In cases where something that is covered by JFR events causes a failure, it is a good way to see what failed and what happened leading up to the failure.

For more related information, see:

Wednesday Oct 28, 2015

Weblogic Server 12.2.1 Multi-Tenancy Diagnostics Overview

Introduction

The WebLogic Server 12.2.1 release includes support for multitenancy, which allows multiple tenants to share a single WebLogic domain. Tenants have access to domain partitions, which provide an isolated slice of the WebLogic domain's configuration and runtime infrastructure.

This blog provides an overview of the diagnostics and monitoring capabilities available to tenants for applications and resources deployed to their respective partitions.

These features are provided by the WebLogic Server Diagnostic Framework (WLDF) component.

The following topics are discussed in the sections below.

Log and Diagnostic Data

Log and diagnostic data from different sources are made available to the partition administrators. They are broadly classified into the following groups:

  1. Shared data - Log and diagnostic data not directly available to the partition administrators in raw persisted form. It is only available through the WLDF Accessor component.
  2. Partition scoped data - These logs are available to the partition administrators in raw form under the partition file system directory.

Note that the WLDF Data Accessor component provides access to both the shared and partition scoped log and diagnostic data available on a WebLogic Server instance for a partition.

The following shared logs and diagnostic data is available to a partition administrator.

Log Type - Content Description
Server - Log events from Server and Application components pertaining to the partition, recorded in the Server log file.
Domain - Log events pertaining to the partition, collected centrally from all the Server instances in the WebLogic domain into a single log file.
DataSource - DataSource log events pertaining to the partition.
HarvestedData Archive - Metrics data gathered by the WLDF Harvester from MBeans pertaining to the partition.
Instrumentation Events Archive - WLDF Instrumentation events generated by applications deployed to the partition.

The following partition scoped log and diagnostic data is available to a partition administrator.

Log Type - Content Description
HTTP access.log - HTTP access.log from the partition virtual target's WebServer.
JMSServer - JMS server message life-cycle events for JMS server resources defined within a resource group or resource group template scoped to a partition.
SAF Agent - SAF agent message life-cycle events for SAF agent resources defined within a resource group or resource group template scoped to a partition.
Connector - Log data generated by Java EE resource adapter modules deployed to a resource group or resource group template within a partition.
Servlet Context - Servlet context log data generated by Java EE web application modules deployed to a resource group or resource group template within a partition.

WLDF Accessor

The WLDF Accessor provides the RuntimeMBean interface to retrieve diagnostic data over JMX. It also provides a query capability to fetch only a subset of the data.

Please refer to the documentation on WLDF Data Accessor for WebLogic Server for a detailed description of this functionality.

WLDFPartitionRuntimeMBean (a child of PartitionRuntimeMBean) is the root of the WLDF Runtime MBeans. It provides a getter for the WLDFPartitionAccessRuntimeMBean interface, which is the entry point for the WLDF Accessor functionality scoped to a partition. There is an instance of WLDFDataAccessRuntimeMBean for each log instance available to partitions.

Different logs are referred to by their logical names according to a predefined naming scheme.

The following table lists the logical name patterns for the different partition scoped logs.

Shared Logs

Log Type - Logical Name
Server Log - ServerLog
Domain Log - DomainLog
JDBC Log - DataSourceLog
Harvested Metrics - HarvestedDataArchive
Instrumentation Events - EventsDataArchive

Partition Scoped Logs

Log Type - Logical Name
HTTP Access Log - HTTPAccessLog/<WebServer-Name>
JMS Server Log - JMSMessageLog/<JMSServer-Name>
SAF Agent Log - JMSSAFMessageLog/<SAFAgent-Name>
Servlet Context Log - WebAppLog/<WebServer-Name>/context-path
Connector Log - ConnectorLog/connection-Factory-jndiName$partition-name
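
As a rough illustration, these logical names surface as WLDFDataAccessRuntime instances under the partition's accessor runtime. Here is a hedged WLST sketch (the partition name 'partition1' is an assumption, and the exact MBean navigation may differ slightly by release):

serverRuntime()
# drill down from the partition runtime to the WLDF accessor
cd('PartitionRuntimes/partition1/WLDFPartitionRuntime/WLDFPartitionRuntime/WLDFPartitionAccessRuntime/Accessor')
ls()    # lists one WLDFDataAccessRuntime per available log, e.g. ServerLog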

Logging Configuration

WebLogic Server MT supports configuring the Level for java.util.logging.Loggers used by application components running within a partition. This allows Java EE applications using java.util.logging to configure levels for their respective loggers even though they do not have access to the system level java.util.logging configuration mechanism. For shared logger instances used by libraries common across partitions, the level configuration is also applied to a Logger instance when it is doing work on behalf of a particular partition.

This feature is available if the WebLogic System Administrator has started the server with the -Djava.util.logging.manager=weblogic.logging.WLLogManager command line system property.

If the WebLogic Server was started with the custom log manager as described above, the partition administrator can configure logger levels through the PartitionLogMBean.PlatformLoggerLevels attribute. Please refer to the sample WLST script in the WLS-MT documentation.
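
Conceptually, the configuration amounts to something like the following hedged WLST sketch (the partition, logger, and level names are illustrative, and the assumed java.util.Properties value type should be verified against the PartitionLogMBean javadoc):

from java.util import Properties

edit()
startEdit()
cd('Partitions/partition1/PartitionLog/partition1')
levels = Properties()
levels.put('com.mycompany.myapp.MyLogger', 'FINE')    # logger name -> level
set('PlatformLoggerLevels', levels)
save()
activate()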

Note that the level configuration specified in the PartitionLogMBean.PlatformLoggerLevels attribute is applicable only to the owning partition. It is possible that a logger instance with the same name is used by another partition; each logger's effective level at runtime will be defined by the respective partition's PartitionLogMBean.PlatformLoggerLevels configuration.

Server Debug 

For certain troubleshooting scenarios you may need to enable debug output from WebLogic Server subsystems specific to your partition. The Server debug output is useful for debugging internal Server code when it is doing work on behalf of a partition. This needs to be done carefully in collaboration with the WebLogic System Administrator and Oracle Support. The WebLogic System Administrator must enable the ServerDebugMBean.PartitionDebugLoggingEnabled attribute first and will advise you to enable certain debug flags. These flags are boolean attributes defined on the ServerDebugMBean configuration interface. The specific debug flags to be enabled for a partition are configured via the PartitionLogMBean.EnabledServerDebugAttributes attribute. It contains an array of String values that are the names of the specific debug outputs to be enabled for the partition. The debug output thus produced is recorded in the server log, from where it can be retrieved via the WLDF Accessor and provided to Oracle Support for further analysis. Note that once the troubleshooting is done, the debug flags should be disabled, as there is a performance overhead incurred when server debug is enabled.

Please refer to the sample WLST script in the WebLogic Server MT documentation on how to enable partition specific server debug.
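
Put together, the two configuration steps might look like this hedged WLST sketch ('DebugJDBCSQL' is only an illustrative debug flag name; the server, partition, and flag names will differ in practice):

from java.lang import String
import jarray

edit()
startEdit()
# step 1: the system administrator allows partition-scoped debug output
cd('Servers/myserver/ServerDebug/myserver')
cmo.setPartitionDebugLoggingEnabled(true)
# step 2: name the debug flags to be enabled for the partition
cd('/Partitions/partition1/PartitionLog/partition1')
set('EnabledServerDebugAttributes', jarray.array([String('DebugJDBCSQL')], String))
save()
activate()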

Diagnostic System Module for Partitions

The Diagnostic System Module provides Harvester and Policies and Actions components that can be defined within a resource group or resource group template deployed to a Partition.

Harvester

The WLDF Harvester provides the capability to poll MBean metric values periodically and archive the data in the harvested data archive for later diagnosis and analysis. All WebLogic Server Runtime MBeans visible to the partition, including the PartitionRuntimeMBean and its child MBeans as well as custom MBeans created by applications deployed to the partition, are allowed for harvesting. The Harvester configuration defines the sampling period, the MBean types and instance specification, and their respective MBean attributes that need to be collected and persisted.

Note that the archived harvested metrics data is available from the WLDF Accessor component as described earlier.

The following is an example of harvester configuration persisted in the Diagnostic System Resource XML descriptor.

<harvester>
  <enabled>true</enabled>
  <sample-period>2000</sample-period>
  <harvested-type>
    <name>weblogic.management.runtime.ServletRuntimeMBean</name>
    <harvested-attribute>ExecutionTimeAverage</harvested-attribute>
    <namespace>ServerRuntime</namespace>
  </harvested-type>
  <harvested-type>
    <name>sandbox.mbean.SandboxCustomMBeanImpl</name>
    <namespace>ServerRuntime</namespace>
  </harvested-type>
</harvester>

For further details refer to the WLDF Harvester documentation.

Policies and Actions

Policies are rules, defined in Java Expression Language (EL), for conditions that need to be monitored. WLDF provides a rich set of actions that can be attached to policies and that get triggered if the rule condition is satisfied.

The following types of rule based policies can be defined.

  • Harvester - Based on WebLogic Runtime MBean or Application owned custom MBean metrics.
  • Log events - Log messages in the server and domain logs.
  • Instrumentation Events - Events generated from Java EE application instrumented code using WLDF Instrumentation.

The following snippets show the configuration of the policies using the EL language.

<watch>
  <name>Session-Count-Watch</name>
  <enabled>true</enabled>
  <rule-type>Harvester</rule-type>
  <rule-expression>wls.partition.query("com.bea:Type=WebAppComponentRuntime,*", "OpenSessionsCurrentCount").stream().anyMatch(x -> x >= 1)</rule-expression>
  <schedule>
    <minute>*</minute>
    <second>*/2</second>
  </schedule>
  <notification>jmx-notif1</notification>
</watch>
<watch>
  <name>Partition-Error-Log-Watch</name>
  <rule-type>Log</rule-type>
  <rule-expression>log.severityString == 'Error'</rule-expression>
  <notification>jmx-notif1,r1,r2</notification>
</watch>
<watch>
  <name>Inst-Trace-Event-Watch</name>
  <rule-type>EventData</rule-type>
  <rule-expression>instrumentationEvent.eventType == 'TraceAction'</rule-expression>
  <notification>jmx-notif1</notification>
</watch>

The following types of actions are supported for partitions:

  • JMS
  • SMTP
  • JMX
  • REST
  • Diagnostic Image

For further details refer to the Configuring Policies and Actions documentation.

Instrumentation for Partition Applications

WLDF provides a byte code instrumentation mechanism for Java EE applications deployed within a partition scope. The Instrumentation configuration for the application is specified in the META-INF/weblogic-diagnostics.xml descriptor file.  

This feature is available only if the WebLogic System Administrator has enabled server level instrumentation. Also it is not available for applications that share class loaders across partitions.

The following shows an example WLDF Instrumentation descriptor.

<instrumentation>
  <enabled>true</enabled>
  <wldf-instrumentation-monitor>
    <name>Servlet_Before_Service</name>
    <enabled>true</enabled>
    <action>TraceAction</action>
  </wldf-instrumentation-monitor>
  <wldf-instrumentation-monitor>
    <name>MyCustomMonitor</name>
    <enabled>true</enabled>
    <action>TraceAction</action>
    <location-type>before</location-type>
    <pointcut>execution( * example.util.MyUtil * (...))</pointcut>
  </wldf-instrumentation-monitor>
</instrumentation>

For further details refer to the WLDF Instrumentation documentation.

Diagnostic Image

The Diagnostic Image is similar to a core dump; it captures the state of the different WebLogic Server subsystems in a single image zip file. WLDF supports capturing partition specific diagnostic images.

Diagnostic images can be captured in the following ways:

  • From WLST by the partition administrator.
  • As the configured action for a WLDF policy.
  • By invoking the captureImage() operation on the WLDFPartitionImageRuntimeMBean.

Images are output to the logs/diagnostic_images directory in the partition file system.
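
As an illustration, a partition administrator could trigger a capture over WLST with something like the following hedged sketch (the MBean path is an assumption; consult the WLDFPartitionImageRuntimeMBean documentation for the exact navigation):

serverRuntime()
cd('PartitionRuntimes/partition1/WLDFPartitionRuntime/WLDFPartitionRuntime/WLDFPartitionImageRuntime/Image')
cmo.captureImage()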

The image for a partition contains diagnostic data from different sources such as:

  • Connector
  • Instrumentation
  • JDBC
  • JNDI
  • JVM
  • Logging
  • RCM
  • Work Manager
  • JTA

For further details refer to the WLDF documentation.

RCM Runtime Metrics

WebLogic Server 12.2.1 introduced the Resource Consumption Management (RCM) feature. This feature is only available with Oracle JDK 8u40 and above.

To enable RCM, add the following command line switches on Server start up:

-XX:+UnlockCommercialFeatures -XX:+ResourceManagement -XX:+UseG1GC
Please note that RCM is not enabled by default in the startup scripts.

The PartitionResourceMetricsRuntimeMBean, which is a child of the PartitionRuntimeMBean, provides a number of useful metrics for monitoring purposes.

Attribute Getter - Description
isRCMMetricsDataAvailable() - Checks whether RCM metrics data is available for this partition.
getCpuTimeNanos() - Total CPU time spent, measured in nanoseconds, in the context of a partition.
getAllocatedMemory() - Total allocated memory in bytes for the partition. This metric value increases monotonically over time.
getThreadCount() - Number of threads currently assigned to the partition.
getTotalOpenedSocketCount() / getCurrentOpenSocketCount() - Total and current number of sockets opened in the context of a partition.
getNetworkBytesRead() / getNetworkBytesWritten() - Total number of bytes read/written from sockets for a partition.
getTotalOpenedFileCount() / getCurrentOpenFileCount() - Total and current number of files opened in the context of a partition.
getFileBytesRead() / getFileBytesWritten() - Total number of file bytes read/written in the context of a partition.
getTotalOpenedFileDescriptorCount() / getCurrentOpenFileDescriptorCount() - Total and current number of file descriptors opened in the context of a partition.
getRetainedHeapHistoricalData() - Returns a snapshot of the historical data for retained heap memory usage for the partition. Data is returned as a two-dimensional array for the usage of retained heap scoped to the partition over time. Each item in the array contains a tuple of [timestamp (long), retainedHeap (long)] values.
getCpuUtilizationHistoricalData() - Returns a snapshot of the historical data for CPU usage for the partition. The CPU utilization percentage indicates the percentage of CPU utilized by a partition with respect to the CPU available to WebLogic Server. Data is returned as a two-dimensional array for the CPU usage scoped to the partition over time. Each item in the array contains a tuple of [timestamp (long), cpuUsage (long)] values.

Please note that the PartitionMBean.RCMHistoricalDataBufferLimit attribute limits the size of the data arrays for Heap and CPU.
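
As a quick illustration, these getters can be read over WLST. This is a hedged sketch; 'partition1' is illustrative and RCM must be enabled as described above:

serverRuntime()
cd('PartitionRuntimes/partition1/PartitionResourceMetricsRuntime/PartitionResourceMetricsRuntime')
if cmo.isRCMMetricsDataAvailable():
    print 'CPU time (ns):', cmo.getCpuTimeNanos()
    print 'Allocated memory (bytes):', cmo.getAllocatedMemory()
    print 'Open file descriptors:', cmo.getCurrentOpenFileDescriptorCount()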

Java Flight Recorder

WLDF provides integration with the Java Flight Recorder, which enables WebLogic Server events to be included in the JVM recording. WebLogic Server events generated in the context of work being done on behalf of a partition are tagged with the partition-id and partition-name. These events and the flight recording data are only available to the WebLogic System Administrator.

Conclusion

WLDF provides a rich set of tools to capture and access different types of monitoring data that are very useful for troubleshooting and diagnosis tasks. This blog provided an introduction to the WLDF surface area for partition administrators. You are encouraged to take a deeper dive and explore these features further and leverage them in your production environments. More detailed information is available in the WLDF documentation for WebLogic Server and the Partition Monitoring section in the WebLogic Server MT documentation.

Partition Import/Export

This article discusses common use case scenarios for Import/Export Partition in WebLogic Multitenant.
[Read More]

Tuesday Oct 27, 2015

Resource Consumption Management in WebLogic Server MultiTenant 12.2.1 to Control Resource Usage of Domain Partitions

[This blog post is part of a series of posts that introduce you to new features in the recently announced Oracle WebLogic Server 12.2.1, and introduces an exciting performance isolation feature that is part of it.] 

With the increasing push to "doing more with less" in the enterprise, system administrators and deployers are constantly looking to increase density and improve hardware utilization for their enterprise deployments. The support for micro-containers/pluggable Domain Partitions in WebLogic Server Multitenant helps system administrators collocate their existing siloed business critical Java EE deployments into a single Multitenant domain.

Say a system administrator creates two Partitions "Red" and "Blue" in a shared JVM (a WebLogic Multitenant Server instance), and deploys Java EE applications and resources to them. The system administrator would like to avoid the situation where one partition's applications (say the "Blue" partition's) "hog" all shared resources in the Server instance's JVM (Heap) or the operating system (CPU, File descriptors), negatively affecting the "Red" partition applications' access to these resources.


Runtime Isolation

Therefore, while consolidating existing enterprise workloads into a single Multitenant Server instance, system administrators require better control (track, manage, monitor, control) over the usage of shared resources by collocated Domain Partitions so that:

 

  • One Partition doesn't consume all available resources and exhaust other collocated partitions. This helps a system administrator plan for, and support, consistent performance for all collocated partitions.
  • Fair and efficient allocation of available resources is provided to collocated partitions. This helps a system administrator confidently place complementary workloads in the same environment, while achieving enhanced density and great cost-savings.


Control Resource Consumption Management


Resources

In Fusion Middleware 12.2.1, Oracle WebLogic Server Multitenant supports establishing resource management policies on the following resources:

 

  • Heap Retained: Track and control the amount of Heap retained by a Partition
  • CPU Utilization: Track and control the amount of CPU utilization used by a Partition
  • Open File Descriptors: Track and control the amount of open file descriptors (due to File I/O, Sockets etc) used by a Partition.


    Recourse Actions

    When a trigger is breached, a system administrator may want to react by automatically taking certain recourse actions. The following actions are available out of the box with WebLogic.

     

    • Notify: inform administrator that a threshold has been surpassed
    • Slow: reduce partition’s ability to consume resources, predominantly through manipulation of work manager settings – should cause system to self-correct in certain situations
    • Fail: reject requests for the resource, i.e. throw an exception - only supported for file descriptors today
    • Stop: As an extreme step, initiate the shut down sequence for the offending partition on the current server instance


      Policies

      The Resource Consumption Management feature in Oracle WebLogic Server Multitenant enables a system administrator to specify resource consumption management policies on resources and direct WebLogic to automatically take specific recourse actions when the policies are violated. A policy can be created as one of the following two types:

       

      • Trigger: This is useful when resource usage by Partitions is predictable, and takes the form "when a resource's usage by a Partition crosses a Threshold, take a recourse action".

       

      For example, a sample resource consumption policy that a system administrator may establish on a "Blue" Partition to ensure that it doesn't run away with all the Heap looks like: When the “Retained Heap” (Resource) usage for the “Blue” (Partition) crosses “2 GB” (Trigger), “stop” (Action) the partition.

       

      • Fair share: Similar to the Work Manager fair share policy in WebLogic, this policy allows a system administrator to specify "shares" of a bounded-size shared resource to a Partition. WebLogic then ensures that this resource is shared effectively (yet fairly) by competing consumers while honouring the "shares" allocated by the system administrator. 

       

      For example, a system administrator who prefers the "Red" partition over "Blue" may set the fair-share for the "CPU Utilization" resource in the ratio 60:40 in favour of "Red".

      When complementary workloads are deployed to collocated partitions, fair-share policies also help achieve maximal utilization of resources. For instance, when there are no or limited requests for the "Blue" partition, the "Red" partition would be allowed to "steal" and use all the available CPU time. When traffic resumes on the "Blue" partition and there is contention for CPU, WebLogic would allocate CPU time as per the fair-share ratio set by the system administrator. This helps system administrators reuse a single shared infrastructure, saving infrastructure costs in turn, while still retaining control over how those resources are allocated to Partitions.

       

      Policy configurations could be defined at the domain level and reused across multiple pluggable Partitions, or they can be defined uniquely for a Partition. Policy configurations are flexible enough to support different combinations of trigger-based and fair-share policies for multiple resources to meet your unique business requirements. Policies can also be dynamically reconfigured without requiring a restart of the Partition.

      The picture below shows how a system administrator could configure two resource consumption management policies (a stricter "trial" policy and a lax "approved" policy) and how they could be assigned to individual Domain Partitions. Heap and CPU resource usage by the two domain Partitions is then governed by the policies associated with each of them.

      WLS 12.2.1 RCM resource manager sample schematic


      Enabling Resource Management

      The Resource Consumption Management feature in WebLogic Server 12.2.1 is built on top of the resource management support in Oracle JDK 8u40. WebLogic RCM requires Oracle JDK 8u40 and the G1 Garbage Collector. In WebLogic Server Multitenant, you would need to pass the following additional JVM arguments to enable Resource Management:

      -XX:+UnlockCommercialFeatures -XX:+ResourceManagement -XX:+UseG1GC


      Track Resource Consumption

      Resource consumption metrics are also available on a per partition basis, provided through a monitoring MBean, the PartitionResourceMetricsRuntimeMBean. Detailed usage metrics on a per partition basis are available through this monitoring MBean, and system administrators may use these metrics for the purposes of tracking, sizing, analysis, monitoring, and for configuring business-specific Watch and Harvester WLDF rules.


      Conclusion

      Resource Consumption Managers in WebLogic MultiTenant help provide the runtime isolation and protection needed for applications running in your shared and consolidated environments.


      For More Information

      This blog post only scratches the surface of the possibilities with the Resource Consumption Management feature. For more details on this feature, how you can configure resource consumption management policies in a consolidated Multitenant domain using the WebLogic Scripting Tool (WLST) and Fusion Middleware Control, and best practices, please refer to the detailed technical document at "Resource Consumption Management (RCM) in Oracle WebLogic Server Multitenant (MT) - Flexibility and Control Over Resource Usage in Consolidated Environments".

      The Weblogic MultiTenant documentation's chapter "Configuring Resource Consumption Management" also has more details on using the feature.

      This feature is a result of deep integration between the Oracle JDK and WebLogic Server. If you are attending Oracle OpenWorld 2015 in San Francisco, head over to the session titled "Multitenancy in Java: Innovation in the JDK and Oracle WebLogic Server 12.2.1" [CON8633] (Wednesday, Oct 28, 1:45 p.m. | Moscone South—302) to hear us talk about this feature in more detail.

      We are also planning a series of videos on using the feature and we will update this blog entry as they become available.


      [Read More]

      Domain Partitions for Multi-tenancy in WebLogic Server 12.2.1

      One of the signature enhancements in WebLogic Server 12.2.1 is support for multi-tenancy. Some of the key additions to WebLogic for multi-tenancy are in the configuration and runtime areas where we have added domain partitions, resource groups, and resource group templates. This post gives you an overview of what these are and how they fit together.

      Domain Partition

      A domain partition (partition for short) is an administrative and runtime slice of a WebLogic domain. In many ways you can think of a partition as a WebLogic micro-container.

      In 12.1.3 and before, you define managed servers and security realms in the domain and you deploy apps and resources to the domain. You target the apps and resources to managed servers and clusters to control where they will run.

      And in 12.2.1 you can still do all that.

      But you can also create one or more partitions in the domain. Each partition will contain its own apps and resources. 

      You can target each partition to a managed server cluster where you want the partition’s apps and resources to run. (More about targeting later.)

      You can start and shut down each partition independently, even if you have targeted multiple partitions to the same cluster. You can have different partitions use different security realms.

      Plus, for each partition you can identify partition administrators who can control that partition. You can begin to see how a partition is a micro-container — a slice of the domain which contains it.

      Resource Group and PaaS

      Suppose you have a set of Java EE applications and supporting resources that are a human resources solution, and you have another set of apps and their resources that are a finance package. In 12.1.3 you would deploy all the apps and all the resources to the domain. You might target them to different clusters — in fact you would have to do that if you wanted to start up and shut down the two packages independently.

      Or, in 12.1.3 you could use separate WebLogic domains — one for the HR apps and one for the finance ones. That would give you more control over targeting and starting and shutting down, but at the cost of running more domains.

      With 12.2.1 we introduce the resource group which is simply a collection of related Java EE apps and resources. You can gather the HR apps and resources into one resource group and collect the finance apps and resources into another resource group.

      Things get even more interesting because each partition can contain one or more resource groups. 

      If you were thinking about running two separate domains before — one domain for HR and one for finance — you could instead use one domain containing two partitions — one partition for HR and one for finance. With our simple example the HR partition would have a resource group containing the HR apps and resources, and the finance partition would have a resource group containing the finance apps and resources.

      In 12.2.1 you actually target each resource group. If it makes sense, you could target the resource groups in both the HR and finance partitions to the same cluster. And, because you can control the partitions independently, you can start or shut down either one without disturbing the other. The managed servers in the cluster stay up throughout. When you start up or shut down a partition, it is the apps and resources in the partition’s resource groups that are started up or shut down, not the entire server.

      People often call this the consolidation use case — you are consolidating multiple domains into one by mapping separate domains to separate partitions in a single consolidated domain. This is also called the platform-as-a-service (PAAS) use case. Each partition is used as a WebLogic “platform” (a micro-container) and you deploy different apps and resources to each.

      Resource Group Template and SaaS

      There is an entirely different way to use partitions and resource groups to solve different sorts of problems, but we need one more concept to do it well.

      Suppose you wanted to offer those HR and finance suites of apps, running in your data center, as services to other enterprises.

      In 12.1.3 you might create a separate domain for each client and deploy the HR and finance apps and resources the same way in each domain. You might use a domain template to simplify your job, but you’d still have the overhead of multiple domains. And one domain template might not work too well if one customer subscribed to your HR service but another subscribed to HR and finance.

      In 12.2.1 terms this sounds like two resource groups — the HR resource group and the finance resource group — but each running once for each of your enterprise customers. Using one partition per client sounds just right, but you would not want to define the two resource groups repetitively and identically in each customer's partition.

      WebLogic 12.2.1 introduces the resource group template for just such a situation.

      You define the HR apps and resources in a resource group template within the WebLogic domain. Do the same for the finance apps and resources in another resource group template. Create a partition for each of your enterprise customers, and as before, in each partition, you create a resource group. 

      But now the resource group does not itself define the apps and resources but instead references the HR resource group template. And, if one customer wants to use both suites you create a second resource group in just that customer's partition, linked to the second resource group template.

      When you start the partition corresponding to one of your customers, WebLogic essentially starts up that customer's copies of the apps and resources as defined in the resource group template. When you start another client’s partition, WebLogic starts that customer's copies of the apps and resources.

      This is the classic software-as-a-service (SAAS) use case. And, if you replace the word “customer” with the word “tenant” in this description you see right away how simply WebLogic 12.2.1 supports multi-tenancy through partitions, resource groups, and resource group templates.

      There are other ways to use resource group templates besides offering packaged apps as a service for sale to others, but this example helps to illustrate the basics of how you can use these new features in WebLogic Server together to solve problems in whole new ways.

      Some Details

      The multi-tenancy support in WebLogic 12.2.1 is very rich. We’ve just scratched the surface here in this posting, and as you might expect there are other related features that make this all work in practice. This posting is not the place to cover all those details, but there are a few things I want to mention briefly.

      Resource Overriding

      About the SAAS use case you might be thinking “But I do not want each tenant’s partition configured exactly the same way as set up in the resource group template. For example, different tenants need to use different databases, so the JDBC connection information has to be different for different partitions.”

      WebLogic 12.2.1 lets you create overrides of the settings for apps and resources that are defined in a resource group template, and you can do so differently for each partition. For common cases (such as JDBC connection information) these overrides expose the key attributes you might need to adjust. For each partition and for each resource within the partition you can set up a separate override.

      We intend these overrides to cover most of the cases, but if you need even more control you can create — again, separately for each partition — a resource deployment plan. If you are familiar with application deployment plans in WebLogic the idea is very much the same, except applied to the non-app resources defined in resource group templates.

      To illustrate, here is what WebLogic does (conceptually, at least) when you start a partition containing resource groups linked to resource group templates:

      • The system reads all the resource settings from the resource group template.
      • If there is a resource deployment plan for the partition, the system applies any adjustments in the plan to the resource settings.
      • Finally, if you have created any overrides for that partition, the system applies them.
      • WebLogic uses the resulting resource settings to create that partition’s copies of the resources defined by the template.

      Targeting

      In this post I’ve mentioned targeting but WebLogic 12.2.1 lets you set up targeting of partitions and resource groups ranging from simple to very sophisticated (if you need that). My colleague Joe DiPol has published a separate posting about targeting.

      What Next?

      Here is the WebLogic Server 12.2.1 documentation that describes all the new features, and this part specifically describes the new multi-tenancy features. 

      Remember to check out Joe DiPol's posting about targeting.

      For even more about the release of WebLogic Server please browse other postings on the WebLogic Server blog.


      Monday Oct 26, 2015

      WLS Data Source Multitenancy

      See the Blog announcing Oracle WebLogic Server 12.2.1 for more details on Multitenancy and other new features.

      The largest and most innovative feature in WebLogic Server (WLS) 12.2.1 is Multitenancy.  It is integrated with every component in the application server.  As part of the Multi-tenancy effort one of the key concepts being introduced is the notion of a slice of the domain which is referred to as a Partition or Domain Partition.  A Partition defines applications and resources for a specific Tenant where each Partition's configuration and runtime are isolated from other Partitions in the Domain.  Multi-tenancy is expected to reduce administrative overhead associated with managing multiple domains and application deployments, and to improve the density of these deployments, such that operational and infrastructure costs are reduced.

      The concepts of the WLS MT feature are described in WebLogic Server Multitenant (MT). The details for MT data sources are in the Configuring JDBC chapter. This article summarizes the use of data sources in an MT environment and focuses on finding your way around the administrative console and Fusion Middleware Control.

      When working without the WLS Multi Tenant feature, a data source (DS) may be defined as a system resource or deployed at the domain level. When using the Multi Tenant feature, a data source may also be defined in the following scopes.

      • Domain
        • DS with global scope
        • Domain-level Resource Group with DS with global scope
        • Domain-level Resource Group Template with DS
      • Partition
        • Partition-level Resource Group with DS
        • Partition-level Resource Group based on Resource Group Template
        • Partition-level JDBC System Resource Override
        • Partition-level Resource Deployment Plan
        • Object deployed at the partition level

      The following table summarizes the various deployment types and the mechanism to update or override the data source definition.

      Data Source Deployment Type - Parameter Override Support

      Domain-level System Resource, optionally scoped in RG - No override support; change the DS directly.
      RGT System Resource - Change the DS directly or override in the RG derived from the RGT.
      Partition-level System Resource in RG - No override support; change the DS directly.
      Partition-level System Resource in a RG based on RGT - JDBC System Override or Resource deployment plan.
      Application Scoped/Packaged Data Source deployed to domain or partition - Application Deployment plan.
      Standalone Data Source Module deployed to domain or partition - Application Deployment plan.
      Data Source Definitions (Java EE 6) deployed to domain or partition - No override support.

      Creating a data source that is scoped to a Domain-level RG or in a Partition is similar to creating a domain-level system resource. The only additional step is to specify the scope. In the administration console or Fusion Middleware Control (FMWC), there is a drop-down on the first step of creation that lists the available scopes in which to create a data source. In WLST, it’s necessary to create the data source using createJDBCSystemResource on the owner MBean (the MBean for the domain, RG, or RGT).

      The WLST example at Configuring JDBC Data Sources: WLST Example is very useful in setting up a partitioned domain. It demonstrates creating a virtual target, partition, RGT, RG, and data sources at all levels.
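
      To give a flavor of the WLST approach, here is a hedged sketch of the resource-group case (the partition, resource group, and connection details are illustrative; see the documented example above for a complete script):

      from java.lang import String
      import jarray

      edit()
      startEdit()
      # look up the owner MBean: here, a resource group in a partition
      rg = cmo.lookupPartition('partition1').lookupResourceGroup('partition1-rg')
      ds = rg.createJDBCSystemResource('ds-in-rg')
      jdbc = ds.getJDBCResource()
      jdbc.setName('ds-in-rg')
      jdbc.getJDBCDataSourceParams().setJNDINames(jarray.array([String('jdbc/ds-in-rg')], String))
      jdbc.getJDBCDriverParams().setUrl('jdbc:oracle:thin:@//dbhost:1521/otrade')
      jdbc.getJDBCDriverParams().setDriverName('oracle.jdbc.OracleDriver')
      # (driver properties and credentials omitted for brevity)
      save()
      activate()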

      The remainder of this article focuses on the graphical user interfaces. 

      In the administration console, start by selecting the Data Source summary from the Home page. In this first figure, we see the four data sources that were created by running the WLST script.  One is global and the remaining three have various scopes within the partition. 

      If we click on the “ds-using-template” data source and look at the connection pool properties, we see the original values for the data source based on the template.  The overrides don’t show up at this level.

      Selecting New on the Data Source Summary page and creating a Generic Data Source, we can see the drop-down Scope on the first page.  It changes based on the scopes that are currently available.

      Back on the Home page, we can select Domain Partitions and we see the one “partition1” partition with two resource groups.

      If we click on “partition1” and go to the overrides page, we can see the JDBC override for “ds-in-template”. Note that the URL has now been overridden from using “otrade” to “otrade2”.

      Clicking on the “ds-in-template” link allows for changing the overrides. Note that on this page, we can also see that the user has been overridden to be “scott”.

      You can create a new JDBC System Resource Override by selecting New. The drop-down box for Data Source lists the available resources for creating the override. The administration console currently lists all RG in the partition. However, the intent is that only RG derived from RGT should be allowed to have an override. Non-derived RG should be updated directly so it’s recommended not to override such groups (it may be removed in the future).


      Going back to the Home page, select Data Sources, select a Data Source, select Security, select Credential Mappings, then New; here we can enter new User, Remote User, Remote Password triplets.

      It’s possible to look at lists of data sources at various levels. From the domain level, the monitoring tab on the Data Sources page shows all running data sources in all scopes.

      From the Partition page, select “partition1”, select Resource Group, select “partition1-rg”, select Services, then select JDBC; we see the one data source defined in this scope.

      Partition-scoped deployments are handled like non-partition scoped deployments: you start by selecting Deployments from the Home page and then find the ear or war file that you want to deploy. On the first page of the “Install Application Assistant”, you can select the Scope as shown in the following figure.

      Once you finish deploying the ear or war file, you can see the associated modules in the console, with the associated scope by clicking on the associated link. Initially there is no deployment plan.

      Creating an application deployment plan is a bit complex.  It's recommended to use the administration console to create it automatically.  Simply go to the deployed data source, change the configuration, and save the changes.  The console creates the associated deployment plan and fills in the Deployment Plan name.

      If you want to override attributes of a RGT-derived partition datasource that are not one of user, password, or URL, you will need to create a Resource Deployment Plan. There is no automation in the console for doing this. You can massage an Application Deployment Plan to look like a Resource Deployment Plan or create one from scratch using an XML editor. Here is an equivalent Resource Deployment Plan.  To use the resource deployment plan, go to Home, Domain Partitions, click on the Partition link, and type in the pathname in the Resource Deployment Plan Path.

      The capabilities in FMWC are similar to the administrative console but with a different look and feel. FMWC currently does not have security pages; data source security configuration must be done in the administration console.

      If you select JDBC Data Sources from the WebLogic Domain drop-down, you will see something like this.

      Selecting Create brings up a page that includes a Scope drop-down.

      Selecting a resource group name brings up a page to edit the RG.

      Selecting an existing DS brings up a page to edit the DS.

      Selecting a partition name brings up a page to edit the Partition attributes.

      If you are looking for the Partition system resource overrides, select the partition name, select the Domain Partition drop-down, select Administration, then select Resource Overrides.

      The page looks like this.

      This page will list every RG that is derived from a RGT. If there is no existing override, the “Has Overrides” column will not have a check and clicking on “Edit Overrides” will create a new override. If “Has Overrides” has a check, clicking on “Edit Overrides” will update the existing overrides, as in the following figure.

      The focus of this article has been on configuration of data sources in a Multi Tenant environment.  It is the intent that using an application in a Multi Tenant environment should be largely transparent to the application software.  The critical part is to configure the application server objects and containers to be deployed in the partitions and resource groups where they are needed.


      Wednesday Jul 29, 2015

      WebLogic Update @ Voxxed Days Istanbul

      It's relatively rare that Java-focused conferences have clearly WebLogic-centric sessions. This is understandable, as conference organizers must carefully balance education, vendor-neutrality, sharing useful information, and outright sales pitches. The distinction is very tenuous and historically frequently abused at Java conferences. As a result, Java conference organizers (and attendees) often choose to err on the side of caution and avoid content focused on commercial products (note that while WebLogic is beyond a doubt a commercially licensed product, developers can use it freely on their own local machines with an OTN license). An unfortunate side effect of this problem is that many developers remain woefully unaware of the changes happening in mission critical bits of industry infrastructure such as WebLogic. The exception to this unfortunate situation is events like Oracle OpenWorld and other Oracle technology centric conferences where WebLogic is far better represented.

      For these reasons it was a breath of fresh air to be able to deliver a brief WebLogic-centric session at Voxxed Days Istanbul 2015. I am so grateful to the organizers for lending me the benefit of the doubt and recognizing the distinction between selling and informing current/prospective users about important technological changes that can help their organizations. Titled "What's New in WebLogic 12.1.3 and Beyond", the talk essentially covers the very important hard work that we have already done in WebLogic 12.1.3, including supporting some of the most critical Java EE 7 APIs, as well as the fundamental changes coming soon in WebLogic 12.2.1, including full Java EE 7 platform support. Below is the slide deck for the talk (click here if you can't see the embedded slide deck):

      If you have not yet taken a look at WebLogic 12.1.3 and the road map for 12.2.1, the deck should offer a quick way to do so. Here is the abstract for the talk to give you better context:

      WebLogic 12.1.3 was released about a year ago. It brings a large set of changes including support for some key new Java EE 7 APIs such as WebSocket, JAX-RS 2, JSON-P and JPA 2.1, support for Java SE 8, WebSocket fallback support, support for Server-Sent Events (SSE), improved Maven support, enhanced REST administration support, Oracle Database 12c driver support and much, much more. In this session we will take a detailed tour of these features. In addition we will also cover updated WebLogic support in the Oracle Cloud, the new Oracle public Maven repository, using WebLogic with Arquillian for testing, as well as official Docker support for WebLogic. Towards the end of the session we will discuss what's coming in WebLogic 12.2.1 this year including full support for Java EE 7, multi-tenancy and more.

      Besides the brief WebLogic talk I also covered Java EE 7 and Java EE 8 at Voxxed Days Istanbul as well as the Istanbul and Ankara JUG. More details of the event are posted on my personal blog.

      Friday Jul 17, 2015

      Accessing WebLogic Logs via REST

      One of the most significant changes in the WebLogic 12.1.3 release is improvements in the REST management interface. Oracle ACE Director and WebLogic expert Dr. Frank Munz does a very nice job summarizing the changes on his blog. The REST management capability is really quite a nice addition to the existing DevOps oriented capabilities such as WLST and of course the admin console. One of the very interesting things you can do via the REST management interface in WebLogic 12.1.3 is easily access all WebLogic logs. Dr. Frank Munz explains step by step how to do this in another excellent blog entry, well worth a read.

      The best way to learn the details of the REST management capabilities is of course always the WebLogic documentation.

      Thursday Jun 18, 2015

      Managing Logs in WebLogic

      Logging is your first line of defense in terms of administering, debugging and monitoring any part of the data center, and especially the application server. WebLogic generates a number of very helpful log files for that reason. In addition, WebLogic also provides ways to robustly manage these log files, in terms of tuning things like log rotation and filtering. Ahmed Aboulnaga introduces some of these capabilities in a recent article on OTech Magazine (his article is mostly focused on the admin console).

      The most detailed and up-to-date way to learn about WebLogic logging is of course the WebLogic documentation. For example, a couple of important logging aspects the article does not get into are configuring the logs themselves and easily viewing the logs through the WebLogic console.
