Friday Dec 11, 2015

Introducing WLS JMS Multi-tenancy

Introduction

Multi-tenancy (MT) is the main theme of the WebLogic Server 12.2.1 release. It enhances the Oracle Platform for Software-as-a-Service (SaaS) and Platform-as-a-Service (PaaS) use cases. The main benefits of WebLogic multi-tenancy are increased density, tenant isolation, and simplified cloud configuration and management.

This article introduces multi-tenancy support for WebLogic JMS, the messaging component in WebLogic Server. 

Key MT Concepts

Some of you may have already learned from other blogs (for example Tim’s blog about Domain Partitions for Multi Tenancy) about some of the key concepts in WebLogic MT. But for the benefit of a broader audience, here is a quick review of those concepts before we get into JMS specifics.

WebLogic Multi-tenancy introduces the concepts of domain partition (also known as partition), resource group (RG), and resource group template (RGT).  

A Partition is conceptually a slice of a WebLogic domain, where resources and applications for different tenants can be configured and deployed in isolation on the same WebLogic server or in the same cluster. This improves overall density. Partitions define the isolation boundaries for JNDI, security, runtime MBeans, application persistent data, work managers and logging.  Furthermore, Partitions running on the same server instance have their own lifecycle, for example, a partition can be shut down at any time without impacting other partitions.

A Resource Group is simply a collection of functionally related resources and applications. A RG can be targeted and managed independently of other resource groups in the same partition. Resource groups can be defined not only inside a partition, but also at the domain level. As with partitions, RGs in the same partition (or at the domain level) that are running on the same server instance have their own lifecycle.

A Resource Group Template provides a templating mechanism to reduce the administrative overhead of configuring WebLogic resources and applications for SaaS use cases where the same resources and applications need to run in multiple partitions. It offers a configure-once-and-use-everywhere capability, where a common set of configuration artifacts can be specified in a RGT, and can then be referenced from RGs in different partitions. A RGT is not targetable, and resources in a RGT will not deploy unless the RGT is referenced by a deployed RG.

Note that the resources and applications configured or deployed in a partition (directly inside RGs or via RGs referencing a RGT) are scoped to that partition.

Understanding JMS Resources in MT

In a similar way to other WebLogic configuration artifacts, JMS resources such as JMS servers, SAF agents, path services, persistent stores, messaging bridges, JMS system modules, app-deployment JMS modules, Java EE 7 resource definition modules, and JMS applications can all now be configured and deployed in a RG, either directly or via RGTs, as well as in the ‘classic’ way, which is always directly at the domain level. Note that it is perfectly acceptable to combine both partition and ‘classic’ configuration together in the same domain.   

Resources and applications in different partitions are isolated from one another. For example, you can configure a JMS destination with the same JNDI name in multiple partitions running in the same cluster, and these destinations will be managed via independent runtime MBean instances, and can be independently secured via partition-specific security realms. In addition to non-persistent state, the persistent data (for example, persistent messages and durable subscriptions) in such JMS destinations are also isolated from one another.

Configuring JMS Resources in MT

The following configuration snippets show how JMS resources configured in a multi-tenant environment differ from a traditional non-MT JMS configuration.
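The snippet below is a simplified, illustrative sketch of what a partition-scoped JMS configuration might look like in config.xml; the element names and nesting are approximate and the real descriptor contains more detail. In a 'classic' configuration, the same JMS elements would appear directly under the domain and carry their own targeting.

<partition>
  <name>partition1</name>
  <resource-group>
    <name>rg1</name>
    <!-- partition-scoped JMS resources; they are not individually targeted to servers or clusters -->
    <file-store>
      <name>FileStore1</name>
    </file-store>
    <jms-server>
      <name>JMSServer1</name>
      <persistent-store>FileStore1</persistent-store>
    </jms-server>
    <jms-system-resource>
      <name>JMSModule1</name>
      <sub-deployment>
        <name>sub1</name>
        <target>JMSServer1</target>
      </sub-deployment>
      <descriptor-file-name>jms/jmsmodule1-jms.xml</descriptor-file-name>
    </jms-system-resource>
    <!-- the whole resource group is targeted via a virtual target -->
    <target>VirtualTarget1</target>
  </resource-group>
  <available-target>VirtualTarget1</available-target>
</partition>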

As you can see, partition-scoped JMS resources are embedded in a resource group in a partition (alternatively, they can be embedded in a Resource Group Template, which is in turn referenced by a Resource Group).

In addition, resources in a resource group are never individually targeted. Instead, the whole resource group is targeted via a virtual target, which is itself targeted in the normal way. If a RG is targeted to a virtual target that is in turn targeted to a WL cluster, all JMS resources and applications in the RG will also be targeted to that cluster.

As we will see later, a virtual target not only provides the targeting information of a RG, it also defines the access point of a partition. For more information about resource group targeting and virtual targets, check out Joe's blog about Partition Targeting and Virtual Targets.

You might have noticed that I did not discuss configuring individual JMS resources for each server in a WL cluster, nor did I mention configuring “migratable targets” to add high availability. I have good news for you! Neither is needed or even supported in MT. They have been replaced with greatly enhanced WebLogic JMS cluster-targeting and HA support; my colleague Kathiravan blogs about it in 12.2.1 WebLogic JMS: Dynamic Scalability and High Availability Made Simple.

Although system level JMS resources (such as JMS servers, SAF agents, persistent stores, messaging bridges, path services, and JMS modules) are scoped differently in a MT configuration, their respective attributes are specified in exactly the same way as in a non-MT configuration.

Various validation and targeting rules are enforced to ensure that WebLogic MT JMS configuration is isolated, self contained, and easy to manage. One basic and high-level rule in configuring JMS in MT is that a JMS configuration artifact may only reference other configuration artifacts that are in the same scope. For example, a resource group scoped JMS server can only reference a persistent store that is also defined in the same resource group. These rules are enforced by configuration validation checks and by errors and warnings that are logged at runtime.

Accessing JMS Resources in MT

A JMS application designed for multi-tenancy accesses JMS resources in the same way as ‘classic’ JMS applications, by looking up JMS resources in a JNDI name space. The difference is that in a MT environment, a WebLogic JNDI InitialContext is associated with a particular scope (i.e. the domain or a partition), when it is created. 

A MT application can have multiple JNDI contexts that refer to the same WebLogic cluster but are scoped to different partitions. An initial context, once created, sticks to its scope until it is closed. This means that all JNDI operations using a partition-scoped JNDI context instance are performed using the partition-specific area of the JNDI space.

The scope of a JNDI context is determined by the “provider URL” supplied when the initial context is created.

Once an application successfully establishes a partition-scoped JNDI initial context, it can use this context to look up JMS connection factories and destinations in the same way as in a non-MT environment; except that now the application can only access partition-scoped JMS resources.

Let us look at some specific use cases and see how an application can establish an initial context to a particular partition in each of the use cases.

Use Case 1 - Local Intra-partition Access

When a Java EE application needs to access a JMS destination in its local partition in the same cluster (or on the same non-clustered managed server), the application can just create an initial context without supplying a provider URL.

Example 1: Null Provider URL

  Context ctx = new InitialContext();

  Object cf = ctx.lookup("jms/mycf1");

  Object dest = ctx.lookup("jms/myqueue1");

This initial context will be scoped to the partition in which the application is deployed.
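Once the lookups succeed, the connection factory and destination are used exactly as in a non-MT application. Here is a minimal sketch using the JMS 2.0 simplified API and the JNDI names from Example 1 (javax.jms imports and exception handling omitted):

  Context ctx = new InitialContext();   // scoped to the local partition
  ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/mycf1");
  Queue queue = (Queue) ctx.lookup("jms/myqueue1");

  // JMS 2.0: create a JMSContext, send, and close it via try-with-resources
  try (JMSContext jmsContext = cf.createContext()) {
      jmsContext.createProducer().send(queue, "hello from the local partition");
  }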

Use Case 2 - Local Inter-partition Access

If a Java EE application needs to access a JMS destination (or other resource) in a different partition than the partition to which it is deployed, and the partition is in the same cluster (or on the same managed server) then it can use either a partition-scoped JNDI name or a provider URL with the "local:" protocol.

Using Partition Scoped JNDI Names

A JNDI name can be decorated with a namespace prefix to indicate its scope.

Example 2.1: given the partition configuration in the above examples, the following code can be used to access a JMS destination that is configured in "partition1".

Context ctx = new InitialContext();

Object cf = ctx.lookup("partition:partition1/jms/mycf1");

Object dest = ctx.lookup("partition:partition1/jms/myqueue1");

Similarly a Java EE application in a partition can access a domain level JNDI resource in the same cluster using a partition scoped initial context with the "domain:" namespace prefix, for example "domain:jms/mycf2".

Using a provider URL with the "local:" Protocol

Alternatively, one can specify a "local:" provider URL when creating an initial context to a specific partition.

Example 2.2: given the partition configuration in the above examples, the following code can be used to access a JMS destination that is configured in "partition1".

Hashtable<String, String> env = new Hashtable<>();

env.put(Context.PROVIDER_URL, "local://?partitionName=partition1");

env.put(Context.SECURITY_PRINCIPAL, "weblogic");

env.put(Context.SECURITY_CREDENTIALS, "welcome1");

Context ctx = new InitialContext(env);

Object cf = ctx.lookup("jms/mycf1");

Object dest = ctx.lookup("jms/myqueue1");

The initial context will be associated with "partition1" for its lifetime.

Similarly, a Java EE application in a partition can access a domain level JNDI resource in the same cluster using “local://?partitionName=DOMAIN” as the provider URL. 

Use Case 3 - General Partition Access

A third way for a Java EE application or client to access a JMS destination in a partition is to use a "partition URL". A partition URL is intended to be used when the JMS destination is in a remote cluster (or on a remote non-clustered managed server).  A typical "partition URL" is t3://hostname:port, or t3://host:port/URI-prefix.

Partition URLs may only be used by Java EE applications or clients using WLS 12.2.1 or later (older versions should use dedicated partition ports: see below).

Example 3: given the partition configuration in the examples above, the following code can be used to access a JMS destination that is configured in "partition1".

Note that "/partition1" in the provider URL below is the uri-prefix configured in the VirtualTarget for partition1.

Hashtable<String, String> env = new Hashtable<>();

env.put(Context.PROVIDER_URL, "t3://abcdef00:7001/partition1");

env.put(Context.SECURITY_PRINCIPAL, "weblogic");

env.put(Context.SECURITY_CREDENTIALS, "welcome1");

Context ctx = new InitialContext(env);

Object cf = ctx.lookup("jms/mycf1");

Object dest = ctx.lookup("jms/myqueue1");

Although it is not a best practice, a “partition URL” can also be used to access another partition in the same JVM/cluster.

Use Case 4 – Dedicated Partition Ports

One last option is to set up dedicated ports for each partition; configuring these is described in Joe's blog about Partition Targeting and Virtual Targets.

Configuring dedicated partition ports enables applications that use ‘classic’ URLs to access a partition, and is mainly intended to enable clients and applications that are running on releases older than 12.2.1 to access partitions in a 12.2.1 or later domain.

Such older clients and applications do not support the use of host name and URI-prefix to access a partition. An attempt to use them from an older client will simply fail or may silently access the domain level JNDI name space.

What’s next?

I hope this article helps you to understand the basics of JMS MT! It is time to start exploring this new and exciting capability. You can find more information for messaging in MT in the Configuring Messaging chapter of Oracle® Fusion Middleware Using WebLogic Server Multitenant.

Wednesday Dec 09, 2015

New EJB 3.2 feature - Modernized JCA-based Message-Driven Bean

WebLogic Server 12.2.1 is a fully compatible implementation of the Java EE 7 specification. One of the big improvements in the EJB container in this release is that a message-driven bean can implement a listener interface with no methods. When such a no-methods listener interface is used, all non-static public methods of the bean class (and of its superclasses, except java.lang.Object) are exposed as message listener methods.
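For illustration, a minimal sketch of what this enables (the listener interface, bean, and method names are hypothetical; binding the MDB to its JCA resource adapter is omitted):

import javax.ejb.MessageDriven;

// A no-methods message listener interface, typically defined by a JCA resource adapter.
interface CommandListener {}

// Because CommandListener declares no methods, all non-static public methods of the
// bean class (startJob and cancelJob here) are exposed as message listener methods,
// and the resource adapter chooses which one to invoke for a given message.
@MessageDriven
public class CommandBean implements CommandListener {

    public void startJob(String jobName) {
        // invoked by the resource adapter as a message listener method
    }

    public void cancelJob(String jobName) {
        // also exposed as a message listener method
    }
}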

[Read More]

Tuesday Dec 08, 2015

WLS JNDI Multitenancy

  The most important feature introduced in WebLogic Server 12.2.1 is multi-tenancy. Before WLS 12.2.1, one WLS domain was used by a single tenant. Starting in WLS 12.2.1, a WLS domain can be divided into multiple partitions so that each tenant can use its own partition of the domain. Multiple tenants can then share one WLS domain without influencing each other, so isolation of resources between partitions is key. Since JNDI is the common way to access these resources, the main goal for JNDI in WLS 12.2.1 is to isolate JNDI resources.

  Before WLS 12.2.1, there was only one global JNDI tree per WLS domain. It is difficult to support multiple partitions with a single global JNDI tree, because each partition requires a unique, isolated namespace. For example, multiple partitions might bind or look up JNDI resources under the same JNDI name independently, which would result in a NameAlreadyBoundException. To isolate JNDI resources across partitions, every partition has its own global JNDI tree as of WLS 12.2.1, so a tenant can operate on JNDI resources in one partition without name conflicts with other partitions. The application-scoped JNDI tree is only visible inside the application, so it is naturally isolated; there is no change to application-scoped JNDI trees in WLS 12.2.1. Let us see how to access JNDI resources in a partition.

Access JNDI resource in partition

  To access JNDI resources in a partition, we need to add partition information to the provider URL property when creating the InitialContext.

  Runtime environment:

    Managed servers:    ms1, ms2

    Cluster:            contains managed servers ms1 and ms2

    Virtual targets:    VT1 targets managed server ms1; VT2 targets the cluster

    Partitions:         partition1 has available target VT1; partition2 has available target VT2

  We need to add partition1 information to the properties when creating the InitialContext in order to access JNDI resources in partition1.

    Hashtable<String, String> env = new Hashtable<>();
    env.put(Context.PROVIDER_URL, "t3://ms1:7001/partition1");  
    env.put(Context.SECURITY_PRINCIPAL, "weblogic");
    env.put(Context.SECURITY_CREDENTIALS, "welcome1");
    Context ctx = new InitialContext(env);
    Object c = ctx.lookup("jdbc/ds1");

  Partition2 runs in a cluster, so we can use the cluster address format in the provider URL when creating the InitialContext.

    Hashtable<String, String> env = new Hashtable<>();
    env.put(Context.PROVIDER_URL, "t3://ms1:7001,ms2:7003/partition2");
    env.put(Context.SECURITY_PRINCIPAL, "weblogic");
    env.put(Context.SECURITY_CREDENTIALS, "welcome1");
    Context ctx = new InitialContext(env);
    Object c = ctx.lookup("jdbc/ds1");

  In WebLogic, we can create a Foreign JNDI provider to link to JNDI resources in another server. In WLS 12.2.1, we can also use a Foreign JNDI provider to link to JNDI resources in a specific partition by adding partition information to the configuration. This partition information, including the provider URL, user, and password, is used to create the JNDI context. The following is an example of a Foreign JNDI provider configuration in partition1; this provider links to partition2.

<foreign-jndi-provider-override>
  <name>jndi_provider_rgt</name>
  <initial-context-factory>weblogic.jndi.WLInitialContextFactory</initial-context-factory>
  <provider-url>t3://ms1:7001,ms2:7003/partition2</provider-url>
  <password-encrypted>{AES}6pyJXtrS5m/r4pwFT2EXQRsxUOu2n3YEcKJEvZzxZ7M=</password-encrypted>
  <user>weblogic</user>
  <foreign-jndi-link>
    <name>link_rgt_2</name>
    <local-jndi-name>partition_Name</local-jndi-name>
    <remote-jndi-name>weblogic.partitionName</remote-jndi-name>
  </foreign-jndi-link>
</foreign-jndi-provider-override>
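With this configuration in place, code running in partition1 can use the local link name to read the value that the provider resolves in partition2. A minimal sketch (this assumes the lookup simply returns the object bound at weblogic.partitionName in partition2):

    Context ctx = new InitialContext();   // created in partition1, no provider URL needed
    // "partition_Name" is the local-jndi-name of the foreign JNDI link; the provider
    // resolves it against partition2's JNDI tree ("weblogic.partitionName").
    Object remotePartitionName = ctx.lookup("partition_Name");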

Stickiness of JNDI Context

  When a JNDI context is created, it is associated with a specific partition, and all subsequent JNDI operations are performed against that partition's JNDI tree, not the current partition's. The associated partition remains the same even if the context is later used by a different thread than the one that created it. If the provider URL property is set in the environment when the JNDI context is created, the context is associated with the partition specified in the provider URL; otherwise, it is associated with the current partition.
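For example, the following sketch (reusing the provider URL conventions above; imports and exception handling abbreviated) creates a context associated with partition1 and then uses it from a different thread; the lookup still goes to partition1's JNDI tree:

    Hashtable<String, String> env = new Hashtable<>();
    env.put(Context.PROVIDER_URL, "t3://ms1:7001/partition1");
    env.put(Context.SECURITY_PRINCIPAL, "weblogic");
    env.put(Context.SECURITY_CREDENTIALS, "welcome1");
    final Context ctx = new InitialContext(env);

    // The context remains associated with partition1, even on another thread.
    new Thread(() -> {
        try {
            Object ds = ctx.lookup("jdbc/ds1");   // resolved in partition1's JNDI tree
        } catch (NamingException e) {
            e.printStackTrace();
        }
    }).start();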

Life cycle of Partition JNDI service

  Before WLS 12.2.1, the JNDI service life cycle was the same as the WebLogic Server life cycle. In WLS 12.2.1, every partition owns its own global JNDI tree, so the JNDI service life cycle follows the partition's. As soon as a partition starts up, the JNDI service of that partition is available, and when the partition shuts down, the JNDI service of that partition is destroyed.

Monitoring FAN Events

fanWatcher is a sample program that prints Oracle Notification Service (ONS) Fast Application Notification (FAN) event information. These events provide information regarding load balancing, and service and instance up and down events. This information is automatically processed by WebLogic Server Active GridLink and UCP on the mid-tier. For more information about FAN events, see this link. The program described here is an enhancement of the earlier program described in that white paper. It can be modified as desired to monitor events and help diagnose configuration problems. The code is available at this link (rename it from .txt to .java).

To run this Java application, you need to be set up to run a JDK and you need ons.jar and ojdbcN.jar in the CLASSPATH. The CLASSPATH is set differently depending on whether you are running on the database server or on the mid-tier with WebLogic Server or UCP. Make sure to use the correct path separator for CLASSPATH on your platform (';' for Windows, ':' otherwise).

The general format for the command line is

java fanWatcher config_type [eventtype … ]

Event Type Subscription

The event type sets up the subscriber to only return limited events. You can run without specifying the event type to see what types of events are returned. When you specify an event name on the command line, the program sets up the subscriber to have a simple match on the event. If the specified pattern occurs anywhere in a notification's header, then the comparison statement evaluates true. The most basic pattern match is an empty string (not null), which matches all notifications. The pattern is enclosed in double quotes (required) and prefixed with “%” to be case insensitive.

The full subscription syntax is more powerful than what this sample uses. The subscription string is generally composed of one or more comparison statements, each logically related to another with the boolean operators '|' for an OR relationship or '&' for an AND relationship. Parentheses are used to group these comparison statements, and the '!' operator placed before an opening parenthesis negates the evaluated value within.

Each individual comparison statement must be enclosed within double quotes ('"'), and can take one of two basic forms: "pattern" or "name=value". A "pattern" is a simple string match of the notification header: if the specified "pattern" occurs anywhere in a notification's header, then the comparison statement evaluates true. The most basic pattern match is an empty string (not NULL) which matches all notifications.

The "name=value" format compares the ONS notification header or property name with the name against the specified value, and if the values match, then the comparison statement evaluates true. If the specified header or property name does not exist in the notification the comparison statement evaluates false. A comparison statement will be interpreted as a case insensitive when a percent character ('%') is placed before the opening quote. Note that for "name=value" comparison statements, only the value is treated as case insensitive with this option: the name lookup will always be case sensitive. A comparison statement will be interpreted as a regular expression when a dollar sign character ('$') is placed before the opening quote. Standard POSIX regular expressions are supported. To specify a regular expression that is also case insensitive, place the dollar sign and percent sign together and in that order ("$%") before the opening quote.

A special case subscription string composed of only the exclamation point character ('!') signifies that the subscription will not match any notifications.

You might want to modify the event to select on a specific service by using something like

%"eventType=database/event/servicemetrics/<serviceName> "

Running with Database Server 10.2 or later

This approach runs on the database server and connects directly to the local ONS daemon available in the Grid Infrastructure cluster. The fanWatcher utility must be run as a user that has privilege to access $CRS_HOME/opmn/conf/ons.config, which is used by the ONS daemon at startup and is read by this program. The configuration type on the command line is set to “crs”.

# CRS_HOME should be set for your Grid infrastructure
echo $CRS_HOME
CRS_HOME=/mypath/scratch/12.1.0/grid/
CLASSPATH="$CRS_HOME/jdbc/lib/ojdbc6.jar:$CRS_HOME/opmn/lib/ons.jar:."
export CLASSPATH
javac fanWatcher.java
java -Doracle.ons.oraclehome=$CRS_HOME fanWatcher crs

Running with WLS 10.3.6 or later using an explicit node list

There are two ways to run in a client environment – with an explicit node list and using auto-ONS. You need the ojdbcN.jar and ons.jar that are available when the environment is configured for WLS. If you are set up to run with UCP directly, these should also be in your CLASSPATH.

The first approach works with the Oracle driver and Database 11g and later (SCAN support came in later Oracle versions, including the 11.2.0.3 jar files that shipped with WLS 10.3.6).

# Set the WLS environment using wlserver*/server/bin/setWLSEnv
CLASSPATH="$CLASSPATH:." # add local directory for sample program
export CLASSPATH
javac fanWatcher.java
java fanWatcher "nodes=rac1:6200,rac2:6200" database/event/service

The node list is a string of one or more values of the form name=value separated by a newline character (\n).

There are two supported formats for the node list.

The first format is available for all versions of ONS. The following names may be specified.

nodes – This is required. The format is one or more host:port pairs separated by a comma.

walletfile – Oracle wallet file used for SSL communication with the ONS server.

walletpassword – Password to open the Oracle wallet file.

The second format is available starting in database 12.2.0.2. It supports more complicated topologies with multiple clusters and node lists. It has the following names.

nodes.id—this value is a list of nodes representing a unique topology of remote ONS servers. id specifies a unique identifier for the node list. Duplicate entries are ignored. The nodes configured in any list must not include any nodes configured in any other list for the same client, or duplicate notifications will be sent and delivered. The list format is a comma-separated list of host:port pairs, where each pair is an ONS daemon listen address and port.

maxconnections.id— this value specifies the maximum number of concurrent connections maintained with the ONS servers. id specifies the node list to which this parameter applies. The default is 3.

active.id If true, the list is active and connections are automatically established to the configured number of ONS servers. If false, the list is inactive and is only used as a failover list in the event that no connections for an active list can be established. An inactive list can only serve as a failover for one active list at a time, and once a single connection is re-established on the active list, the failover list reverts to being inactive. Note that only notifications published by the client after a list has failed over are sent to the failover list. id specifies the node list to which this parameter applies. The default is true.

remotetimeout —The timeout period, in milliseconds, for a connection to each remote server. If the remote server has not responded within this timeout period, the connection is closed. The default is 30 seconds.

The walletfile and walletpassword may also be specified (note that there is one walletfile for all ONS servers). The nodes attribute cannot be combined with name.id attributes.
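For illustration only (host names, ports, and wallet path are made up), a node list using the first format could look like this, with each name=value pair separated by a newline:

nodes=rac1:6200,rac2:6200
walletfile=/path/to/onswallet
walletpassword=mywalletpassword

and one using the second format, describing two independent ONS server lists:

nodes.1=rac1a:6200,rac1b:6200
maxconnections.1=3
nodes.2=rac2a:6200,rac2b:6200
active.2=false
remotetimeout=30000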

Running with WLS using auto-ONS

Auto-ONS is available starting in Database 12.1.0.1; with earlier database versions, no ONS information is available to the client. Auto-ONS only works with RAC configurations; it does not work with an Oracle Restart environment. Since the first version of WLS that ships with Database 12.1 jar files is WLS 12.1.3, this approach only works on earlier WLS versions if the database jar files are upgraded. Auto-ONS works by getting a connection to the database to query the ONS information from the server. For this program to work, a user, password, and URL are required. For the sample program, the values are assumed to be in the environment (to avoid putting them on the command line). If you want, you can change the program to prompt for them or hard-code the values into the Java code.

# Set the WLS environment using wlserver*/server/bin/setWLSEnv
# Set the credentials in the environment. If you don't like doing this,
# hard-code them into the java program
password=mypassword
url='jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=\
(ADDRESS=(PROTOCOL=TCP)(HOST=rac1)(PORT=1521))\
(ADDRESS=(PROTOCOL=TCP)(HOST=rac2)(PORT=1521)))\
(CONNECT_DATA=(SERVICE_NAME=otrade)))'
user=myuser
export password url user
CLASSPATH="$CLASSPATH:."
export CLASSPATH
javac fanWatcher.java
java fanWatcher autoons

fanWatcher Output

The output looks like the following. You can modify the program to change the output as desired. In this short output capture, there is a metric event and an event caused by stopping the service on one of the instances.

** Event Header **
Notification Type: database/event/servicemetrics/otrade
Delivery Time: Fri Dec 04 20:08:10 EST 2015
Creation Time: Fri Dec 04 20:08:10 EST 2015
Generating Node: rac1
Event payload:
VERSION=1.0 database=dev service=otrade { {instance=inst2 percent=50 flag=UNKNOWN aff=FALSE}{instance=inst1 percent=50 flag=UNKNOWN aff=FALSE} } timestamp=2015-12-04 17:08:03

** Event Header **
Notification Type: database/event/service
Delivery Time: Fri Dec 04 20:08:20 EST 2015
Creation Time: Fri Dec 04 20:08:20 EST 2015
Generating Node: rac1
Event payload:
VERSION=1.0 event_type=SERVICEMEMBER service=otrade instance=inst2 database=dev db_domain= host=rac2 status=down reason=USER timestamp=2015-12-04 17:

Monday Dec 07, 2015

Multi-Tenancy Samples

To make it easier to understand all aspects of multi-tenancy in WebLogic Server 12.2.1, MedRec supports running in a multi-tenant environment and can be used as a demonstration vehicle.

What’s MedRec

Avitek Medical Records (or MedRec) is a WebLogic Server sample application suite that demonstrates all aspects of the Java Platform, Enterprise Edition (Java EE). MedRec is designed as an educational tool for all levels of Java EE developers. It showcases the use of each Java EE component, and illustrates best-practice design patterns for component interaction and client development. MedRec also illustrates best practices for developing and deploying applications with WebLogic Server.

Please choose 'Complete with Examples' at the 'Installation Type' step when you install WebLogic Server; otherwise the WebLogic Server samples will be skipped. The code, binaries, and documentation of MedRec will be located in the ‘$MW_HOME/wlserver/samples/server/medrec’ directory.

There are two non-OOTB multi-tenancy samples. You need to run the provided Ant commands to stage the WebLogic domains.

Single Server Multi-tenancy MedRec

Overview

It is a SaaS sample that focuses entirely on the multi-tenancy features themselves: no cluster, no extra managed servers, and all virtual targets target the admin server.


Multi-tenancy Demonstration

There are 2 tenants named bayland and valley, and valley has 2 partitions (one tenant can have multiple partitions). This sample demonstrates the multi-tenancy features described below. If you have questions about a particular feature, refer to the relevant blogs or documentation.


Resource Group Template

All resources, including applications, JMS, file store, mail session, and JDBC system resources, are deployed onto a resource group template.


  • Applications


  • Other resources


Resource Overriding

Databases are supposed to be isolated among partitions. In the resource group template, the JDBC system resource is merely a template with a name, driver, and JNDI name. The real URL, username, and password of the data source are set via resource overriding at the partition scope.


Virtual Target

Each partition, or more precisely each partition resource group derived from the aforementioned MedRec resource group template, has its own virtual target. The 2 virtual targets of valley share the same host name but use different URI prefixes.


We can see three virtual targets, one per partition. The web container determines which application is being accessed from the host name plus URI prefix. For example, in this sample medrec.ear is deployed to all partitions. How do you access the web module of medrec.ear on bayland? The URL would be 'http://hostname:port/bayland/medrec', where '/bayland' is the URI prefix and 'medrec' is the root context of the web app.


Security Realm

Each tenant is supposed to have its own security realm with isolated users. MedRec has a use case of servlet access control and authentication that demonstrates this scenario.


Resource Consumption Management.

Bayland is treated as a VIP customer in this sample, so it gets a larger quota of CPU, heap memory, threads, and so on.


A trigger will slow down or shut down the partition when usage reaches the specified value.


Partition Work Manager

Partition Work Managers define a set of policies that limit the usage of threads by Work Managers in partitions only. They do not apply to the domain.


Deployment Plan

A deployment plan file can also be used at partition scope. The sample uses this mechanism to change the appearance of the web pages for the valley tenant, including photos and background colour. That means applications in different partitions can differ from one another even though they come from one resource group template.

Installation

Prior to running the setup script, you need to do a couple of preparation steps: set the sample environment, edit the /etc/hosts file, and customise the admin server properties (host, port, etc.). After that, one Ant command stages all content of the SaaS sample.

  1. Setting environment.

    cd $MW_HOME/wlserver/samples/server
    . ./setExamplesEnv.sh
  2. Network address mapping. Please open the /etc/hosts file and add the following lines:

    127.0.0.1 www.baylandurgentcare.com
    127.0.0.1 www.valleyhealth.com
  3. Customizing admin server properties.
    Update 5 properties in $MW_HOME/wlserver/samples/server/medrec/weblogic.properties. Please use weblogic as the username of the admin server.

    admin.server.name=adminServer
    admin.server.host=localhost
    admin.server.port=7005
    admin.server.username=weblogic
    admin.server.password=XXXXXX
  4. Running setup script

    cd $MW_HOME/wlserver/samples/server/medrec
    ant mt.single.server.sample

Webapp URLs

You can access MedRec via following URLs according to the server port you set. For example, you set admin.server.port = 7001.

Coherence Cluster Multi-tenancy MedRec

Overview

This is the second SaaS sample. Beyond the simple one, it involves Coherence caches, dynamic clusters, and Oracle PDBs. To some extent, it is a realistic use of MT in practice.


Look at the diagram above: it also has 2 tenants, but one partition per tenant. Bayland is the blue one, valley the green one. There are 2 resource group templates, named app RGT and cache RGT, instead of one. The app RGT is similar to the resource group template of the first MT sample and includes all the MedRec resources. To enable Coherence caching, a GAR archive is packaged into an application in medrec.ear, and the identical GAR is also deployed to the second (cache) resource group template. Both partitions have 2 resource groups, app and cache, derived from the app and cache resource group templates respectively. Each resource group targets a different virtual target, so there are 4 virtual targets in total. The 2 app virtual targets target a storage-disabled dynamic cluster, app cluster, with 2 managed servers; the applications and other resources run on this app cluster. In contrast, the 2 cache virtual targets target another dynamic cluster, cache cluster, also with 2 managed servers but storage enabled. The GAR of the cache resource group runs on the cache cluster.


Coherence Scenario

MedRec has 2 key archives, medrec.ear and physician.ear. The physician archive acts as a web service (JAX-RS and JAX-WS) client application, and there is no JPA code in physician.ear; all of that is on the server side. Leveraging a Coherence cache here avoids frequent web service and JDBC invocations in the business services behind the web services.

Method Invocation Cache

This cache is a partitioned per-tenant cache. Most business services in the physician scenarios are annotated with a method invocation cache interceptor. The interceptor first checks whether the data is already stored in the cache; if not, it gets the data through the web service and stores the returned data in the method cache. After that, subsequent invocations with the same parameter values fetch the data directly from the cache.

When is data removed from the method cache? For example, a physician can view a patient's record summary, which is cached after it is first retrieved. If the physician then creates a new record for this patient, the cached record summary becomes stale, so the dirty data must be cleaned. In this case, after the record is created successfully, the business service fires an update event to remove the old data.

The method invocation cache actually has 3 different types; MedRec detects the WLS environment and activates the relevant cache.

For example, take physician login: when you first log in as a physician on bayland app server 1, app_cluster-1.log should contain lines like the following:

Method Invocation Coherence Cache is available.
Checking method XXXX invocation cache...
Not find the result.
Caching the result in method XXXX invocation cache....
Added result to cache
Method: XXXXX
Parameters: XXXX
Result: XXXXX

Log out and log in again on bayland server 2; app_cluster-2.log should look like this:

Checking method XXXX invocation cache...
Found result in cache
Method: XXXXX
Parameters: XXXX
Result: XXXXX

Shared Cache

This cache is a partitioned shared cache, which means bayland and valley can share it. Consider the record-creation use case again: a physician can create prescriptions for the new record, and choosing a drug uses a drug information list from the server-side database. The list is stable, invariant data, so it can be shared by both partitions; therefore the drug information list is stored in this cache.

Open the record-creation page in a browser on bayland server 1; app_cluster-1.log should look like this:

Drug info list is not stored in shared cache.
Fetch list from server end point.
Store drug info list into shared cache.

Then do the same thing on valley server 2; app_cluster-2.log should look like this:

Drug info list has already stored in shared cache.

That means it is a cache shared across partitions.

Installation

The installation and usage are similar to the first MT sample. One difference is the database: the first sample uses Derby, while the second uses Oracle Database. You need to prepare 2 Oracle PDBs and customize the database properties file.

  1. Setting environment.

    cd $MW_HOME/wlserver/samples/server
    . ./setExamplesEnv.sh
  2. Update the database properties in $MW_HOME/wlserver/samples/server/medrec/install/mt-coherence-cluster/configure.properties to match your PDBs. For example:

    # Partition 1
    dbURL1      = jdbc:oracle:thin:XXXXXXXX:1521/pdb1
    dbUser1     = pdb1
    dbPassword1 = XXXXXX
    # Partition 2
    dbURL2      = jdbc:oracle:thin:XXXXXXXX:1521/pdb2
    dbUser2     = pdb2
    dbPassword2 = XXXXXX
  3. Network address mapping. Please open the /etc/hosts file and add the following lines:

    127.0.0.1 bayland.weblogicmt.com
    127.0.0.1 valley.weblogicmt.com
  4. Customizing admin server properties.
    Update 5 properties in $MW_HOME/wlserver/samples/server/medrec/weblogic.properties. Please use weblogic as the username of the admin server.

    admin.server.name=adminServer
    admin.server.host=localhost
    admin.server.port=7003 (Please don’t use 2105, 7021, 7022, 7051, 7052 which will be used as servers’ listening ports)
    admin.server.username=weblogic
    admin.server.password=XXXXXX
  5. Running setup script

    cd $MW_HOME/wlserver/samples/server/medrec
    ant mt.coherence.cluster.sample

Webapp URLs

After a successful setup, please access the following URLs to experience MedRec:

Three Easy Steps to a Patched Domain Using ZDT Patching and OPatchAuto

Now that you’ve seen how easy it is to update WebLogic by rolling out a new patched OracleHome to your managed servers, let’s go one step further and see how we can automate the preparation and distribution parts of that operation as well.

ZDT Patching has integrated with a great new tool in 12.2.1 called OPatchAuto. OPatchAuto is a single interface that allows you to apply patches to an OracleHome, distribute the patched OracleHome to all the nodes you want to update, and start the OracleHome rollout, in just three steps.

1. The first step is to create a patched OracleHome archive (.jar) based on combining an OracleHome in your production environment with the desired patch or patchSetUpdate. This operation will make a copy of that OracleHome so it will not affect the production environment. It will then apply the specified patches to the copy of the OracleHome and create the archive from it. This is the archive that the rollout will use when the time comes, but first it needs to be copied to all of the targeted nodes.

The OPatchAuto command for the first step looks like this:

${ORACLE_HOME}/OPatch/auto/core/bin/opatchauto.sh apply /pathTo/PatchHome -create-image -image-location /pathTo/image.jar -oop -oh /pathTo/OracleHome

PatchHome is a directory or file containing the patch or patchSetUpdate to apply.

image-location is where to put the resulting image file

-oop means “out-of-place” and tells OPatchAuto to copy the source OracleHome before applying the patches

2.  The second step is to copy the patched OracleHome archive created in step one to all of the targeted nodes. One cool thing about this step is that since OPatchAuto is integrated with ZDT Patching, you can give OPatchAuto the same target you would use with ZDT Patching, and it will ask ZDT Patching to calculate the nodes automatically. Here’s an example of what this command might look like:

${ORACLE_HOME}/OPatch/auto/core/bin/opatchauto.sh apply -plan wls-push-image -image-location /pathTo/image.jar -wls-admin-host ${ADMINHOSTNAME}:7001 -wls-target Cluster1 -remote-image-location /pathTo/image.jar -wallet ${WALLET_DIR}

image-location is the jar file created in the first step

wls-target can be a domain name, cluster name, or list of clusters

Note that if you do not already have a wallet for SSH authorization to the remote hosts, you may need to configure one first.

3.  The last step is using OPatchAuto to invoke the ZDT Patching OracleHome rollout. You could switch to WLST at this point and start it as described in the previous post, but OPatchAuto will monitor the progress of the rollout and give you some helpful feedback as well. The command to start the rollout through OPatchAuto looks like this:

${ORACLE_HOME}/OPatch/auto/core/bin/opatchauto.sh apply -plan wls-zdt-rollout -image-location /pathTo/image.jar -wls-admin-host ${ADMINHOSTNAME}:7001 -wls-target Cluster1 -backup-home /pathTo/home-backup -remote-image-location /pathTo/image.jar -wallet ${WALLET_DIR}

image-location is the jar file created in the first step

backup-home is the location on each remote node to store the backup of the original OracleHome

image-location and remote-image-location are both specified so that if a node is encountered that is missing the image, it can be copied automatically. This is also why the wallet is specified here again

One more great thing to consider when looking at automating the entire process is how easy it would be to use these same commands to distribute and roll out the same patched OracleHome archive to a test environment for verification. Once verification is passed, a minor change to the same two commands will push the exact same (verified) OracleHome archive out to a production environment.
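For illustration, the same push and rollout commands pointed at a hypothetical test environment might look like this; only the admin host, target, and wallet change (TEST_ADMINHOSTNAME, TestCluster1, and TEST_WALLET_DIR are made-up names):

# push the patched OracleHome archive to the test cluster's nodes
${ORACLE_HOME}/OPatch/auto/core/bin/opatchauto.sh apply -plan wls-push-image -image-location /pathTo/image.jar -wls-admin-host ${TEST_ADMINHOSTNAME}:7001 -wls-target TestCluster1 -remote-image-location /pathTo/image.jar -wallet ${TEST_WALLET_DIR}

# roll the patched OracleHome out across the test cluster
${ORACLE_HOME}/OPatch/auto/core/bin/opatchauto.sh apply -plan wls-zdt-rollout -image-location /pathTo/image.jar -wls-admin-host ${TEST_ADMINHOSTNAME}:7001 -wls-target TestCluster1 -backup-home /pathTo/home-backup -remote-image-location /pathTo/image.jar -wallet ${TEST_WALLET_DIR}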

For more information about updating OracleHome with Zero Downtime Patching and OPatchAuto, view the documentation.

Wednesday Nov 25, 2015

Multi-Tenancy EJB

Benefiting from the multi-tenancy support in WLS 12.2.1, the EJB container gains a lot of enhancements. Application and resource "multiplication" allows the EJB container to provide MT features while remaining largely partition unaware. Separate application copies also bring more isolation, such as distinct remote objects, bean pools, caches, module class loader instances, etc. Below are a few of the new features you can leverage for EJB applications.

1. JNDI

  • Server naming nodes will be partition aware.
  • Applications deployed to partitions will have their EJB client views exposed in the corresponding partition's JNDI namespace. 

2. Security

  • Security implementation now allows multiple active realms, including support for per-partition security realm.
  • Role based access control, and credential mapping for applications deployed to partition will use the partition's configured realm.

3. Runtime Metrics and Monitoring

  • New ApplicationRuntimeMBean instance with the PartitionName attribute populated, will get created for every application deployed to a partition.
  • EJB container exposed Runtime MBean sub-tree will be rooted by the ApplicationRuntimeMBean instance.

4. EJB Timer service

  • Persistent local timers rely on the store component. Partitioned custom file stores provide the required isolation of tenant data.
  • Clustered timers under the hood use Job scheduler, which is also providing isolation.

5. JTA configuration at Partition Level

  • JTA timeout can be configured at partition level, in addition to domain level and EJB component level.
  • Timeout value in EJB component level takes precedence over the other two.
  • Support dynamic update via deployment plan.

6. WebLogic Store

  • Persistent local timers rely on the store component. Partitioned custom file stores provide the required isolation of tenant data. 

7. Throttling Thread Resource usage

  • Work Managers with constraints can be defined at the global runtime level, and application instances in partitions can refer to these shared WMs to throttle thread usage across partitions, especially for non-interactive use cases such as batch, async, and message-driven bean invocations.

8. Data Sources for Java Persistence API users

  • Persistence Units that use data sources defined as system resources in the Resource Group Template will be able to take advantage of the PartitionDataSourceInfoMBean based overrides.
  • Use cases requiring advanced customization can use the new deployment plan support being added for system resource re-configuration.
  • Persistence Units that use application packaged data source modules can use the current deployment plan support to have the copies in different partitions, point to the appropriate PDBs.
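As an illustration of the last point, the persistence unit itself typically refers to the data source only by JNDI name, and the partition-level override or deployment plan decides which database that name resolves to. A minimal sketch (the unit name and JNDI name are hypothetical):

<!-- META-INF/persistence.xml packaged with the application -->
<persistence xmlns="http://xmlns.jcp.org/xml/ns/persistence" version="2.1">
  <persistence-unit name="samplePU" transaction-type="JTA">
    <!-- resolved per partition; each partition's override points it at its own database -->
    <jta-data-source>jdbc/myDS</jta-data-source>
  </persistence-unit>
</persistence>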


A sample EJB application leveraging Multi-Tenancy

Now we're going through a simple setup of an EJB application in an MT environment to demonstrate the usage of some of these features.

The EJB application archive is named sampleEJB.jar; it includes a stateful session bean that interacts with a database via the JPA API. We want the application to be deployed to 2 separate partitions, each of which points to its own database instance, so they can work independently.

1.  Create Virtual Targets

The first step is to create 2 virtual targets for the 2 partitions respectively, which use the different URI prefixes /partition1 and /partition2, as shown below.

2.  Create Resource Group Template

Now we create a Resource Group Template named myRGT. A Resource Group Template is a new concept introduced in WLS 12.2.1, to which you can deploy your applications and whatever resources you need. This is very helpful when your application setup is complicated, because you don't want to repeat the same steps multiple times for different partitions.


3.  Deploy application and data source

Now we can deploy the application and define the data source as below. Note that the application and the data source are both defined in the myRGT scope.

4.  Create Partitions

Now everything is ready and it's time to create partitions. As the following image shows, we can apply the Resource Group Template we just defined when creating a partition; this deploys everything in the template automatically.

5.  Access the EJB

Now, with the partitions created and started, you can look up and access the EJB with the following code. We're using the URL for partition1 here; change the URL to access the other partition.

  Hashtable<String, String> props = new Hashtable<String, String>();
  props.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
  props.put(Context.PROVIDER_URL, "t3://server1:7001/partition1");
  props.put(Context.SECURITY_PRINCIPAL, user);
  props.put(Context.SECURITY_CREDENTIALS, pass);
  Context ctx = new InitialContext(props);

  BeanIntf bean = (BeanIntf)ctx.lookup("MyEJB"); 
  boolean result = bean.doSomething(); 

6.  Override the data source

If you feel something is wrong here, you're right: we defined the data source myDS in myRGT and then applied myRGT to both partitions, so the 2 partitions are now sharing the same data source. Normally we don't want this to happen; we need the 2 partitions to work independently without disturbing each other. How can we do that?

If you want to make partition2 switch to another data source, you can do that in the Resource Overrides tab of the partition2 settings page. You can change the database URL here so that another database instance will be used by partition2.

7.  Change the transaction timeout

As mentioned above, EJB applications support dynamically changing the transaction timeout value for a particular partition. This can also be accomplished on the partition settings page. In the following example, we set the timeout to 15 seconds. This takes effect immediately without requiring a reboot.

There are also some other things you can do on the partition settings page, such as defining a work manager or monitoring the resource usage of a particular partition. Spend some time exploring and you will find more useful tools here.

Concurrency Utilities support in WebLogic Server 12.2.1, Part Five: Multi-tenancy Support

Overview

One of the key features in WLS 12.2.1 is the multi-tenancy support, where a single WebLogic Server domain can contain multiple partitions. Please read Part One ~ Part Four prior to this article. Applications deployed to partitions can use the 4 types of concurrent managed objects in the same way as described in Part One ~ Part Four. They can also use global pre-defined concurrent managed object templates, which means that when an application is deployed to a partition, WebLogic Server creates concurrent managed objects for this application based on the configuration of the global concurrent managed object templates. As you may recall, there are server-scope Max Concurrent Long Running Requests and Max Concurrent New Threads limits; note that they limit long-running requests/running threads in the whole server, including partitions.

Configuration

System administrators can define partition scope concurrent managed object templates.

As mentioned in Part One (ManagedExecutorService) and Part Three (ManagedThreadFactory), WebLogic Server provides configurations (Max Concurrent Long Running Requests/Max Concurrent New Threads) to limit the number of concurrent long-running tasks/threads in a ManagedExecutorService/ManagedScheduledExecutorService/ManagedThreadFactory instance, in the global (domain-level) runtime on a server, or in the server. Among these, the instance-scope and server-scope limits are applicable to partitions. In addition, system administrators can define partition-scope Max Concurrent Long Running Requests and Max Concurrent New Threads. There is a default Max Concurrent Long Running Requests (50) and a default Max Concurrent New Threads (50) for each partition.

A ManagedExecutorService/ManagedScheduledExecutorService/ManagedThreadFactory accepts a long-running task submission/new thread creation only when none of the 3 limits is exceeded. For instance, for an application deployed to a partition on a server, when a long-running task is submitted to its default ManagedExecutorService, a RejectedExecutionException will be thrown if there are 10 in-progress long-running tasks that were submitted to this ManagedExecutorService, or 50 in-progress long-running tasks that were submitted to the ManagedExecutorServices/ManagedScheduledExecutorServices in the scope of this partition on the server, or 100 in-progress long-running tasks that were submitted to the ManagedExecutorServices/ManagedScheduledExecutorServices in the server.
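As a minimal sketch of a submission that counts against these limits (imports from javax.enterprise.concurrent, java.util, and java.util.concurrent are omitted; the field and method names are made up):

@Resource ManagedExecutorService mes;   // the application's default ManagedExecutorService

void submitLongRunning(Runnable work) {
    // Mark the task as long running so it counts against the
    // Max Concurrent Long Running Requests limits.
    Map<String, String> execProps = new HashMap<>();
    execProps.put(ManagedTask.LONGRUNNING_HINT, "true");
    Runnable managed = ManagedExecutors.managedTask(work, execProps, null);
    try {
        Future<?> result = mes.submit(managed);
        // keep "result" if you need to wait for or cancel the task
    } catch (RejectedExecutionException e) {
        // an instance, partition, or server limit was exceeded
    }
}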

Configure Partition Scope Concurrent Managed Object Templates

WebLogic system administrators can configure pre-defined concurrent managed object templates for a partition. When an application is deployed to the partition, WebLogic Server creates concurrent managed object instances based on the configuration of partition scope concurrent managed object templates, and the created concurrent managed object instances are all in scope of this application.

Example-1: Configure a Partition Scope ManagedThreadFactory template using WebLogic Administration Console

Step 1: In the WebLogic Administration Console, a partition scope ManagedThreadFactory template can be created by clicking the “New” button on the “Summary of Concurrent Managed Object Templates” page. This brings up the "Create a New Managed Thread Factory Template" page, where the name and other parameters of the new ManagedThreadFactory template can be specified. Set the Scope to the partition. In this example, a ManagedThreadFactory template called "testMTFP1" is being created for partition1.

Step 2: Once a partition scope ManagedThreadFactory template is created, any application in the partition gets its own ManagedThreadFactory instance to use.

@WebServlet("/SomeServlet")

public class SomeServlet extends HttpServlet {

    @Resource(mappedName="testMTFP1")

    ManagedThreadFactory mtf;

    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {

        Runnable aTask = new Runnable() {
            ...
        };
        Thread t = mtf.newThread(aTask);

        t.start();
        ...
    }

}

Configure Partition Scope Max Concurrent New Threads & Max Concurrent Long Running Requests

Max Concurrent New Threads of a partition is the limit of running threads created by all ManagedThreadFactories in that partition on a server. Max Concurrent Long Running Requests of a partition is the limit of concurrent long-running tasks submitted to all ManagedExecutorServices and ManagedScheduledExecutorServices in that partition on a server.

In the WebLogic Administration Console, Max Concurrent New Threads and Max Concurrent Long Running Requests of a partition can be edited from the “Settings for <partitionName>” screen. In this example, Max Concurrent New Threads of partition1 is set to 30 and Max Concurrent Long Running Requests of partition1 is set to 80.

Related Articles:

For more details, see Configuring Concurrent Managed Objects in the product documentation.

Concurrency Utilities support in WebLogic Server 12.2.1, Part Four: ContextService

Overview

ContextService is for creating contextual proxy objects. It provides the createContextualProxy method to create a proxy object; the proxy object's methods then run within the captured context at a later time.

WebLogic Server provides a preconfigured, default ContextService for each application, and applications can easily use it in web or EJB components without any configuration. Let's begin with a simple example that uses the default ContextService.

Example-1: Execute a task with the creator's context using an ExecutorService

Step 1: Write the task. In this simple example, the task implements Runnable.

public class SomeTask implements Runnable {

    public void run() {

        // do some work

    }

}

Step 2: SomeServlet.java injects the default ContextService, uses it to create a new contextual object proxy for SomeTask, and then submits the contextual object proxy to a Java SE ExecutorService. Each invocation of the run() method will have the context of the servlet that created the contextual object proxy.

@WebServlet("/SomeServlet")

public class SomeServlet extends HttpServlet {

    @Resource ContextService cs;

    @Resource ManagedThreadFactory mtf;

    ExecutorService exSvc = Executors.newFixedThreadPool(10, mtf);

    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {

        SomeTask taskInstance = new SomeTask();

        Runnable rProxy = cs.createContextualProxy(taskInstance, Runnable.class);

        Future f = exSvc.submit(rProxy);

        // Process the result and reply to the user 

    }

}

Runtime Behavior

Application Scoped Instance

ContextServices are application scoped. Each application has its own default ContextService instance, and the lifecycle of the ContextService instance is bound to the application. Proxy objects created by a ContextService are also application scoped, so that when the application is shut down, invocations of proxied interface methods fail with an IllegalStateException, and calls to createContextualProxy() throw an IllegalArgumentException.

WebLogic Server only provides a default ContextService instance for each application, and does not provide any way to configure a ContextService.

Context Propagation

ContextService will capture the application context at contextual proxy object creation, then propagate the captured application context before invocation of contextual proxy object methods, so that proxy object methods can also run with the application context.

Four types of application context are propagated: JNDI, ClassLoader, Security and WorkArea. The propagated context types are the same for all four types of concurrent managed objects.

Related Articles:

For more details, see Configuring Concurrent Managed Objects in the product documentation.

Concurrency Utilities support in WebLogic Server 12.2.1, Part Three: ManagedThreadFactory

Overview

ManagedThreadFactory is for creating threads managed by WebLogic Server. It extends java.util.concurrent.ThreadFactory without adding new methods, and provides the newThread method from ThreadFactory. It can be used with Java SE concurrency utilities APIs where a ThreadFactory is needed, e.g. in java.util.concurrent.Executors.

WebLogic Server provides a preconfigured, default ManagedThreadFactory for each application, and applications can easily use it in web or EJB components without any configuration. Let's begin with a simple example that uses the default ManagedThreadFactory in a servlet.

Example-1: Use Default ManagedThreadFactory to Create a Thread in a Servlet

Step 1: Write a Runnable that logs data until the thread is interrupted.

public class LoggerTask implements Runnable {

    @Override
    public void run() {
        while (!Thread.interrupted()) {
            // collect data and write them to database or file system
        }
    }
}

Step2: SomeServlet.java injects the default ManagedThreadFactory and uses it to create the thread.

@WebServlet("/SomeServlet")

public class SomeServlet extends HttpServlet {

    @Resource ManagedThreadFactory mtf;

    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {

        Thread t = mtf.newThread(new LoggerTask());

        t.start();

        // Do something else and reply to the user

    }

}

Runtime Behavior

Application Scoped Instance

ManagedThreadFactories are application scoped. Each application has its own ManagedThreadFactory instances, and the lifecycle of those instances is bound to the application. Threads created by a ManagedThreadFactory are also application scoped, so when the application is shut down, the related threads are interrupted.

Each application has its own default ManagedThreadFactory instance. In addition, applications or system administrators can define customized ManagedThreadFactories. Please note that even ManagedThreadFactory templates (see a later section) defined globally in the console are application scoped at runtime.

Context Propagation

A ManagedThreadFactory captures the application context when the ManagedThreadFactory is created (NOT when the newThread method is invoked), and then propagates the captured context before task execution, so the task also runs with the application context.

Four types of application context are propagated: JNDI, ClassLoader, Security, and WorkArea. The propagated context types are the same for all four types of concurrent managed objects.

Limit of Running Threads

When the newThread method is invoked, WebLogic Server creates a new thread. Because an excessive number of running threads can have a negative effect on server performance and stability, WebLogic Server provides configuration settings (Max Concurrent New Threads) to limit the number of running threads created by a ManagedThreadFactory instance, in the global (domain-level) runtime on a server, or in the server. By default, the limits are 10 for a ManagedThreadFactory instance, 50 for the global (domain-level) runtime on a server, and 100 for a server. When any of these limits is exceeded, calls to the newThread() method of a ManagedThreadFactory return null.

Please note the difference between the global (domain-level) runtime scope Max Concurrent New Threads and the server scope Max Concurrent New Threads. One of the key features in WLS 12.2.1 is multi-tenancy support, where a single WebLogic Server domain can contain multiple partitions. The global (domain-level) runtime Max Concurrent New Threads is the maximum number of threads created by all of the ManagedThreadFactories on the server for the global (domain-level) runtime, excluding threads created within the scope of partitions running on the server. The server scope Max Concurrent New Threads is the maximum number of threads created by all of the ManagedThreadFactories on the server, including threads created within the scope of partitions. For the partition scope Max Concurrent New Threads, please read Part Five - Multi-tenancy Support.

A ManagedThreadFactory returns a new thread only when none of the three limits is exceeded. For instance, for an application deployed to the global (domain-level) runtime on a server, servlets or EJBs that invoke the newThread method of the default ManagedThreadFactory will get null if there are 10 in-progress threads created by this ManagedThreadFactory, 50 in-progress threads created by the ManagedThreadFactories in the scope of the global (domain-level) runtime on the server, or 100 in-progress threads created by the ManagedThreadFactories in the server.
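Because newThread() returns null once a limit is reached, callers should check the result before calling start(). A minimal defensive variant of Example-1's doGet (reusing the injected mtf and the LoggerTask shown above) might look like this sketch:

protected void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    Thread t = mtf.newThread(new LoggerTask());
    if (t == null) {
        // A Max Concurrent New Threads limit has been reached:
        // fail fast, retry later, or report the condition instead of dereferencing null.
        response.sendError(HttpServletResponse.SC_SERVICE_UNAVAILABLE, "thread limit reached");
        return;
    }
    t.start();
    // Do something else and reply to the user
}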

There are examples on how to specify the Max Concurrent New Threads in a later section.

Configuration

As mentioned earlier, each application has its own default ManagedThreadFactory. The default ManagedThreadFactory has a default Max Concurrent New Threads value (10) and a default thread priority (Thread.NORM_PRIORITY). There is also a default Max Concurrent New Threads value (100) for the whole server. If the defaults do not meet your needs, read on for the available configuration options. For instance, if you need to create threads with a higher priority, you will need to configure a ManagedThreadFactory; and if there could be more than 100 concurrently running threads in the server, you will need to change the server scope Max Concurrent New Threads.

Configure ManagedThreadFactories

Name, Max Concurrent New Threads, and Priority are configured inside a ManagedThreadFactory. Name is a string that identifies the ManagedThreadFactory, Max Concurrent New Threads is the limit of running threads created by this ManagedThreadFactory, and Priority is the priority of the threads it creates.

An application can configure a ManagedThreadFactory in a deployment descriptor (weblogic-application.xml, weblogic-ejb-jar.xml, or weblogic.xml), obtain the ManagedThreadFactory instance using @Resource(mappedName=<Name of ManagedThreadFactory>), and then use it to create threads. Besides annotation injection, the application can also bind the ManagedThreadFactory instance to JNDI by specifying <resource-env-description> and <resource-env-ref> in the deployment descriptors and then look it up using a JNDI naming context; see Configuring Concurrent Managed Objects in the product documentation for details.
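For the JNDI route, a minimal lookup sketch is shown below; the name concurrent/customizedMTF is a hypothetical resource-env-ref name that would have to match what the deployment descriptors declare.

import javax.enterprise.concurrent.ManagedThreadFactory;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class MtfLookupHelper {

    /** Looks up a ManagedThreadFactory bound by the deployment descriptors. */
    public static ManagedThreadFactory lookupCustomizedMtf() throws NamingException {
        InitialContext ctx = new InitialContext();
        // "concurrent/customizedMTF" is a hypothetical resource-env-ref name.
        return (ManagedThreadFactory) ctx.lookup("java:comp/env/concurrent/customizedMTF");
    }
}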

Also, a WebLogic system administrator can configure pre-defined ManagedThreadFactory templates. When an application is deployed, WebLogic Server creates ManagedThreadFactories based on the configuration of the ManagedThreadFactory templates, and the created ManagedThreadFactories are all scoped to that application.

Example-2: Configure a ManagedThreadFactory in weblogic.xml

Step1: defining ManagedThreadFactory:

<!-- weblogic.xml -->
<managed-thread-factory>
    <name>customizedMTF</name>
    <priority>3</priority>
    <max-concurrent-new-threads>20</max-concurrent-new-threads>
</managed-thread-factory>

Step2: obtaining the ManagedThreadFactory instance to use

@WebServlet("/SomeServlet")

public class SomeServlet extends HttpServlet {

    @Resource(mappedName="customizedMTF")

    ManagedThreadFactory mtf;

    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {

        Runnable aTask = new Runnable() {
            ...
        };
        Thread t = mtf.newThread(aTask);

        t.start();
        ...
    }

}

Example-3: Configure a ManagedThreadFactory template using WebLogic Administration Console

If a requirement applies to multiple applications rather than an individual application, you can create ManagedThreadFactory templates globally so that they are available to all applications. For instance, if you need threads created by all applications to run with a lower priority, you will need to configure a ManagedThreadFactory template. As mentioned earlier, if there is a ManagedThreadFactory template, WebLogic Server creates a ManagedThreadFactory instance for each application based on the configuration of the template.

Step1: in WebLogic Administration Console, a ManagedThreadFactory template can be created by clicking on the “New” button from the “Summary of Concurrent Managed Object Templates” page. This brings up the "Create a New Managed Thread Factory Template" page where the name and other parameters of the new ManagedThreadFactory template can be specified. In this example, a ManagedThreadFactory template called "testMTF" is being created with priority 3.


Step2: Once a ManagedThreadFactory template is created, any application in the WebLogic Server can get its own ManagedThreadFactory instance to use.

@WebServlet("/SomeServlet")

public class SomeServlet extends HttpServlet {

    @Resource(mappedName="testMTF")

    ManagedThreadFactory mtf;

    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {

        Runnable aTask = new Runnable() {
            ...
        };
        Thread t = mtf.newThread(aTask);

        t.start();
        ...
    }

}

Configure Max Concurrent New Threads in global (domain-level) runtime scope or server scope

Example-4: Configure global (domain-level) runtime Scope Max Concurrent New Threads

Max Concurrent New Threads of the global (domain-level) runtime is the limit of threads created by ManagedThreadFactories in the global (domain-level) runtime on that server; it excludes threads created within the scope of partitions running on that server.

In WebLogic Administration Console, Max Concurrent New Threads of global (domain-level) runtime can be edited from the “Settings for <domainName>” screen. In this example, global (domain-level) runtime Max Concurrent New Threads of mydomain is set to 100.


Example-5: Configure Server Scope Max Concurrent New Threads

Max Concurrent New Threads of a server is the limit of running threads created by all ManagedThreadFactories in that server.

In WebLogic Administration Console, Max Concurrent New Threads of a server can be edited from the “Settings for <serverName>” screen. In this example, Max Concurrent New Threads of myserver is set to 200.


Related Articles:

See Configuring Concurrent Managed Objects in the product documentation for more details.

Concurrency Utilities support in WebLogic Server 12.2.1, Part Two: ManagedScheduledExecutorService

Overview

ManagedScheduledExecutorService extends ManagedExecutorService, and all the methods from ManagedExecutorService are supported by ManagedScheduledExecutorService, so please read Part One: ManagedExecutorService before this article.

ManagedScheduledExecutorService also extends java.util.concurrent.ScheduledExecutorService, so it provides the methods (schedule, scheduleAtFixedRate, scheduleWithFixedDelay) from ScheduledExecutorService for scheduling tasks to run after a given delay or periodically. In addition, ManagedScheduledExecutorService provides its own schedule methods that run tasks on a custom schedule based on a Trigger. All of these tasks run on threads provided by WebLogic Server.

WebLogic Server provides a preconfigured, default ManagedScheduledExecutorService for each application, and we can easily use it in web or EJB components without any configuration. Let's begin with a simple example that uses the default ManagedScheduledExecutorService in a ServletContextListener.

Example-1: Use Default ManagedScheduledExecutorService to Submit a Periodic Task

Step1: Write a task to log data.

public class LoggerTask implements Runnable {

    @Override
    public void run() {
        // collect data and write them to database or file system
    }
}

Step2: SomeListener.java injects the default ManagedScheduledExecutorService, schedules the task periodically in contextInitialized, and cancels the task in contextDestroyed.

@WebListener
public class SomeListener implements ServletContextListener {

    Future loggerHandle = null;

    @Resource ManagedScheduledExecutorService mses;

    public void contextInitialized(ServletContextEvent scEvent) {
        // Creates and executes LoggerTask every 5 seconds, beginning 1 second from now
        loggerHandle = mses.scheduleAtFixedRate(new LoggerTask(), 1, 5, TimeUnit.SECONDS);
    }

    public void contextDestroyed(ServletContextEvent scEvent) {
        // Cancel and interrupt our logger task
        if (loggerHandle != null) {
            loggerHandle.cancel(true);
        }
    }
}

Runtime Behavior

ManagedScheduledExecutorService provides all the features described in Runtime Behavior of Part One: ManagedExecutorService.

As mentioned earlier, a ManagedScheduledExecutorService can run tasks periodically or on a custom schedule, so a task can run multiple times. Please note that for a long-running task, even if the task is executed more than once, WebLogic Server creates only one thread for it, at the time of the first run. A sketch of a custom Trigger follows.
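To illustrate the Trigger-based scheduling mentioned above, here is a minimal sketch of a custom Trigger that fires immediately and then 30 seconds after each run completes, stopping after a configurable end time (returning null from getNextRunTime is used here to indicate that no further runs are wanted). The class name and the 30-second interval are assumptions for the sketch.

import java.util.Date;
import javax.enterprise.concurrent.LastExecution;
import javax.enterprise.concurrent.Trigger;

/** Runs a task every 30 seconds until a fixed end time. */
public class ThirtySecondTrigger implements Trigger {

    private final Date endTime;

    public ThirtySecondTrigger(Date endTime) {
        this.endTime = endTime;
    }

    @Override
    public Date getNextRunTime(LastExecution lastExecution, Date taskScheduledTime) {
        if (lastExecution == null) {
            return taskScheduledTime;                  // first run: as soon as it was scheduled
        }
        Date next = new Date(lastExecution.getRunEnd().getTime() + 30_000);
        return next.after(endTime) ? null : next;      // null: no further executions
    }

    @Override
    public boolean skipRun(LastExecution lastExecution, Date scheduledRunTime) {
        return false;                                  // never skip a scheduled run
    }
}

Scheduling with it could then look like mses.schedule(new LoggerTask(), new ThirtySecondTrigger(someEndTime)), reusing the LoggerTask from Example-1.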

Configuration

Configure ManagedScheduledExecutorService

ManagedScheduledExecutorService has the same configuration settings (Name, Dispatch Policy, Max Concurrent Long Running Requests, and Long Running Priority) as ManagedExecutorService, and the way to get and use a customized ManagedScheduledExecutorService is also similar.

Example-2: Configure a ManagedScheduledExecutorService in weblogic.xml

Step1: defining ManagedScheduledExecutorService:

<!-- weblogic.xml -->
<work-manager>
    <name>customizedWM</name>
    <max-threads-constraint>
        <name>max</name>
        <count>1</count>
    </max-threads-constraint>
</work-manager>

<managed-scheduled-executor-service>
    <name>customizedMSES</name>
    <dispatch-policy>customizedWM</dispatch-policy>
    <long-running-priority>10</long-running-priority>
    <max-concurrent-long-running-requests>20</max-concurrent-long-running-requests>
</managed-scheduled-executor-service>

Step2: obtaining the ManagedScheduledExecutorService instance to use

@WebServlet("/SomeServlet")

public class SomeServlet extends HttpServlet {

    @Resource(mappedName="customizedMSES")

    ManagedScheduledExecutorService mses;

    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {

        Runnable aTask = new Runnable() {
            ...
        };
        mses.schedule(aTask, 5, TimeUnit.SECONDS);
        ...
    }

}


Example-3: Configure a ManagedScheduledExecutorService template using WebLogic Administration Console

Step1: in WebLogic Administration Console, a ManagedScheduledExecutorService template can be created by clicking on the “New” button from the “Summary of Concurrent Managed Object Templates” page. This brings up the "Create a New Managed Scheduled Executor Service Template" page where the name and other parameters of the new ManagedScheduledExecutorService template can be specified. In this example, a ManagedScheduledExecutorService called "testMSES" is being created to map to a pre-defined work manager "testWM".


Step2: Once a ManagedScheduledExecutorService template is created, any application in the WebLogic Server can get its own ManagedScheduledExecutorService instance to use.

@WebServlet("/SomeServlet")

public class SomeServlet extends HttpServlet {

    @Resource(mappedName="testMSES")

    ManagedScheduledExecutorService mses;

    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {

        Runnable aTask = new Runnable() {
            ...
        };
        mses.schedule(aTask, 5, TimeUnit.SECONDS);
        ...
    }

}

Related Articles:

See Configuring Concurrent Managed Objects in the product documentation for more details.

Concurrency Utilities support in WebLogic Server 12.2.1, Part One: ManagedExecutorService

Overview

ManagedExecutorService is for running tasks asynchronously on threads provided by WebLogic Server. It extends java.util.concurrent.ExecutorService without adding new methods: it provides the methods (execute, submit, invokeAll, invokeAny) from ExecutorService, while the lifecycle methods (awaitTermination, isTerminated, isShutdown, shutdown, shutdownNow) are disabled and throw IllegalStateException.

WebLogic Server provides a preconfigured, default ManagedExecutorService for each application, and applications can easily use it in web or EJB components without any configuration. Let's begin with a simple example that uses the default ManagedExecutorService in a servlet.

Example-1: Use Default ManagedExecutorService to Submit an Asynchronous Task in a Servlet

Step1: Write an asynchronous task. Asynchronous tasks must implement either java.util.concurrent.Callable or java.lang.Runnable. A task can optionally implement javax.enterprise.concurrent.ManagedTask (see the JSR 236 specification) to provide identifying information, a ManagedTaskListener, or additional execution properties for the task.

public class SomeTask implements Callable<Integer> {

    public Integer call() {
        // Interact with a database, then return the answer
        return 0; // placeholder result
    }
}

Step2: SomeServlet.java injects the default ManagedExecutorService and submits the task to it.

@WebServlet("/SomeServlet")

public class SomeServlet extends HttpServlet {

    @Resource ManagedExecutorService mes;

     protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {

        // Create and submit the task instances

        Future<Integer> result = mes.submit(new SomeTask());

        // do something else

        try {

            // Wait for the result

            Integer value = result.get();

            // Process the result and reply to the user

        } catch (InterruptedException | ExecutionException e) {

            throw new ServletException("failed to get result for SomeTask", e);

        }

    }

}

Runtime Behavior


Application Scoped Instance

There are two applications (A in red and B in green) in the above figure. You can see that the two applications submit tasks to different ManagedExecutorService instances; this is because ManagedExecutorServices are application scoped. Each application has its own ManagedExecutorService instances, and the lifecycle of those instances is bound to the application. Asynchronous tasks submitted to ManagedExecutorServices are also application scoped, so when the application is shut down, the related asynchronous tasks are cancelled and their threads interrupted.

Each application has its own default ManagedExecutorService instance. In addition, applications or system administrators can define customized ManagedExecutorServices. Please note that even ManagedExecutorService templates (see a later section) defined globally in the console are application scoped at runtime.

Context Propagation

In the above figure you can see that when application A submits a task, the task is wrapped with the context of application A, whereas when application B submits a task, the task is wrapped with the context of application B. This is because a ManagedExecutorService captures the application context at task submission and then propagates the captured context before task execution, so the task also runs with the application context.

Four types of application context are propagated: JNDI, ClassLoader, Security, and WorkArea. The propagated context types are the same for all four types of concurrent managed objects.

Self Tuning (for short-running tasks)

In the above figure you can see that ManagedExecutorServices submit short-running tasks to WorkManagers (see Workload Management in WebLogic Server 9.0 for an overview of WebLogic work managers), and create a new thread for each long-running task. As you may know, WebLogic Server diagnoses a thread as stuck if it is continually working (not idle) for a set period of time (10 minutes by default), so a task that would last longer than that period is normally a candidate to be a long-running task. You can set the ManagedTask.LONGRUNNING_HINT property (see the JSR 236 specification) to "true" to make a task run as a long-running task.
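One way to set that hint is to have the task implement ManagedTask and return the property from getExecutionProperties(); the class below is a hypothetical sketch of that pattern.

import java.util.Collections;
import java.util.Map;
import javax.enterprise.concurrent.ManagedTask;
import javax.enterprise.concurrent.ManagedTaskListener;

/** A task flagged as long-running, so it gets its own thread instead of a self-tuning pool thread. */
public class NightlyExportTask implements Runnable, ManagedTask {

    @Override
    public void run() {
        // long-running work, for example exporting a large data set
    }

    @Override
    public Map<String, String> getExecutionProperties() {
        return Collections.singletonMap(ManagedTask.LONGRUNNING_HINT, "true");
    }

    @Override
    public ManagedTaskListener getManagedTaskListener() {
        return null;   // no lifecycle callbacks needed
    }
}

Submitting it with mes.submit(new NightlyExportTask()) then causes the task to be treated as long-running.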

Each ManagedExecutorService is associated with an application-scoped WorkManager. By default, ManagedExecutorServices are associated with the application default WorkManager. Applications or system administrators can specify Dispatch Policy to associate a ManagedExecutorService with a specific application-scoped WorkManager. There are examples on how to use the dispatch policy in a later section.

By associating a ManagedExecutorService with a WorkManager, WebLogic Server utilizes threads in the single self-tuning thread pool to run asynchronous tasks from applications, so asynchronous tasks can be dynamically prioritized together with servlet or RMI requests.

Limit of Concurrent Long-running Requests

As mentioned earlier, long-running tasks do not utilize threads in the single thread pool; WebLogic Server creates a new thread for each such task. Because an excessive number of running threads can have a negative effect on server performance and stability, WebLogic Server provides configuration settings (Max Concurrent Long Running Requests) to limit the number of concurrent long-running tasks in a ManagedExecutorService/ManagedScheduledExecutorService instance, in the global (domain-level) runtime on a server, or in the server. By default, the limits are 10 for a ManagedExecutorService/ManagedScheduledExecutorService instance, 50 for the global (domain-level) runtime on a server, and 100 for a server. When any of these limits is exceeded, the ManagedExecutorService/ManagedScheduledExecutorService rejects long-running task submissions by throwing a RejectedExecutionException.

Please note the difference between the global (domain-level) runtime scope Max Concurrent Long Running Requests and the server scope Max Concurrent Long Running Requests. One of the key features in WLS 12.2.1 is multi-tenancy support, where a single WebLogic Server domain can contain multiple partitions. The global (domain-level) runtime scope Max Concurrent Long Running Requests is the maximum number of concurrent long-running tasks submitted to all of the ManagedExecutorServices/ManagedScheduledExecutorServices on the server for the global (domain-level) runtime, excluding tasks submitted within the scope of partitions running on the server. The server scope Max Concurrent Long Running Requests is the maximum number of concurrent long-running tasks submitted to all of the ManagedExecutorServices/ManagedScheduledExecutorServices on the server, including tasks submitted within the global (domain-level) runtime and partitions. For the partition scope Max Concurrent Long Running Requests, please read Part Five - Multi-tenancy Support.

A ManagedExecutorService/ManagedScheduledExecutorService accepts a long-running task submission only when none of the three limits is exceeded. For instance, for an application deployed to the global (domain-level) runtime on a server, when a long-running task is submitted to its default ManagedExecutorService, a RejectedExecutionException is thrown if there are 10 in-progress long-running tasks submitted to this ManagedExecutorService, 50 in-progress long-running tasks submitted to the ManagedExecutorServices/ManagedScheduledExecutorServices in the scope of the global (domain-level) runtime on the server, or 100 in-progress long-running tasks submitted to the ManagedExecutorServices/ManagedScheduledExecutorServices in the server.
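Because the rejection surfaces as an unchecked RejectedExecutionException, submitters of long-running work may want to catch it explicitly. A minimal sketch, reusing the hypothetical NightlyExportTask from the earlier sketch and an injected ManagedExecutorService mes:

import java.util.concurrent.Future;
import java.util.concurrent.RejectedExecutionException;

// ... inside a servlet or EJB method
try {
    Future<?> handle = mes.submit(new NightlyExportTask());
    // keep the handle if you need to cancel the work later
} catch (RejectedExecutionException e) {
    // A Max Concurrent Long Running Requests limit has been reached:
    // back off, queue the work for later, or report the condition.
}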

There are examples on how to specify the Max Concurrent Long Running Requests in a later section.

Configuration

As mentioned earlier, each application has its own default ManagedExecutorService. The default ManagedExecutorService is associated with the default WorkManager, has a default Max Concurrent Long Running Requests value (10), and a default thread priority (Thread.NORM_PRIORITY). There is also a default Max Concurrent Long Running Requests value (100) for the whole server. If the defaults do not meet your needs, read on for the available configuration options. For instance, if you need to associate short-running tasks with a pre-defined WorkManager of higher priority, you will need to configure a ManagedExecutorService; and if there could be more than 100 concurrent long-running tasks in the server, you will need to change the server scope Max Concurrent Long Running Requests.

Configure ManagedExecutorServices

Name, Dispatch Policy, Max Concurrent Long Running Requests, and Long Running Priority are configured inside a ManagedExecutorService. Name is a string that identifies the ManagedExecutorService, Dispatch Policy is the name of the WorkManager to which short-running tasks are submitted, Max Concurrent Long Running Requests is the limit of concurrent long-running tasks submitted to this ManagedExecutorService, and Long Running Priority is the priority of the threads created for long-running tasks.

An application can configure a ManagedExecutorService in a deployment descriptor (weblogic-application.xml, weblogic-ejb-jar.xml, or weblogic.xml), obtain the ManagedExecutorService instance using @Resource(mappedName=<Name of ManagedExecutorService>), and then submit tasks to it. Besides annotation injection, the application can also bind the ManagedExecutorService instance to JNDI by specifying <resource-env-description> and <resource-env-ref> in the deployment descriptors and then look it up using a JNDI naming context; see Configuring Concurrent Managed Objects in the product documentation for details.

Also, a WebLogic system administrator can configure pre-defined ManagedExecutorService templates. When an application is deployed, WebLogic Server creates ManagedExecutorServices based on the configuration of the ManagedExecutorService templates, and the created ManagedExecutorServices are all scoped to that application.

Example-2: Configure a ManagedExecutorService in weblogic.xml

Step1: defining ManagedExecutorService:

<!-- weblogic.xml -->
<work-manager>
    <name>customizedWM</name>
    <max-threads-constraint>
        <name>max</name>
        <count>1</count>
    </max-threads-constraint>
</work-manager>

<managed-executor-service>
    <name>customizedMES</name>
    <dispatch-policy>customizedWM</dispatch-policy>
    <long-running-priority>10</long-running-priority>
    <max-concurrent-long-running-requests>20</max-concurrent-long-running-requests>
</managed-executor-service>

Step2: obtaining the ManagedExecutorService instance to use

@WebServlet("/SomeServlet")

public class SomeServlet extends HttpServlet {

    @Resource(mappedName="customizedMES")

     ManagedExecutorService mes;

     protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {

         Runnable aTask = new Runnable() {
             ...
         };
         mes.submit(aTask);
         ...
    }

}

Example-3: Configure a ManagedExecutorService template using WebLogic Administration Console

If a requirement applies to multiple applications rather than an individual application, you can create ManagedExecutorService templates globally so that they are available to all applications. For instance, if you need to run short-running tasks from all applications with a lower priority, you will need to configure a ManagedExecutorService template. ManagedExecutorService templates are also useful for Batch jobs. As mentioned earlier, if there is a ManagedExecutorService template, WebLogic Server creates a ManagedExecutorService instance for each application based on the configuration of the template.

Step1: in WebLogic Administration Console, a ManagedExecutorService template can be created by clicking on the “New” button from the “Summary of Concurrent Managed Object Templates” page. This brings up the "Create a New Managed Executor Service Template" page where the name and other parameters of the new ManagedExecutorService template can be specified. In this example, a ManagedExecutorService called "testMES" is being created to map to a pre-defined work manager "testWM".


Step2: Once a ManagedExecutorService template is created, any application in the WebLogic Server can get its own ManagedExecutorService instance to use.

@WebServlet("/SomeServlet")

public class SomeServlet extends HttpServlet {

    @Resource(mappedName="testMES")

    ManagedExecutorService mes;

    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {

        Runnable aTask = new Runnable() {
            ...
        };
        mes.submit(aTask);
        ...
    }

}

Configure Max Concurrent Long Running Requests in global (domain-level) runtime scope or server scope

Example-4: Configure global (domain-level) runtime Scope Max Concurrent Long Running Requests

Max Concurrent Long Running Requests of the global (domain-level) runtime is the limit of concurrent long-running tasks submitted to all ManagedExecutorServices and ManagedScheduledExecutorServices in the global (domain-level) runtime on that server; it excludes long-running tasks submitted within the scope of partitions running on that server.

In the WebLogic Administration Console, the global (domain-level) runtime Max Concurrent Long Running Requests can be edited from the “Settings for <domainName>” screen. In this example, the global (domain-level) runtime Max Concurrent Long Running Requests for mydomain is set to 80.

Example-5: Configure Server Scope Max Concurrent Long Running Requests

Max Concurrent Long Running Requests of a server is the limit of concurrent long-running tasks submitted to all ManagedExecutorServices and ManagedScheduledExecutorServices in that server.

In the WebLogic Administration Console, Max Concurrent Long Running Requests of a server can be edited from the “Settings for <serverName>” screen. In this example, Max Concurrent Long Running Requests of myserver is set to 200.


Related Articles:

See Configuring Concurrent Managed Objects in the product documentation for more details.

Concurrency Utilities support in WebLogic Server 12.2.1

As part of its support for Java EE 7, WebLogic Server 12.2.1 supports the Java EE Concurrency Utilities (JSR 236) specification.

This specification provides a simple, standardized API (four types of managed objects) for using concurrency from Java EE application components (such as servlets and EJBs). The four types of concurrent managed objects implement these interfaces in the javax.enterprise.concurrent package: ManagedExecutorService, ManagedScheduledExecutorService, ManagedThreadFactory, and ContextService.

If you are still using common Java SE concurrency APIs such as java.lang.Thread or java.util.Timer directly in your servlets or EJBs, you are strongly encouraged to use the Java EE Concurrency Utilities instead. Threads created with Java SE concurrency APIs are not managed by WebLogic Server, so services and resources provided by WebLogic Server typically cannot be used reliably from these unmanaged threads. With the Java EE Concurrency Utilities, asynchronous tasks run on WebLogic Server-managed threads. Since WebLogic Server has knowledge of these threads and asynchronous tasks, it can manage them in the following ways (a lookup sketch of the default objects follows the list):

  • Providing the proper execution context, including JNDI, ClassLoader, Security, WorkArea
  • Submitting short-running tasks to the single server-wide self-tuning thread pool to make them prioritized based on defined rules and run-time metrics
  • Limiting the number of threads for long-running tasks to prevent a negative effect on server performance and stability
  • Managing the lifecycle of asynchronous tasks by interrupting threads/cancelling tasks when the application shuts down
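As a Java EE 7 platform requirement, the default managed objects should also be reachable under the standard JNDI names defined by the platform specification; the sketch below shows those names for reference (the @Resource injections used throughout these articles remain the simpler option).

import javax.enterprise.concurrent.ContextService;
import javax.enterprise.concurrent.ManagedExecutorService;
import javax.enterprise.concurrent.ManagedScheduledExecutorService;
import javax.enterprise.concurrent.ManagedThreadFactory;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class DefaultConcurrencyObjects {

    /** Looks up the four default managed objects by their standard Java EE 7 JNDI names. */
    public static void lookupDefaults() throws NamingException {
        InitialContext ctx = new InitialContext();
        ManagedExecutorService mes =
                (ManagedExecutorService) ctx.lookup("java:comp/DefaultManagedExecutorService");
        ManagedScheduledExecutorService mses =
                (ManagedScheduledExecutorService) ctx.lookup("java:comp/DefaultManagedScheduledExecutorService");
        ManagedThreadFactory mtf =
                (ManagedThreadFactory) ctx.lookup("java:comp/DefaultManagedThreadFactory");
        ContextService cs =
                (ContextService) ctx.lookup("java:comp/DefaultContextService");
        // use the objects ...
    }
}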

The CommonJ API (providing context-aware Work Managers and Timers) is WebLogic Server specific and is the predecessor of the Java EE Concurrency Utilities. Compared to the CommonJ API, the Java EE Concurrency Utilities are more standardized, easier to use, and provide more functionality, such as custom scheduling, ContextService, and ManagedThreadFactory.

Read these articles for details:

See Configuring Concurrent Managed Objects in the product documentation for more details.

Thursday Nov 19, 2015

WLS Replay Statistics

Starting with the 12.1.0.2 Oracle thin driver, the replay driver maintains statistics related to replay. This is useful for understanding how many connections are being replayed. Replay should be completely transparent to the application, so you won't know whether connection replays are occurring unless you check.

The statistics are available on a per-connection basis or on a per-datasource basis. However, connections on a WLS datasource don't share a driver-level datasource object, so the latter isn't useful in WLS. WLS 12.2.1 provides another mechanism to get the statistics at the datasource level.

The following code sample shows how to print out the available statistics for an individual connection using the oracle.jdbc.replay.ReplayableConnection interface, which exposes the method to get a oracle.jdbc.replay.ReplayStatistics object. See https://docs.oracle.com/database/121/JAJDB/oracle/jdbc/replay/ReplayStatistics.html for a description of the statistics values.

if (conn instanceof ReplayableConnection) {
  ReplayableConnection rc = ((ReplayableConnection)conn);
  ReplayStatistics rs = rc.getReplayStatistics(
    ReplayableConnection.StatisticsReportType.FOR_CURRENT_CONNECTION);
  System.out.println("Individual Statistics");
  System.out.println("TotalCalls="+rs.getTotalCalls());
  System.out.println("TotalCompletedRequests="+rs.getTotalCompletedRequests());
  System.out.println("FailedReplayCount="+rs.getFailedReplayCount());
  System.out.println("TotalRequests="+rs.getTotalRequests());
  System.out.println("TotalCallsTriggeringReplay="+rs.getTotalCallsTriggeringReplay());
  System.out.println("TotalReplayAttempts="+rs.getTotalReplayAttempts());
  System.out.println("TotalProtectedCalls="+rs.getTotalProtectedCalls());
  System.out.println("SuccessfulReplayCount="+rs.getSuccessfulReplayCount());
  System.out.println("TotalCallsAffectedByOutages="+rs.getTotalCallsAffectedByOutages()); 
  System.out.println("TotalCallsAffectedByOutagesDuringReplay="+  
      rs.getTotalCallsAffectedByOutagesDuringReplay());  
  System.out.println("ReplayDisablingCount="+rs.getReplayDisablingCount());
}

Besides a getReplayStatistics() method, there is also a clearReplayStatistics() method.

To provide a consolidated view of all of the connections associated with a WLS datasource, the information is available via a new operation on the associated runtime MBean. You need to look up the WLS MBean server, get the JDBC service, then search for the datasource name in the list of JDBC datasource runtime MBeans, and get the JDBCReplayStatisticsRuntimeMBean. This value will be null if the datasource is not using a replay driver, if the driver is earlier than 12.1.0.2, or if it's not a Generic or AGL datasource. To use the replay information, you first call the refreshStatistics() operation, which sets the MBean values by aggregating the values for all connections on the datasource. Then you can call the operations on the MBean to get the statistics values, as in the following sample code. Note that there is also a clearStatistics() operation to clear the statistics on all connections on the datasource. The following code shows an example of how to print the aggregated statistics from the datasource.

public void printReplayStats(String dsName) throws Exception {
  MBeanServer server = getMBeanServer();
  ObjectName[] dsRTs = getJdbcDataSourceRuntimeMBeans(server);
  for (ObjectName dsRT : dsRTs) {
    String name = (String)server.getAttribute(dsRT, "Name");
    if (name.equals(dsName)) {
      ObjectName mb =(ObjectName)server.getAttribute(dsRT,  
        "JDBCReplayStatisticsRuntimeMBean");
      server.invoke(mb,"refreshStatistics", null, null);
      MBeanAttributeInfo[] attributes = server.getMBeanInfo(mb).getAttributes();
      System.out.println("Roll-up");
      for (int i = 0; i <attributes.length; i++) {
        if(attributes[i].getType().equals("java.lang.Long")) {
          System.out.println(attributes[i].getName()+"="+
            (Long)server.getAttribute(mb, attributes[i].getName()));
        }
      }
    }
  }
}

MBeanServer getMBeanServer() throws Exception {
  InitialContext ctx = new InitialContext();
  MBeanServer server = (MBeanServer)ctx.lookup("java:comp/env/jmx/runtime");
  return server;
}
ObjectName[] getJdbcDataSourceRuntimeMBeans(MBeanServer server) 
  throws Exception {
  ObjectName service = new ObjectName(
    "com.bea:Name=RuntimeService,Type=weblogic.management.mbeanservers.runtime.RuntimeServiceMBean");
  ObjectName serverRT = (ObjectName)server.getAttribute(service,  "ServerRuntime");
  ObjectName jdbcRT = (ObjectName)server.getAttribute(serverRT,  "JDBCServiceRuntime");
  ObjectName[] dsRTs = (ObjectName[])server.getAttribute(jdbcRT,
    "JDBCDataSourceRuntimeMBeans");
  return dsRTs;
}

Now run an application that gets a connection, does some work, kills the session, replays, then gets a second connection and does the same thing. Each connection successfully replays once. That means that the individual statistics show a single replay and the aggregated statistics will show two replays. Here is what the output might look like.

Individual Statistics
TotalCalls=35
TotalCompletedRequests=0
FailedReplayCount=0
TotalRequests=1
TotalCallsTriggeringReplay=1
TotalReplayAttempts=1
TotalProtectedCalls=19
SuccessfulReplayCount=1
TotalCallsAffectedByOutages=1
TotalCallsAffectedByOutagesDuringReplay=0
ReplayDisablingCount=0

Roll-up
TotalCalls=83
TotalCompletedRequests=2
FailedReplayCount=0
TotalRequests=4
TotalCallsTriggeringReplay=2
TotalReplayAttempts=2
TotalProtectedCalls=45
SuccessfulReplayCount=2
TotalCallsAffectedByOutages=2
TotalCallsAffectedByOutagesDuringReplay=0
ReplayDisablingCount=0

Looking carefully at the numbers, you can see that the individual count was done before the connections were closed (TotalCompletedRequests=0) and the roll-up was done after both connections were closed. 

You can also use WLST to get the statistics values for the datasource. The statistics are not visible in the administration console or FMWC in WLS 12.2.1.

Wednesday Nov 18, 2015

Patching Oracle Home Across your Domain with ZDT Patching

Now it’s time for the really good stuff! In this post, you will see how Zero Downtime (ZDT) Patching can be used to roll out a patched WebLogic OracleHome directory to all your managed servers (and optionally to your AdminServer) without incurring any downtime or loss of session data for your end users.

This rollout, like the others, is based on a controlled rolling shutdown of nodes, using the Oracle Traffic Director (OTD) load balancer to route user requests around the offline node. The difference with this rollout is what happens when the managed servers are shut down: the rollout moves the current OracleHome directory to a backup location and replaces it with a patched OracleHome directory that the administrator has prepared, verified, and distributed in advance. (More on the preparation in a moment.)

When everything has been prepared, starting the rollout is simply a matter of issuing a WLST command like this one:

rolloutOracleHome("Cluster1", "/pathTo/PatchedOracleHome.jar", "/pathTo/BackupCurrentOracleHome", "FALSE")

The AdminServer will then check that the PatchedOracleHome.jar file exists everywhere that it should, and it will begin the rollout. Note that the “FALSE” flag simply indicates that this is not a domain level rollback operation where we would be required to update the AdminServer last instead of first.

To prepare the patched OracleHome directory, as mentioned above, the administrator can start with a copy of a production OracleHome, usually in a test (non-production) environment, and apply the desired patches in whatever way is already familiar. Once this is done, the administrator uses the included CIE tool copyBinary to create a distributable jar archive of the OracleHome. Once the jar archive of the patched OracleHome directory has been created, it can be distributed to all of the nodes that will be updated. Note that it needs to reside on the same path on all nodes. With that, the preparation is complete and the rollout can begin!

Be sure to check back soon to read about how the preparation phase has been automated as well by integrating ZDT Patching with another new tool called OPatchAuto.


For more information about updating OracleHome with Zero Downtime Patching, view the documentation.
