Tuesday Feb 09, 2016

Cargo Tracker Java EE 7 Blue Prints Now Running on WebLogic 12.2.1

The Cargo Tracker project exists to serve as a blueprint for developing well-designed Java EE 7 applications, principally utilizing the well-known Domain-Driven Design (DDD) architectural paradigm. The project provides a firsthand look at what a realistic Java EE 7 application might look like.

The project was started some time ago and runs on GlassFish 4 and Java SE 7 by default. The project has now been enhanced to run the same code base on WebLogic 12.2.1 with Java SE 8. The code is virtually unchanged, and the minor configuration differences between GlassFish and WebLogic are handled through Maven profiles. The instructions for running the project on WebLogic are available here. Feel free to give it a spin to get a feel for the Java EE 7 development experience with WebLogic 12.2.1.

Tuesday Jan 12, 2016

ZDT Technical Topic: How are Those Sessions Kept Alive Anyway?

By now you have probably read documentation or previous blog posts about how Zero Downtime Patching provides a convenient automated method of updating a WebLogic domain in a rolling fashion. By automating the process, Zero Downtime Patching greatly saves time and eliminates potential human errors in a repetitive procedure. In addition, there are special features around replicated HTTP sessions that make sure end users do not lose their sessions at any point during the rollout process. Let's explore the technical details of maintaining session state during Zero Downtime Patching.

One of the key aspects of the WLS replicated session persistence contract is that the session may be maintained within the cluster even in the rare situation where a server crashes. However, the contract cannot guarantee that sessions will be maintained when more than a single server goes down in a short time period. This is because the session has a single copy replicated to some secondary server within the cluster, and the session is only replicated when the client makes a request to update it, so that the client's cookie can store a reference to the secondary server. Thus, if the primary server goes down and then the secondary server goes down before the session can be updated by a subsequent client request, the session is lost. The rolling nature of Zero Downtime Patching fits this pattern exactly, and so it must take extra care to avoid losing sessions. Administrators may have already observed that it is very easy to lose sessions by restarting one server at a time across a cluster.
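The failure sequence described above can be sketched with a toy model (purely illustrative; this is not WebLogic code, and the class and method names are invented):

```java
import java.util.HashSet;
import java.util.Set;

// Toy model: a session lives on a primary server with one replica on a
// secondary server. The replica only moves when a client request updates
// the session - exactly the assumption a rolling restart violates.
public class RollingRestartDemo {
    Set<String> liveServers = new HashSet<>();
    String primary, secondary;

    RollingRestartDemo(String primary, String secondary, String... others) {
        this.primary = primary;
        this.secondary = secondary;
        liveServers.add(primary);
        liveServers.add(secondary);
        for (String s : others) liveServers.add(s);
    }

    void shutdown(String server) { liveServers.remove(server); }

    // On a client request, the surviving copy becomes primary and a new
    // secondary is chosen from the remaining live servers.
    void clientRequest() {
        if (!sessionAlive()) return; // nothing left to re-replicate
        if (!liveServers.contains(primary)) primary = secondary;
        String p = primary;
        secondary = liveServers.stream()
                .filter(s -> !s.equals(p))
                .findFirst().orElse(null);
    }

    boolean sessionAlive() {
        return liveServers.contains(primary) || liveServers.contains(secondary);
    }
}
```

If no client request arrives between the two shutdowns, the session is lost; a request in between lets it re-replicate and survive. ZDT's preemptive replication closes exactly this gap without depending on a client request arriving at the right moment.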

Before we go into the technical details of how Zero Downtime Patching prevents session loss, it is important to note that the entire methodology relies on Oracle Traffic Director for load balancing, dynamic discovery, health checks, and session failover handling. On top of that setup, three key features are used by Zero Downtime Patching directly to prevent the loss of sessions:

Session Handling Overview

1. Preemptive Session Replication - Session data is preemptively propagated to another server in the cluster during graceful shutdown when necessary. To get more detailed, let's examine the scenario where the ZDT rollout has shut down the server holding the HTTP session, and the next step is to shut down the server holding the replica. In that case, WebLogic can detect during shutdown that the session would be lost because there is no backup copy within the cluster, so the ZDT rollout ensures that WebLogic Server replicates that session to another server within the cluster.

The illustration below shows the problematic scenario where the server, s1, holding the primary copy of the session is shut down, followed by the shutdown of the server, s2, holding the secondary or replica copy. The ZDT orchestration signals that s2 should preemptively replicate any lone session copies before shutting down, so there is always a copy available within the cluster.

Preemptive Session Replication on Shutdown

2. Session State Query Protocol - Because WebLogic Server relies on the association of an HTTP session with a primary server and a secondary server, it is not sufficient to simply have the session somewhere in the cluster. There is also a need to find the session when the client request lands on an arbitrary server within the cluster. The ZDT rollout enables WebLogic Server instances to query other servers in the cluster for specific sessions when they don't have their own copy.

Session Fetching via Session State Protocol Query

The diagram above shows that an incoming request to a server without the session can trigger a query; once the session is found within the cluster, it can be fetched so that the request can be served on the server, "s4".

3. Orphaned Session Cleanup - Once we combine the ability to preemptively replicate session instances with the ability to fetch sessions from within the cluster, we must also take a more active approach to cleaning up instances that have been fetched. Historically, WebLogic Server hasn't had to worry much about orphaned sessions. Front-end load balancers and web servers have been required to honor the session's server affinity, and in the rare case that a request landed on a server that held neither the primary nor the secondary, the session would be fetched from the primary or secondary server and the orphaned copy would simply be left to be cleaned up on timeout or at other regular intervals. It was assumed that because the pointer to the session had changed, the stale stored copy would never be used again. However, the ZDT rollout repeatedly presents the scenario where a session must be found within the cluster and fetched from the server that holds it. Not only can the number of session instances proliferate - all various versions of the same session - but the cluster is now queried for the copy, and the query must not find any stale copies, only the current replica of the session.

Orphaned Session Cleanup

The above illustration shows the cleanup action after s4 has fetched the session data to serve the incoming request. It sends a cleanup request to s3 to ensure no stale data is left within the cluster.


Now, during ZDT patching, we can shut down server1 and expect that any lone session copies will be propagated to server2 without the client's knowledge. When the client sends another request, WLS will be able to handle it and query the cluster to find the session data. The data will be fetched and used on the server handling the request, the orphaned copy will be cleaned up, and the server handling the request will go through the process of choosing its preferred secondary server to store the replica.

For more information about Zero Downtime Patching, view the documentation.




Friday Jan 08, 2016

ZDT Rollouts and Singletons

WebLogic Server offers messaging, transaction, and other system services to facilitate building enterprise-grade applications. Typically, services are either clustered or singleton. Clustered services are deployed identically to each server in a cluster to provide increased scalability and reliability; the session state of one clustered server is replicated on another server in the cluster. In contrast, singleton services run on only one server in a cluster at any given point in time, so as to offer a specific quality of service (QoS) but, most importantly, to preserve data consistency. Singleton services can be JMS-related, JTA-related, or user-defined. In highly available (HA) environments, it is important for all services to be up and running even during patch upgrades.

The new WebLogic Zero Downtime Patching (a.k.a ZDT patching) feature introduces a fully automated rolling upgrade solution to perform upgrades such that deployed applications continue to function and are available for end users even during the upgrade process. ZDT patching supports rolling out Oracle Home, Java Home and also updating applications. Check out these blogs or view the documentation for more information on ZDT.

[Read More]

Monday Jan 04, 2016

Java Rock Star Adam Bien Impressed by WebLogic 12.2.1

It is not an exaggeration to say Adam Bien is pretty close to a "household name" in the Java world. Adam is a long time Java enthusiast, author of quite a few popular books, Java Community Process (JCP) expert, Oracle ACE Director, official Oracle Java Champion and JavaOne conference Rock Star award winner. Adam most recently won the JCP member of the year award. His blog is amongst the most popular for Java developers. 

Adam recently took WebLogic 12.2.1 for a spin and was impressed. Being a developer (not unlike myself) he focused on the full Java EE 7 support in WebLogic 12.2.1. He reported his findings to Java developers on his blog. He commented on fast startup, low memory footprint, fast deployments, excellent NetBeans integration and solid Java EE 7 compliance. You can read Adam's full write-up here.

None of this of course is incidental. WebLogic is a mature product with an extremely large deployment base. With those strengths often comes the challenge of usability. Nonetheless many folks that haven't kept up-to-date with WebLogic evolution don't realize that usability and performance have long been a continued core focus. That is why folks like Adam are often pleasantly surprised when they take an objective fresh look at WebLogic. You can of course give WebLogic 12.2.1 a try yourself here. There is no need to pay anything just to try it out as you can use a free OTN developer license (this is what Adam used as per the instructions on his post). You can also use an official Docker image here.

Solid Java EE support is of course the tip of the iceberg as to what WebLogic offers. As you are aware WebLogic offers a depth and breadth of proven features geared towards mission-critical, 24x7 operational environments that few other servers come close to. One of the best ways for anyone to observe this is taking a quick glance at the latest WebLogic documentation.

Tuesday Dec 15, 2015

Even Applications can be Updated with ZDT Patching

Zero Downtime Patching enables a convenient method of updating production applications on WebLogic Server without incurring any application downtime or loss of session data for your end-users.  This new feature may be especially useful for users who want to update multiple applications at the same time, or for those who cannot take advantage of the Production Redeployment feature due to various limitations or restrictions. Now there is a convenient alternative to complex application patching methods.

This rollout is based on the process and mechanism for automating rollouts across a domain while allowing applications to continue to service requests. In addition to the reliable automation, the Zero Downtime Patching feature also combines Oracle Traffic Director (OTD) load balancer and WebLogic Server to provide some advanced techniques for preserving active sessions and even handling incompatible session state during the patching process.

To rollout an application update, follow these 3 simple steps.

1. Produce a copy of the updated application(s), then test and verify it. Note that the administrator is responsible for making sure the updated application sources are distributed to the appropriate nodes. For stage mode, the updated application source needs to be available on the file system of the Admin Server so that it can distribute the application source. For nostage and external stage modes, the updated application source needs to be available on the file system of each node.

2. Create a JSON formatted file with the details of any applications that need to be updated during the rollout.

"backupLocation": "/scratch/wlsconfg/Domains/Domain1221/applications/ScrabbleStagev1.war"

3. Simply run the Application rollout using a WLST command like this one:

rolloutApplications("Cluster1", "/pathTo/applicationRolloutProperties")

The Admin Server starts the rollout, which coordinates the rolling restart of each node in the cluster named "Cluster1". While the servers are shut down, the original application source is moved to the specified backup location and the new application source is copied into place. Each server in turn is then started in admin mode. While the server is in admin mode, the application redeploy command is called for that specific server, causing it to reload the new source. The server is then resumed to its original running state and serves the updated application.

For more information about updating Applications with Zero Downtime Patching view the documentation.

Monday Dec 14, 2015

WLS 12.2.1 launch - Servlet 3.1 new features

[Read More]

Friday Dec 11, 2015

Introducing WLS JMS Multi-tenancy


Multi-tenancy (MT) is the main theme of the WebLogic Server 12.2.1 release. It enhances the Oracle Platform for Software-as-a-Service (SaaS) and Platform-as-a-Service (PaaS) use cases. The main benefits of WebLogic multi-tenancy are increased density, tenant isolation, and simplified cloud configuration and management.

This article introduces multi-tenancy support for WebLogic JMS, the messaging component in WebLogic Server. 

Key MT Concepts

Some of you may have already learned from other blogs (for example Tim’s blog about Domain Partitions for Multi Tenancy) about some of the key concepts in WebLogic MT. But for the benefit of a broader audience, here is a quick review of those concepts before we get into JMS specifics.

WebLogic Multi-tenancy introduces the concepts of domain partition (also known as partition), resource group (RG), and resource group template (RGT).  

A Partition is conceptually a slice of a WebLogic domain, where resources and applications for different tenants can be configured and deployed in isolation on the same WebLogic server or in the same cluster. This improves overall density. Partitions define the isolation boundaries for JNDI, security, runtime MBeans, application persistent data, work managers and logging.  Furthermore, Partitions running on the same server instance have their own lifecycle, for example, a partition can be shut down at any time without impacting other partitions.

A Resource Group is simply a collection of functionally related resources and applications. A RG can be targeted and managed independently of other resource groups in the same partition. Resource groups can be defined not only inside a partition, but also at the domain level. As with partitions, RGs in the same partition (or at the domain level) that are running on the same server instance have their own lifecycle.

A Resource Group Template provides a templating mechanism to reduce the administrative overhead of configuring WebLogic resources and applications for SaaS use cases where the same resources and applications need to run in multiple partitions. It offers a configure-once-and-use-everywhere capability, where a common set of configuration artifacts can be specified in a RGT, and can then be referenced from RGs in different partitions. A RGT is not targetable, and resources in a RGT will not deploy unless the RGT is referenced by a deployed RG.

Note that the resources and applications configured or deployed in a partition (directly inside RGs or via RGs referencing a RGT) are scoped to that partition.

Understanding JMS Resources in MT

In a similar way to other WebLogic configuration artifacts, JMS resources such as JMS servers, SAF agents, path services, persistent stores, messaging bridges, JMS system modules, app-deployment JMS modules, Java EE 7 resource definition modules, and JMS applications can all now be configured and deployed in a RG, either directly or via RGTs, as well as in the ‘classic’ way, which is always directly at the domain level. Note that it is perfectly acceptable to combine both partition and ‘classic’ configuration together in the same domain.   

Resources and applications in different partitions are isolated from one another. For example, you can configure a JMS destination with the same JNDI name in multiple partitions running in the same cluster, and these destinations will be managed via independent runtime MBean instances, and can be independently secured via partition-specific security realms. In addition to non-persistent state, the persistent data (for example, persistent messages and durable subscriptions) in such JMS destinations are also isolated from one another.

Configuring JMS Resources in MT

The following configuration snippets show how JMS resources configured in a multi-tenant environment differ from a traditional non-MT JMS configuration.

As you can see, partition-scoped JMS resources are embedded in a resource group in a partition (alternatively, they can be embedded in a Resource Group Template, which is in turn referenced by a Resource Group).

In addition, resources in a resource group are never individually targeted. Instead, the whole resource group is targeted via a virtual target, which is itself targeted in the normal way. If a RG is targeted to a virtual target that is in turn targeted to a WL cluster, all JMS resources and applications in the RG will also be targeted to that cluster.

As we will see later, a virtual target not only provides the targeting information of a RG, it also defines the access point of a partition. For more information about resource group targeting and virtual targets, check out Joe's blog about Partition Targeting and Virtual Targets.

You might have noticed that I did not discuss configuring individual JMS resources for each server in a WL cluster, nor did I mention configuring “migratable targets” to add high availability. I have good news for you! Neither is needed or even supported in MT. They have been replaced with greatly enhanced WebLogic JMS cluster-targeting and HA support; my colleague Kathiravan blogs about it in 12.2.1 WebLogic JMS: Dynamic Scalability and High Availability Made Simple.

Although system level JMS resources (such as JMS servers, SAF agents, persistent stores, messaging bridges, path services, and JMS modules) are scoped differently in a MT configuration, their respective attributes are specified in exactly the same way as in a non-MT configuration.

Various validation and targeting rules are enforced to ensure that WebLogic MT JMS configuration is isolated, self contained, and easy to manage. One basic and high-level rule in configuring JMS in MT is that a JMS configuration artifact may only reference other configuration artifacts that are in the same scope. For example, a resource group scoped JMS server can only reference a persistent store that is also defined in the same resource group. These rules are enforced by configuration validation checks and by errors and warnings that are logged at runtime.

Accessing JMS Resources in MT

A JMS application designed for multi-tenancy accesses JMS resources in the same way as ‘classic’ JMS applications, by looking up JMS resources in a JNDI name space. The difference is that in a MT environment, a WebLogic JNDI InitialContext is associated with a particular scope (i.e. the domain or a partition), when it is created. 

A MT application can have multiple JNDI contexts that refer to the same WebLogic cluster but are scoped to different partitions. An initial context, once created, sticks to its scope until it is closed. This means that all JNDI operations using a partition-scoped JNDI context instance are performed using the partition-specific area of the JNDI space.

The scope of a JNDI context is determined by the “provider URL” supplied when the initial context is created.

Once an application successfully establishes a partition-scoped JNDI initial context, it can use this context to look up JMS connection factories and destinations in the same way as in a non-MT environment; except that now the application can only access partition-scoped JMS resources.

Let us look at some specific use cases and see how an application can establish an initial context to a particular partition in each of the use cases.

Use Case 1 - Local Intra-partition Access

When a Java EE application needs to access a JMS destination in its local partition in the same cluster (or on the same non-clustered managed server), the application can just create an initial context without supplying a provider URL.

Example 1: Null Provider URL

  Context ctx = new InitialContext();
  Object cf = ctx.lookup("jms/mycf1");
  Object dest = ctx.lookup("jms/myqueue1");

This initial context will be scoped to the partition in which the application is deployed.

Use Case 2 - Local Inter-partition Access

If a Java EE application needs to access a JMS destination (or other resource) in a different partition than the partition to which it is deployed, and the partition is in the same cluster (or on the same managed server) then it can use either a partition-scoped JNDI name or a provider URL with the "local:" protocol.

Using Partition Scoped JNDI Names

A JNDI name can be decorated with a namespace prefix to indicate its scope.

Example 2.1: given the partition configuration in the above examples, the following code can be used to access a JMS destination that is configured in "partition1".

Context ctx = new InitialContext();
Object cf = ctx.lookup("partition:partition1/jms/mycf1");
Object dest = ctx.lookup("partition:partition1/jms/myqueue1");

Similarly a Java EE application in a partition can access a domain level JNDI resource in the same cluster using a partition scoped initial context with the "domain:" namespace prefix, for example "domain:jms/mycf2".

Using a provider URL with the "local:" Protocol

Alternatively, one can specify a "local:" provider URL when creating an initial context to a specific partition.

Example 2.2: given the partition configuration in the above examples, the following code can be used to access a JMS destination that is configured in "partition1".

Hashtable<String, String> env = new Hashtable<>();
env.put(Context.PROVIDER_URL, "local://?partitionName=partition1");
env.put(Context.SECURITY_PRINCIPAL, "weblogic");
env.put(Context.SECURITY_CREDENTIALS, "welcome1");
Context ctx = new InitialContext(env);
Object cf = ctx.lookup("jms/mycf1");
Object dest = ctx.lookup("jms/myqueue1");

The initial context will be associated with "partition1" for its lifetime.

Similarly, a Java EE application in a partition can access a domain level JNDI resource in the same cluster using “local://?partitionName=DOMAIN” as the provider URL. 

Use Case 3 - General Partition Access

A third way for a Java EE application or client to access a JMS destination in a partition is to use a "partition URL". A partition URL is intended to be used when the JMS destination is in a remote cluster (or on a remote non-clustered managed server).  A typical "partition URL" is t3://hostname:port, or t3://host:port/URI-prefix.

Partition URLs may only be used by Java EE applications or clients using WLS 12.2.1 or later (older versions should use dedicated partition ports: see below).

Example 3: given the partition configuration in the examples above, the following code can be used to access a JMS destination that is configured in "partition1".

Note that "/partition1" in the provider URL below is the uri-prefix configured in the VirtualTarget for partition1.

Hashtable<String, String> env = new Hashtable<>();
env.put(Context.PROVIDER_URL, "t3://abcdef00:7001/partition1");
env.put(Context.SECURITY_PRINCIPAL, "weblogic");
env.put(Context.SECURITY_CREDENTIALS, "welcome1");
Context ctx = new InitialContext(env);
Object cf = ctx.lookup("jms/mycf1");
Object dest = ctx.lookup("jms/myqueue1");

Although it is not a best practice, a “partition URL” can also be used to access another partition in the same JVM/cluster.

Use Case 4 – Dedicated Partition Ports

One last option is to set up dedicated ports for each partition; configuring these is described in Joe's blog about Partition Targeting and Virtual Targets.

Configuring dedicated partition ports enables applications that use ‘classic’ URLs to access a partition, and is mainly intended to enable clients and applications that are running on releases older than 12.2.1 to access partitions in a 12.2.1 or later domain.

Such older clients and applications do not support the use of host name and URI-prefix to access a partition. An attempt to use them from an older client will simply fail or may silently access the domain level JNDI name space.

What’s next?

I hope this article helps you to understand the basics of JMS MT! It is time to start exploring this new and exciting capability. You can find more information for messaging in MT in the Configuring Messaging chapter of Oracle® Fusion Middleware Using WebLogic Server Multitenant.

Wednesday Dec 09, 2015

New EJB 3.2 feature - Modernized JCA-based Message-Driven Bean

WebLogic Server 12.2.1 is a fully compatible implementation of the Java EE 7 specification. One of the big improvements in the EJB container in this release of WebLogic Server is that a message-driven bean is able to implement a listener interface with no methods. When such a no-methods listener interface is used, all non-static public methods of the bean class (and of its superclasses, except java.lang.Object) are exposed as message listener methods.

[Read More]

Tuesday Dec 08, 2015

WLS JNDI Multitenancy

  The most important feature introduced in WebLogic Server 12.2.1 is multi-tenancy. Before WLS 12.2.1, one WLS domain was used by one tenant. Since WLS 12.2.1, a WLS domain can be divided into multiple partitions so that each tenant can use its own partition of one WLS domain. Multiple tenants can then share one WLS domain without influencing each other, so isolation of resources between partitions is key. Since JNDI is a common way to access these resources, the main goal for JNDI in WLS 12.2.1 is to isolate JNDI resources.

  Before WLS 12.2.1, there was only one global JNDI tree per WLS domain. A single global JNDI tree is ill-suited to supporting multiple partitions, because each partition requires a unique, isolated namespace. For example, multiple partitions might use the same JNDI name to bind and look up JNDI resources separately, which would result in a NameAlreadyBoundException. To isolate JNDI resources in different partitions, since WLS 12.2.1 every partition has its own global JNDI tree. A tenant can then operate on a JNDI resource in one partition without name conflicts with resources in other partitions. The application-scoped JNDI tree is visible only within the application, so it is naturally isolated; there is no change for the application-scoped JNDI tree in WLS 12.2.1. Let us see how to access JNDI resources in a partition.

Access JNDI resource in partition

  To access JNDI resources in a partition, we need to add partition information to the provider URL property when creating the InitialContext.

  Runtime environment:

    Managed servers:    ms1, ms2
    Cluster:            contains managed servers ms1 and ms2
    Virtual targets:    VT1 targeted to managed server ms1; VT2 targeted to the cluster
    Partitions:         partition1 has available target VT1; partition2 has available target VT2

  To access JNDI resources in partition1, we need to add partition1 information to the properties when creating the InitialContext.

    Hashtable<String, String> env = new Hashtable<>();
    env.put(Context.PROVIDER_URL, "t3://ms1:7001/partition1");  
    env.put(Context.SECURITY_PRINCIPAL, "weblogic");
    env.put(Context.SECURITY_CREDENTIALS, "welcome1");
    Context ctx = new InitialContext(env);
    Object c = ctx.lookup("jdbc/ds1");

  Partition2 runs in a cluster, so we can use the cluster address format in the properties when creating the InitialContext.

    Hashtable<String, String> env = new Hashtable<>();
    env.put(Context.PROVIDER_URL, "t3://ms1:7001,ms2:7003/partition2");
    env.put(Context.SECURITY_PRINCIPAL, "weblogic");
    env.put(Context.SECURITY_CREDENTIALS, "welcome1");
    Context ctx = new InitialContext(env);
    Object c = ctx.lookup("jdbc/ds1");

  In WebLogic, we can create a Foreign JNDI provider to link to JNDI resources in another server. In WLS 12.2.1, we can also use a Foreign JNDI provider to link to JNDI resources in a specified partition by adding partition information to the configuration. This partition information, including the provider URL, user, and password, will be used to create the JNDI context. The following is an example of a Foreign JNDI provider configuration in partition1 that links to partition2.


Stickiness of JNDI Context

  When a JNDI context is created, it is associated with a specified partition. All subsequent JNDI operations are then done within the associated partition's JNDI tree, not the current partition's. This association remains even if the context is used by a different thread than the one used to create it. If the provider URL property is set in the environment when the JNDI context is created, the partition specified in the provider URL is associated; otherwise, the JNDI context is associated with the current partition.

Life cycle of Partition JNDI service

  Before WLS 12.2.1, the JNDI service life cycle was the same as that of the WebLogic server. In WLS 12.2.1, every partition owns its own global JNDI tree, so the JNDI service life cycle is the same as that of the partition. As soon as a partition starts up, the JNDI service of that partition is available, and during partition shutdown the JNDI service of that partition is destroyed.

Monitoring FAN Events

fanWatcher is a sample program that prints Oracle Notification Service (ONS) Fast Application Notification (FAN) event information. These events provide information regarding load balancing, and service and instance up and down events. This information is automatically processed by WebLogic Server Active GridLink and UCP on the mid-tier. For more information about FAN events, see this link. The program described here is an enhancement of the earlier program described in that white paper. It can be modified as desired to monitor events and help diagnose configuration problems. The code is available at this link (rename it from .txt to .java).

To run this Java application, you need to be set up to run a JDK and you need ons.jar and ojdbcN.jar in the CLASSPATH. The CLASSPATH is set differently depending on whether you are running on the database server or on the mid-tier with WebLogic Server or UCP. Make sure to use the correct path separator for CLASSPATH on your platform (';' for Windows, ':' otherwise).

The general format for the command line is

java fanWatcher config_type [eventtype … ]

Event Type Subscription

The event type sets up the subscriber to return only limited events. You can run without specifying an event type to see what types of events are returned. When you specify an event name on the command line, the program sets up the subscriber with a simple match on the event: if the specified pattern occurs anywhere in a notification's header, the comparison statement evaluates to true. The most basic pattern match is an empty string (not null), which matches all notifications. The pattern is enclosed in double quotes (required) and prefixed with '%' to make it case insensitive.

Event processing is more complete than shown in this sample. The subscription string is generally composed of one or more comparison statements, each logically related to another with the boolean operators '|' for an OR relationship or '&' for an AND relationship. Parentheses are used to group these comparison statements, and the '!' operator placed before an opening parenthesis negates the evaluated value within.

Each individual comparison statement must be enclosed within double quotes ('"'), and can take one of two basic forms: "pattern" or "name=value". A "pattern" is a simple string match of the notification header: if the specified "pattern" occurs anywhere in a notification's header, then the comparison statement evaluates true. The most basic pattern match is an empty string (not NULL) which matches all notifications.

The "name=value" format compares the ONS notification header or property name with the name against the specified value, and if the values match, then the comparison statement evaluates true. If the specified header or property name does not exist in the notification the comparison statement evaluates false. A comparison statement will be interpreted as a case insensitive when a percent character ('%') is placed before the opening quote. Note that for "name=value" comparison statements, only the value is treated as case insensitive with this option: the name lookup will always be case sensitive. A comparison statement will be interpreted as a regular expression when a dollar sign character ('$') is placed before the opening quote. Standard POSIX regular expressions are supported. To specify a regular expression that is also case insensitive, place the dollar sign and percent sign together and in that order ("$%") before the opening quote.

A special case subscription string composed of only the exclamation point character ('!') signifies that the subscription will not match any notifications.
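Putting these rules together, here are a few illustrative subscription strings (hypothetical examples for this program, not output of it):

```text
"database/event/service"                              simple pattern match on the header
%"eventType=database/event/service"                   name=value with a case-insensitive value
$"database/event/service.*"                           POSIX regular expression match
("eventType=database/event/service" & %"status=up")   AND of two comparison statements
!                                                     matches no notifications
```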

You might want to modify the event to select on a specific service by using something like

%"eventType=database/event/servicemetrics/<serviceName> "

Running with Database Server 10.2 or later

This approach runs on the database server and connects directly to the local ONS daemon available in the Grid Infrastructure cluster. The fanWatcher utility must be run as a user with privileges to access $CRS_HOME/opmn/conf/ons.config, which the ONS daemon reads at startup and which this program also reads. The configuration type on the command line is set to “crs”.

# CRS_HOME should be set for your Grid infrastructure
echo $CRS_HOME
javac fanWatcher.java
java -Doracle.ons.oraclehome=$CRS_HOME fanWatcher crs

Running with WLS 10.3.6 or later using an explicit node list

There are two ways to run in a client environment: with an explicit node list, or using auto-ONS. You need the ojdbcN.jar and ons.jar that are available when configured for WLS. If you are set up to run with UCP directly, these should also be in your CLASSPATH.

The first approach works with the Oracle driver and Database 11 and later (SCAN support came in later Oracle versions, including the jar files that shipped with WLS 10.3.6).

# Set the WLS environment using wlserver*/server/bin/setWLSEnv
CLASSPATH="$CLASSPATH:." # add local directory for sample program
javac fanWatcher.java
java fanWatcher "nodes=rac1:6200,rac2:6200" database/event/service

The node list is a string of one or more name=value pairs separated by a newline character (\n).

There are two supported formats for the node list.

The first format is available for all versions of ONS. The following names may be specified.

nodes – This is required. The format is one or more host:port pairs separated by a comma.

walletfile – Oracle wallet file used for SSL communication with the ONS server.

walletpassword – Password to open the Oracle wallet file.
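For example, a first-format node list (the wallet path and password below are placeholders) is passed to fanWatcher as a single quoted argument with embedded newlines:

```text
nodes=rac1:6200,rac2:6200
walletfile=/pathTo/wallet
walletpassword=welcome1
```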

The second format is available in more recent database versions. It supports more complicated topologies with multiple clusters and node lists. It uses the following names.

nodes.id – A list of nodes representing a unique topology of remote ONS servers; id specifies a unique identifier for the node list. Duplicate entries are ignored. The nodes configured in any list must not include nodes configured in any other list for the same client, or duplicate notifications will be sent and delivered. The list format is a comma-separated list of ONS daemon listen address and port pairs separated by a colon.

maxconnections.id – The maximum number of concurrent connections maintained with the ONS servers; id specifies the node list to which this parameter applies. The default is 3.

active.id – If true, the list is active and connections are automatically established to the configured number of ONS servers. If false, the list is inactive and is used only as a failover list when no connections for an active list can be established. An inactive list can serve as a failover for only one active list at a time, and once a single connection is re-established on the active list, the failover list reverts to being inactive. Note that only notifications published by the client after a list has failed over are sent to the failover list. id specifies the node list to which this parameter applies. The default is true.

remotetimeout —The timeout period, in milliseconds, for a connection to each remote server. If the remote server has not responded within this timeout period, the connection is closed. The default is 30 seconds.

The walletfile and walletpassword may also be specified (note that there is one walletfile for all ONS servers). The nodes attribute cannot be combined with the nodes.id attributes.
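As an illustration of the second format, the following configuration (the identifiers cluster1 and cluster2 are arbitrary) defines one active node list and one failover list:

```text
nodes.cluster1=rac1:6200,rac2:6200
maxconnections.cluster1=3
nodes.cluster2=rac3:6200,rac4:6200
active.cluster2=false
remotetimeout=30000
```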

Running with WLS using auto-ONS

Auto-ONS is available starting with Database 12.1; before that, no ONS configuration information is available from the server. Auto-ONS only works with RAC configurations; it does not work with an Oracle Restart environment. Since the first version of WLS that ships with Database 12.1 jar files is WLS 12.1.3, this approach will only work with upgraded database jar files on versions of WLS earlier than 12.1.3. Auto-ONS works by getting a connection to the database to query the ONS information from the server. For this program to work, a user, password, and URL are required. For the sample program, the values are assumed to be in the environment (to avoid putting them on the command line). If you want, you can change the program to prompt for them or hard-code the values into the Java code.

# Set the WLS environment using wlserver*/server/bin/setWLSEnv
# Set the credentials in the environment. If you don't like doing this,
# hard-code them into the java program
export password url user
javac fanWatcher.java
java fanWatcher autoons

fanWatcher Output

The output looks like the following. You can modify the program to change the output as desired. In this short output capture, there is a metric event and an event caused by stopping the service on one of the instances.

** Event Header **
Notification Type: database/event/servicemetrics/otrade
Delivery Time: Fri Dec 04 20:08:10 EST 2015
Creation Time: Fri Dec 04 20:08:10 EST 2015
Generating Node: rac1
Event payload:
VERSION=1.0 database=dev service=otrade { {instance=inst2 percent=50 flag=UNKNOWN aff=FALSE}{instance=inst1 percent=50 flag=UNKNOWN aff=FALSE} } timestamp=2015-12-04 17:08:03

** Event Header **
Notification Type: database/event/service
Delivery Time: Fri Dec 04 20:08:20 EST 2015
Creation Time: Fri Dec 04 20:08:20 EST 2015
Generating Node: rac1
Event payload:
VERSION=1.0 event_type=SERVICEMEMBER service=otrade instance=inst2 database=dev db_domain= host=rac2 status=down reason=USER timestamp=2015-12-04 17:

Monday Dec 07, 2015

Multi-Tenancy Samples

To make it easier to understand all aspects of multi-tenancy in WebLogic Server 12.2.1, MedRec can run in a multi-tenant environment and serve as a demonstration vehicle.

What’s MedRec

Avitek Medical Records (or MedRec) is a WebLogic Server sample application suite that demonstrates all aspects of the Java Platform, Enterprise Edition (Java EE). MedRec is designed as an educational tool for all levels of Java EE developers. It showcases the use of each Java EE component, and illustrates best-practice design patterns for component interaction and client development. MedRec also illustrates best practices for developing and deploying applications with WebLogic Server.

Please choose 'Complete with Examples' at the 'Installation Type' step when you install WebLogic Server; otherwise the WebLogic Server samples will be skipped. The code, binaries, and documentation of MedRec are located in the ‘$MW_HOME/wlserver/samples/server/medrec’ directory.

There are two multi-tenancy samples that are not available out of the box. You need to run the provided Ant commands to stage the WebLogic domains.

Single Server Multi-tenancy MedRec


This is a SaaS sample that focuses on the multi-tenancy features themselves: there is no cluster and no extra managed servers, and all virtual targets target the admin server.

Multi-tenancy Demonstration

There are two tenants named bayland and valley, and valley has two partitions (one tenant can have multiple partitions). This sample demonstrates the multi-tenancy features below. If you have any questions about a particular feature, please refer to the relevant blogs or documentation.

Resource Group Template

All resources, including applications, JMS, file store, mail session, and JDBC system resources, are deployed onto a resource group template.

  • Applications

  • Other resources

Resource Overriding

Databases are supposed to be isolated among partitions. In the resource group template, the JDBC system resource is a mere template with a name, driver, and JNDI lookup name. The real URL, username, and password of the data source are set through resource overriding at the partition scope.

Virtual Target

Each partition (more precisely, each partition resource group derived from the aforementioned MedRec resource group template) has its own virtual target. The two virtual targets of valley share the same host name but use different URI prefixes.

We can see three virtual targets, one per partition. The web container determines which application is being accessed according to the host name plus URI prefix. For example, in this sample, medrec.ear is deployed to all partitions. How, then, do you access the web module of medrec.ear on bayland? The URL would be 'http://hostname:port/bayland/medrec', where '/bayland' is the URI prefix and 'medrec' is the root context of the web app.

Security Realm

Each tenant is supposed to have its own security realm with isolated users. MedRec has a use case of servlet access control and authentication that demonstrates this scenario.

Resource Consumption Management

Bayland is treated as a VIP customer in this sample, so it has a larger quota of CPU, heap memory, threads, and so on.

Triggers slow down or shut down the partition when resource usage reaches the specified value.

Partition Work Manager

Partition Work Managers define a set of policies that limit the usage of threads by Work Managers in partitions only. They do not apply to the domain.

Deployment Plan

A deployment plan file can be used at partition scope. The sample uses this mechanism to change the appearance of the web pages for the valley tenant, including photos and background colour. That means an application can look different in each partition even though the partitions share one resource group template.
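A partition-scoped deployment plan is an ordinary WebLogic deployment plan file. A minimal sketch, assuming a hypothetical backgroundColour variable exposed by the application, might look like:

```xml
<!-- Illustrative fragment only; the variable name is hypothetical -->
<deployment-plan xmlns="http://xmlns.oracle.com/weblogic/deployment-plan">
  <application-name>medrec</application-name>
  <variable-definition>
    <variable>
      <name>backgroundColour</name>
      <value>green</value>
    </variable>
  </variable-definition>
</deployment-plan>
```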


Prior to running the setup script, you need to do some preparation: set the sample environment, edit the /etc/hosts file, and customise the admin server properties (host, port, etc.). After that, one Ant command stages all content of the SaaS sample.

  1. Setting environment.

    cd $MW_HOME/wlserver/samples/server
    . ./setExamplesEnv.sh
  2. Network address mapping. Open the /etc/hosts file and map your server's IP address to the following host names: www.baylandurgentcare.com www.valleyhealth.com
  3. Customizing admin server properties.
    Update 5 properties in $MW_HOME/wlserver/samples/server/medrec/weblogic.properties. Please use weblogic as the username of the admin server.

  4. Running setup script

    cd $MW_HOME/wlserver/samples/server/medrec
    ant mt.single.server.sample

Webapp URLs

You can access MedRec via the following URLs, according to the server port you set. For example, suppose you set admin.server.port=7001.

Coherence Cluster Multi-tenancy MedRec


This is the second SaaS sample. Beyond the simple one, it involves Coherence caches, a dynamic cluster, and Oracle PDBs. To some extent, it represents a real-world use of MT in practice.

Looking at the diagram above, this sample also has two tenants, but with one partition per tenant. Bayland is the blue one, valley the green one. There are two resource group templates, named app RGT and cache RGT, instead of one. App RGT is similar to the resource group template of the first MT sample, including all the resources of MedRec. To enable Coherence caching, a GAR archive is packaged into the medrec.ear application, and an identical GAR is also deployed into the second (cache) resource group template. Both partitions have two resource groups, app and cache, derived from the app and cache resource group templates respectively. Each resource group targets a different virtual target, so there are four virtual targets in all. The two app virtual targets target a storage-disabled dynamic cluster, app cluster, with two managed servers; the applications and other resources run on this app cluster. In contrast, the two cache virtual targets target another dynamic cluster, cache cluster, with two managed servers but with storage enabled. The GAR of the cache resource group runs on the cache cluster.

Coherence Scenario

MedRec has two key archives, medrec.ear and physician.ear. The physician archive is a web service (JAX-RS and JAX-WS) client application, and there is no JPA code in physician.ear; all of that is on the server side. Leveraging a Coherence cache here avoids frequent web service and JDBC invocations in the business services on the web service server side.

Method Invocation Cache

This cache is a per-partition tenant cache. Most business services in the physician scenarios are annotated with a method invocation cache interceptor. The interceptor first checks whether the data is already stored in the cache. If the data isn't cached, it gets the data through the web service and then stores the returned data in the method cache. After that, subsequent invocations with the same parameter values fetch the data directly from the cache.

When is data removed from the method cache? For example, a physician can look at a patient's record summary, which is cached after the data is first fetched. If the physician then creates a new record for this patient, the record summary in the cache becomes inaccurate, so the stale data should be cleaned. In this case, after the record is successfully created, the business service fires an update event to remove the old data.
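The check-then-fetch-then-evict flow described above is the classic cache-aside pattern. A minimal sketch (class and method names are illustrative, not MedRec's actual code):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Illustrative cache-aside helper, not MedRec's implementation.
public class MethodCache {
    private final Map<String, Object> cache = new ConcurrentHashMap<>();

    // First check the cache; on a miss, invoke the loader (e.g. the
    // web service call) and store the result for later invocations.
    public Object lookup(String key, Supplier<Object> loader) {
        return cache.computeIfAbsent(key, k -> loader.get());
    }

    // Called after an update (e.g. creating a record) to evict stale data.
    public void invalidate(String key) {
        cache.remove(key);
    }
}
```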

Actually, the method invocation cache has three different types. MedRec detects the WLS environment and activates the relevant cache.

For example, take physician login: when you first log in as a physician on bayland app server 1, app_cluster-1.log should print lines like the following:

Method Invocation Coherence Cache is available.
Checking method XXXX invocation cache...
Not find the result.
Caching the result in method XXXX invocation cache....
Added result to cache
Method: XXXXX
Parameters: XXXX
Result: XXXXX

Log out, switch to bayland server 2, and log in again; app_cluster-2.log should look like this:

Checking method XXXX invocation cache...
Found result in cache
Method: XXXXX
Parameters: XXXX
Result: XXXXX

Shared Cache

This cache is a shared cache across partitions, meaning bayland and valley can both use it. Consider again the record creation use case: a physician can create prescriptions for the new record, and choosing a drug uses a drug information list from the server-side database. The list is stable, invariant data, so it can be shared by both partitions; therefore the drug information list is stored in this cache.

Open the record creation page in a browser on bayland server 1; app_cluster-1.log should look like this:

Drug info list is not stored in shared cache.
Fetch list from server end point.
Store drug info list into shared cache.

Then do the same thing on valley server 2; app_cluster-2.log should look like this:

Drug info list has already stored in shared cache.

That shows it is a cache shared across partitions.


The installation and usage are similar to the first MT sample. The difference is that the first sample uses Derby as its database, while the second uses Oracle Database. You need to prepare two Oracle PDBs and customise the database properties file.

  1. Setting environment.

    cd $MW_HOME/wlserver/samples/server
    . ./setExamplesEnv.sh
  2. Update 5 properties in $MW_HOME/wlserver/samples/server/medrec/install/mt-coherence-cluster/configure.properties in accordance with your PDBs. E.g.:

    # Partition 1
    dbURL1      = jdbc:oracle:thin:XXXXXXXX:1521/pdb1
    dbUser1     = pdb1
    dbPassword1 = XXXXXX
    # Partition 2
    dbURL2      = jdbc:oracle:thin:XXXXXXXX:1521/pdb2
    dbUser2     = pdb2
    dbPassword2 = XXXXXX
  3. Network address mapping. Open the /etc/hosts file and map your server's IP address to the following host names: bayland.weblogicmt.com valley.weblogicmt.com
  4. Customizing admin server properties.
    update 5 properties in $MW_HOME/wlserver/samples/server/medrec/weblogic.properties. Please use weblogic as the username of admin server.

    admin.server.port=7003 (Please don’t use 2105, 7021, 7022, 7051, 7052 which will be used as servers’ listening ports)
  5. Running setup script

    cd $MW_HOME/wlserver/samples/server/medrec
    ant mt.coherence.cluster.sample

Webapp URLs

After a successful setup, please access the following URLs to experience MedRec:

Three Easy Steps to a Patched Domain Using ZDT Patching and OPatchAuto

Now that you’ve seen how easy it is to update WebLogic by rolling out a new patched OracleHome to your managed servers, let’s go one step further and see how to automate the preparation and distribution parts of that operation as well.

ZDT Patching has integrated with a great new tool in 12.2.1 called OPatchAuto. OPatchAuto is a single interface that allows you to apply patches to an OracleHome, distribute the patched OracleHome to all the nodes you want to update, and start the OracleHome rollout, in just three steps.

1. The first step is to create a patched OracleHome archive (.jar) based on combining an OracleHome in your production environment with the desired patch or patchSetUpdate. This operation will make a copy of that OracleHome so it will not affect the production environment. It will then apply the specified patches to the copy of the OracleHome and create the archive from it. This is the archive that the rollout will use when the time comes, but first it needs to be copied to all of the targeted nodes.

The OPatchAuto command for the first step looks like this:

${ORACLE_HOME}/OPatch/auto/core/bin/opatchauto.sh apply /pathTo/PatchHome -create-image -image-location /pathTo/image.jar -oop -oh /pathTo/OracleHome

PatchHome is a directory or file containing the patch or patchSetUpdate to apply.

image-location is where to put the resulting image file.

-oop means “out-of-place” and tells OPatchAuto to copy the source OracleHome before applying the patches.

2.  The second step is to copy the patched OracleHome archive created in step one to all of the targeted nodes. One cool thing about this step is that since OPatchAuto is integrated with ZDT Patching, you can give OPatchAuto the same target you would use with ZDT Patching, and it will ask ZDT Patching to calculate the nodes automatically. Here’s an example of what this command might look like:

${ORACLE_HOME}/OPatch/auto/core/bin/opatchauto.sh apply -plan wls-push-image -image-location /pathTo/image.jar -wls-admin-host ${ADMINHOSTNAME}:7001 -wls-target Cluster1 -remote-image-location /pathTo/image.jar -wallet ${WALLET_DIR}

image-location is the jar file created in the first step.

wls-target can be a domain name, cluster name, or list of clusters.

Note that if you do not already have a wallet for SSH authorization to the remote hosts, you may need to configure one first.

3.  The last step is using OPatchAuto to invoke the ZDT Patching OracleHome rollout. You could switch to WLST at this point and start it as described in the previous post, but OPatchAuto will monitor the progress of the rollout and give you some helpful feedback as well. The command to start the rollout through OPatchAuto looks like this:

${ORACLE_HOME}/OPatch/auto/core/bin/opatchauto.sh apply -plan wls-zdt-rollout -image-location /pathTo/image.jar -wls-admin-host ${ADMINHOSTNAME}:7001 -wls-target Cluster1 -backup-home /pathTo/home-backup -remote-image-location /pathTo/image.jar -wallet ${WALLET_DIR}

image-location is the jar file created in the first step.

backup-home is the location on each remote node to store the backup of the original OracleHome.

image-location and remote-image-location are both specified so that if a node is missing the image, the image can be copied there automatically. This is also why the wallet is specified here again.

One more great thing to consider when automating the entire process is how easy it would be to use these same commands to distribute and roll out the same patched OracleHome archive to a test environment for verification. Once verification passes, a minor change to the same two commands will push the exact same (verified) OracleHome archive out to a production environment.

For more information about updating OracleHome with Zero Downtime Patching and OPatchAuto, view the documentation.

Wednesday Nov 25, 2015

Multi-Tenancy EJB


Benefiting from the multi-tenancy support in WLS 12.2.1, the EJB container gains many enhancements. Application and resource "multiplication" allows the EJB container to provide MT features while remaining largely partition unaware. Separate application copies also bring more isolation, such as distinct remote objects, bean pools, caches, and module class loader instances. Below are a few of the new features you can leverage for EJB applications.


1. Naming

  • Server naming nodes will be partition aware.
  • Applications deployed to partitions will have their EJB client views exposed in the corresponding partition's JNDI namespace.

2. Security

  • The security implementation now allows multiple active realms, including support for a per-partition security realm.
  • Role-based access control and credential mapping for applications deployed to a partition will use the partition's configured realm.

3. Runtime Metrics and Monitoring

  • A new ApplicationRuntimeMBean instance, with the PartitionName attribute populated, will be created for every application deployed to a partition.
  • The runtime MBean sub-tree exposed by the EJB container will be rooted at the ApplicationRuntimeMBean instance.

4. EJB Timer service

  • Persistent local timers rely on the store component. Partitioned custom file stores provide the required isolation of tenant data.
  • Clustered timers use the Job Scheduler under the hood, which also provides isolation.

5. JTA configuration at Partition Level

  • JTA timeout can be configured at the partition level, in addition to the domain level and the EJB component level.
  • The timeout value at the EJB component level takes precedence over the other two.
  • Dynamic update via deployment plan is supported.

6. WebLogic Store

  • The partitioned custom file stores mentioned above are provided by the store component, giving each tenant's persistent data the required isolation.

7. Throttling Thread Resource usage

  • Work Managers with constraints can be defined at the global runtime level, and application instances in partitions can refer to these shared WMs to throttle thread usage across partitions, especially for non-interactive use cases: batch, async, and message-driven bean invocations.

8. Data Sources for Java Persistence API users

  • Persistence Units that use data sources defined as system resources in the Resource Group Template will be able to take advantage of the PartitionDataSourceInfoMBean based overrides.
  • Use cases requiring advanced customization can use the new deployment plan support being added for system resource re-configuration.
  • Persistence Units that use application packaged data source modules can use the current deployment plan support to have the copies in different partitions, point to the appropriate PDBs.

A sample EJB application leveraging Multi-Tenancy

Now we'll go through a simple setup of an EJB application in an MT environment to demonstrate the usage of some of these features.

The EJB application archive is named sampleEJB.jar; it includes a stateful session bean that interacts with the database via the JPA API. We want the application to be deployed to two separate partitions, each of which points to its own database instance, so they can work independently.

1.  Create Virtual Targets

The first step is to create two virtual targets, one for each partition, which use the different URI prefixes /partition1 and /partition2, as shown below.

2.  Create Resource Group Template

Now we create a Resource Group Template named myRGT. Resource Group Template is a new concept introduced in WLS 12.2.1: you can deploy your applications and any other resources you need to it. This is very helpful when your application setup is complicated, because you don't want to repeat the same steps multiple times for different partitions.

3.  Deploy application and data source

Now we can deploy the application and define the data source as below. Note that the application and the data source are both defined in the myRGT scope.

4.  Create Partitions

Now everything is ready, and it's time to create the partitions. As the following image shows, we can apply the Resource Group Template just defined when creating a partition; this deploys everything in the template automatically.

5.  Access the EJB

Now, with the partitions created and started, you can look up and access the EJB with the following code. We're using the URL for partition1 here; you can change the URL to access the other partition.

  Hashtable<String, String> props = new Hashtable<String, String>();
  props.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
  props.put(Context.PROVIDER_URL, "t3://server1:7001/partition1");
  props.put(Context.SECURITY_PRINCIPAL, user);
  props.put(Context.SECURITY_CREDENTIALS, pass);
  Context ctx = new InitialContext(props);

  BeanIntf bean = (BeanIntf)ctx.lookup("MyEJB"); 
  boolean result = bean.doSomething(); 

6.  Override the data source

If you feel something is wrong here, you're right. We defined the data source myDS in myRGT, then applied myRGT to both partitions, so now the two partitions share the same data source. Normally we don't want this to happen; we need the two partitions to work independently without disturbing each other. How can we do that?

If you want partition2 to switch to another data source, you can do that in the Resource Overrides tab of the partition2 settings page. Change the database URL there so that partition2 uses another database instance.

7.  Change the transaction timeout

As mentioned above, EJB applications support dynamically changing the transaction timeout value for a particular partition. This can also be accomplished on the partition settings page. In the following example, we set the timeout to 15 seconds. This takes effect immediately without requiring a reboot.

There are also other things you can do on the partition settings page, such as defining a work manager or monitoring the resource usage of a particular partition. Spend some time exploring and you will find more useful tools here.

Concurrency Utilities support in WebLogic Server 12.2.1, Part Five: Multi-tenancy Support


One of the key features in WLS 12.2.1 is multi-tenancy support, where a single WebLogic Server domain can contain multiple partitions. Please read Part One through Part Four prior to this article. Applications deployed to partitions can use the four types of concurrent managed objects in the same way as described in Parts One through Four. They can also use the global pre-defined concurrent managed object templates: when an application is deployed to a partition, WebLogic Server creates concurrent managed objects for that application based on the configuration of the global templates. As you may recall, there are server-scope Max Concurrent Long Running Requests and Max Concurrent New Threads limits; note that they limit long-running requests and running threads in the whole server, including partitions.


System administrators can define partition scope concurrent managed object templates.

As mentioned in Part One (ManagedExecutorService) and Part Three (ManagedThreadFactory), WebLogic Server provides configurations (Max Concurrent Long Running Requests and Max Concurrent New Threads) to limit the number of concurrent long-running tasks and threads in a ManagedExecutorService, ManagedScheduledExecutorService, or ManagedThreadFactory instance, in the global (domain-level) runtime on a server, or in the whole server. Of these, the instance-scope and server-scope limits apply to partitions. In addition, system administrators can define partition-scope Max Concurrent Long Running Requests and Max Concurrent New Threads. Each partition has a default Max Concurrent Long Running Requests (50) and a default Max Concurrent New Threads (50).

A ManagedExecutorService, ManagedScheduledExecutorService, or ManagedThreadFactory accepts a long-running task submission or new thread creation only when none of the three limits is exceeded. For instance, suppose an application is deployed to a partition on a server. When a long-running task is submitted to its default ManagedExecutorService, a RejectedExecutionException will be thrown if there are 10 in-progress long-running tasks submitted to this ManagedExecutorService, or 50 in-progress long-running tasks submitted to the ManagedExecutorServices/ManagedScheduledExecutorServices in the scope of this partition on the server, or 100 in-progress long-running tasks submitted to the ManagedExecutorServices/ManagedScheduledExecutorServices in the server.
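The three-level admission check can be sketched as follows, using the limits from the example above (10 per instance, 50 per partition, 100 per server). This is an illustration of the rule, not WebLogic's implementation:

```java
// Hypothetical sketch of the three-level admission check for
// long-running task submissions; limits taken from the example above.
public class LongRunningAdmission {
    static final int INSTANCE_LIMIT = 10;   // per ManagedExecutorService instance
    static final int PARTITION_LIMIT = 50;  // per partition on this server
    static final int SERVER_LIMIT = 100;    // whole server

    // A task is accepted only when none of the three limits is exceeded;
    // otherwise a RejectedExecutionException would be thrown.
    public static boolean accept(int inFlightInstance, int inFlightPartition, int inFlightServer) {
        return inFlightInstance < INSTANCE_LIMIT
            && inFlightPartition < PARTITION_LIMIT
            && inFlightServer < SERVER_LIMIT;
    }
}
```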

Configure Partition Scope Concurrent Managed Object Templates

WebLogic system administrators can configure pre-defined concurrent managed object templates for a partition. When an application is deployed to the partition, WebLogic Server creates concurrent managed object instances based on the configuration of partition scope concurrent managed object templates, and the created concurrent managed object instances are all in scope of this application.

Example-1: Configure a Partition Scope ManagedThreadFactory template using WebLogic Administration Console

Step 1: In the WebLogic Administration Console, a partition-scope ManagedThreadFactory template can be created by clicking the “New” button on the “Summary of Concurrent Managed Object Templates” page. This brings up the "Create a New Managed Thread Factory Template" page, where the name and other parameters of the new ManagedThreadFactory template can be specified. Set the Scope to the partition. In this example, a ManagedThreadFactory template called "testMTFP1" is being created for partition1.

Step 2: Once a partition-scope ManagedThreadFactory template is created, any application in the partition can get its own ManagedThreadFactory instance to use, for example:


public class SomeServlet extends HttpServlet {

    @Resource ManagedThreadFactory mtf;

    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {

        Runnable aTask = new Runnable() {
            public void run() {
                // do some work
            }
        };
        Thread t = mtf.newThread(aTask);
        t.start();
    }
}

Configure Partition Scope Max Concurrent New Threads & Max Concurrent Long Running Requests

Max Concurrent New Threads of a partition is the limit of running threads created by all ManagedThreadFactories in that partition on a server. Max Concurrent Long Running Requests of a partition is the limit of concurrent long-running tasks submitted to all ManagedExecutorServices and ManagedScheduledExecutorServices in that partition on a server.

In the WebLogic Administration Console, Max Concurrent New Threads and Max Concurrent Long Running Requests of a partition can be edited from the “Settings for <partitionName>” screen. In this example, Max Concurrent New Threads of partition1 is set to 30, and Max Concurrent Long Running Requests of partition1 is set to 80.

Related Articles:

See Configuring Concurrent Managed Objects in the product documentation for more details.

Concurrency Utilities support in WebLogic Server 12.2.1, Part Four: ContextService


ContextService is used for creating contextual proxy objects. It provides the createContextualProxy method to create a proxy object whose methods, when invoked later, run within the context captured at creation time.

WebLogic Server provides a preconfigured, default ContextService for each application, and applications can easily use it in web or EJB components without any configuration. Let's begin with a simple example that uses the default ContextService.

Example-1: Execute a task with the creator's context using an ExecutorService

Step1: Write the task. In this simple example, the task implements Runnable.

public class SomeTask implements Runnable {

    public void run() {
        // do some work
    }
}


Step2: SomeServlet.java injects the default ContextService, uses it to create a contextual object proxy for SomeTask, then submits the proxy to a Java SE ExecutorService. Each invocation of the run() method will have the context of the servlet that created the contextual object proxy.


public class SomeServlet extends HttpServlet {

    @Resource ContextService cs;

    @Resource ManagedThreadFactory mtf;

    ExecutorService exSvc = Executors.newFixedThreadPool(10, mtf);

    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        SomeTask taskInstance = new SomeTask();

        Runnable rProxy = cs.createContextualProxy(taskInstance, Runnable.class);

        Future<?> f = exSvc.submit(rProxy);

        // Process the result and reply to the user
    }
}


Runtime Behavior

Application Scoped Instance

ContextService instances are application scoped. Each application has its own default ContextService instance, and the lifecycle of that instance is bound to the application. Proxy objects created by a ContextService are also application scoped, so after the application is shut down, invocations of proxied interface methods fail with an IllegalStateException, and calls to createContextualProxy() throw an IllegalArgumentException.

WebLogic Server only provides a default ContextService instance for each application, and does not provide any way to configure a ContextService.

Context Propagation

ContextService captures the application context when a contextual proxy object is created, then propagates the captured context before each invocation of the proxy object's methods, so that those methods also run with the creating application's context.

Four types of application context are propagated: JNDI, ClassLoader, Security and WorkArea. The propagated context types are the same for all four types of concurrent managed objects.
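The capture-then-propagate behavior described above can be illustrated in plain Java SE. The sketch below is not WebLogic's implementation: it uses a ThreadLocal string as a stand-in for the real JNDI/ClassLoader/Security/WorkArea context, and a java.lang.reflect.Proxy to capture that "context" when the proxy is created and re-apply it around each invocation, even when the call happens on a different thread.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// Simplified illustration of contextual proxies: a ThreadLocal stands in
// for the application context that ContextService captures and propagates.
public class ContextCaptureDemo {

    static final ThreadLocal<String> CONTEXT = ThreadLocal.withInitial(() -> "none");

    // Create a proxy that runs the wrapped task under the context
    // captured at creation time, mimicking createContextualProxy.
    static Runnable contextualProxy(Runnable task) {
        final String captured = CONTEXT.get();           // capture at proxy creation
        InvocationHandler handler = (proxy, method, args) -> {
            String previous = CONTEXT.get();
            CONTEXT.set(captured);                       // propagate before invocation
            try {
                return method.invoke(task, args);
            } finally {
                CONTEXT.set(previous);                   // restore the thread's own context
            }
        };
        return (Runnable) Proxy.newProxyInstance(
                Runnable.class.getClassLoader(), new Class<?>[] {Runnable.class}, handler);
    }

    public static void main(String[] args) throws InterruptedException {
        CONTEXT.set("servlet-context");                  // the creator's context
        Runnable proxy = contextualProxy(
                () -> System.out.println("running with context: " + CONTEXT.get()));

        Thread worker = new Thread(proxy);               // plain thread with no context of its own
        worker.start();
        worker.join();                                   // prints: running with context: servlet-context
    }
}
```

The worker thread's own context is "none", yet the task observes "servlet-context", because the proxy restores the creator's context around the call, which is the essence of the ContextService contract.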

Related Articles:

See Configuring Concurrent Managed Objects in the product documentation for more details.

