Friday Dec 11, 2015

Introducing WLS JMS Multi-tenancy


Multi-tenancy (MT) is the main theme of the WebLogic Server 12.2.1 release. It enhances the Oracle Platform for Software-as-a-Service (SaaS) and Platform-as-a-Service (PaaS) use cases. The main benefits of WebLogic multi-tenancy are increased density, tenant isolation, and simplified cloud configuration and management.

This article introduces multi-tenancy support for WebLogic JMS, the messaging component in WebLogic Server. 

Key MT Concepts

Some of you may have already learned from other blogs (for example Tim’s blog about Domain Partitions for Multi Tenancy) about some of the key concepts in WebLogic MT. But for the benefit of a broader audience, here is a quick review of those concepts before we get into JMS specifics.

WebLogic Multi-tenancy introduces the concepts of domain partition (also known as partition), resource group (RG), and resource group template (RGT).  

A Partition is conceptually a slice of a WebLogic domain, where resources and applications for different tenants can be configured and deployed in isolation on the same WebLogic server or in the same cluster. This improves overall density. Partitions define the isolation boundaries for JNDI, security, runtime MBeans, application persistent data, work managers and logging. Furthermore, partitions running on the same server instance have their own lifecycle; for example, a partition can be shut down at any time without impacting other partitions.

A Resource Group is simply a collection of functionally related resources and applications. An RG can be targeted and managed independently of other resource groups in the same partition. Resource groups can be defined not only inside a partition, but also at the domain level. As with partitions, RGs in the same partition (or at the domain level) that are running on the same server instance have their own lifecycle.

A Resource Group Template provides a templating mechanism to reduce the administrative overhead of configuring WebLogic resources and applications for SaaS use cases where the same resources and applications need to run in multiple partitions. It offers a configure-once-and-use-everywhere capability, where a common set of configuration artifacts can be specified in an RGT and then referenced from RGs in different partitions. An RGT is not targetable, and resources in an RGT will not deploy unless the RGT is referenced by a deployed RG.

Note that the resources and applications configured or deployed in a partition (directly inside RGs or via RGs referencing a RGT) are scoped to that partition.

Understanding JMS Resources in MT

As with other WebLogic configuration artifacts, JMS resources such as JMS servers, SAF agents, path services, persistent stores, messaging bridges, JMS system modules, app-deployment JMS modules, Java EE 7 resource definition modules, and JMS applications can now all be configured and deployed in an RG, either directly or via an RGT, as well as in the ‘classic’ way, directly at the domain level. Note that it is perfectly acceptable to combine both partition and ‘classic’ configuration in the same domain.

Resources and applications in different partitions are isolated from one another. For example, you can configure a JMS destination with the same JNDI name in multiple partitions running in the same cluster, and these destinations will be managed via independent runtime MBean instances, and can be independently secured via partition-specific security realms. In addition to non-persistent state, the persistent data (for example, persistent messages and durable subscriptions) in such JMS destinations are also isolated from one another.

Configuring JMS Resources in MT

In a multi-tenant environment, the main configuration difference from traditional non-MT JMS is scoping: partition-scoped JMS resources are embedded in a resource group in a partition (alternatively, they can be embedded in a resource group template, which is in turn referenced by a resource group).

In addition, resources in a resource group are never individually targeted. Instead, the whole resource group is targeted via a virtual target, which is itself targeted in the normal way. If an RG is targeted to a virtual target that is in turn targeted to a WebLogic cluster, all JMS resources and applications in the RG are also targeted to that cluster.

As we will see later, a virtual target not only provides the targeting information of an RG, it also defines the access point of a partition. For more information about resource group targeting and virtual targets, check out Joe's blog about Partition Targeting and Virtual Targets.

You might have noticed that I did not discuss configuring individual JMS resources for each server in a WL cluster, nor did I mention configuring “migratable targets” to add high availability. I have good news for you! Neither is needed or even supported in MT. They have been replaced with greatly enhanced WebLogic JMS cluster-targeting and HA support; my colleague Kathiravan blogs about it in 12.2.1 WebLogic JMS: Dynamic Scalability and High Availability Made Simple.

Although system-level JMS resources (such as JMS servers, SAF agents, persistent stores, messaging bridges, path services, and JMS modules) are scoped differently in an MT configuration, their respective attributes are specified in exactly the same way as in a non-MT configuration.

Various validation and targeting rules are enforced to ensure that WebLogic MT JMS configuration is isolated, self-contained, and easy to manage. One basic, high-level rule of JMS configuration in MT is that a JMS configuration artifact may only reference other configuration artifacts that are in the same scope. For example, a resource-group-scoped JMS server can only reference a persistent store that is also defined in the same resource group. These rules are enforced by configuration validation checks and by errors and warnings that are logged at runtime.

Accessing JMS Resources in MT

A JMS application designed for multi-tenancy accesses JMS resources in the same way as ‘classic’ JMS applications, by looking up JMS resources in a JNDI name space. The difference is that in an MT environment, a WebLogic JNDI InitialContext is associated with a particular scope (i.e. the domain or a partition) when it is created.

An MT application can have multiple JNDI contexts that refer to the same WebLogic cluster but are scoped to different partitions. An initial context, once created, sticks to its scope until it is closed. This means that all JNDI operations using a partition-scoped JNDI context instance are performed against the partition-specific area of the JNDI space.

The scope of a JNDI context is determined by the “provider URL” supplied when the initial context is created.
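The three provider-URL forms covered in the use cases below can be sketched side by side. In this illustrative snippet the class name, partition name, and host are hypothetical placeholders, not part of WebLogic's API; only the shape of the JNDI environment is shown:

```java
import java.util.Hashtable;
import javax.naming.Context;

// Sketch: the environment passed to new InitialContext(env) decides the scope.
public class ScopeEnvDemo {
    static Hashtable<String, String> scopedEnv(String providerUrl) {
        Hashtable<String, String> env = new Hashtable<>();
        if (providerUrl != null) {
            env.put(Context.PROVIDER_URL, providerUrl);
        }
        return env;
    }

    public static void main(String[] args) {
        // No URL: the context is scoped to the caller's own partition (Use Case 1).
        System.out.println(scopedEnv(null).isEmpty());
        // "local:" URL: another partition in the same JVM or cluster (Use Case 2).
        System.out.println(scopedEnv("local://?partitionName=partition1")
                .get(Context.PROVIDER_URL));
        // "t3:" partition URL: a partition on a remote server or cluster (Use Case 3).
        System.out.println(scopedEnv("t3://somehost:7001/partition1")
                .get(Context.PROVIDER_URL));
    }
}
```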

Once an application successfully establishes a partition-scoped JNDI initial context, it can use this context to look up JMS connection factories and destinations in the same way as in a non-MT environment, except that now the application can only access partition-scoped JMS resources.

Let us look at some specific use cases and see how an application can establish an initial context to a particular partition in each of the use cases.

Use Case 1 - Local Intra-partition Access

When a Java EE application needs to access a JMS destination in its local partition in the same cluster (or on the same non-clustered managed server), the application can just create an initial context without supplying a provider URL.

Example 1: Null Provider URL

  Context ctx = new InitialContext();
  Object cf = ctx.lookup("jms/mycf1");
  Object dest = ctx.lookup("jms/myqueue1");

This initial context will be scoped to the partition in which the application is deployed.

Use Case 2 - Local Inter-partition Access

If a Java EE application needs to access a JMS destination (or other resource) in a different partition than the partition to which it is deployed, and the partition is in the same cluster (or on the same managed server) then it can use either a partition-scoped JNDI name or a provider URL with the "local:" protocol.

Using Partition Scoped JNDI Names

A JNDI name can be decorated with a namespace prefix to indicate its scope.

Example 2.1: given a partition named "partition1", the following code can be used to access a JMS destination that is configured in that partition.

  Context ctx = new InitialContext();
  Object cf = ctx.lookup("partition:partition1/jms/mycf1");
  Object dest = ctx.lookup("partition:partition1/jms/myqueue1");

Similarly a Java EE application in a partition can access a domain level JNDI resource in the same cluster using a partition scoped initial context with the "domain:" namespace prefix, for example "domain:jms/mycf2".
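The same decorated-name pattern works for both prefixes. This small sketch must run inside a partition-scoped application on a WebLogic server, so it is illustrative only; the JNDI names follow the examples in the text:

```java
import javax.naming.Context;
import javax.naming.InitialContext;

// Illustrative only: runs inside a partition-scoped application on WebLogic.
public class DecoratedLookups {
    static void lookups() throws Exception {
        Context ctx = new InitialContext();
        // Another partition's resource, via the "partition:" prefix:
        Object peerCf = ctx.lookup("partition:partition1/jms/mycf1");
        // A domain-level resource, via the "domain:" prefix:
        Object domainCf = ctx.lookup("domain:jms/mycf2");
        ctx.close();
    }
}
```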

Using a provider URL with the "local:" Protocol

Alternatively, one can specify a "local:" provider URL when creating an initial context to a specific partition.

Example 2.2: given a partition named "partition1", the following code can be used to access a JMS destination that is configured in that partition.

  Hashtable<String, String> env = new Hashtable<>();
  env.put(Context.PROVIDER_URL, "local://?partitionName=partition1");
  env.put(Context.SECURITY_PRINCIPAL, "weblogic");
  env.put(Context.SECURITY_CREDENTIALS, "welcome1");
  Context ctx = new InitialContext(env);
  Object cf = ctx.lookup("jms/mycf1");
  Object dest = ctx.lookup("jms/myqueue1");

The initial context will be associated with "partition1" for its lifetime.

Similarly, a Java EE application in a partition can access a domain level JNDI resource in the same cluster using “local://?partitionName=DOMAIN” as the provider URL. 

Use Case 3 - General Partition Access

A third way for a Java EE application or client to access a JMS destination in a partition is to use a "partition URL". A partition URL is intended to be used when the JMS destination is in a remote cluster (or on a remote non-clustered managed server).  A typical "partition URL" is t3://hostname:port, or t3://host:port/URI-prefix.

Partition URLs may only be used by Java EE applications or clients using WLS 12.2.1 or later (older versions should use dedicated partition ports: see below).

Example 3: given a partition named "partition1", the following code can be used to access a JMS destination that is configured in that partition.

Note that "/partition1" in the provider URL below is the uri-prefix configured in the VirtualTarget for partition1.

  Hashtable<String, String> env = new Hashtable<>();
  env.put(Context.PROVIDER_URL, "t3://abcdef00:7001/partition1");
  env.put(Context.SECURITY_PRINCIPAL, "weblogic");
  env.put(Context.SECURITY_CREDENTIALS, "welcome1");
  Context ctx = new InitialContext(env);
  Object cf = ctx.lookup("jms/mycf1");
  Object dest = ctx.lookup("jms/myqueue1");

Although it is not a best practice, a “partition URL” can also be used to access another partition in the same JVM/cluster.
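Once a partition-scoped context is in hand (from any of the use cases above), messaging itself works exactly as in non-MT WebLogic. Here is a minimal sketch using the JMS 2.0 API; it assumes the JNDI names from the examples and a running WebLogic partition, so it is not runnable standalone:

```java
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.jms.JMSContext;
import javax.naming.Context;
import javax.naming.InitialContext;

// Illustrative sketch: send one message using a partition-scoped context.
public class PartitionSender {
    public static void main(String[] args) throws Exception {
        // Or any partition-scoped context from the use cases above.
        Context ctx = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/mycf1");
        Destination queue = (Destination) ctx.lookup("jms/myqueue1");
        try (JMSContext jms = cf.createContext()) { // JMSContext is AutoCloseable
            jms.createProducer().send(queue, "hello from partition1");
        }
        ctx.close();
    }
}
```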

Use Case 4 – Dedicated Partition Ports

One last option is to set up dedicated ports for each partition; configuring these is described in Joe's blog about Partition Targeting and Virtual Targets.

Configuring dedicated partition ports enables applications that use ‘classic’ URLs to access a partition, and is mainly intended to enable clients and applications that are running on releases older than 12.2.1 to access partitions in a 12.2.1 or later domain.

Such older clients and applications do not support the use of host name and URI-prefix to access a partition. An attempt to use them from an older client will simply fail or may silently access the domain level JNDI name space.

What’s next?

I hope this article helps you to understand the basics of JMS MT! It is time to start exploring this new and exciting capability. You can find more information about messaging in MT in the Configuring Messaging chapter of Oracle® Fusion Middleware Using WebLogic Server Multitenant.

Monday Nov 16, 2015

12.2.1 WebLogic JMS: Dynamic Scalability and High Availability Made Simple


WebLogic Server 12.2.1 features a greatly simplified, easy-to-use JMS configuration and administration model. This simplified model works seamlessly in both cluster and multi-tenant/cloud environments, making JMS configuration portable and a breeze. It essentially lifts all major limitations of the initial version of the JMS ‘cluster targeting’ feature that was added in 12.1.2, and adds enhancements that aren’t available in the old administration model.

  • Now, all types of JMS Service artifacts can take full advantage of a Dynamic Cluster environment and automatically scale up as well as evenly distribute the load across the cluster in response to cluster size changes. In other words, there is no need for individually configuring and deploying JMS artifacts on every cluster member in response to cluster growth or change.
  • New easily configured high availability fail-over, fail-back, and restart-in-place settings provide capabilities that were previously only partially supported via individual targeting.
  • Finally, 12.2.1 adds the ability to configure singleton destinations in a cluster within the simplified configuration model.

These capabilities apply to all WebLogic Cluster types, including ‘classic’ static clusters which combine a set of individually configured WebLogic servers, dynamic clusters which define a single dynamic WL server that can expand into multiple instances, and mixed clusters that combine both a dynamic server and one or more individually configured servers.

Configuration Changes

With this model, you can now easily configure and control dynamic scaling and high availability behavior for JMS in a central location: either on a custom store, for all JMS artifacts that handle persistent data, or on a messaging bridge. The new configuration parameters introduced by this model are collectively known as “High Availability” policies. They are exposed to users via the management consoles (the WebLogic Administration Console and Fusion Middleware Control (FMWC)), as well as through WLST scripting and Java MBean APIs. When they are configured on a store, all the JMS service artifacts that reference that store simply inherit these settings and behave accordingly.

Figure 1. Configuration Inheritance 

The most important configuration parameters are distribution-policy and migration-policy, which control dynamic scalability and high availability, respectively, for their associated service artifacts.

When the distribution-policy of a configured artifact is set to distributed, the system automatically creates an instance at deployment time on each cluster member, and on each new member that later joins the cluster. When it is set to singleton, the system creates a single instance for the entire cluster.

Each distributed instance is uniquely named after the WebLogic Server on which it is initially created and started (its configured name is suffixed with the server name), for runtime monitoring and location-tracking purposes. This server is called the home or preferred server of the instances named after it. A singleton instance is not decorated with a server name; instead, it is simply suffixed with “-01”, and the system chooses one of the managed servers in the cluster to host the instance.

A distribution-policy works in concert with a new high availability option called the migration-policy, to ensure that instances survive any unexpected service failures, server crashes, or even a planned shutdown of the servers. It does this by automatically migrating them to available cluster members.

For the migration-policy, you can choose one of three options: on-failure, where instances migrate only in the event of unexpected service failures or server crashes; always, where instances also migrate during a planned administrative shutdown of a server; and off, which disables service-level migration.

Figure 2. Console screenshot: HA Configuration 

In addition to the migration-policy, the new model offers another high availability notion for stores called the restart-in-place capability. When enabled, the system first tries to restart failing store instances on their current server before failing them over to another server in the cluster. This option can be fine-tuned to limit the number of attempts and the delay between attempts. This capability prevents the system from performing unnecessary migrations in the event of temporary glitches, such as a database outage, or network and I/O requests that are unresponsive due to latency or overload. Bridges ignore restart-in-place settings, as they already restart themselves automatically after a failure (they periodically try to reconnect).

Note that the high availability enhancement not only offers failover of the service artifacts in the event of failure, it also offers automatic failback of distributed instances when their home server gets restarted after a crash or shutdown – a high availability feature that isn’t available in previous releases. This allows the applications to achieve high-level server/configuration affinity whenever possible. Unlike in previous releases, both during startup and failover, the system will also try to ensure that the instances are evenly distributed across the cluster members thus preventing accidental overload of any one server in the cluster.

Here’s a table that summarizes the new distribution, migration, and restart-in-place settings:

  • Distribution Policy [Distributed | Singleton] — Controls JMS service instance counts and names.
  • Migration Policy [Off | On-Failure | Always] — Controls HA behavior.
  • Restart In Place [true | false] — Enables automatic restart of failing store instances on their current, healthy WebLogic Server.
  • Seconds Between Restarts [1 … {Max Integer}] — Specifies how many seconds to wait between restart-in-place attempts for a failed service.
  • Number Of Restart Attempts [-1, 0 … {Max Long}] — Specifies how many restart attempts to make before migrating the failed service.
  • Initial Boot Delay Seconds [-1, 0 … {Max Long}] — The length of time to wait before starting an artifact's instance on a server.
  • Failback Delay Seconds [-1, 0 … {Max Long}] — The length of time to wait before failing an artifact back to its preferred server.
  • Partial Cluster Stability Seconds [-1, 0 … {Max Long}] — The length of time to wait before the cluster should consider itself at a "steady state". Until that point, only some resources may be started in the cluster; this gives the cluster time to come up slowly and still be easily laid out.


Runtime Monitoring

As mentioned earlier, when a configured artifact is targeted to a cluster, the system automatically creates one (singleton) or more (distributed) instances from that single artifact. These instances are backed by corresponding runtime MBeans, which are uniquely named and made available for access and monitoring under the appropriately scoped server (or, in a multi-tenant environment, partition) runtime MBean tree.

Figure 3. Console screenshot: Runtime Monitoring 

The above screenshot shows how a cluster-targeted SAF agent runtime instance is decorated with the cluster member's server name to make it unique.

Validation and Legal Checks

There are legal checks and validation rules in place to prevent users from configuring invalid combinations of these new parameters. The following two tables list the supported combinations of the two new policies by service type and by resource type, respectively.

  • Persistent Store — Distribution Policy: Distributed or Singleton; Migration Policy: Off, On-Failure, or Always
  • JMS Server — inherits both policies from its persistent store
  • SAF Agent — Distribution Policy: Distributed; Migration Policy: Off, On-Failure, or Always
  • Path Service — Distribution Policy: Singleton; Migration Policy: Always
  • Messaging Bridge — Distribution Policy: Distributed or Singleton; Migration Policy: Off, On-Failure, or Always

In the above table, the legal combinations are listed by JMS service type. For example, consider the Path Service, a messaging service that persists and holds the routing information for messages that use a popular WebLogic ordering extension called unit-of-order or unit-of-work routing. It is a singleton service that should be made highly available in a cluster regardless of whether there is a service failure or a server failure, so the only legal combination of HA policies for it is a distribution-policy of singleton and a migration-policy of always.

Some rules are also derived from the resource types being used by an application. For example, for JMS servers that host uniform distributed destinations, or for SAF agents, which always host imported destinations, a distribution-policy of singleton does not make sense and is not allowed.

  • JMS servers hosting distributed destinations — Distribution Policy: Distributed
  • SAF agents hosting imported destinations — Distribution Policy: Distributed
  • JMS servers hosting singleton (standalone) destinations — Distribution Policy: Singleton
  • Path Service — Distribution Policy: Singleton


In the event of an invalid configuration that violates these legal checks, an error or log message will indicate the problem, and in some cases the invalid configuration may cause deployment or server startup failures.

Best Practices

To take full advantage of the improved capabilities, first design your JMS application by carefully identifying the scalability and availability requirements as well as the deployment environments. For example, identify whether the application will be deployed to a Cluster or to a multi-tenant environment and whether it will be using uniform distributed destinations or standalone (non-distributed) destinations or both.

Once the above requirements are identified, always define and associate a custom persistent store with the applicable JMS service artifacts. Ensure that the new HA parameters are explicitly set as per the requirements (use the above tables as guidance) and that both a JMS service artifact and its corresponding store are similarly targeted (to the same cluster, or to the same RG or RGT in a multi-tenant environment).

Remember, the JMS high availability mechanism depends on the WebLogic Server Health and Singleton Monitoring services, which in turn rely on a mechanism called “Cluster Leasing”. So you need to set up a valid cluster leasing configuration, particularly when the migration-policy is set to on-failure or always, or when you want to create a singleton instance of a JMS service artifact. Note that WebLogic offers two leasing options, Consensus and Database, and we highly recommend using Database leasing as a best practice.

It is also highly recommended to configure high availability for WebLogic’s transaction system, as JMS applications often use transactions directly, and JMS internals often use them implicitly. Note that full WebLogic transaction HA support requires that all managed servers have explicit listen-address and listen-port values configured, instead of leaving them at the defaults. In a dynamic cluster, you can configure these settings as part of the dynamic server template definition.

Finally, it is preferable to start all the managed servers of a cluster with Node Manager rather than by any other method.

For more information on this feature and other new improvements in the Oracle WebLogic Server 12.2.1 release, please see the What’s New chapter of the public documentation.


Using these enhanced capabilities of WebLogic JMS, one can greatly reduce the overall time and cost of configuring and managing WebLogic JMS in general, and its scalability and high availability in particular, resulting in greater ease of use and an increased return on investment.

Monday Nov 02, 2015

JMS 2.0 support in WebLogic Server 12.2.1

As part of its support for Java EE 7, WebLogic Server 12.2.1 supports version 2.0 of the JMS (Java Message Service) specification.

JMS 2.0 is the first update to the JMS specification since version 1.1 was released in 2002. One might think that an API that has remained unchanged for so long has grown moribund and unused. However, if you judge the success of an API standard by the number of different implementations, JMS is one of the most successful APIs around.

In JMS 2.0, the emphasis has been on catching up with the ease-of-use improvements that have been made to other enterprise Java technologies. While technologies such as Enterprise JavaBeans or Java persistence are now much simpler to use than they were a decade ago, JMS had remained unchanged with a successful, but rather verbose, API.

The single biggest change in JMS 2.0 is the introduction of a new simplified API for sending and receiving messages that reduces the amount of code a developer must write. For applications that run in WebLogic server itself, the new API also supports resource injection. This allows WebLogic to take care of the creation and management of JMS objects, simplifying the application even further.
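As a rough illustration of what the simplified API saves (the class and JNDI names here are hypothetical, and the sketch assumes a JMS 2.0 provider such as WebLogic on the classpath):

```java
import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.Queue;

// Illustrative sketch: assumes "cf" and "queue" were already looked up from JNDI.
public class SimplifiedSend {

    // With the classic JMS 1.1 API, the same send needs a Connection, a Session,
    // and a MessageProducer, each created and closed explicitly.
    static void send(ConnectionFactory cf, Queue queue) {
        try (JMSContext context = cf.createContext()) { // JMSContext is AutoCloseable
            context.createProducer().send(queue, "hello");
        }
    }

    // Inside the server, resource injection removes even the factory lookup:
    // @Inject
    // @JMSConnectionFactory("jms/mycf")
    // private JMSContext injectedContext;
}
```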

Other changes in JMS 2.0 include asynchronous send, shared topic subscriptions, and delivery delay. These were existing WebLogic features which are now available through an improved, standard API.

To find out more about JMS 2.0, see this 15 minute audio-visual slide presentation.

Read the two-part OTN article series "What's New in JMS 2.0" (Part One covers ease of use; Part Two covers new messaging features).

See also Understanding the Simplified API Programming Model in the product documentation

In a hurry? See Ten ways in which JMS 2.0 means writing less code.

