Friday Dec 11, 2015

Introducing WLS JMS Multi-tenancy

Introduction

Multi-tenancy (MT) is the main theme of the WebLogic Server 12.2.1 release. It enhances the Oracle Platform for Software-as-a-Service (SaaS) and Platform-as-a-Service (PaaS) use cases. The main benefits of WebLogic multi-tenancy are increased density, tenant isolation, and simplified cloud configuration and management.

This article introduces multi-tenancy support for WebLogic JMS, the messaging component in WebLogic Server. 

Key MT Concepts

Some of you may have already learned from other blogs (for example Tim’s blog about Domain Partitions for Multi Tenancy) about some of the key concepts in WebLogic MT. But for the benefit of a broader audience, here is a quick review of those concepts before we get into JMS specifics.

WebLogic Multi-tenancy introduces the concepts of domain partition (also known as partition), resource group (RG), and resource group template (RGT).  

A Partition is conceptually a slice of a WebLogic domain, where resources and applications for different tenants can be configured and deployed in isolation on the same WebLogic server or in the same cluster. This improves overall density. Partitions define the isolation boundaries for JNDI, security, runtime MBeans, application persistent data, work managers, and logging. Furthermore, Partitions running on the same server instance have their own lifecycle; for example, a partition can be shut down at any time without impacting other partitions.

A Resource Group is simply a collection of functionally related resources and applications. A RG can be targeted and managed independently of other resource groups in the same partition. Resource groups can be defined not only inside a partition, but also at the domain level. As with partitions, RGs in the same partition (or at the domain level) that are running on the same server instance have their own lifecycle.

A Resource Group Template provides a templating mechanism to reduce the administrative overhead of configuring WebLogic resources and applications for SaaS use cases where the same resources and applications need to run in multiple partitions. It offers a configure-once-and-use-everywhere capability, where a common set of configuration artifacts can be specified in a RGT, and can then be referenced from RGs in different partitions. A RGT is not targetable, and resources in a RGT will not deploy unless the RGT is referenced by a deployed RG.

Note that the resources and applications configured or deployed in a partition (directly inside RGs or via RGs referencing a RGT) are scoped to that partition.

Understanding JMS Resources in MT

Like other WebLogic configuration artifacts, JMS resources such as JMS servers, SAF agents, path services, persistent stores, messaging bridges, JMS system modules, app-deployment JMS modules, Java EE 7 resource definition modules, and JMS applications can all now be configured and deployed in a RG, either directly or via a RGT, in addition to the ‘classic’ way, which is always directly at the domain level. Note that it is perfectly acceptable to combine partition and ‘classic’ configuration in the same domain.

Resources and applications in different partitions are isolated from one another. For example, you can configure a JMS destination with the same JNDI name in multiple partitions running in the same cluster; these destinations are managed via independent runtime MBean instances and can be independently secured via partition-specific security realms. In addition to non-persistent state, the persistent data in such destinations (for example, persistent messages and durable subscriptions) is also isolated.

Configuring JMS Resources in MT

In a multi-tenant configuration, partition-scoped JMS resources differ from traditional non-MT JMS configuration mainly in where they appear in the domain configuration: they are embedded in a resource group inside a partition (or, alternatively, in a Resource Group Template that is in turn referenced by a Resource Group), rather than at the top level of the domain.

In addition, resources in a resource group are never individually targeted. Instead, the whole resource group is targeted via a virtual target, which is itself targeted in the normal way. If a RG is targeted to a virtual target that is in turn targeted to a WL cluster, all JMS resources and applications in the RG are also targeted to that cluster.

As we will see later, a virtual target not only provides the targeting information for a RG, it also defines the access point of a partition. For more information about resource group targeting and virtual targets, check out Joe's blog about Partition Targeting and Virtual Targets.

You might have noticed that I did not discuss configuring individual JMS resources for each server in a WL cluster, nor did I mention configuring “migratable targets” to add high availability. I have good news for you! Neither is needed or even supported in MT. They have been replaced with greatly enhanced WebLogic JMS cluster-targeting and HA support; my colleague Kathiravan blogs about it in 12.2.1 WebLogic JMS: Dynamic Scalability and High Availability Made Simple.

Although system level JMS resources (such as JMS servers, SAF agents, persistent stores, messaging bridges, path services, and JMS modules) are scoped differently in a MT configuration, their respective attributes are specified in exactly the same way as in a non-MT configuration.

Various validation and targeting rules are enforced to ensure that WebLogic MT JMS configuration is isolated, self-contained, and easy to manage. One basic, high-level rule for configuring JMS in MT is that a JMS configuration artifact may only reference other configuration artifacts in the same scope. For example, a resource-group-scoped JMS server can only reference a persistent store that is defined in the same resource group. These rules are enforced by configuration validation checks and by errors and warnings that are logged at runtime.

Accessing JMS Resources in MT

A JMS application designed for multi-tenancy accesses JMS resources in the same way as ‘classic’ JMS applications do, by looking them up in a JNDI namespace. The difference is that in a MT environment, a WebLogic JNDI InitialContext is associated with a particular scope (that is, the domain or a partition) when it is created.

A MT application can have multiple JNDI contexts that refer to the same WebLogic cluster but are scoped to different partitions. An initial context, once created, sticks to its scope until it is closed. This means that all JNDI operations using a partition-scoped JNDI context instance are performed using the partition-specific area of the JNDI space.

The scope of a JNDI context is determined by the “provider URL” supplied when the initial context is created.

Once an application successfully establishes a partition-scoped JNDI initial context, it can use this context to look up JMS connection factories and destinations in the same way as in a non-MT environment, except that now the application can only access partition-scoped JMS resources.

Let us look at some specific use cases and see how an application can establish an initial context to a particular partition in each of the use cases.

Use Case 1 - Local Intra-partition Access

When a Java EE application needs to access a JMS destination in its local partition in the same cluster (or on the same non-clustered managed server), the application can just create an initial context without supplying a provider URL.

Example 1: Null Provider URL

  Context ctx = new InitialContext();
  Object cf = ctx.lookup("jms/mycf1");
  Object dest = ctx.lookup("jms/myqueue1");

This initial context will be scoped to the partition in which the application is deployed.
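Once these lookups succeed, the returned objects are used with the standard JMS API exactly as in a non-MT application. Here is a minimal sketch of sending a message with the JMS 2.0 API, assuming (as in Example 1) that jms/mycf1 is bound to a connection factory and jms/myqueue1 to a queue:

import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.jms.JMSContext;
import javax.naming.Context;
import javax.naming.InitialContext;

Context ctx = new InitialContext();
ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/mycf1");
Destination dest = (Destination) ctx.lookup("jms/myqueue1");

// The JMSContext operates against the same partition-scoped resources
// that the lookups above resolved.
try (JMSContext jmsContext = cf.createContext()) {
    jmsContext.createProducer().send(dest, "Hello from the local partition");
}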

Use Case 2 - Local Inter-partition Access

If a Java EE application needs to access a JMS destination (or another resource) in a different partition than the one to which it is deployed, and that partition is in the same cluster (or on the same managed server), then it can use either a partition-scoped JNDI name or a provider URL with the "local:" protocol.

Using Partition Scoped JNDI Names

A JNDI name can be decorated with a namespace prefix to indicate its scope.

Example 2.1: given the partition configuration described above, the following code can be used to access a JMS destination that is configured in "partition1".

Context ctx = new InitialContext();
Object cf = ctx.lookup("partition:partition1/jms/mycf1");
Object dest = ctx.lookup("partition:partition1/jms/myqueue1");

Similarly, a Java EE application in a partition can access a domain-level JNDI resource in the same cluster by using the "domain:" namespace prefix with its partition-scoped initial context, for example "domain:jms/mycf2".
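For illustration, here is a minimal sketch of such a domain-level lookup from code running in a partition; jms/mycf2 is the domain-level JNDI name mentioned above and is assumed to be bound to a connection factory at the domain level:

Context ctx = new InitialContext();               // scoped to the application's own partition
Object domainCf = ctx.lookup("domain:jms/mycf2"); // domain-level resource in the same cluster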

Using a provider URL with the "local:" Protocol

Alternatively, one can specify a "local:" provider URL when creating an initial context to a specific partition.

Example 2.2: given the partition configuration described above, the following code can be used to access a JMS destination that is configured in "partition1".

Hashtable<String, String> env = new Hashtable<>();
env.put(Context.PROVIDER_URL, "local://?partitionName=partition1");
env.put(Context.SECURITY_PRINCIPAL, "weblogic");
env.put(Context.SECURITY_CREDENTIALS, "welcome1");
Context ctx = new InitialContext(env);
Object cf = ctx.lookup("jms/mycf1");
Object dest = ctx.lookup("jms/myqueue1");

The initial context will be associated with "partition1" for its lifetime.

Similarly, a Java EE application in a partition can access a domain level JNDI resource in the same cluster using “local://?partitionName=DOMAIN” as the provider URL. 
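Here is a minimal sketch of that variation; the security properties are omitted for brevity (they would be supplied as in Example 2.2), and jms/mycf2 is assumed to be a domain-level connection factory as above:

Hashtable<String, String> env = new Hashtable<>();
env.put(Context.PROVIDER_URL, "local://?partitionName=DOMAIN");
Context ctx = new InitialContext(env);
Object cf = ctx.lookup("jms/mycf2"); // resolves in the domain-level JNDI namespace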

Use Case 3 - General Partition Access

A third way for a Java EE application or client to access a JMS destination in a partition is to use a "partition URL". A partition URL is intended to be used when the JMS destination is in a remote cluster (or on a remote non-clustered managed server). A typical partition URL looks like t3://host:port or t3://host:port/URI-prefix.

Partition URLs may only be used by Java EE applications or clients using WLS 12.2.1 or later (older versions should use dedicated partition ports: see below).

Example 3: given the partition configuration described above, the following code can be used to access a JMS destination that is configured in "partition1".

Note that "/partition1" in the provider URL below is the uri-prefix configured in the VirtualTarget for partition1.

Hashtable<String, String> env = new Hashtable<>();
env.put(Context.PROVIDER_URL, "t3://abcdef00:7001/partition1");
env.put(Context.SECURITY_PRINCIPAL, "weblogic");
env.put(Context.SECURITY_CREDENTIALS, "welcome1");
Context ctx = new InitialContext(env);
Object cf = ctx.lookup("jms/mycf1");
Object dest = ctx.lookup("jms/myqueue1");

Although it is not a best practice, a “partition URL” can also be used to access another partition in the same JVM/cluster.

Use Case 4 – Dedicated Partition Ports

One last option is to set up dedicated ports for each partition; configuring these is described in Joe's blog about Partition Targeting and Virtual Targets.

Configuring dedicated partition ports enables applications that use ‘classic’ URLs to access a partition, and is mainly intended to enable clients and applications that are running on releases older than 12.2.1 to access partitions in a 12.2.1 or later domain.

Such older clients and applications do not support the use of a host name and URI-prefix to access a partition. An attempt to use them from an older client will simply fail, or may silently access the domain-level JNDI namespace instead.
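For illustration, here is a sketch of how an older ‘classic’ client could connect through a dedicated partition port; the port 7003 is purely hypothetical and stands in for whatever partition-specific channel is configured as described in Joe's blog:

Hashtable<String, String> env = new Hashtable<>();
env.put(Context.PROVIDER_URL, "t3://abcdef00:7003"); // hypothetical dedicated partition port
env.put(Context.SECURITY_PRINCIPAL, "weblogic");
env.put(Context.SECURITY_CREDENTIALS, "welcome1");
Context ctx = new InitialContext(env);
Object cf = ctx.lookup("jms/mycf1"); // resolves in the partition associated with that port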

What’s next?

I hope this article helps you to understand the basics of JMS MT! It is time to start exploring this new and exciting capability. You can find more information about messaging in MT in the Configuring Messaging chapter of Oracle® Fusion Middleware Using WebLogic Server Multitenant.

Monday Nov 16, 2015

12.2.1 WebLogic JMS: Dynamic Scalability and High Availability Made Simple

Introduction

WebLogic’s 12.2.1 release features a greatly simplified, easy-to-use JMS configuration and administration model. This simplified model works seamlessly in both cluster and multi-tenant/cloud environments, making JMS configuration portable and a breeze to set up. It essentially lifts all major limitations of the initial version of the JMS ‘cluster targeting’ feature that was added in 12.1.2, and adds enhancements that aren’t available in the old administration model.

  • Now, all types of JMS service artifacts can take full advantage of a dynamic cluster environment and automatically scale up, as well as evenly distribute load across the cluster, in response to cluster size changes. In other words, there is no need to individually configure and deploy JMS artifacts on every cluster member in response to cluster growth or change.
  • New easily configured high availability fail-over, fail-back, and restart-in-place settings provide capabilities that were previously only partially supported via individual targeting.
  • Finally, 12.2.1 adds the ability to configure singleton destinations in a cluster within the simplified configuration model.

These capabilities apply to all WebLogic cluster types: ‘classic’ static clusters, which combine a set of individually configured WebLogic servers; dynamic clusters, which define a single dynamic server template that can expand into multiple server instances; and mixed clusters, which combine a dynamic server template with one or more individually configured servers.

Configuration Changes

With this model, you can now easily configure and control dynamic scaling and high availability behavior for JMS in a central location: either on a custom store, for all the JMS artifacts that handle persistent data, or on a messaging bridge. The new configuration parameters introduced by this model are collectively known as “high availability” policies. They are exposed via the management consoles (the WebLogic Administration Console and Fusion Middleware Control (FMWC)), as well as through WLST scripting and the Java MBean APIs. When they are configured on a store, all the JMS service artifacts that reference that store simply inherit these settings and behave accordingly.


Figure 1. Configuration Inheritance 

The most important configuration parameters are distribution-policy and migration-policy, which control dynamic scalability and high availability, respectively, for their associated service artifacts.

When the distribution-policy of a configured artifact is set to distributed, the system automatically creates an instance of that artifact on each cluster member at deployment time, and on any member that later joins the cluster. When it is set to singleton, the system creates a single instance for the entire cluster.

Distributed instances are uniquely named after the WebLogic Server instance on which they are initially created and started (the configured name is suffixed with the server name), for runtime monitoring and location tracking purposes. This server is called the home or preferred server for the distributed instances named after it. A singleton instance is not decorated with a server name; instead, it is simply suffixed with “-01”, and the system chooses one of the managed servers in the cluster to host it.

A distribution-policy works in concert with a new high availability option called the migration-policy, to ensure that instances survive any unexpected service failures, server crashes, or even a planned shutdown of the servers. It does this by automatically migrating them to available cluster members.

For the migration-policy, you can choose one of three options: on-failure, where instances migrate only in the event of unexpected service failures or server crashes; always, where instances migrate even during a planned administrative shutdown of a server; and off, which disables service-level migration if it is not needed.


Figure 2. Console screenshot: HA Configuration 

In addition to the migration-policy, the new model offers another high availability feature for stores called restart-in-place. When enabled, the system first tries to restart failing store instances on their current server before failing them over to another server in the cluster. This option can be fine-tuned to limit the number of attempts and the delay between attempts. It prevents the system from performing unnecessary migrations in the event of temporary glitches, such as a database outage, or network or IO requests that are unresponsive due to latency or overload. Bridges ignore restart-in-place settings, as they already automatically restart themselves after a failure (they periodically try to reconnect).

Note that the high availability enhancement not only offers failover of service artifacts in the event of failure, it also offers automatic failback of distributed instances when their home server is restarted after a crash or shutdown – a capability that isn’t available in previous releases. This allows applications to achieve a high level of server/configuration affinity whenever possible. Unlike in previous releases, both during startup and failover the system also tries to ensure that instances are evenly distributed across the cluster members, preventing accidental overload of any one server in the cluster.

Here is a summary of the new distribution, migration, and restart-in-place settings:

  • distribution-policy – Controls JMS service instance counts and names. Options: [Distributed | Singleton]. Default: Distributed.
  • migration-policy – Controls HA behavior. Options: [Off | On-Failure | Always]. Default: Off.
  • restart-in-place – Enables automatic restart of a failing store instance on a healthy WebLogic Server. Options: [true | false]. Default: true.
  • seconds-between-restarts – Specifies how many seconds to wait between restart-in-place attempts for a failed service. Options: [1 … {Max Integer}]. Default: 30.
  • number-of-restart-attempts – Specifies how many restart attempts to make before migrating the failed service. Options: [-1, 0 … {Max Long}]. Default: 6.
  • initial-boot-delay-seconds – The length of time to wait before starting an artifact's instance on a server. Options: [-1, 0 … {Max Long}]. Default: 60.
  • failback-delay-seconds – The length of time to wait before failing back an artifact to its preferred server. Options: [-1, 0 … {Max Long}]. Default: 30.
  • partial-cluster-stability-seconds – The length of time to wait before the cluster should consider itself at a "steady state"; until that point, only some resources may be started in the cluster. This gives the cluster time to come up slowly and still be easily laid out. Options: [-1, 0 … {Max Long}]. Default: 240.
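As an illustration of how these settings surface through the Java MBean APIs mentioned above, here is a hedged sketch that reads the policies of each file store in the domain; the host, port, and credentials are placeholders, and the assumption that the policies appear as DistributionPolicy and MigrationPolicy attributes on the store configuration MBeans is mine, not a statement of the official MBean names:

import java.util.Hashtable;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import javax.naming.Context;

public class StoreHaPolicyReader {
    public static void main(String[] args) throws Exception {
        // Connect to the Domain Runtime MBean server on the admin server
        // (host, port, and credentials are placeholders).
        JMXServiceURL url = new JMXServiceURL("t3", "adminhost", 7001,
                "/jndi/weblogic.management.mbeanservers.domainruntime");
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.SECURITY_PRINCIPAL, "weblogic");
        env.put(Context.SECURITY_CREDENTIALS, "welcome1");
        env.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES, "weblogic.management.remote");
        JMXConnector connector = JMXConnectorFactory.connect(url, env);
        try {
            MBeanServerConnection conn = connector.getMBeanServerConnection();

            // DomainRuntimeService -> DomainConfiguration -> FileStores
            ObjectName service = new ObjectName("com.bea:Name=DomainRuntimeService,"
                    + "Type=weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean");
            ObjectName domain = (ObjectName) conn.getAttribute(service, "DomainConfiguration");
            ObjectName[] stores = (ObjectName[]) conn.getAttribute(domain, "FileStores");

            for (ObjectName store : stores) {
                // Attribute names assumed to match the policy names described above.
                System.out.println(conn.getAttribute(store, "Name")
                        + " distribution-policy=" + conn.getAttribute(store, "DistributionPolicy")
                        + " migration-policy=" + conn.getAttribute(store, "MigrationPolicy"));
            }
        } finally {
            connector.close();
        }
    }
}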

Runtime Monitoring

As mentioned earlier, when an artifact is targeted to a cluster, the system automatically creates one (singleton) or more (distributed) instances from the single configured artifact. These instances are backed by corresponding runtime MBeans, which are uniquely named and made available for access and monitoring under the appropriately scoped server (or, in a multi-tenant environment, partition) runtime MBean tree.


Figure 3. Console screenshot: Runtime Monitoring 

The above screenshot shows how a cluster-targeted SAF agent runtime instance is decorated with the cluster member's server name to make it unique.
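The same monitoring can be sketched programmatically; the following hedged example lists the JMS server runtime MBean names on one cluster member through the per-server Runtime MBean server (host, port, and credentials are placeholders):

import java.util.Hashtable;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import javax.naming.Context;

public class JmsRuntimeLister {
    public static void main(String[] args) throws Exception {
        // Connect to the Runtime MBean server of one managed server (placeholders below).
        JMXServiceURL url = new JMXServiceURL("t3", "managed1host", 7011,
                "/jndi/weblogic.management.mbeanservers.runtime");
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.SECURITY_PRINCIPAL, "weblogic");
        env.put(Context.SECURITY_CREDENTIALS, "welcome1");
        env.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES, "weblogic.management.remote");
        JMXConnector connector = JMXConnectorFactory.connect(url, env);
        try {
            MBeanServerConnection conn = connector.getMBeanServerConnection();

            // RuntimeService -> ServerRuntime -> JMSRuntime -> JMSServers
            ObjectName service = new ObjectName("com.bea:Name=RuntimeService,"
                    + "Type=weblogic.management.mbeanservers.runtime.RuntimeServiceMBean");
            ObjectName serverRuntime = (ObjectName) conn.getAttribute(service, "ServerRuntime");
            ObjectName jmsRuntime = (ObjectName) conn.getAttribute(serverRuntime, "JMSRuntime");
            ObjectName[] jmsServers = (ObjectName[]) conn.getAttribute(jmsRuntime, "JMSServers");

            // Cluster-targeted instances show up with names decorated by the member server name.
            for (ObjectName jmsServer : jmsServers) {
                System.out.println(conn.getAttribute(jmsServer, "Name"));
            }
        } finally {
            connector.close();
        }
    }
}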

Validation and Legal Checks

There are legal checks and validation rules in place to prevent users from configuring invalid combinations of these new parameters. The following two lists summarize the supported combinations of the two new policies, first by service type and then by resource type.

Supported distribution-policy values by service artifact (each combination is further constrained by which migration-policy values – Off, Always, On-Failure – are legal, as described below):

  • Persistent Store – Distributed or Singleton
  • JMS Server – Distributed or Singleton
  • SAF Agent – Distributed
  • Path Service – Singleton
  • Messaging Bridge – Distributed or Singleton

In the first list above, the legal combinations are given by JMS service type. For example, the Path Service (a messaging service that persists and holds the routing information for messages that use a popular WebLogic ordering extension called unit-of-order or unit-of-work routing) is a singleton service that should be made highly available in a cluster regardless of whether a service or server failure occurs. So the only valid and legal combination of HA policies for this service is a distribution-policy of singleton with a migration-policy of always.

Some rules are also derived from the resource types used by an application. For example, for JMS servers that host uniform distributed destinations, or for SAF agents (which always host imported destinations), a distribution-policy of singleton does not make sense and is not allowed.

Supported distribution-policy values by resource type:

  • JMS Servers hosting distributed destinations – Distributed
  • SAF Agents hosting imported destinations – Distributed
  • JMS Servers hosting singleton (standalone) destinations – Singleton
  • Path Service – Singleton
  • Messaging Bridge – Singleton or Distributed

An invalid configuration that violates these legal checks results in error or warning messages indicating the problem, and in some cases can cause deployment or server startup failures.

Best Practices

To take full advantage of the improved capabilities, first design your JMS application by carefully identifying its scalability and availability requirements as well as its deployment environments. For example, identify whether the application will be deployed to a cluster or to a multi-tenant environment, and whether it will use uniform distributed destinations, standalone (non-distributed) destinations, or both.

Once the above requirements are identified, always define and associate a custom persistent store with the applicable JMS service artifacts. Ensure that the new HA parameters are explicitly set according to the requirements (use the above lists as guidance) and that each JMS service artifact and its corresponding store are targeted the same way (to the same cluster, or to the same RG or RGT in a multi-tenant environment).

Remember, the JMS high availability mechanism depends on the WebLogic Server Health and Singleton Monitoring services, which in turn rely on a mechanism called “Cluster Leasing”. So you need to set up a valid cluster leasing configuration, particularly when the migration-policy is set to either on-failure or always, or when you want to create a singleton instance of a JMS service artifact. Note that WebLogic offers two leasing options, Consensus and Database, and we highly recommend Database leasing as a best practice.

Also, it is highly recommended to configure high availability for WebLogic’s transaction system, as JMS applications often use transactions directly, and JMS internals often use them implicitly. Note that WebLogic transaction high availability requires that all managed servers have explicit listen-address and listen-port values configured, instead of relying on the defaults, in order to get full transaction HA support. In a dynamic cluster configuration, you can configure these settings as part of the dynamic server template definition.

Finally, it is also preferable to use Node Manager to start all the managed servers of a cluster, rather than any other method.

For more information on this feature and other new improvements in the Oracle WebLogic Server 12.2.1 release, please see the What’s New chapter of the public documentation.

Conclusion

Using these new, enhanced capabilities of WebLogic JMS, you can greatly reduce the overall time and cost involved in configuring and managing WebLogic JMS in general, and scalability and high availability in particular, resulting in greater ease of use and an increased return on investment.

Friday Aug 30, 2013

Introducing Elastic JMS

In WebLogic 12.1.2, we enhanced the way that you can configure JMS servers, stores, and subdeployments so that the JMS subsystem can automatically scale with the Managed Servers in a cluster. We call this Elastic JMS. My friend Maciej Gruszka calls it Magic JMS!

 Here are some details:

JMS Servers: In releases before WebLogic Server 12.1.2, each JMS Server was individually configured and targeted at a single Managed Server. It didn’t matter whether or not that Managed Server was part of a cluster. Starting in WebLogic Server 12.1.2, you can target a JMS Server at a cluster. Under the covers, WebLogic spins up a JMS Server on each managed server in the cluster. If you add or remove servers from the cluster, JMS Servers are added or removed automatically.

WebLogic Persistent Stores: Like JMS Servers, in releases before WebLogic Server 12.1.2, each WebLogic Persistent Store (file store or JDBC store) was individually configured and targeted to a single Managed Server, clustered or not. In WebLogic Server 12.1.2, you can target a WebLogic Persistent Store at a cluster. Under the covers, WebLogic creates a store instance on each Managed Server in the cluster. Each instance of a file store uses the same path to either a shared file system or to a local file. Each instance of a JDBC store uses the same JDBC data source, but gets its own underlying tables.

Subdeployments: A subdeployment defines the list of JMS Servers that will host a queue or topic. In releases before WebLogic Server 12.1.2, when you defined a subdeployment for a distributed queue or topic, you listed each JMS Server in the cluster. When you scaled up the cluster by adding a Managed Server and a corresponding JMS Server, you also needed to update the subdeployment with the new JMS Server. Starting in WebLogic Server 12.1.2, subdeployments are much simpler. You can list a single JMS Server that is targeted at the cluster. When you scale up the cluster, the distributed queue is automatically extended to the new JMS Server instance without any changes to the subdeployment.

Pulling it all together: By using cluster targeted JMS Servers and Persistent Stores, you get some nice benefits:

  • Simplified configuration – Even initial JMS configuration is much simpler than it was in the past: no need for individually configured JMS Servers and related items.
  • Elastic scalability – As you scale the cluster, the JMS services automatically scale with it. 
  • Support for Dynamic Clusters – Because Dynamic Clusters require homogenous targeting of services, the new configuration options make it possible to run JMS on Dynamic Clusters. 

Check out the documentation at http://docs.oracle.com/middleware/1212/wls/JMSAD/dynamic_messaging.htm or see my video for more details.

Wednesday Jul 31, 2013

JMS JDBC Store Performance Using Multiple Connections

This article is a bit different from my usual data source articles because it focuses on an application use of WebLogic Server (WLS) data sources, although the application is still part of WLS. The Java Message Service (JMS) supports the use of either a file store or a JDBC store for persistent messages (the JDBC store can also be used for transaction log information, diagnostics, etc.). The file store is easier to configure, generates no network traffic, and is generally faster. However, the JDBC store is popular because most customers have invested in High Availability (HA) solutions for their database, such as RAC, Data Guard, or GoldenGate, so using a JDBC store in the database makes HA and migration much easier (for a file store, the disk must be shared or migrated). Some work has been done in recent releases to improve JDBC store performance and take advantage of RAC clusters.

It's obvious from the JDBC store configuration that JMS uses just a single table in the database. JMS uses this table somewhat like a queue, so there are hot spots at the beginning and end of the table as messages are added and consumed – that might be improved in a future WLS release, but it is a consideration for current store performance. JMS has, since the beginning, been single-threaded on a single database connection. Starting in WLS 10.3.6 (see this link), the store can run with multiple worker threads, each with its own connection, by setting Worker Count on the JDBC Persistent Store page in the console. There are no documented restrictions or recommendations about how to set this value. Should we set it to the maximum allowed of 1000 so we get a lot of work done? Not quite ...

Since there is contention between the connections, using too many of them is not good. To begin with, there is overhead in managing the work among the threads, so if JMS is lightly loaded, it's actually worse to use multiple connections. Under high load, we found that for one configuration, 8 threads gave the best performance, but 4 was almost as good at half the resources, using the Oracle Thin driver on an Oracle database (more about database vendors below). Optimizing queues with multiple connections is a big win, with some gains as high as 200%. Handling a topic is another ... well, topic. It's complicated by the fact that a message can go to a single or multiple topics, and we want to aggregate acknowledgements to reduce contention and improve performance. Topic testing saw more modest gains of around 20%, depending on the test.

How about different data source types? It turns out that when using a RAC cluster and updates are scattered across multiple instances, there is too much overhead in locking and cache fusion across the RAC instances. That makes it important that all of the reserved connections are on a single RAC instance. For a generic data source, there is nothing to worry about – you have just one node. In the case of a multi data source (MDS), you can get all connections on a single instance by setting the AlgorithmType to "Failover" (see this link). All connections will be reserved on the first configured generic data source within the MDS until a failure occurs; then the failed data source is marked as suspended and all connections come from the next generic data source in the MDS. You don't want to set the AlgorithmType to "Load-Balancing". In the case of Active GridLink (AGL), it's actually difficult to get connection affinity to a single node, and without it, performance can seriously degrade. Some benchmarks saw a performance loss of 50% when using multiple connections on different instances. For WLS 10.3.6 and 12.1.1, it is not recommended to use AGL with multiple connections. In WLS 12.1.2, this was fixed so that JMS reserves all connections on the same instance. If there is a failure, all of the reserved connections are closed, a new connection is reserved using Runtime Connection Load Balancing (RCLB), hopefully on a lightly loaded instance, and then the rest of the connections are reserved on the same instance. In one benchmark, performance improved by 200% when using multiple connections on the same instance.

How about different database vendors? Your performance will vary based on the application and the database. The discussion above regarding RAC cluster performance is interesting and may have implications for any application that you are moving to a cluster. Another consideration specific to the Oracle database is indexing the table for efficient access by multiple connections. In this case, it is recommended to use a reverse key index for the primary key. The bytes in the key are reversed so that keys that would normally be grouped together because their left-most bytes are the same are instead distributed more evenly (imagine using a B-tree to store a series of sequential numbers left-padded with 0's, for example).

Bottom line: this feature may give you a big performance boost, but you should try it with your own application, database, and hardware, and vary the worker count.




Thursday May 19, 2011

WebLogic MDB and Distributed Queue Elastic Capabilities

Recently, a customer asked me how MDBs interact with a Distributed Queue, especially after an Automatic Service Migration event occurs. We have some great technology built into WebLogic Server to make migration and failover seamless. I thought there might be some other folks interested in knowing more about this, too.