Monday Nov 16, 2015

WebLogic 12.2.1 Multitenancy Support for Resource Adapters

One of the key features in WLS 12.2.1 is multi-tenancy support; you can learn more about the concept in Tim Quinn's blog: Domain Partitions for Multi-tenancy in WebLogic Server. For resource adapters, besides deploying at the domain (global) scope, you can also deploy a resource adapter to a partition's resource group or to a resource group template. This is done by selecting a resource group scope or a resource group template scope while deploying the resource adapter in the console. The following screenshot shows the deployment page in the console. In this example, we have a resource group Partition1-rg in Partition1 and a resource group template TestRGT:

Deploy RA to MT in Console

When you select the 'Global' scope, the resource adapter is deployed to the domain. If you select 'TestRGT template', the resource adapter is deployed to the resource group template TestRGT; and if Partition1's resource group references TestRGT, the resource adapter is also deployed to Partition1. If you select 'Partition1-rg in Partition1', the resource adapter is deployed to Partition1.

You can learn more about multi-tenancy deployment in Hong Zhang's blog: Multi Tenancy Deployment.

If you deploy resource adapters to different partitions, the resources in different partitions will not interfere with each other, because:

  1. A resource adapter's JNDI resources in one partition cannot be looked up from another partition; you can only look up resource adapter resources bound in the same partition (see the lookup sketch after this list).
  2. Resource adapter classes packaged in a resource adapter archive are loaded by different classloaders when they are deployed to different partitions, so you do not need to worry about mistakenly using resource adapter classes loaded by another partition.
  3. Even if you somehow get a reference to one of the following resource adapter objects belonging to another partition, you still cannot use it; you will get an exception when calling certain methods on that object:
    • javax.resource.spi.work.WorkManager
    • javax.resource.spi.BootstrapContext
    • javax.resource.spi.ConnectionManager
    • javax.validation.Validator
    • javax.validation.ValidatorFactory
    • javax.enterprise.inject.spi.BeanManager
    • javax.resource.spi.ConnectionEventListener
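
For example, an application deployed to Partition1 looks up a connection factory that its resource adapter binds in that partition's JNDI tree with a plain lookup; only objects bound in the same partition are visible. A minimal sketch, assuming a hypothetical JNDI name 'eis/myRA_CF' (not a name used in this post):

    import javax.naming.InitialContext;
    import javax.resource.cci.Connection;
    import javax.resource.cci.ConnectionFactory;

    public class SamePartitionLookup {
        // Runs inside an application deployed to Partition1; the lookup resolves
        // against Partition1's own JNDI tree, so only resource adapter objects
        // bound in this partition can be found here.
        public static void useAdapter() throws Exception {
            InitialContext ctx = new InitialContext();
            ConnectionFactory cf =
                    (ConnectionFactory) ctx.lookup("eis/myRA_CF"); // hypothetical JNDI name
            Connection con = cf.getConnection();
            try {
                // ... use the connection ...
            } finally {
                con.close();
            }
        }
    }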

After the resource adapter is deployed, you can access the domain-level resource adapter's runtime MBean through the 'ConnectorServiceRuntime' directory under ServerRuntime, using the WebLogic Scripting Tool (WLST):

View Connector RuntimeMBean in WLST

In the above example, we have a resource adapter named 'jca_ra' deployed to the domain, so we can see its runtime MBean under ConnectorServiceRuntime/ConnectorService. jms-internal-notran-adp and jms-internal-xa-adp are also listed here; they are WebLogic internal resource adapters.

But how can we monitor resource adapters deployed in a partition? They are under PartitionRuntimes:

View MT Connector RuntimeMBean in WLST

In the above example, we have a resource adapter named 'jca_ra' deployed in Partition1.


You can also get a resource adapter's runtime MBean through JMX (see how to access runtime MBeans using JMX):

      import java.util.Hashtable;
      import java.util.Set;
      import javax.management.MBeanServerConnection;
      import javax.management.MBeanServerInvocationHandler;
      import javax.management.ObjectName;
      import javax.management.remote.JMXConnector;
      import javax.management.remote.JMXConnectorFactory;
      import javax.management.remote.JMXServiceURL;
      import javax.naming.Context;
      import weblogic.management.runtime.ConnectorComponentRuntimeMBean;

      // Connect to the Domain Runtime MBean server over t3; hostname, port, user
      // and passwd are your admin server connection details.
      JMXServiceURL serviceURL = new JMXServiceURL("t3", hostname, port,
                             "/jndi/weblogic.management.mbeanservers.domainruntime");
      Hashtable<String, Object> h = new Hashtable<>();
      h.put(Context.SECURITY_PRINCIPAL, user);
      h.put(Context.SECURITY_CREDENTIALS, passwd);
      h.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES,
                            "weblogic.management.remote");
      h.put("jmx.remote.x.request.waiting.timeout", Long.valueOf(10000L));
      JMXConnector connector = JMXConnectorFactory.connect(serviceURL, h);
      MBeanServerConnection connection = connector.getMBeanServerConnection();

      // Query the ConnectorComponentRuntime MBeans of the 'jca_ra' resource adapter.
      Set<ObjectName> names = connection.queryNames(new ObjectName(
                            "*:Type=ConnectorComponentRuntime,Name=jca_ra,*"), null);
      for (ObjectName oname : names) {
          Object o = MBeanServerInvocationHandler.newProxyInstance(connection, oname,
                               ConnectorComponentRuntimeMBean.class, false);
          System.out.println(o);
      }

Running the above example code in a domain which has a resource adapter named 'jca_ra' deployed to both the domain and Partition1, you will get the following result:

[MBeanServerInvocationHandler]com.bea:Name=jca_ra,ServerRuntime=admin,Location=admin,Type=ConnectorComponentRuntime,ApplicationRuntime=jca_ra

[MBeanServerInvocationHandler]com.bea:Name=jca_ra,ServerRuntime=admin,Location=admin,Type=ConnectorComponentRuntime,ApplicationRuntime=jca_ra,PartitionRuntime=Partition1

You can see that the runtime MBean (ConnectorComponentRuntime) of the resource adapter deployed to Partition1 carries a PartitionRuntime key property. So you can query Partition1's resource adapter runtime MBean with the following code:

connection.queryNames(new ObjectName(
                   "*:Type=ConnectorComponentRuntime,Name=jca_ra,PartitionRuntime=Partition1,*"), null);

12.2.1 WebLogic JMS: Dynamic Scalability and High Availability Made Simple

Introduction

WebLogic’s 12.2.1 release features a greatly simplified, easy-to-use JMS configuration and administration model. This simplified model works seamlessly in both cluster and multi-tenant/cloud environments, making JMS configuration a breeze and keeping it portable. It essentially lifts all major limitations of the initial version of the JMS ‘cluster targeting’ feature that was added in 12.1.2, and it adds enhancements that aren’t available in the old administration model.

  • Now, all types of JMS Service artifacts can take full advantage of a Dynamic Cluster environment and automatically scale up as well as evenly distribute the load across the cluster in response to cluster size changes. In other words, there is no need for individually configuring and deploying JMS artifacts on every cluster member in response to cluster growth or change.
  • New easily configured high availability fail-over, fail-back, and restart-in-place settings provide capabilities that were previously only partially supported via individual targeting.
  • Finally, 12.2.1 adds the ability to configure singleton destinations in a cluster within the simplified configuration model.

These capabilities apply to all WebLogic cluster types, including ‘classic’ static clusters, which combine a set of individually configured WebLogic Servers; dynamic clusters, which define a single dynamic WebLogic server that can expand into multiple instances; and mixed clusters, which combine both a dynamic server and one or more individually configured servers.

Configuration Changes

With this model, you can now easily configure and control dynamic scaling and high availability behavior for JMS in a central location: either on a custom store, for all JMS artifacts that handle persistent data, or on a messaging bridge. The new configuration parameters introduced by this model are collectively known as “high availability” policies. They are exposed to users via the management consoles (WebLogic Administration Console and Fusion Middleware Control (FMWC)) as well as through WLST scripting and Java MBean APIs. When they are configured on a store, all the JMS service artifacts that reference that store simply inherit these settings and behave accordingly.


Figure 1. Configuration Inheritance 

The most important configuration parameters are distribution-policy and migration-policy, which control dynamic scalability and high availability, respectively, for their associated service artifacts.

When the distribution-policy of a configured artifact is set to distributed, then at deployment time the system automatically creates an instance on each cluster member that joins the cluster. When it is set to singleton, the system creates a single instance for the entire cluster.

Distributed instances are uniquely named after their host WebLogic Server (their configured name is suffixed with the name of their server), where each is initially created and started, for runtime monitoring and location tracking purposes. This server is called the home or preferred server of the distributed instances that are named after it. A singleton instance is not decorated with a server name; instead it is simply suffixed with “-01”, and the system chooses one of the managed servers in the cluster to host the instance.

The distribution-policy works in concert with a new high availability option called the migration-policy to ensure that instances survive unexpected service failures, server crashes, or even a planned shutdown of their servers. It does this by automatically migrating them to available cluster members.

For the migration-policy, you can choose one of three options: on-failure, where instances are migrated only in the event of unexpected service failures or server crashes; always, where instances are migrated even during a planned administrative shutdown of a server; and off, which disables service-level migration if needed.


Figure 2. Console screenshot: HA Configuration 

In addition to the migration-policy, the new model offers another high availability notion for stores called restart-in-place. When enabled, the system first tries to restart failing store instances on their current server before failing over to another server in the cluster. This option can be fine-tuned to limit the number of attempts and the delay between attempts. It prevents the system from performing unnecessary migrations in the event of temporary glitches, such as a database outage or unresponsive network or I/O requests caused by latency and overload. Bridges ignore the restart-in-place settings, as they already restart themselves automatically after a failure (they periodically try to reconnect).

Note that the high availability enhancement not only offers failover of service artifacts in the event of failure, it also offers automatic failback of distributed instances when their home server is restarted after a crash or shutdown – a high availability feature that isn’t available in previous releases. This allows applications to achieve a high level of server/configuration affinity whenever possible. Unlike in previous releases, both during startup and failover the system also tries to ensure that instances are evenly distributed across the cluster members, preventing accidental overload of any one server in the cluster.

Here’s a table that summarizes the new distribution, migration, and restart-in-place settings:

  • distribution-policy – Controls JMS service instance counts and names. Options: [Distributed | Singleton]. Default: Distributed.
  • migration-policy – Controls HA behavior. Options: [Off | On-Failure | Always]. Default: Off.
  • restart-in-place – Enables automatic restart of failing store instances on a healthy WebLogic Server. Options: [true | false]. Default: true.
  • seconds-between-restarts – Specifies how many seconds to wait between restart-in-place attempts for a failed service. Options: [1 … {Max Integer}]. Default: 30.
  • number-of-restart-attempts – Specifies how many restart attempts to make before migrating the failed service. Options: [-1, 0 … {Max Long}]. Default: 6.
  • initial-boot-delay-seconds – The length of time to wait before starting an artifact's instance on a server. Options: [-1, 0 … {Max Long}]. Default: 60.
  • failback-delay-seconds – The length of time to wait before failing back an artifact to its preferred server. Options: [-1, 0 … {Max Long}]. Default: 30.
  • partial-cluster-stability-seconds – The length of time to wait before the cluster should consider itself at a "steady state"; until that point, only some resources may be started in the cluster. This gives the cluster time to come up slowly and still be easily laid out. Options: [-1, 0 … {Max Long}]. Default: 240.
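
Since these settings are also exposed through the Java MBean APIs, they can be read programmatically. Below is a minimal, hedged sketch that reads a few of them from a custom store's configuration MBean over JMX; the store name (ClusterFileStore), the ObjectName pattern, and the camel-cased attribute names (DistributionPolicy, MigrationPolicy, RestartInPlace) are assumptions derived from the table above, not values taken from this post:

    import java.util.Set;
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;

    public class StoreHaSettingsReader {
        // 'connection' is assumed to be an MBeanServerConnection to the Domain Runtime
        // MBean server (obtained, for example, like the JMX snippet in the resource
        // adapter post above).
        public static void printHaSettings(MBeanServerConnection connection) throws Exception {
            // Assumed ObjectName pattern for a custom file store named 'ClusterFileStore'.
            Set<ObjectName> stores = connection.queryNames(
                    new ObjectName("*:Type=FileStore,Name=ClusterFileStore,*"), null);
            for (ObjectName store : stores) {
                // Attribute names are the camel-cased forms of the settings listed above.
                System.out.println("DistributionPolicy = "
                        + connection.getAttribute(store, "DistributionPolicy"));
                System.out.println("MigrationPolicy    = "
                        + connection.getAttribute(store, "MigrationPolicy"));
                System.out.println("RestartInPlace     = "
                        + connection.getAttribute(store, "RestartInPlace"));
            }
        }
    }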

Runtime Monitoring

As mentioned earlier, when a configured artifact is targeted to a cluster, the system automatically creates one (singleton) or more (distributed) instances from that single configured artifact. These instances are backed by corresponding runtime MBeans, named uniquely and made available for access and monitoring under the appropriately scoped server (or partition, in the case of a multi-tenant environment) runtime MBean tree.


Figure 3. Console screenshot: Runtime Monitoring 

The above screenshot shows how a cluster-targeted SAF agent runtime instance is decorated with the cluster member server name to make it unique.
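
You can list the same decorated instance names programmatically by querying the runtime MBeans over JMX. A minimal sketch, assuming a connection to the Domain Runtime MBean server and the Type key JMSServerRuntime (the same idea applies to SAF agent and persistent store runtime MBeans):

    import java.util.Set;
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;

    public class ClusteredJmsInstanceLister {
        // 'connection' is assumed to point at the Domain Runtime MBean server.
        public static void listJmsServerInstances(MBeanServerConnection connection) throws Exception {
            // Each cluster member hosts an instance whose name is suffixed with its home
            // server name (or with "-01" for a singleton instance).
            Set<ObjectName> names = connection.queryNames(
                    new ObjectName("*:Type=JMSServerRuntime,*"), null);
            for (ObjectName name : names) {
                System.out.println(name.getKeyProperty("Name")
                        + " on server " + name.getKeyProperty("ServerRuntime"));
            }
        }
    }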

Validation and Legal Checks

There are legal checks and validation rules in place to prevent users from configuring invalid combinations of these new parameters. The following two tables list the supported combinations of the two new policies by service type and by resource type, respectively.

Service Artifact – supported Distribution Policy values (Migration Policy options: Off, Always, On-Failure):

  • Persistent Store: Distributed or Singleton
  • JMS Server: Distributed or Singleton
  • SAF Agent: Distributed
  • Path Service: Singleton
  • Messaging Bridge: Distributed or Singleton

In the above table, the legal combinations are listed by JMS service type. For example, the Path Service, a messaging service that persists and holds the routing information for messages that take advantage of a popular WebLogic ordering extension called unit-of-order or unit-of-work routing, is a singleton service that should be made highly available in a cluster regardless of whether there is a service failure or server failure. So the only valid and legal combination of HA policies for this service configuration is: distribution-policy set to singleton and migration-policy set to always.

Some rules are also derived from the resource types used in an application. For example, for JMS servers that host uniform distributed destinations, or for SAF agents (which always host imported destinations), a distribution-policy of singleton does not make sense and is not allowed.

Resource Type – applicable Distribution Policy:

  • JMS Servers (hosting Distributed Destinations): Distributed
  • SAF Agent (hosting Imported Destinations): Distributed
  • JMS Servers (hosting Singleton Destinations): Singleton
  • Path Service: Singleton
  • Bridge: Singleton or Distributed

In the event of an invalid configuration that violates these legal checks, an error or log message indicates the problem, and in some cases it may cause deployment or server startup failures.

Best Practices

To take full advantage of the improved capabilities, first design your JMS application by carefully identifying the scalability and availability requirements as well as the deployment environments. For example, identify whether the application will be deployed to a cluster or to a multi-tenant environment, and whether it will be using uniform distributed destinations, standalone (non-distributed) destinations, or both.

Once the above requirements are identified, always define and associate a custom persistent store with the applicable JMS service artifacts. Ensure that the new HA parameters are explicitly set as per the requirements (use the above tables as guidance) and that both a JMS service artifact and its corresponding store are similarly targeted (to the same cluster, or to the same resource group or resource group template in a multi-tenant environment).

Remember, the JMS high availability mechanism depends on the WebLogic Server Health and Singleton Monitoring services, which in turn rely on a mechanism called “cluster leasing”. So you need to set up a valid cluster leasing configuration, particularly when the migration-policy is set to either on-failure or always, or when you want to create a singleton instance of a JMS service artifact. Note that WebLogic offers two leasing options, Consensus and Database, and we highly recommend using Database leasing as a best practice.
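
As a quick sanity check, you can read a cluster's current leasing setup over JMX before enabling these policies. The ObjectName pattern and the MigrationBasis and DataSourceForAutomaticMigration attribute names below are assumptions based on the standard cluster leasing configuration, not values taken from this post:

    import java.util.Set;
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;

    public class ClusterLeasingChecker {
        // 'connection' is assumed to point at a WebLogic MBean server that exposes the
        // configuration MBeans (for example the Domain Runtime MBean server).
        public static void printLeasingConfig(MBeanServerConnection connection) throws Exception {
            // Assumed ObjectName pattern for cluster configuration MBeans.
            Set<ObjectName> clusters = connection.queryNames(
                    new ObjectName("*:Type=Cluster,*"), null);
            for (ObjectName cluster : clusters) {
                System.out.println(cluster.getKeyProperty("Name")
                        + ": MigrationBasis = "
                        + connection.getAttribute(cluster, "MigrationBasis")
                        // For database leasing, a leasing data source should also be set.
                        + ", DataSourceForAutomaticMigration = "
                        + connection.getAttribute(cluster, "DataSourceForAutomaticMigration"));
            }
        }
    }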

It is also highly recommended to configure high availability for WebLogic’s transaction system, as JMS applications often use transactions directly, and JMS internals often use them implicitly. Note that WebLogic transaction high availability requires that all managed servers have explicit listen-address and listen-port values configured, instead of leaving the defaults, in order to get full transaction HA support. In the case of a dynamic cluster configuration, you can configure these settings as part of the dynamic server template definition.

Finally, it is also preferable to use Node Manager, rather than any other method, to start all the managed servers of a cluster.

For more information on this feature and other new improvements in the Oracle WebLogic Server 12.2.1 release, please see the What’s New chapter of the public documentation.

Conclusion

Using these new, enhanced capabilities of WebLogic JMS, one can greatly reduce the overall time and cost involved in configuring and managing WebLogic JMS in general, and scalability and high availability in particular, resulting in ease of use and an increased return on investment.

Sunday Nov 15, 2015

Deploying Java EE 7 Applications to Partitions from Eclipse

The new WebLogic Server 12.2.1 Multi-tenant feature enables partitions to be created in a domain that are isolated from one another and can be managed independently of one another. From a development perspective, this isolation opens up some interesting opportunities – for instance, it enables a single domain to be shared by multiple developers working on the same application, without them needing to worry about collisions of URLs or cross-accessing of resources.

The lifecycle of a partition can be managed independently of others, so starting and stopping the partition to start and stop its applications can be done with no impact on other users of the shared domain. A partition can be exported (unplugged) from a domain, including all of its resources and deployed application bits, and imported (plugged) into a completely different domain to restore the exact same partition in the new location. This enables complete, working applications to be shared and moved between different environments in a very straightforward manner.

As an illustration of this concept of using partitions within a development environment, the YouTube video - WebLogic Server 12.2.1 - Deploying Java EE 7 Application to Partitions - takes the Java EE 7 CargoTracker application and deploys it to different targets from Eclipse.

  • In the first instance, CargoTracker is deployed to a known WebLogic Server target using the well known "Run as Server" approach, with which Eclipse will start the configured server and deploy the application to the base domain.
  • Following that, the same application code base is built and deployed, using Maven and the weblogic-maven-plugin, to a partition called "test" that has been created on the same domain. The application is accessed in its partition through its Virtual Target mapping and shown to be working as expected.
  • To finish off the demonstration, the index page of the CargoTracker application is modified to mimic a development change and deployed to another partition called "uat", where it is accessed and the page change is seen to be active.
  • At this point, all three instances of the same application are running independently on the same server and are accessible at the same time, essentially showing how a single domain can independently host multiple instances of the same application as it is being developed.

Friday Nov 13, 2015

Oracle WebLogic Server 12.2.1 Running on Docker Containers

UPDATE April 2016 - We now officially certify and support WebLogic 12.1.3 and WebLogic 12.2.1 clusters in multi-host environments! For more information, see this blog post. The Docker configuration files are also now maintained on the official Oracle GitHub Docker repository. Links in the Docker section of this article have been updated to reflect the latest changes. For the most up-to-date information on Docker scripts and support, check the Oracle GitHub project docker-images.


Oracle WebLogic Server 12.2.1 is now certified to run on Docker containers. As part of this certification, we are releasing Dockerfiles on GitHub to create Oracle WebLogic Server 12.2.1 install images and Oracle WebLogic Server 12.2.1 domain images. These images are built as an extension of the existing Oracle Linux images. To help you get started, we have posted the Dockerfiles and scripts on GitHub as examples.

Docker is a platform that enables users to build, package, ship and run distributed applications. Docker users package up their applications, and any dependent libraries or files, into a Docker image. Docker images are portable artifacts that can be distributed across Linux environments. Images that have been distributed can be used to instantiate containers where applications can run in isolation from other applications running in other containers on the same host operating system.

The table below describes the certification provided for various WebLogic Server versions. You can use these combinations of Oracle WebLogic Server, JDK, Linux and Docker versions when building your Docker images.

Oracle WebLogic Server Version | JDK Version | Host OS | Kernel Version | Docker Version
12.2.1.0.0 | 8 | Oracle Linux 6 Update 6 or higher | UEK Release 3 (3.8.13) | 1.7 or higher
12.2.1.0.0 | 8 | Oracle Linux 7 or higher | UEK Release 3 (3.8.13) or RHCK 3 (3.10) | 1.7 or higher
12.2.1.0.0 | 8 | Red Hat Enterprise Linux 7 or higher | RHCK 3 (3.10) | 1.7 or higher
12.1.3.0.0 | 7/8 | Oracle Linux 6 Update 5 or higher | UEK Release 3 (3.8.13) | 1.3.3 or higher
12.1.3.0.0 | 7/8 | Oracle Linux 7 or higher | UEK Release 3 (3.8.13) or RHCK 3 (3.10) | 1.3.3 or higher
12.1.3.0.0 | 7/8 | Red Hat Enterprise Linux 7 or higher | RHCK 3 (3.10) | 1.3.3 or higher

We support Oracle WebLogic Server in certified Docker containers running on other Linux host operating systems that have kernel 3.8.13 or higher and that support Docker containers; please read our support statement. For additional details on the most current Oracle WebLogic Server supported configurations, please refer to the Oracle Fusion Middleware Certification Pages.

The Dockerfiles and scripts we have provided enable users to create clustered and non-clustered Oracle WebLogic Server domain configurations, for both development and production, running on a single host operating system or on VMs. Each server in the resulting domain configuration runs in its own Docker container and is capable of communicating as required with the other servers.


A topology in line with the “Docker way” for containerized applications and services consists of a container designed to run only an administration server containing all resources, shared libraries, and deployments. These Docker containers can all be on a single physical or virtual Linux host, or on multiple physical or virtual Linux hosts. The Dockerfiles on GitHub that create an image with a WebLogic Server domain can be used to start these admin server containers.

For documentation on how to use these Dockerfiles and scripts, see the whitepaper on OTN. The Oracle WebLogic Server video and demo present our certification effort and show a demo of WebLogic Server 12.2.1 running on Docker containers. We hope you will try running the different configurations of WebLogic Server on Docker containers, and we look forward to hearing any feedback you might have.

Thursday Nov 12, 2015

WLS UCP Datasource

WebLogic Server (WLS) 12.2.1 introduces a new datasource type that uses the Oracle Universal Connection Pool (UCP) as an alternative connection pool.  The UCP datasource allows for configuration, deployment, and monitoring of the UCP connection pool as part of the WLS domain.  It is certified with the Oracle Thin driver (simple, XA, and replay drivers). 

The product documentation is at http://docs.oracle.com/middleware/1221/wls/JDBCA/ucp_datasources.htm#JDBCA746. The goal of this article is not to reproduce that information but to summarize the feature and provide some additional information and screenshots for configuring the datasource.

A UCP data source is defined using a jdbc-data-source descriptor as a system resource.  With respect to multi-tenancy, these system resources can be defined at the domain, partition, resource group template, or resource group level. 

The configuration for a UCP data source is fairly simple, using the standard datasource parameters: you can name it and give it a URL, user, password, and JNDI name. Most of the detailed configuration and tuning comes in the form of UCP connection properties. The administrator can configure values for any setter supported by oracle.ucp.jdbc.PoolDataSourceImpl except LogWriter (see oracle.ucp.jdbc.PoolDataSourceImpl) by just removing the "set" prefix from the attribute name (the names are case insensitive). For example,

ConnectionHarvestMaxCount=3


Table 8-2 in the documentation lists all of the UCP attributes that are currently supported, based on the 12.1.0.2 UCP jar that ships with WLS 12.2.1.
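
From application code, a UCP data source is used like any other WLS data source: look it up by its JNDI name and ask it for connections. A minimal sketch, assuming a hypothetical JNDI name of 'jdbc/myUCPds':

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    public class UcpDataSourceClient {
        public static void ping() throws Exception {
            // Look up the data source by the JNDI name configured on its descriptor.
            InitialContext ctx = new InitialContext();
            DataSource ds = (DataSource) ctx.lookup("jdbc/myUCPds"); // hypothetical JNDI name
            try (Connection con = ds.getConnection();
                 Statement stmt = con.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT 1 FROM DUAL")) {
                while (rs.next()) {
                    System.out.println("Got " + rs.getInt(1) + " from the UCP-backed pool");
                }
            }
        }
    }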

There is some built-in validation of the (common sense) combinations of driver and connection factory:

Driver | Factory (ConnectionFactoryClassName)
oracle.ucp.jdbc.PoolDataSourceImpl (default) | oracle.jdbc.pool.OracleDataSource
oracle.ucp.jdbc.PoolXADataSourceImpl | oracle.jdbc.xa.client.OracleXADataSource
oracle.ucp.jdbc.PoolDataSourceImpl | oracle.jdbc.replay.OracleDataSourceImpl

To simplify the configuration, if the "driver-name" is not specified, it defaults to oracle.ucp.jdbc.PoolDataSourceImpl, and the ConnectionFactoryClassName connection property defaults to the corresponding entry from the above table.

Example 8.1 in the product documentation gives a complete example of creating a UCP data source using WLST.   WLST usage is very common for application configuration these days.

Monitoring is available via the weblogic.management.runtime.JDBCUCPDataSourceRuntimeMBean. This MBean extends JDBCDataSourceRuntimeMBean so that it can be returned with the list of other JDBC MBeans from the JDBC service, for tools like the administration console or your WLST scripts. For a UCP data source, the state and the following attributes are set: CurrCapacity, ActiveConnectionsCurrentCount, NumAvailable, ReserveRequestCount, ActiveConnectionsAverageCount, CurrCapacityHighCount, ConnectionsTotalCount, NumUnavailable, and WaitingForConnectionSuccessTotal.
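
The hedged sketch below reads a couple of those statistics over JMX; the data source name (myUCPds) and the Type key (JDBCUCPDataSourceRuntime) in the ObjectName pattern are assumptions for illustration, while the attribute names come from the list above:

    import java.util.Set;
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;

    public class UcpDataSourceMonitor {
        // 'connection' is assumed to be an MBeanServerConnection to the Domain Runtime
        // MBean server.
        public static void printPoolStats(MBeanServerConnection connection) throws Exception {
            // Assumed Type key and data source name; adjust both to match your configuration.
            Set<ObjectName> names = connection.queryNames(
                    new ObjectName("*:Type=JDBCUCPDataSourceRuntime,Name=myUCPds,*"), null);
            for (ObjectName name : names) {
                System.out.println("CurrCapacity                  = "
                        + connection.getAttribute(name, "CurrCapacity"));
                System.out.println("ActiveConnectionsCurrentCount = "
                        + connection.getAttribute(name, "ActiveConnectionsCurrentCount"));
                System.out.println("NumAvailable                  = "
                        + connection.getAttribute(name, "NumAvailable"));
            }
        }
    }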

The administration console and FMWC make it easy to create, update, and monitor UCP datasources.

The following images are from the administration console. On the creation path, there is a drop-down that lists the data source types; UCP is one of the choices. The resulting data source descriptor has its datasource-type set to "UCP".

Console screenshot: Data Source Type selection

The first step is to specify the JDBC data source properties that determine the identity of the data source. They include the data source name, the scope (Global, or a Multi-Tenant Partition, Resource Group, or Resource Group Template), and the JNDI names.

Console screenshot: JDBC Data Source Properties

The next page handles the user name, password, URL, and additional connection properties. Additional connection properties are used to configure the UCP connection pool. There are two ways to provide connection properties for a UCP data source in the console. On the Connection Properties page, all of the available connection properties for the UCP driver are displayed, so you only need to enter the property values. On the next page, Test Database Connection, you can enter a propertyName=value pair directly into the Properties text box; any values entered on the previous Connection Properties page will already appear there. That page can also be used to test the specified values, including the connection properties.

Console screenshot: UCP Connection Properties

The Test Database Connection page allows you to enter free-form values for properties and test a database connection before the data source configuration is finalized. If necessary, you can provide additional configuration information using the Properties, System Properties, and Encrypted Properties attributes.

Console screenshot: Test Database Connection

The final step is to target the data source. You can select one or more targets to which to deploy your new UCP data source. If you don't select a target, the data source will be created but not deployed; you will need to deploy it at a later time before you can get a connection in an application.

For editing the data source, minimal tabs and attributes are exposed to configure, target, and monitor this data source type.