Sunday May 01, 2016

Using SQLXML Data Type with Application Continuity

When I first wrote an article about changing Oracle concrete classes to interfaces to work with Application Continuity (AC) (https://blogs.oracle.com/WebLogicServer/entry/using_oracle_jdbc_type_interfaces), I left out one type: oracle.sql.OPAQUE is replaced with oracle.jdbc.OracleOpaque. There isn't a lot that you can do with this opaque type. While the original class had many conversion methods, the new Oracle type interfaces have only methods that are considered significant or not available via standard JDBC APIs. The new interface has only a method to get the value as an Object and two meta information methods to get the meta data and the type name. Unlike the other Oracle type interfaces (oracle.jdbc.OracleStruct extends java.sql.Struct and oracle.jdbc.OracleArray extends java.sql.Array), oracle.jdbc.OracleOpaque does not extend a JDBC interface.

There is one related, very common use case that needs to be changed to work with AC. Early uses of SQLXML relied on the following XDB API.

SQLXML sqlXml = oracle.xdb.XMLType.createXML(
((oracle.jdbc.OracleResultSet)resultSet).getOPAQUE("issue"));

oracle.xdb.XMLType extends oracle.sql.OPAQUE, and its use will disable AC replay. This must be replaced with the standard JDBC API.

SQLXML sqlXml = resultSet.getSQLXML("issue");

If you try to do a “new oracle.xdb.XMLType(connection, string)” when running with the replay datasource, you will get a ClassCastException. Since XMLType doesn’t work with the replay datasource and the oracle.xdb package uses XMLType extensively, this package is no longer usable for AC replay.
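The standard replacement for constructing an XMLType from a string is to create the value through java.sql.Connection. Here is a minimal sketch (the table and column names are hypothetical):

SQLXML sqlXml = connection.createSQLXML();
sqlXml.setString("<issue>example</issue>");   // illustrative payload
PreparedStatement stmt = connection.prepareStatement(
    "INSERT INTO issues (issue) VALUES (?)"); // hypothetical table and column
stmt.setSQLXML(1, sqlXml);
stmt.executeUpdate();
sqlXml.free();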

The APIs for SQLXML are documented at https://docs.oracle.com/javase/7/docs/api/java/sql/SQLXML.html. The javadoc shows APIs to work with DOM, SAX, StAX, XSLT, and XPath.

Take a look at the sample program at https://blogs.oracle.com/WebLogicServer/resource/StephenFeltsFiles/XmlSample.txt

The sample uses StAX to store the information and DOM to get it. By default, it uses the replay datasource and it does not use XDB.
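A rough sketch of those two paths (illustrative only, not the sample source; only the column name "issue" is taken from the snippets above):

// Store: write the XML value using StAX (javax.xml.stream, javax.xml.transform.stax).
SQLXML out = connection.createSQLXML();
XMLStreamWriter writer = out.setResult(StAXResult.class).getXMLStreamWriter();
writer.writeStartDocument();
writer.writeStartElement("issue");
writer.writeCharacters("some text");
writer.writeEndElement();
writer.writeEndDocument();
writer.close();

// Get: read the XML value back using DOM (org.w3c.dom).
SQLXML in = resultSet.getSQLXML("issue");
Document doc = (Document) in.getSource(DOMSource.class).getNode();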

You can run with replay debugging by doing something like the following. Create a file named /tmp/config.txt that has the following text.

java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter
handlers = java.util.logging.FileHandler
java.util.logging.FileHandler.pattern = /tmp/replay.log
oracle.jdbc.internal.replay.level = FINEST

Change your WLS CLASSPATH (or one with the Oracle client jar files) to put ojdbc7_g.jar at the front (to replace ojdbc7.jar) and add the current directory.
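For example, in a Unix shell (the jar location is an assumption; use the path from your installation):

export CLASSPATH=/path/to/ojdbc7_g.jar:.:$CLASSPATH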

Compile the program (after renaming .txt to .java) and run it using

java -Djava.util.logging.config.file=/tmp/config.txt XmlSample

The output replay log is in /tmp/replay.log. With the defaults in the sample program, you won’t see replay disabled in the log. If you change the program to set useXdb to true, you will see that replay is disabled. The log will have “DISABLE REPLAY in preForMethodWithConcreteClass(getOPAQUE)” and “Entering disableReplayInternal”.

This sample can be used to test other sequences of operations to see if they are safe for replay.

Alternatively, you can use orachk to do a static analysis of the class. See https://blogs.oracle.com/WebLogicServer/entry/using_orachk_to_clean_up for more information. If you run orachk on this program, you will get this failure.

FAILED - [XmlSample][[MethodCall] desc= (Ljava/lang/String;)Loracle/sql/OPAQUE; method name=getOPAQUE, lineno=105]

Thursday Apr 28, 2016

Testing WLS and ONS Configuration

Introduction

Oracle Notification Service (ONS) is installed and configured as part of the Oracle Clusterware installation. All nodes participating in the cluster are automatically registered with the ONS during Oracle Clusterware installation. The configuration file is located on each node in $ORACLE_HOME/opmn/conf/ons.config. See the Oracle documentation for further information. This article focuses on the client side.

Oracle RAC Fast Application Notification (FAN) events are available starting in database 11.2. This is the minimum database release required for WLS Active GridLink. FAN events are notifications sent by a cluster running Oracle RAC to inform the subscribers about the configuration changes within the cluster. The supported FAN events are service up, service down, node down, and load balancing advisories (LBA).

fanWatcher Program

You can optionally test your ONS configuration independent of running WLS. This tests the connection from the ONS client to the ONS server but not configuration of your RAC services. See https://blogs.oracle.com/WebLogicServer/entry/fanwatcher_sample_program for details to get, compile, and run the fanWatcher program. I’m assuming that you have WLS 10.3.6 or later installed and you have your CLASSPATH set appropriately. You would run the test program using something like

java fanWatcher "nodes=rac1:6200,rac2:6200" database/event/service

If you are using the database 12.1.0.2 client jar files, you can handle more complex configurations with multiple clusters, for example DataGuard, with something like

java fanWatcher "nodes.1=site1.rac1:6200,site1.rac2:6200
nodes.2=site2.rac1:6200,site2.rac2:6200" database/event/service

Note that a newline is used to separate multiple node lists. You can also test with a wallet file and password, if the ONS server is configured to use SSL communications.

Once this program is running, you should minimally see occasional LBA notifications. If you start or stop a service, you should see an associated event.

Auto ONS

It's possible to run without specifying the ONS information by using a feature called auto-ONS. The auto-ONS feature cannot be used if you are running with

- an 11g driver or 11g database. Auto-ONS depends on a protocol flow between the driver and the database server that was added in 12c.

- a WLS release earlier than 12.1.3. Auto-ONS is supported starting in WLS 12.1.3.

- an Oracle wallet with SSL communications. Configuration of the wallet requires also configuring the ONS information.

- complicated ONS topology. In general, auto-ONS can figure out what you need but there are cases where you need to specify it precisely. In WLS 12.2.1, an extension of the ONS configuration allows for specifying the exact topology using a property node list. See http://docs.oracle.com/middleware/1221/wls/JDBCA/gridlink_datasources.htm#BABGJBIC for more information.

If you have some configurations that use an 11g driver or database and some that run with 12c driver/database, you may want to just specify the ONS information all of the time instead of using the auto-ONS simplification. The fanWatcher link above indicates how to test fanWatcher using auto-ONS.

WLS ONS Configuration and Testing

The next step is to ensure that you have the end-to-end configuration running, from the database service for which events will be generated through to the AGL datasource that processes the events for the corresponding service.

On the server side, the database service must be configured with Runtime Connection Load Balancing (RCLB) enabled. RCLB is enabled for a service if the service GOAL (NOT CLB_GOAL) is set to either SERVICE_TIME or THROUGHPUT. See the Oracle documentation for further information on using srvctl to set this when creating the service.
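For example, with the 12c srvctl syntax, a service with RCLB enabled might be created like this (the database, service, and instance names are assumptions, reusing the names that appear later in this article):

srvctl add service -db beadev -service otrade -preferred beadev1,beadev2 -rlbgoal SERVICE_TIME -clbgoal SHORT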

On the WLS side, the key pieces are the URL and the ONS configuration.

The URL is configured using the long format with the service name specified. The URL can use an Oracle Single Client Access Name (SCAN) address, for example,

jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=scanname)(PORT=scanport))(CONNECT_DATA=(SERVICE_NAME=myservice)))

or multiple non-SCAN addresses with LOAD_BALANCE=on, for example,

jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(LOAD_BALANCE=ON)(ADDRESS=(PROTOCOL=TCP)(HOST=host1)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=host2)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=myservice)))

Defining the URL is a complex topic - see the Oracle documentation for more information.

As described above, the ONS configuration can be implicit using auto-ONS or explicit. The trade-offs and restrictions are also described above. The format of the explicit ONS information is described at http://docs.oracle.com/middleware/1221/wls/JDBCA/gridlink_datasources.htm#BABGJBIC.

If you create the datasource using the administration console with an explicit ONS configuration, there is a button to test the ONS configuration. The test does a simple handshake with the ONS server.

Of course, the first real test of your ONS configuration with WLS is deploying the datasource, either when starting the server or when targeting the datasource on a running server.

In the administration console, you can look at the AGL runtime monitoring page for ONS, especially if using auto-ONS, to see the ONS configuration.

You can look at the page for instances and check the affinity flag and instance weight attributes that are updated on LBA events. If you stop a service using something like

srvctl stop service -db beadev -i beadev2 -s otrade

that should also show up on this page with the weight and capacity going to 0.


If you look at the server log (for example servers/myserver/logs/myserver.log) you should see a message tracking the outage like the following.

…. <Info> <JDBC> … <Data source JDBC Data Source-0 for service otrade received a service down event for instance [beadev2].>

If you want to see more information such as the LBA events, you can enable JDBCRAC debugging using -Dweblogic.debug.DebugJDBCRAC=true. For example,

... <JDBCRAC> ... lbaEventOccurred() event=service=otrade, database=beadev, event=VERSION=1.0 database=beadev service=otrade { {instance=beadev1 percent=50 flag=GOOD aff=FALSE}{instance=beadev2 percent=50 flag=UNKNOWN aff=FALSE} }

There will be a lot of debug output with this setting so it is not recommended for production.

Wednesday Apr 27, 2016

Migrating from Generic Data Source to Active GridLink

Earlier, I wrote an article about how to migrate from a Multi Data Source (MDS) for RAC connectivity to Active GridLink (AGL). This is needed to move from the older datasource technology to the newer technology, both of which support Oracle RAC. The information is now in the public documentation set at http://docs.oracle.com/middleware/1221/wls/JDBCA/gridlink_datasources.htm#JDBCA690.

There are also many customers that are growing from a standalone database to an Oracle RAC cluster. In this case, it's a migration from a GENERIC datasource to an AGL datasource. This migration is pretty simple.

No changes should be required to your applications.  A standard application looks up the datasource in JNDI and uses it to get connections.  The JNDI name won’t change.
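That unchanged pattern looks something like the following minimal sketch (the JNDI name jdbc/myDS is an assumption):

Context ctx = new InitialContext();
DataSource ds = (DataSource) ctx.lookup("jdbc/myDS"); // assumed JNDI name
Connection conn = ds.getConnection();
try {
    // ... use the connection ...
} finally {
    conn.close();
}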

The only changes needed should be to your configuration, and the required information is generally provided by your database administrator. The information needed is the new URL and, optionally, the configuration of Oracle Notification Service (ONS) on the RAC cluster. The latter is only needed if you are running with

- an 11g driver or 11g database. Auto-ONS depends on a protocol flow between the driver and the database server that was added in 12c.

- a WLS release earlier than 12.1.3. Auto-ONS is supported starting in WLS 12.1.3.

- an Oracle wallet with SSL communications. Configuration of the wallet requires also configuring the ONS information.

- complicated ONS topology. In general, auto-ONS can figure out what you need but there are cases where you need to specify it precisely. In WLS 12.2.1, an extension of the ONS configuration allows for specifying the exact topology using a property node list. See http://docs.oracle.com/middleware/1221/wls/JDBCA/gridlink_datasources.htm#BABGJBIC for more information.

The URL and ONS attributes are configurable but not dynamic. That means that the datasource must be shut down and restarted after the change. The simplest way to do this is to untarget the datasource, make the changes, and then re-target the datasource.

The recommended approach to migrate from a GENERIC to an AGL datasource is to use WLST. The URL must be changed in the JDBCDriverParams object. The new JDBCOracleParams object (it generally doesn't exist for a GENERIC datasource) needs to have FanEnabled set to true and, optionally, the ONS information set.

The following is a sample WLST script with the new values hard-coded. You could parameterize it and make it more flexible in handling multiple datasources. If you are using an Oracle wallet for ONS, that needs to be added to the JDBCOracleParams object as well.

# java weblogic.WLST file.py
import sys, socket, os, jarray
from javax.management import ObjectName
hostname = socket.gethostname()
datasource="JDBC Data Source-0"
connect("weblogic","welcome1","t3://"+hostname+":7001")
edit()
startEdit()
# Untarget the datasource so that it shuts down.
cd("/JDBCSystemResources/" + datasource )
targets=get("Targets")
set("Targets",jarray.array([], ObjectName))
save()
activate()
startEdit()
# Change the URL to the long format with the service name.
cd("/JDBCSystemResources/" + datasource + "/JDBCResource/" +
datasource + "/JDBCDriverParams/" + datasource )
set("Url","jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(" +
"ADDRESS=(PROTOCOL=TCP)(HOST=dbhost)(PORT=1521)))" +
"(CONNECT_DATA=(SERVICE_NAME=otrade)))")
# Enable FAN and set the ONS node list.
cd("/JDBCSystemResources/" + datasource + "/JDBCResource/" +
datasource + "/JDBCOracleParams/" + datasource )
set("FanEnabled","true")
set("OnsNodeList","dbhost:6200")
# For WLS 12.2.1 and later, uncomment to re-set the datasource type.
#cd("/JDBCSystemResources/" + datasource + "/JDBCResource/" +
# datasource )
#set("DatasourceType", "AGL")
save()
activate()
startEdit()
# Re-target the datasource so that it restarts with the new settings.
cd("/JDBCSystemResources/" + datasource )
set("Targets", targets)
save()
activate()

Starting in WLS 12.2.1, there is an explicit datasource type at the JDBCResource level. If that is set to GENERIC, you must re-set it using set("DatasourceType", "AGL"). In the script above, uncomment the lines to set it.

In the administration console, the datasource type is read-only and there is no mechanism to change it. You can try to get around this by setting the URL, the FAN Enabled box, and the ONS information. However, in 12.2.1 there is no way to re-set the datasource type in the console, and that value overrides all others.

Sunday Apr 10, 2016

New WebLogic Server Running on Docker in Multi-Host Environments

Oracle WebLogic Server 12.2.1 is now certified to run on Docker 1.9 containers. As part of this certification, you can create Oracle WebLogic Server 12.2.1 clusters that span multiple physical hosts. Containers running on multiple hosts are built as an extension of the existing Oracle WebLogic Server 12.2.1 install images and domain images built with Dockerfiles, and the existing Oracle Linux images. To help you with this, we have posted scripts on GitHub as examples for you to get started.

The table below describes the certification provided for WebLogic Server 12.2.1 on Docker 1.9. You can use these combinations of Oracle WebLogic Server, JDK, Linux and Docker versions when building your Docker images.

WLS Version   JDK Version   Host OS                          Kernel   Docker Version
12.2.1        8             Oracle Linux 6, Oracle Linux 7   UEK 4    1.9 or higher
Please read the earlier blog, Oracle WebLogic 12.2.1 Running on Docker Containers, for details on Oracle WebLogic Server 12.1.3 and Oracle WebLogic Server 12.2.1 certification on other versions of Docker. We support Oracle WebLogic Server in certified Docker containers running on other Linux host operating systems that have kernel version 4 or later and that support Docker containers; please read our support statement. For additional details on the most current Oracle WebLogic Server supported configurations, please refer to the Oracle Fusion Middleware Certification Pages.

The scripts that support multi-host environments on GitHub are based on the latest versions of Docker Networking, Swarm, and Docker Compose. Each Docker Machine participates in the Swarm, which is networked by a Docker overlay network. The WebLogic Admin Server container as well as the WebLogic Managed Server containers run on different VMs in the Swarm and are able to communicate with each other.

The Dockerfiles and scripts we have provided enable users to create clustered and non-clustered Oracle WebLogic Server domain configurations, including both development and production, running on a single host or on multiple host operating systems or VMs. Each server in the resulting domain configurations runs in its own Docker container and is capable of communicating as required with the other servers. When these containers run in a WebLogic cluster, all HA properties of the WebLogic cluster are supported, such as in-memory session replication, HTTP load balancing, and server migration.

Please check out the new WebLogic on Docker Multi Host Workshop on GitHub. This workshop takes you step by step through building a WebLogic Server domain on Docker in a multi-host environment. After the WebLogic domain has been started, an Apache Plugin Web Tier container is started in the Swarm; the Apache Plugin load balances invocations to an application deployed to a WebLogic cluster.

This project takes advantage of the following tools: Docker Machine, Docker Swarm, Docker Overlay Network, Docker Compose, Docker Registry, and Consul. Using the sample Dockerfiles and scripts, you can set up your environment running on Docker very easily and quickly. Try it out and enjoy!

On YouTube we have a video that shows you how to create a WLS domain/cluster in a multi-host environment. We hope you will try running the different configurations of WebLogic Server on Docker containers, and we look forward to hearing any feedback you might have.

Wednesday Mar 09, 2016

WebLogic Server Multi-tenancy and Partition Isolation

One advantage of using the multi-tenancy features in WebLogic Server is that you can get higher density. If you target multiple partitions to the same WLS cluster, then those partitions share the hardware and software stack, from the iron up through the Java virtual machine and the WLS infrastructure itself. This is great for efficient use of resources, but what about isolating those partitions from each other?

We've published a short article about the tension between density and isolation. It highlights how to set up partitions and virtual targets for either purpose. Please take a look!

Friday Feb 26, 2016

The next VTS round is fast approaching!

Virtual Technology Summit is a set of free online events covering a wide variety of technical topics (Java, Middleware, Database, IoT, etc.). And there is something for everyone (see full agenda). In the upcoming edition, the following sessions should be particularly interesting for WebLogic users:
  • Developing Java EE 7 Applications with WebLogic Server 12.2.1
  • Multi-Tenancy Fundamental
  • Introduction to WebLogic Server Zero Down-time Patching
There are 3 VTS events to suit your geographic location - Americas (March 8th), APAC (March 15th) and Europe (April 5th).  For schedules and abstracts for all sessions, please see OTN Virtual Technology Summit All Track Agenda and Abstracts and make sure to register today!

Wednesday Feb 17, 2016

WebLogic Server 12.2.1: Elastic Cluster Scaling

WebLogic Server 12.2.1 added support for the elastic scaling of dynamic clusters:

http://docs.oracle.com/middleware/1221/wls/ELAST/overview.htm#ELAST529

Elasticity allows you to configure elastic scaling for a dynamic cluster based on either of the following:

  • Manually adding or removing a running dynamic server instance from an active dynamic cluster. This is called on-demand scaling. You can perform on-demand scaling using the Fusion Middleware component of Enterprise Manager, the WebLogic Server Administration Console, or the WebLogic Scripting Tool (WLST); see the WLST sketch after this list.

  • Establishing policies that set the conditions under which a dynamic cluster should be scaled up or down and actions that define the scaling operations themselves. When the conditions defined in the scaling policy occur, the corresponding scaling action is triggered automatically.
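As a minimal WLST sketch of on-demand scaling (the admin URL, credentials, and cluster name are assumptions; scaleUp and scaleDown are the documented elasticity commands):

connect("weblogic","welcome1","t3://localhost:7001")
# Start one additional dynamic server instance in the dynamic cluster.
scaleUp("DynamicCluster", 1)
# Later, stop one instance.
scaleDown("DynamicCluster", 1)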

To see this in action, a set of video demonstrations has been added to the youtube.com/OracleWebLogic channel showing the use of the various elastic scaling options available.

WebLogic Server 12.2.1 Elastic Cluster Scaling with WLST
https://www.youtube.com/watch?v=6PHYfVd9Oh4
WebLogic Server 12.2.1 Elastic Cluster Scaling with WebLogic Console
https://www.youtube.com/watch?v=HkG0Uw14Dak
WebLogic Server 12.2.1 Automated Elastic Cluster Scaling
https://www.youtube.com/watch?v=6b7dySBC-mk

Sunday Feb 14, 2016

Now Available: Domain to Partition Conversion Tool (DPCT)

We are pleased to announce that a new utility has just been published to help with the process of converting existing WebLogic Server domains into WebLogic Server 12.2.1 partitions. 

The Domain to Partition Conversion Tool (DPCT) provides a utility that inspects a specified source domain and produces an archive containing the resources, deployed applications and other settings.  This can then be used with the importPartition operation provided in WebLogic Server 12.2.1 to create a new partition that represents the original source domain.  An external overrides file is generated (in JSON format) that can be modified to adjust the targets and names used for the relevant artifacts when they are created in the partition.

DPCT supports WebLogic Server 10.3.6, 12.1.1, 12.1.2 and 12.1.3 source domains and  makes the conversion to WebLogic Server 12.2.1 partitions a straightforward process.

DPCT is available for download from OTN:

http://www.oracle.com/technetwork/middleware/weblogic/downloads/wls-main-097127.html

 ** Note: there is also a corresponding patch (opatch) posted alongside the DPCT download that needs to be downloaded and applied to the target installation of WebLogic Server 12.2.1 to support the import operation **
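Applying such a patch generally follows the standard OPatch flow; a sketch, assuming ORACLE_HOME points at the target WebLogic Server 12.2.1 installation and the patch has been unzipped:

cd <unzipped patch directory>
$ORACLE_HOME/OPatch/opatch apply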

 The README contains more details and examples of using the tool:

http://download.oracle.com/otn/nt/middleware/12c/1221/wls1221_D-PCT-README.txt

A video demonstration of using DPCT to convert a WebLogic Server 12.1.3 domain with a deployed application into a WebLogic Server 12.2.1 partition is also available on our YouTube channel:

https://youtu.be/D1vQJrFfz9Q


Tuesday Feb 09, 2016

Cargo Tracker Java EE 7 Blue Prints Now Running on WebLogic 12.2.1

The Cargo Tracker project exists to serve as a blueprint for developing well designed Java EE 7 applications, principally utilizing the well known Domain-Driven Design (DDD) architectural paradigm. The project provides a first-hand look at what a realistic Java EE 7 application might look like.

The project was started some time ago and runs on GlassFish 4 and Java SE 7 by default. It has now been enhanced to run the same code base on WebLogic 12.2.1 with Java SE 8. The code is virtually unchanged, and the minor configuration differences between GlassFish and WebLogic are handled through Maven profiles. The instructions for running the project on WebLogic are available here. Feel free to give it a spin to get a feel for the Java EE 7 development experience with WebLogic 12.2.1.

Tuesday Jan 12, 2016

ZDT Technical Topic: How are Those Sessions Kept Alive Anyway?

By now you have probably read documentation or previous blog posts about how Zero Downtime Patching provides a convenient, automated method of updating a WebLogic domain in a rolling fashion. By automating the process, Zero Downtime Patching greatly saves time and eliminates potential human errors from the repetitive course of procedure. In addition, there are some special features around replicated HTTP sessions that make sure end users do not lose their sessions at any point during the rollout process. Let's explore the technical details around maintaining session state during Zero Downtime Patching.


One of the key aspects of the WLS replicated session persistence contract is that the session may be maintained within the cluster even in the rare situation where a server crashes. However, the session persistence contract cannot guarantee sessions will be maintained when more than a single server goes down in a short time period. This is because the session has a single copy replicated to some secondary server within the cluster. The session is only replicated when the client makes a request to update the session, so that the client's cookie can store a reference to the secondary server. Thus, if the primary server were to go down, and then the secondary server were to go down before the session could be updated by a subsequent client request, the session would be lost. The rolling nature of Zero Downtime Patching fits this pattern, and thus it must take extra care to avoid losing sessions. Administrators may have already observed that it is very easy to lose sessions by restarting one server at a time through the cluster.
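For reference, an application opts into this replicated session behavior through its weblogic.xml deployment descriptor; a minimal sketch:

<weblogic-web-app xmlns="http://xmlns.oracle.com/weblogic/weblogic-web-app">
  <session-descriptor>
    <persistent-store-type>replicated_if_clustered</persistent-store-type>
  </session-descriptor>
</weblogic-web-app>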


Before we go into the technical details of how Zero Downtime Patching prevents the loss of sessions, it is important to note that the entire methodology relies on Oracle Traffic Director for load balancing, dynamic discovery, health checks, and session failover handling. In addition to this setup, three key features are utilized by Zero Downtime Patching directly to prevent the loss of sessions:

Session Handling Overview


1. Preemptive Session Replication - Session data is preemptively propagated to another server in the cluster during graceful shutdown when necessary. To get even more detailed, let's examine the scenario where the ZDT rollout has shut down the server holding the HTTP session, and the next step is to shut down the server holding the replica. In that case, WebLogic can detect during shutdown that the session will be lost, as there is no backup copy within the cluster, so the ZDT rollout can ensure that WebLogic Server replicates that session to another server within the cluster.

The illustration below shows the problematic scenario where the server, s1, holding the primary copy of the session is shut down, followed by the shutdown of the server, s2, holding the secondary or replica copy. The ZDT orchestration signals that s2 should preemptively replicate any single session copies before shutting down. Thus there is always a copy available within the cluster.

Preemptive Session Replication on Shutdown


2. Session State Query Protocol - Due to the way that WebLogic Server relies on the association of an HTTP session with a primary server and a secondary server, it is not sufficient to simply have the session somewhere in the cluster. There is also a need to be able to find the session when the client request lands on an arbitrary server within the cluster. The ZDT rollout enables the ability for WebLogic Server instances to query other servers in the cluster for specific sessions if they don't have their own copy.

Session Fetching via Session State Protocol Query

The diagram above shows that an incoming request to a server without the session can trigger a query; once the session is found within the cluster, it can be fetched so that the request can be served on the server, "s4".


3. Orphaned Session Cleanup - Once we combine the ability to preemptively replicate session instances with the ability to fetch sessions from within the cluster, we must also take a more active approach to cleaning up instances that have been fetched. Historically, WebLogic Server hasn't had to worry much about orphaned sessions. Front-end load balancers and web servers have been required to honor the session's server affinity, and in the rare case that a request landed on a server that contained neither the primary nor the secondary, the session would be fetched from the primary or secondary server and the orphaned copy would be left to be cleaned up upon timeout or at other regular intervals. It was assumed that because the pointer to the session changed, the stale stored copy would never be used again. However, the ZDT rollout repeatedly presents the scenario where a session must be found within the cluster and fetched from the server that holds it. Not only can the number of session instances proliferate - all with various versions of the same session - but the cluster is now queried for the copy, and the query must not find any stale copies - only the current replica of the session.

Orphaned Session Cleanup

The above illustration shows the cleanup action after s4 has fetched the session data to serve the incoming request.  It launches the cleanup request to s3 to ensure no stale data is left within the cluster.


Summary:

Now, during ZDT patching, we can shut down server1 and expect that any lone session copies will be propagated to server2 without the client's knowledge. When the client does send another request, WLS will be able to handle that request and query the cluster to find the session data. The data will be fetched and used on the server handling the request. The orphaned copy will be cleaned up, and the server handling the request will go through the process of choosing its preferred secondary server to store the replica.



For more information about Zero Downtime Patching, view the documentation

(http://docs.oracle.com/middleware/1221/wls/WLZDT/configuring_patching.htm#WLZDT166)


References

https://docs.oracle.com/cd/E24329_01/web.1211/e24425/failover.htm#CLUST205

Friday Jan 08, 2016

ZDT Rollouts and Singletons

WebLogic Server offers messaging, transaction, and other system services to facilitate building enterprise-grade applications. Typically, services are either clustered or singleton. Clustered services are deployed identically to each server in a cluster to provide increased scalability and reliability. The session state of one clustered server is replicated on another server in the cluster. In contrast, singleton services run on only one server in a cluster at any given point in time, to offer a specific quality of service (QOS) but, most importantly, to preserve data consistency. Singleton services can be JMS-related, JTA-related, or user-defined. In highly available (HA) environments, it is important for all services to be up and running even during patch upgrades.

The new WebLogic Zero Downtime Patching (a.k.a. ZDT patching) feature introduces a fully automated rolling upgrade solution that performs upgrades such that deployed applications continue to function and are available to end users even during the upgrade process. ZDT patching supports rolling out Oracle Home, Java Home, and application updates. Check out these blogs or view the documentation for more information on ZDT.

[Read More]

Monday Jan 04, 2016

Java Rock Star Adam Bien Impressed by WebLogic 12.2.1

It is not an exaggeration to say Adam Bien is pretty close to a "household name" in the Java world. Adam is a long time Java enthusiast, author of quite a few popular books, Java Community Process (JCP) expert, Oracle ACE Director, official Oracle Java Champion and JavaOne conference Rock Star award winner. Adam most recently won the JCP member of the year award. His blog is amongst the most popular for Java developers. 

Adam recently took WebLogic 12.2.1 for a spin and was impressed. Being a developer (not unlike myself) he focused on the full Java EE 7 support in WebLogic 12.2.1. He reported his findings to Java developers on his blog. He commented on fast startup, low memory footprint, fast deployments, excellent NetBeans integration and solid Java EE 7 compliance. You can read Adam's full write-up here.

None of this of course is incidental. WebLogic is a mature product with an extremely large deployment base. With those strengths often comes the challenge of usability. Nonetheless many folks that haven't kept up-to-date with WebLogic evolution don't realize that usability and performance have long been a continued core focus. That is why folks like Adam are often pleasantly surprised when they take an objective fresh look at WebLogic. You can of course give WebLogic 12.2.1 a try yourself here. There is no need to pay anything just to try it out as you can use a free OTN developer license (this is what Adam used as per the instructions on his post). You can also use an official Docker image here.

Solid Java EE support is of course the tip of the iceberg as to what WebLogic offers. As you are aware WebLogic offers a depth and breadth of proven features geared towards mission-critical, 24x7 operational environments that few other servers come close to. One of the best ways for anyone to observe this is taking a quick glance at the latest WebLogic documentation.

Tuesday Dec 15, 2015

Even Applications can be Updated with ZDT Patching

Zero Downtime Patching enables a convenient method of updating production applications on WebLogic Server without incurring any application downtime or loss of session data for your end-users.  This new feature may be especially useful for users who want to update multiple applications at the same time, or for those who cannot take advantage of the Production Redeployment feature due to various limitations or restrictions. Now there is a convenient alternative to complex application patching methods.

This rollout is based on the process and mechanism for automating rollouts across a domain while allowing applications to continue to service requests. In addition to the reliable automation, the Zero Downtime Patching feature also combines Oracle Traffic Director (OTD) load balancer and WebLogic Server to provide some advanced techniques for preserving active sessions and even handling incompatible session state during the patching process.

To rollout an application update, follow these 3 simple steps.

1. Produce a copy of the updated application(s), then test and verify it. Note that the administrator is responsible for making sure that the updated application sources are distributed to the appropriate nodes. For stage mode, the updated application source needs to be available on the file system for the Admin Server to distribute the application source. For nostage and external stage mode, the updated application source needs to be available on the file system of each node.

2. Create a JSON formatted file with the details of any applications that need to be updated during the rollout.


{"applications":[
{
"applicationName":"ScrabbleStage",
"patchedLocation":"/scratch/wlsconfg/Domains/Domain1221/applications/ScrabbleStagev2.war",
"backupLocation": "/scratch/wlsconfg/Domains/Domain1221/applications/ScrabbleStagev1.war"
},
{
"applicationName":"ScrabbleNoStage",
"patchedLocation":"/scratch/wlsconfg/Domains/Domain1221/applications/ScrabbleNoStagev2.war",
"backupLocation":"/scratch/wlsconfg/Domains/Domain1221/applications/ScrabbleNoStagev1.war"
},
{
"applicationName":"ScrabbleExternalStage",
"patchedLocation":"/scratch/wlsconfg/Domains/Domain1221/applications/ScrabbleExternalStagev2.war",
"backupLocation":"/scratch/wlsconfg/Domains/Domain1221/applications/ScrabbleExternalStagev1.war"
}
]}

3. Simply run the Application rollout using a WLST command like this one:

rolloutApplications("Cluster1", "/pathTo/applicationRolloutProperties")

The Admin Server will start the rollout, which coordinates the rolling restart of each node in the cluster named "Cluster1". While the servers are shut down, the original application source is moved to the specified backup location, and the new application source is copied into place. Each server in turn is then started in admin mode. While a server is in admin mode, the application redeploy command is called for that specific server, causing it to reload the new source. Then the server is resumed to its original running state and serves the updated application.

For more information about updating Applications with Zero Downtime Patching view the documentation.

Monday Dec 14, 2015

WLS 12.2.1 launch - Servlet 3.1 new features

[Read More]

Friday Dec 11, 2015

Introducing WLS JMS Multi-tenancy

Introduction

Multi-tenancy (MT) is the main theme of the WebLogic Server 12.2.1 release. It enhances the Oracle Platform for Software-as-a-Service (SaaS) and Platform-as-a-Service (PaaS) use cases. The main benefits of WebLogic multi-tenancy are increased density, tenant isolation, and simplified cloud configuration and management.

This article introduces multi-tenancy support for WebLogic JMS, the messaging component in WebLogic Server. 

Key MT Concepts

Some of you may have already learned from other blogs (for example Tim’s blog about Domain Partitions for Multi Tenancy) about some of the key concepts in WebLogic MT. But for the benefit of a broader audience, here is a quick review of those concepts before we get into JMS specifics.

WebLogic Multi-tenancy introduces the concepts of domain partition (also known as partition), resource group (RG), and resource group template (RGT).  

A Partition is conceptually a slice of a WebLogic domain, where resources and applications for different tenants can be configured and deployed in isolation on the same WebLogic server or in the same cluster. This improves overall density. Partitions define the isolation boundaries for JNDI, security, runtime MBeans, application persistent data, work managers and logging.  Furthermore, Partitions running on the same server instance have their own lifecycle, for example, a partition can be shut down at any time without impacting other partitions.

A Resource Group is simply a collection of functionally related resources and applications. A RG can be targeted and managed independently of other resource groups in the same partition. Resource groups can be defined not only inside a partition, but also at the domain level. As with partitions, RGs in the same partition (or at the domain level) that are running on the same server instance have their own lifecycle.

A Resource Group Template provides a templating mechanism to reduce the administrative overhead of configuring WebLogic resources and applications for SaaS use cases where the same resources and applications need to run in multiple partitions. It offers a configure-once-and-use-everywhere capability, where a common set of configuration artifacts can be specified in a RGT, and can then be referenced from RGs in different partitions. A RGT is not targetable, and resources in a RGT will not deploy unless the RGT is referenced by a deployed RG.

Note that the resources and applications configured or deployed in a partition (directly inside RGs or via RGs referencing a RGT) are scoped to that partition.

Understanding JMS Resources in MT

In a similar way to other WebLogic configuration artifacts, JMS resources such as JMS servers, SAF agents, path services, persistent stores, messaging bridges, JMS system modules, app-deployment JMS modules, Java EE 7 resource definition modules, and JMS applications can all now be configured and deployed in a RG, either directly or via RGTs, as well as in the ‘classic’ way, which is always directly at the domain level. Note that it is perfectly acceptable to combine both partition and ‘classic’ configuration together in the same domain.   

Resources and applications in different partitions are isolated from one another. For example, you can configure a JMS destination with the same JNDI name in multiple partitions running in the same cluster, and these destinations will be managed via independent runtime MBean instances, and can be independently secured via partition-specific security realms. In addition to non-persistent state, the persistent data (for example, persistent messages and durable subscriptions) in such JMS destinations are also isolated from one another.

Configuring JMS Resources in MT

The following configuration snippets show how JMS resources configured in a multi-tenant environment differ from a traditional non-MT JMS configuration.
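As an illustrative config.xml sketch only (the partition, resource group, and JMS server names are assumptions, and the exact element layout should be verified against the WebLogic domain configuration schema):

<!-- 'classic' domain-level JMS configuration -->
<jms-server>
  <name>jmsserver1</name>
  <target>cluster1</target>
</jms-server>

<!-- the same JMS server scoped to a resource group in a partition -->
<partition>
  <name>partition1</name>
  <resource-group>
    <name>rg1</name>
    <jms-server>
      <name>jmsserver1</name>
    </jms-server>
    <target>partition1-vt</target>
  </resource-group>
</partition>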

As you can see, partition-scoped JMS resources are embedded in a resource group in a partition (alternatively, they can be embedded in a Resource Group Template, which is in turn referenced by a Resource Group).

In addition, resources in a resource group are never individually targeted. Instead, the whole resource group is targeted via a virtual target, which is itself targeted in the normal way. If a RG is targeted to a virtual target that is in turn targeted to a WebLogic cluster, all JMS resources and applications in the RG will also be targeted to that cluster.

As we will see later, a virtual target not only provides the targeting information of a RG, it also defines the accessing point of a partition. For more information about resource group targeting and virtual targets, check out Joe's blog about Partition Targeting and Virtual Targets.

You might have noticed that I did not discuss configuring individual JMS resources for each server in a WL cluster, nor did I mention configuring “migratable targets” to add high availability. I have good news for you! Neither is needed or even supported in MT. They have been replaced with greatly enhanced WebLogic JMS cluster-targeting and HA support; my colleague Kathiravan blogs about it in 12.2.1 WebLogic JMS: Dynamic Scalability and High Availability Made Simple.

Although system level JMS resources (such as JMS servers, SAF agents, persistent stores, messaging bridges, path services, and JMS modules) are scoped differently in a MT configuration, their respective attributes are specified in exactly the same way as in a non-MT configuration.

Various validation and targeting rules are enforced to ensure that WebLogic MT JMS configuration is isolated, self contained, and easy to manage. One basic and high-level rule in configuring JMS in MT is that a JMS configuration artifact may only reference other configuration artifacts that are in the same scope. For example, a resource group scoped JMS server can only reference a persistent store that is also defined in the same resource group. These rules are enforced by configuration validation checks and by errors and warnings that are logged at runtime.

Accessing JMS Resources in MT

A JMS application designed for multi-tenancy accesses JMS resources in the same way as ‘classic’ JMS applications, by looking up JMS resources in a JNDI name space. The difference is that in a MT environment, a WebLogic JNDI InitialContext is associated with a particular scope (i.e. the domain or a partition), when it is created. 

A MT application can have multiple JNDI contexts that refer to the same WebLogic cluster but are scoped to different partitions. An initial context, once created, sticks to its scope until it is closed. This means that all JNDI operations using a partition-scoped JNDI context instance are performed using the partition-specific area of the JNDI space.

The scope of a JNDI context is determined by the “provider URL” supplied when the initial context is created.

Once an application successfully establishes a partition-scoped JNDI initial context, it can use this context to look up JMS connection factories and destinations in the same way as in a non-MT environment; except that now the application can only access partition-scoped JMS resources.

Let us look at some specific use cases and see how an application can establish an initial context to a particular partition in each of the use cases.

Use Case 1 - Local Intra-partition Access

When a Java EE application needs to access a JMS destination in its local partition in the same cluster (or on the same non-clustered managed server), the application can just create an initial context without supplying a provider URL.

Example 1: Null Provider URL

  Context ctx = new InitialContext();
  Object cf = ctx.lookup("jms/mycf1");
  Object dest = ctx.lookup("jms/myqueue1");

This initial context will be scoped to the partition in which the application is deployed.

Use Case 2 - Local Inter-partition Access

If a Java EE application needs to access a JMS destination (or other resource) in a different partition than the partition to which it is deployed, and the partition is in the same cluster (or on the same managed server) then it can use either a partition-scoped JNDI name or a provider URL with the "local:" protocol.

Using Partition Scoped JNDI Names

A JNDI name can be decorated with a namespace prefix to indicate its scope.

Example 2.1: given the partition configuration in the above examples, the following code can be used to access a JMS destination that is configured in "partition1".

Context ctx = new InitialContext();
Object cf = ctx.lookup("partition:partition1/jms/mycf1");
Object dest = ctx.lookup("partition:partition1/jms/myqueue1");

Similarly a Java EE application in a partition can access a domain level JNDI resource in the same cluster using a partition scoped initial context with the "domain:" namespace prefix, for example "domain:jms/mycf2".

Using a provider URL with the "local:" Protocol

Alternatively, one can specify a "local:" provider URL when creating an initial context to a specific partition.

Example 2.2: given the partition configuration in the above examples, the following code can be used to access a JMS destination that is configured in "partition1".

Hashtable<String, String> env = new Hashtable<>();
env.put(Context.PROVIDER_URL, "local://?partitionName=partition1");
env.put(Context.SECURITY_PRINCIPAL, "weblogic");
env.put(Context.SECURITY_CREDENTIALS, "welcome1");
Context ctx = new InitialContext(env);
Object cf = ctx.lookup("jms/mycf1");
Object dest = ctx.lookup("jms/myqueue1");

The initial context will be associated with "partition1" for its lifetime.

Similarly, a Java EE application in a partition can access a domain level JNDI resource in the same cluster using “local://?partitionName=DOMAIN” as the provider URL. 

Use Case 3 - General Partition Access

A third way for a Java EE application or client to access a JMS destination in a partition is to use a "partition URL". A partition URL is intended to be used when the JMS destination is in a remote cluster (or on a remote non-clustered managed server).  A typical "partition URL" is t3://hostname:port, or t3://host:port/URI-prefix.

Partition URLs may only be used by Java EE applications or clients using WLS 12.2.1 or later (older versions should use dedicated partition ports: see below).

Example 3: given the partition configuration in the examples above, the following code can be used to access a JMS destination that is configured in "partition1".

Note that "/partition1" in the provider URL below is the uri-prefix configured in the VirtualTarget for partition1.

Hashtable<String, String> env = new Hashtable<>();
env.put(Context.PROVIDER_URL, "t3://abcdef00:7001/partition1");
env.put(Context.SECURITY_PRINCIPAL, "weblogic");
env.put(Context.SECURITY_CREDENTIALS, "welcome1");
Context ctx = new InitialContext(env);
Object cf = ctx.lookup("jms/mycf1");
Object dest = ctx.lookup("jms/myqueue1");

Although it is not a best practice, a “partition URL” can also be used to access another partition in the same JVM/cluster.

Use Case 4 – Dedicated Partition Ports

One last option is to set up dedicated ports for each partition; configuring these is described in Joe's blog about Partition Targeting and Virtual Targets.

Configuring dedicated partition ports enables applications that use ‘classic’ URLs to access a partition, and is mainly intended to enable clients and applications that are running on releases older than 12.2.1 to access partitions in a 12.2.1 or later domain.

Such older clients and applications do not support the use of host name and URI-prefix to access a partition. An attempt to use them from an older client will simply fail or may silently access the domain level JNDI name space.
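For example, an older client could connect with a classic URL; a sketch, assuming port 8011 has been dedicated to partition1 on host abcdef00:

Hashtable<String, String> env = new Hashtable<>();
env.put(Context.PROVIDER_URL, "t3://abcdef00:8011"); // assumed dedicated partition port
env.put(Context.SECURITY_PRINCIPAL, "weblogic");
env.put(Context.SECURITY_CREDENTIALS, "welcome1");
Context ctx = new InitialContext(env);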

What’s next?

I hope this article helps you to understand the basics of JMS MT! It is time to start exploring this new and exciting capability. You can find more information for messaging in MT in the Configuring Messaging chapter of Oracle® Fusion Middleware Using WebLogic Server Multitenant.
