Tuesday Jun 28, 2016

WebLogic Server Continuous Availability in 12.2.1.1

We have made enhancements to the Continuous Availability Offering in WebLogic 12.2.1.1 in the areas of Zero Downtime Patching, Cross Site Transaction Recovery, Coherence Federated Caching and Coherence Persistence. We have also enhanced the documentation to provide design considerations for the multi-data center Maximum Availability Architectures (MAA) that are supported for WebLogic Server Continuous Availability.

Zero Downtime Patching Enhancements

Enhancements in Zero Downtime Patching support updating applications running in a multitenant partition without affecting other partitions that run in the same cluster. Coherence applications can now be updated while maintaining high availability of the Coherence data during the rollout process. We have also removed the dependency on NodeManager to upgrade the WebLogic Administration Server.

  • Multitenancy support:
    • Application updates can use partition shutdown instead of server shutdowns.
    • An application in a partition can be updated on a server without affecting other partitions.
    • An application referenced by a ResourceGroupTemplate can be updated.
  • Coherence support – Users can supply a minimum safety mode for the rollout to a Coherence cluster.
  • Removed Administration Server dependency on NodeManager – The Administration Server no longer needs to be started by NodeManager.

Cross-Site Transaction Recovery

We have introduced a “site leasing” mechanism to perform automatic recovery after a site failure or mid-tier failure. Site leasing provides a more robust mechanism for failing transaction recovery over and back without imposing dependencies on the TLog that could affect the health of the servers hosting the Transaction Manager.

Every server in a site updates its lease. When the leases of all servers running in a cluster in Site 1 expire, the servers running in a cluster at a remote site assume ownership of the TLogs and recover the transactions while continuing their own transaction work. To learn more, please read Active-Active XA Transaction Recovery.

Coherence Federated Caching and Coherence Persistence Administration Enhancements

We have enhanced the WebLogic Server Administration Console to make it easier to configure Coherence Federated Caching and Coherence Persistence.

  • Coherence Federated Caching - Added the ability to set up federation with basic active-active and active-passive configurations using the Administration Console, eliminating the need to use configuration files.
  • Coherence Persistence - Added a Persistence tab in the Administration Console that provides the ability to configure persistence-related settings that apply to all services.

Documentation

In WebLogic Server 12.2.1.1 we have enhanced the document Continuous Availability for Oracle WebLogic Server to include a new chapter, “Design Considerations for Continuous Availability.” See http://docs.oracle.com/middleware/12211/wls/WLCAG/weblogic_ca_best.htm#WLCAG145.

This new chapter provides design considerations and best practices for the components of your multi-data center environments. In addition to the general best practices recommended for all continuous availability MAA architectures, we provide specific advice for each of the Continuous Availability supported topologies, and describe how the features can be used in these topologies to provide maximum high availability and disaster recovery.


Tuesday Jun 21, 2016

Oracle WebLogic Server 12.2.1.1 is Now Available

Last October, we delivered Oracle WebLogic Server 12.2.1 as part of the overall Oracle Fusion Middleware 12.2.1 Release.   As noted previously on this blog, WebLogic Server 12.2.1 delivers compelling new feature capabilities in the areas of Multitenancy, Continuous Availability, and Developer Productivity and Portability to Cloud.  

Today, we are releasing WebLogic Server 12.2.1.1, which is the first patch set release for WebLogic Server and Fusion Middleware 12.2.1.   New WebLogic Server 12.2.1.1 installers are now posted on the Oracle Technology Network and Oracle Software Delivery Cloud, and new documentation has been made available.  WebLogic Server 12.2.1.1 contains all the new features in WebLogic Server 12.2.1, and also includes an integrated, cumulative set of fixes and a small number of targeted, non-disruptive enhancements.   

For customers who have just begun evaluating WebLogic Server 12cR2, or are planning evaluation and adoption, we recommend that you adopt WebLogic Server 12.2.1.1 so that you can benefit from the maintenance and enhancements that have been included. For customers who are already running in production on WebLogic Server 12.2.1, you can continue to do so, though we encourage adoption of the WebLogic Server 12.2.1 patch sets over time.

The enhancements are primarily in the following areas:

  • Multitenancy - Improvements to Resource Consumption Management, partition security management, REST management, and Fusion Middleware Control, all targeted at multitenancy manageability and usability.
  • Continuous Availability - New documented best practices for multi data center deployments, and product improvements to Zero Downtime Patching capabilities.
  • Developer Productivity and Portability to the Cloud - The Domain to Partition Conversion Tool (D-PCT), which enables you to convert an existing domain to a WebLogic Server 12.2.1 partition, has been integrated into 12.2.1.1 with improved functionality.   So it's now easier to migrate domains and applications to WebLogic Server partitions, including partitions running in the Oracle Java Cloud Service. 
We will provide additional updates on the capabilities described above, but everything is ready for you to get started using WebLogic Server 12.2.1.1 today.   Try it out and give us your feedback!

Wednesday Apr 27, 2016

Migrating from Generic Data Source to Active GridLink

Earlier, I wrote an article about how to migrate from a Multi Data Source (MDS) for RAC connectivity to Active GridLink (AGL). That migration moves you from the older datasource technology to the newer one; both support Oracle RAC. The information is now in the public documentation set at http://docs.oracle.com/middleware/1221/wls/JDBCA/gridlink_datasources.htm#JDBCA690.

There are also many customers that are moving up from a standalone database to an Oracle RAC cluster. In this case, it’s a migration from a GENERIC datasource to an AGL datasource. This migration is pretty simple.

No changes should be required to your applications.  A standard application looks up the datasource in JNDI and uses it to get connections.  The JNDI name won’t change.

The only changes necessary should be to your configuration, and the necessary information is generally provided by your database administrator. The information needed is the new URL and, optionally, the configuration of Oracle Notification Service (ONS) on the RAC cluster. The latter is only needed if you are running with:

- an 11g driver or 11g database. Auto-ONS depends on a protocol flow between the driver and the database server that was added in 12c.

- a release earlier than WLS 12.1.3. Auto-ONS is supported starting in WLS 12.1.3.

- an Oracle wallet with SSL communications. Configuring the wallet also requires configuring the ONS information.

- a complicated ONS topology. In general, auto-ONS can figure out what you need, but there are cases where you need to specify it precisely. In WLS 12.2.1, an extension of the ONS configuration allows you to specify the exact topology using a property node list. See http://docs.oracle.com/middleware/1221/wls/JDBCA/gridlink_datasources.htm#BABGJBIC for more information.

The URL and ONS attributes are configurable but not dynamic, which means that the datasource needs to be shut down and restarted after the change. The simplest way to do this is to untarget the datasource, make the changes, and then re-target the datasource.

The recommended approach to migrate from a GENERIC to an AGL datasource is to use WLST. The URL must be changed in the JDBCDriverParams object, and the new JDBCOracleParams object (it generally doesn’t exist for a GENERIC datasource) needs to have FanEnabled set to true and, optionally, the ONS information set.

The following is a sample WLST script with the new values hard-coded. You could parameterize it and make it more flexible in handling multiple datasources. If you are using an Oracle wallet for ONS, that needs to be added to the JDBCOracleParams object as well.

# java weblogic.WLST file.py
import sys, socket, os
import jarray
from javax.management import ObjectName

hostname = socket.gethostname()
datasource="JDBC Data Source-0"
connect("weblogic","welcome1","t3://"+hostname+":7001")
edit()
startEdit()
# Untarget the datasource so that it can be modified and restarted cleanly
cd("/JDBCSystemResources/" + datasource )
targets=get("Targets")
set("Targets",jarray.array([], ObjectName))
save()
activate()
startEdit()
# Change the URL to point at the Oracle RAC service
cd("/JDBCSystemResources/" + datasource + "/JDBCResource/" +
  datasource + "/JDBCDriverParams/" + datasource )
set("Url","jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(" +
  "ADDRESS=(PROTOCOL=TCP)(HOST=dbhost)(PORT=1521)))" +
  "(CONNECT_DATA=(SERVICE_NAME=otrade)))")
# Enable FAN and, optionally, configure ONS
cd("/JDBCSystemResources/" + datasource + "/JDBCResource/" +
  datasource + "/JDBCOracleParams/" + datasource )
set("FanEnabled","true")
set("OnsNodeList","dbhost:6200")
# The following is for WLS 12.1.2 and 12.1.3 if not setting FanEnabled to true (not setting it is not recommended)
#set("ActiveGridlink","true")
# The following is for WLS 12.2.1 and later
#cd("/JDBCSystemResources/" + datasource + "/JDBCResource/" + datasource )
#set("DatasourceType", "AGL")
save()
activate()
startEdit()
# Re-target the datasource so it restarts with the new configuration
cd("/JDBCSystemResources/" + datasource )
set("Targets", targets)
save()
activate()

In WLS 12.1.2 and 12.1.3, there is an explicit ActiveGridlink flag that can be used to identify an AGL datasource, even if FanEnabled is not set to true (which is not recommended) and auto-ONS is used (12.1.2 is the first release in which auto-ONS is supported). In the script above, uncomment the line to set it only if FanEnabled and OnsNodeList are not set.

Starting in WLS 12.2.1, there is an explicit datasource type at the JDBCResource level. If that is set to GENERIC, you must re-set it using set("DatasourceType", "AGL"). In the script above, uncomment the lines to set it.  In this case, the ActiveGridlink flag is not necessary.

In the WebLogic Server Administration Console, the datasource type is read-only and there is no mechanism to change it. You can try to work around this by setting the URL, the FAN Enabled box, and the ONS information, but in 12.2.1 there is no way to re-set the datasource type in the console, and that value overrides all others.
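After the script completes, a quick read-back of the configuration confirms the migration. The following is a minimal WLST sketch, assuming the same datasource name used in the script above; the DatasourceType attribute only exists in WLS 12.2.1 and later.

connect("weblogic","welcome1","t3://localhost:7001")
domainConfig()
dsName = "JDBC Data Source-0"
cd("/JDBCSystemResources/" + dsName + "/JDBCResource/" + dsName)
print get("DatasourceType")                  # expect AGL on WLS 12.2.1 and later
cd("JDBCDriverParams/" + dsName)
print get("Url")                             # the RAC service URL set above
cd("../../JDBCOracleParams/" + dsName)
print get("FanEnabled"), get("OnsNodeList")  # FAN and ONS settings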

Sunday Apr 10, 2016

New WebLogic Server Running on Docker in Multi-Host Environments

Oracle WebLogic Server 12.2.1 is now certified to run on Docker 1.9 containers. As part of this certification, you can create Oracle WebLogic Server 12.2.1 clusters that span multiple physical hosts. The multi-host containers are built as an extension of the existing Oracle WebLogic Server 12.2.1 install images and domain images built with Dockerfiles, and of the existing Oracle Linux images. To help you with this, we have posted scripts on GitHub as examples for you to get started.

The table below describes the certification provided for WebLogic Server 12.2.1 on Docker 1.9. You can use these combinations of Oracle WebLogic Server, JDK, Linux and Docker versions when building your Docker images.

| WLS Version | JDK Version | Host OS        | Kernel | Docker Version |
| 12.2.1      | 8           | Oracle Linux 6 | UEK 4  | 1.9 or higher  |
| 12.2.1      | 8           | Oracle Linux 7 | UEK 4  | 1.9 or higher  |
Please read the earlier blog Oracle WebLogic Server 12.2.1 Running on Docker Containers for details on Oracle WebLogic Server 12.1.3 and Oracle WebLogic Server 12.2.1 certification on other versions of Docker. We support Oracle WebLogic Server in certified Docker containers running on other Linux host operating systems that have kernel 4 or later and that support Docker containers; please read our support statement. For additional details on the most current Oracle WebLogic Server supported configurations, please refer to the Oracle Fusion Middleware Certification Pages.

The scripts that support multi-host environments on GitHub are based on the latest versions of Docker Networking, Swarm, and Docker Compose. The Docker Machines participate in the Swarm, which is networked with a Docker overlay network. The WebLogic Administration Server container and the WebLogic Managed Server containers run on different VMs in the Swarm and are able to communicate with each other.

The Dockerfiles and scripts we have provided enable users to create clustered and non-clustered Oracle WebLogic Server domain configurations, including both development and production modes, running on a single host operating system or VM or on multiple hosts or VMs. Each server in the resulting domain configurations runs in its own Docker container and is capable of communicating as required with the other servers. When these containers run in a WebLogic cluster, all HA properties of the WebLogic cluster are supported, such as in-memory session replication, HTTP load balancing, and server migration.

Please check out the new WebLogic on Docker Multi Host Workshop on GitHub. This workshop takes you step by step through building a WebLogic Server domain on Docker in a multi-host environment. After the WebLogic domain has been started, an Apache Plugin web tier container is started in the Swarm; the Apache Plugin load balances invocations to an application deployed to a WebLogic cluster.

This project takes advantage of the following tools: Docker Machine, Docker Swarm, Docker Overlay Network, Docker Compose, Docker Registry, and Consul. Using the sample Dockerfiles and scripts, you can set up your environment on Docker very easily and quickly. Try it out and enjoy!

On YouTube we have a video that shows you how to create a WLS domain/cluster in a multi-host environment. For documentation on how to use these Dockerfiles and scripts, see the whitepaper on OTN. We hope you will try running the different configurations of WebLogic Server on Docker containers, and we look forward to hearing any feedback you might have.

Tuesday Feb 09, 2016

Cargo Tracker Java EE 7 Blue Prints Now Running on WebLogic 12.2.1

The Cargo Tracker project exists to serve as a blue print for developing well designed Java EE 7 applications, principally utilizing the well known Domain-Driven Design (DDD) architectural paradigm. The project provides a first-hand look at what a realistic Java EE 7 application might look like.

The project was started some time ago and runs on GlassFish 4 and Java SE 7 by default. The project has now been enhanced to run the same code base on WebLogic 12.2.1 with Java SE 8. The code is virtually unchanged, and the minor configuration differences between GlassFish and WebLogic are handled through Maven profiles. The instructions for running the project on WebLogic are available here. Feel free to give it a spin to get a feel for the Java EE 7 development experience with WebLogic 12.2.1.

Tuesday Jan 12, 2016

ZDT Technical Topic: How are Those Sessions Kept Alive Anyway?

By now you have probably read documentation or previous blog posts about how Zero Downtime Patching provides a convenient, automated method of updating a WebLogic domain in a rolling fashion. By automating the process, Zero Downtime Patching saves significant time and eliminates the potential for human error in a repetitive procedure. In addition, there are special features around replicated HTTP sessions that make sure end users do not lose their sessions at any point during the rollout process. Let's explore the technical details of maintaining session state during Zero Downtime Patching.


One of the key aspects of the WLS replicated session persistence contract is that the session may be maintained within the cluster even in the rare situation where a server crashes. However, the session persistence contract cannot guarantee that sessions will be maintained when more than a single server goes down in a short time period. This is because the session has a single copy replicated to some secondary server within the cluster. The session is only replicated when the client makes a request to update the session, so that the client's cookie can store a reference to the secondary server. Thus, if the primary server goes down and then the secondary server goes down before the session can be updated by a subsequent client request, the session is lost. The rolling nature of Zero Downtime Patching fits this pattern, and thus it must take extra care to avoid losing sessions. Administrators may have already observed that it is very easy to lose sessions by restarting one server at a time through the cluster.


Before we go into the technical details of how Zero Downtime Patching prevents the loss of sessions, it is important to note that the entire methodology relies on Oracle Traffic Director for load balancing, dynamic discovery, health checks, and session failover handling. In addition to this setup, three key features are used directly by Zero Downtime Patching to prevent the loss of sessions:

Session Handling Overview


1. Preemptive Session Replication - Session data is preemptively propagated to another server in the cluster during graceful shutdown when necessary. To get more detailed, let's examine the scenario where the ZDT rollout has shut down the server holding the HTTP session, and the next step is to shut down the server holding the replica. In that case, WebLogic Server can detect during shutdown that the session would be lost because there is no backup copy within the cluster, so the ZDT rollout ensures that WebLogic Server replicates that session to another server within the cluster.

The illustration below shows the problematic scenario where the server s1, holding the primary copy of the session, is shut down, followed by the shutdown of the server s2, holding the secondary (replica) copy. The ZDT orchestration signals that s2 should preemptively replicate any lone session copies before shutting down, so there is always a copy available within the cluster.

Preemptive Session Replication on Shutdown


2. Session State Query Protocol - Because WebLogic Server relies on the association of an HTTP session with a primary server and a secondary server, it is not sufficient to simply have the session somewhere in the cluster; the session also has to be found when the client request lands on an arbitrary server within the cluster. The ZDT rollout enables WebLogic Server instances to query other servers in the cluster for specific sessions if they don't have their own copy.

Session Fetching via Session State Protocol Query

The diagram above shows that an incoming request to a server without the session can trigger a query and once the session is found within the cluster it can be fetched so that the request can be served on the server, "s4". 


3. Orphaned Session Cleanup - Once we combine the ability to preemptively replicate session instances and the ability to fetch sessions from within the cluster, we must also take a more active approach to cleaning up instances that are fetched. Historically, WebLogic Server hasn't had to worry much about orphaned sessions. Front-end load balancers and web servers have been required to honor the session's server affinity, and in the rare case that a request landed on a server that did not contain the primary or secondary, the session would be fetched from the primary or secondary server and the orphaned copy would be left to be cleaned up on timeout or at other regular intervals. It was assumed that because the pointer to the session changed, the stored copy would never be used again. However, the ZDT rollout repeatedly presents the scenario where a session must be found within the cluster and fetched from the server that holds it. Not only can the number of session instances proliferate, all with various versions of the same session, but the cluster is now queried for the copy, and that query must find only the current replica of the session, never a stale copy.

Orphaned Session Cleanup

The above illustration shows the cleanup action after s4 has fetched the session data to serve the incoming request.  It launches the cleanup request to s3 to ensure no stale data is left within the cluster.


Summary:

Now during ZDT patching we can shut down server1 and expect that any lone session copies will be propagated to server2 without the client's knowledge. When the client does send another request, WLS will be able to handle that request and query the cluster to find the session data. The data will be fetched and used on the server handling the request. The orphaned copy will be cleaned up, and the server handling the request will go through the process of choosing its preferred secondary server to store the replica.



For more information about Zero Downtime Patching, view the documentation

(http://docs.oracle.com/middleware/1221/wls/WLZDT/configuring_patching.htm#WLZDT166)


References

https://docs.oracle.com/cd/E24329_01/web.1211/e24425/failover.htm#CLUST205

Friday Jan 08, 2016

ZDT Rollouts and Singletons

WebLogic Server offers messaging, transaction and other system services to facilitate building enterprise grade applications. Typically, services can be either clustered or singleton. Clustered services are deployed identically to each server in a cluster to provide increased scalability and reliability. The session state of one clustered server is replicated on another server in the cluster. In contrast, singleton services run on only one server in a cluster at any given point of time so as to offer specific quality of service (QOS) but most importantly to preserve data consistency. Singleton services can be JMS-related, JTA-related or user-defined. In highly available (HA) environments, it is important for all services to be up and running even during patch upgrades.

The new WebLogic Zero Downtime Patching (a.k.a ZDT patching) feature introduces a fully automated rolling upgrade solution to perform upgrades such that deployed applications continue to function and are available for end users even during the upgrade process. ZDT patching supports rolling out Oracle Home, Java Home and also updating applications. Check out these blogs or view the documentation for more information on ZDT.


Monday Jan 04, 2016

Java Rock Star Adam Bien Impressed by WebLogic 12.2.1

It is not an exaggeration to say Adam Bien is pretty close to a "household name" in the Java world. Adam is a long time Java enthusiast, author of quite a few popular books, Java Community Process (JCP) expert, Oracle ACE Director, official Oracle Java Champion and JavaOne conference Rock Star award winner. Adam most recently won the JCP member of the year award. His blog is amongst the most popular for Java developers. 

Adam recently took WebLogic 12.2.1 for a spin and was impressed. Being a developer (not unlike myself) he focused on the full Java EE 7 support in WebLogic 12.2.1. He reported his findings to Java developers on his blog. He commented on fast startup, low memory footprint, fast deployments, excellent NetBeans integration and solid Java EE 7 compliance. You can read Adam's full write-up here.

None of this of course is incidental. WebLogic is a mature product with an extremely large deployment base. With those strengths often comes the challenge of usability. Nonetheless many folks that haven't kept up-to-date with WebLogic evolution don't realize that usability and performance have long been a continued core focus. That is why folks like Adam are often pleasantly surprised when they take an objective fresh look at WebLogic. You can of course give WebLogic 12.2.1 a try yourself here. There is no need to pay anything just to try it out as you can use a free OTN developer license (this is what Adam used as per the instructions on his post). You can also use an official Docker image here.

Solid Java EE support is of course the tip of the iceberg as to what WebLogic offers. As you are aware WebLogic offers a depth and breadth of proven features geared towards mission-critical, 24x7 operational environments that few other servers come close to. One of the best ways for anyone to observe this is taking a quick glance at the latest WebLogic documentation.

Tuesday Dec 15, 2015

Even Applications can be Updated with ZDT Patching

Zero Downtime Patching enables a convenient method of updating production applications on WebLogic Server without incurring any application downtime or loss of session data for your end-users.  This new feature may be especially useful for users who want to update multiple applications at the same time, or for those who cannot take advantage of the Production Redeployment feature due to various limitations or restrictions. Now there is a convenient alternative to complex application patching methods.

This rollout is based on the process and mechanism for automating rollouts across a domain while allowing applications to continue to service requests. In addition to the reliable automation, the Zero Downtime Patching feature also combines Oracle Traffic Director (OTD) load balancer and WebLogic Server to provide some advanced techniques for preserving active sessions and even handling incompatible session state during the patching process.

To roll out an application update, follow these three simple steps.

1. Produce a copy of the updated application(s), then test and verify it. Note that the administrator is responsible for making sure that the updated application sources are distributed to the appropriate nodes. For stage mode, the updated application source needs to be available on the file system of the Admin Server so that it can distribute the application source. For nostage and external stage modes, the updated application source needs to be available on the file system of each node.

2. Create a JSON formatted file with the details of any applications that need to be updated during the rollout.


{"applications":[
{
"applicationName":"ScrabbleStage",
"patchedLocation":"/scratch/wlsconfg/Domains/Domain1221/applications/ScrabbleStagev2.war",
"backupLocation": "/scratch/wlsconfg/Domains/Domain1221/applications/ScrabbleStagev1.war"
},
{
"applicationName":"ScrabbleNoStage",
"patchedLocation":"/scratch/wlsconfg/Domains/Domain1221/applications/ScrabbleNoStagev2.war",
"backupLocation":"/scratch/wlsconfg/Domains/Domain1221/applications/ScrabbleNoStagev1.war"
},
{
"applicationName":"ScrabbleExternalStage",
"patchedLocation":"/scratch/wlsconfg/Domains/Domain1221/applications/ScrabbleExternalStagev2.war",
"backupLocation":"/scratch/wlsconfg/Domains/Domain1221/applications/ScrabbleExternalStagev1.war"
}
]}

3. Simply run the Application rollout using a WLST command like this one:

rolloutApplications(“Cluster1”, “/pathTo/applicationRolloutProperties”)

The Admin Server will start the rollout, which coordinates the rolling restart of each node in the cluster named “Cluster1”. While the servers are shut down, the original application source is moved to the specified backup location, and the new application source is copied into place. Each server in turn is then started in admin mode. While a server is in admin mode, the application redeploy command is called for that specific server, causing it to reload the new source. The server is then resumed to its original running state and serves the updated application.
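If you want to watch the rollout from WLST, you can capture the object the command returns and poll it. The following is a minimal sketch; the admin URL and credentials are placeholders, and the command returns a WorkflowProgressMBean whose getProgressString() and getStatus() calls should be checked against the WLST command reference for your release.

connect("weblogic", "welcome1", "t3://adminhost:7001")
progress = rolloutApplications("Cluster1", "/pathTo/applicationRolloutProperties")
# Poll the returned WorkflowProgressMBean (method names assumed; see the WLST reference)
print progress.getProgressString()
print progress.getStatus()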

For more information about updating Applications with Zero Downtime Patching view the documentation.

Tuesday Dec 08, 2015

WLS JNDI Multitenancy

  The most important feature introduced in WebLogic Server 12.2.1 is multi-tenancy. Before WLS 12.2.1, one WLS domain was used by one tenant. Starting in WLS 12.2.1, a WLS domain can be divided into multiple partitions so that tenants can use different partitions of one WLS domain. Multiple tenants can then share one WLS domain without influencing each other, so isolation of resources between partitions is key. Since JNDI is a common way to access these resources, the main goal for JNDI in WLS 12.2.1 is to isolate JNDI resources.

  Before WLS 12.2.1, there was only one global JNDI tree per WLS domain. It is difficult to support multiple partitions with one global JNDI tree, because each partition requires a unique, isolated namespace. For example, multiple partitions may use the same JNDI name to bind and look up JNDI resources separately, which would result in a NameAlreadyBoundException. To isolate JNDI resources in different partitions, every partition has its own global JNDI tree since WLS 12.2.1, so a tenant can operate on a JNDI resource in one partition without a name conflict with another partition. The application-scoped JNDI tree is only visible inside the application, so it is naturally isolated; there is no change for application-scoped JNDI trees in WLS 12.2.1. Let us see how to access JNDI resources in a partition.

Access JNDI resource in partition

  To access JNDI resources in a partition, we need to add partition information to the provider URL property when creating the InitialContext.

  Runtime environment:

    Managed servers:  ms1, ms2
    Cluster:          contains managed servers ms1 and ms2
    Virtual targets:  VT1 targeted to managed server ms1, VT2 targeted to the cluster
    Partitions:       Partition1 has available target VT1, Partition2 has available target VT2

  We need to add partition1 information to the properties when creating the InitialContext to access JNDI resources in partition1.

    Hashtable<String, String> env = new Hashtable<>();
    env.put(Context.PROVIDER_URL, "t3://ms1:7001/partition1");  
    env.put(Context.SECURITY_PRINCIPAL, "weblogic");
    env.put(Context.SECURITY_CREDENTIALS, "welcome1");
    Context ctx = new InitialContext(env);
    Object c = ctx.lookup("jdbc/ds1");

  Partition2 runs on the cluster, so we can use the cluster address format in the properties when creating the InitialContext.

    Hashtable<String, String> env = new Hashtable<>();
    env.put(Context.PROVIDER_URL, "t3://ms1:7001,ms2:7003/partition2");
    env.put(Context.SECURITY_PRINCIPAL, "weblogic");
    env.put(Context.SECURITY_CREDENTIALS, "welcome1");
    Context ctx = new InitialContext(env);
    Object c = ctx.lookup("jdbc/ds1");

  In WebLogic, we can create a Foreign JNDI Provider to link to JNDI resources in another server. In WLS 12.2.1, we can also use a Foreign JNDI Provider to link to JNDI resources in a specified partition by adding the partition information to the configuration. This partition information, including the provider URL, user, and password, is used to create the JNDI context. The following is an example of a Foreign JNDI Provider configuration in partition1; this provider links to partition2.

<foreign-jndi-provider-override>
  <name>jndi_provider_rgt</name>
  <initial-context-factory>weblogic.jndi.WLInitialContextFactory</initial-context-factory>
  <provider-url>t3://ms1:7001,ms2:7003/partition2</provider-url>
  <password-encrypted>{AES}6pyJXtrS5m/r4pwFT2EXQRsxUOu2n3YEcKJEvZzxZ7M=</password-encrypted>
  <user>weblogic</user>
  <foreign-jndi-link>
    <name>link_rgt_2</name>
    <local-jndi-name>partition_Name</local-jndi-name>
    <remote-jndi-name>weblogic.partitionName</remote-jndi-name>
  </foreign-jndi-link>
</foreign-jndi-provider-override>

Stickiness of JNDI Context

  When a JNDI context is created, the context is associated with a specific partition. All subsequent JNDI operations are then done within the JNDI tree of the associated partition, not the current one, and this association remains even if the context is used by a different thread than the one that created it. If the provider URL property is set in the environment when the JNDI context is created, the partition specified in the provider URL is associated; otherwise, the JNDI context is associated with the current partition. The sketch below illustrates this behavior.
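  The following is a minimal Jython sketch (the host, credentials, and JNDI name are the same placeholders used earlier) illustrating the stickiness: a context created against partition1 keeps resolving names in partition1's JNDI tree even when it is used from a different thread.

from java.util import Hashtable
from java.lang import Thread, Runnable
from javax.naming import Context, InitialContext

env = Hashtable()
env.put(Context.PROVIDER_URL, "t3://ms1:7001/partition1")
env.put(Context.SECURITY_PRINCIPAL, "weblogic")
env.put(Context.SECURITY_CREDENTIALS, "welcome1")
ctx = InitialContext(env)          # associated with partition1 at creation time

class LookupTask(Runnable):
    def run(self):
        # Still resolved against partition1's JNDI tree, regardless of which
        # partition (if any) is associated with this thread.
        print ctx.lookup("jdbc/ds1")

Thread(LookupTask()).start()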

Life cycle of Partition JNDI service

  Before WLS 12.2.1, the JNDI service life cycle was the same as that of the WebLogic Server instance. In WLS 12.2.1, every partition owns its own global JNDI tree, so the JNDI service life cycle is the same as that of the partition. As soon as a partition starts up, the JNDI service of that partition is available, and when the partition shuts down, the JNDI service of that partition is destroyed.

Monday Dec 07, 2015

Three Easy Steps to a Patched Domain Using ZDT Patching and OPatchAuto

Now that you’ve seen how easy it is to update WebLogic by rolling out a new patched OracleHome to your managed servers, let’s go one step further and see how we can automate the preparation and distribution parts of that operation as well.

ZDT Patching is integrated with a great new tool in 12.2.1 called OPatchAuto. OPatchAuto is a single interface that allows you to apply patches to an OracleHome, distribute the patched OracleHome to all the nodes you want to update, and start the OracleHome rollout, in just three steps.

1. The first step is to create a patched OracleHome archive (.jar) based on combining an OracleHome in your production environment with the desired patch or patchSetUpdate. This operation will make a copy of that OracleHome so it will not affect the production environment. It will then apply the specified patches to the copy of the OracleHome and create the archive from it. This is the archive that the rollout will use when the time comes, but first it needs to be copied to all of the targeted nodes.

The OPatchAuto command for the first step looks like this:

${ORACLE_HOME}/OPatch/auto/core/bin/opatchauto.sh apply /pathTo/PatchHome -create-image -image-location /pathTo/image.jar -oop -oh /pathTo/OracleHome

PatchHome is a directory or file containing the patch or patchSetUpdate to apply.

image-location is where to put the resulting image file

-oop means “out-of-place” and tells OPatchAuto to copy the source OracleHome before applying the patches

2.  The second step is to copy the patched OracleHome archive created in step one to all of the targeted nodes. One cool thing about this step is that since OPatchAuto is integrated with ZDT Patching, you can give OPatchAuto the same target you would use with ZDT Patching, and it will ask ZDT Patching to calculate the nodes automatically. Here’s an example of what this command might look like:

${ORACLE_HOME}/OPatch/auto/core/bin/opatchauto.sh apply -plan wls-push-image -image-location /pathTo/image.jar -wls-admin-host ${ADMINHOSTNAME}:7001 -wls-target Cluster1 -remote-image-location /pathTo/image.jar -wallet ${WALLET_DIR}

image-location is the jar file created in the first step

wls-target can be a domain name, cluster name, or list of clusters

Note that if you do not already have a wallet for SSH authorization to the remote hosts, you may need to configure one first.

3.  The last step is using OPatchAuto to invoke the ZDT Patching OracleHome rollout. You could switch to WLST at this point and start it as described in the previous post, but OPatchAuto will monitor the progress of the rollout and give you some helpful feedback as well. The command to start the rollout through OPatchAuto looks like this:

${ORACLE_HOME}/OPatch/auto/core/bin/opatchauto.sh apply -plan wls-zdt-rollout -image-location /pathTo/image.jar -wls-admin-host ${ADMINHOSTNAME}:7001 -wls-target Cluster1 -backup-home /pathTo/home-backup -remote-image-location /pathTo/image.jar -wallet ${WALLET_DIR}

image-location is the jar file created in the first step

backup-home is the location on each remote node to store the backup of the original OracleHome

image-location and remote-image-location are both specified so that if a node is encountered that is missing the image, it can be copied automatically. This is also why the wallet is specified here again

One more great thing to consider when looking at automating the entire process is how easy it would be to use these same commands to distribute and rollout the same patched OracleHome archive to a test environment for verification. Once verification is passed, a minor change to the same two commands will push the exact same (verified) OracleHome archive out to a production environment.

For more information about updating OracleHome with Zero Downtime Patching and OPatchAuto, view the documentation.

Wednesday Nov 18, 2015

Patching Oracle Home Across your Domain with ZDT Patching

Now it’s time for the really good stuff! In this post, you will see how Zero Downtime (ZDT) Patching can be used to roll out a patched WebLogic OracleHome directory to all your managed servers (and optionally to your AdminServer) without incurring any downtime or loss of session data for your end users.

This rollout, like the others, is based on the controlled rolling shutdown of nodes, using the Oracle Traffic Director (OTD) load balancer to route user requests around the offline node. The difference with this rollout is what happens while the managed servers are shut down: the rollout actually moves the current OracleHome directory to a backup location and replaces it with a patched OracleHome directory that the administrator has prepared, verified, and distributed in advance. (More on the preparation in a moment.)

When everything has been prepared, starting the rollout is simply a matter of issuing a WLST command like this one:

rolloutOracleHome(“Cluster1”, “/pathTo/PatchedOracleHome.jar”, “/pathTo/BackupCurrentOracleHome”, “FALSE”)

The AdminServer will then check that the PatchedOracleHome.jar file exists everywhere that it should, and it will begin the rollout. Note that the “FALSE” flag simply indicates that this is not a domain level rollback operation where we would be required to update the AdminServer last instead of first.

In order to prepare the patched OracleHome directory, as mentioned above, the user can start with a copy of a production OracleHome, usually in a test (non-production) environment, and apply the desired patches in whatever way is already familiar to them. Once this is done, the administrator uses the included CIE tool copyBinary to create a distributable jar archive of the OracleHome. Once the jar archive of the patched OracleHome directory has been created, it can be distributed to all of the nodes that will be updated. Note that it needs to reside on the same path for all nodes. With that, the preparation is complete and the rollout can begin!
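For reference, a typical copyBinary invocation looks something like the line below; the archive path is illustrative, and you should check the copyBinary documentation for the options that apply to your installation.

${ORACLE_HOME}/oracle_common/bin/copyBinary.sh -javaHome ${JAVA_HOME} -archiveLoc /pathTo/PatchedOracleHome.jar -sourceMWHomeLoc ${ORACLE_HOME}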

Be sure to check back soon to read about how the preparation phase has been automated as well by integrating ZDT Patching with another new tool called OPatchAuto.


For more information about updating OracleHome with Zero Downtime Patching, view the documentation.

Friday Nov 13, 2015

Oracle WebLogic Server 12.2.1 Running on Docker Containers

UPDATE April 2016 - We now officially certify and support WebLogic 12.1.3 and WebLogic 12.2.1 Clusters in multi-host environments! For more information see this blog post. The Docker configuration files are also now maintained on the official Oracle GitHub Docker repository.  Links in the Docker section of this article have also been updated to reflect the latest updates and changes. For more up to date information on Docker scripts and support, check the Oracle GitHub project docker-images.


Oracle WebLogic Server 12.2.1 is now certified to run on Docker containers. As part of this certification, we are releasing Dockerfiles on GitHub to create Oracle WebLogic Server 12.2.1 install images and Oracle WebLogic Server 12.2.1 domain images. These images are built as an extension of the existing Oracle Linux images. To help you with this, we have posted Dockerfiles and scripts on GitHub as examples for you to get started.

Docker is a platform that enables users to build, package, ship and run distributed applications. Docker users package up their applications, and any dependent libraries or files, into a Docker image. Docker images are portable artifacts that can be distributed across Linux environments. Images that have been distributed can be used to instantiate containers where applications can run in isolation from other applications running in other containers on the same host operating system.

The table below describes the certification provided for various WebLogic Server versions. You can use these combinations of Oracle WebLogic Server, JDK, Linux and Docker versions when building your Docker images.

| Oracle WebLogic Server Version | JDK Version | Host OS                             | Kernel Version                          | Docker Version  |
| 12.2.1.0.0                     | 8           | Oracle Linux 6 Update 6 or higher   | UEK Release 3 (3.8.13)                  | 1.7 or higher   |
| 12.2.1.0.0                     | 8           | Oracle Linux 7 or higher            | UEK Release 3 (3.8.13) or RHCK 3 (3.10) | 1.7 or higher   |
| 12.2.1.0.0                     | 8           | RedHat Enterprise Linux 7 or higher | RHCK 3 (3.10)                           | 1.7 or higher   |
| 12.1.3.0.0                     | 7/8         | Oracle Linux 6 Update 5 or higher   | UEK Release 3 (3.8.13)                  | 1.3.3 or higher |
| 12.1.3.0.0                     | 7/8         | Oracle Linux 7 or higher            | UEK Release 3 (3.8.13) or RHCK 3 (3.10) | 1.3.3 or higher |
| 12.1.3.0.0                     | 7/8         | RedHat Enterprise Linux 7 or higher | RHCK 3 (3.10)                           | 1.3.3 or higher |

We support Oracle WebLogic Server in certified Docker containers running on other Linux host operating systems that have kernel 3.8.13 or later and that support Docker containers; please read our support statement. For additional details on the most current Oracle WebLogic Server supported configurations, please refer to the Oracle Fusion Middleware Certification Pages.

The Dockerfiles and scripts we have provided enable users to create clustered and non-clustered Oracle WebLogic Server domain configurations, including both development and production modes, running on a single host operating system or VM. Each server in the resulting domain configurations runs in its own Docker container and is capable of communicating as required with the other servers.


A topology in line with the “Docker way” for containerized applications and services consists of containers designed to run only an Administration Server containing all resources, shared libraries, and deployments. These Docker containers can all be on a single physical or virtual Linux host, or on multiple physical or virtual Linux hosts. The Dockerfiles on GitHub that create an image with a WebLogic Server domain can be used to start these Administration Server containers.

For documentation on how to use these Dockerfiles and scripts, see the whitepaper on OTN. The Oracle WebLogic Server video and demo present our certification effort and show a demo of WebLogic Server 12.2.1 running on Docker containers. We hope you will try running the different configurations of WebLogic Server on Docker containers, and we look forward to hearing any feedback you might have.

Monday Nov 09, 2015

Update your Java Version Easily with ZDT Patching

Another great feature of ZDT Patching is that it provides a simple way to update the Java version used to run WebLogic. Keeping up to date with Java security patches is an ongoing task of critical importance. Prior to ZDT Patching, there was no easy way to migrate all of your managed servers to a new Java version, but ZDT Patching makes this a simple two-step procedure.

The first step is to install the updated Java version to all of the nodes that you will be updating. This can be done manually or by using any of the normal software distribution tools typically used to manage enterprise software installations. This operation can be done outside a planned maintenance window as it will not affect any running servers. Note that when installing the new Java version, it must not overwrite the existing Java directory, and the location of the new directory must be the same on every node.

The second step is to simply run the Java rollout using a WLST command like this one:

rolloutJavaHome(“Cluster1”, “/pathTo/jdk1.8.0_66”)

In this example, the Admin Server will start the rollout to coordinate the rolling restart of each node in the cluster named “Cluster1”. While the managed servers and NodeManager on a given node are down, the path to the Java executable that they are started with will be updated. The rollout will then start the managed servers and NodeManager from the new Java path.

Easy as that!

For more information about upgrading Java with Zero Downtime Patching, view the documentation.

Wednesday Nov 04, 2015

Application MBeans Visibility in Oracle WebLogic Server 12.2.1

Oracle WebLogic Server (WLS) version 12.2.1 supports a feature called Multi-Tenancy (WLS MT). WLS MT introduces the partition, partition administrator, and partition resource concepts. Partition isolation is enforced when accessing resources (e.g., MBeans) in a domain. WLS administrators can see MBeans in the domain and the partitions, but a partition administrator, as well as other partition roles, is only allowed to see the MBeans in their own partition, not in other partitions.

In this article, I will explore the visibility support for application MBeans to demonstrate partition isolation in WLS MT in 12.2.1. This includes:

  • An overview of application MBean visibility in WLS MT
  • A simple use case that demonstrates which MBeans are registered on a WLS MBeanServer and which MBeans are visible to WLS administrators or partition administrators
  • Links to reference materials for more information

The use case in this article is based on a domain created in another article, "Create WebLogic Server Domain with Partitions using WLST in 12.2.1". In this article, I will:

  • Briefly show the domain topology
  • Demonstrate how to deploy an application to the domain and partitions
  • Demonstrate how to access the application MBeans via JMX clients using a global/domain URL or a partition-specific URL
  • Demonstrate how to enable debugging/logging

1. Overview

An application can be deployed to WLS servers on a per-partition basis, so the application is multiplied across multiple partitions. WLS contains three MBeanServers: the Domain Runtime MBeanServer, the Runtime MBeanServer, and the Edit MBeanServer. Each MBeanServer can be used for all partitions, so WLS needs to ensure that the MBeans registered on each MBeanServer by the application are unique for each partition.

The application MBean visibility in WLS MT can be illustrated by several parts:

  • Partition Isolation
  • Application MBeans Registration
  • Query Application MBeans
  • Access Application MBeans

1.1 Partition Isolation

A WLS administrator can see application MBeans in partitions. But a partition administrator for a partition is not able to see application MBeans from the domain or other partitions.  

1.2 Application MBeans Registration

When an application is deployed to a partition, application MBeans are registered during the application deployment. WLS adds a partition-specific key (e.g., Partition=<partition name>) to the MBean ObjectNames when registering them onto the WLS MBeanServer. This ensures that the MBean ObjectNames are unique when registered from a multiplied application.

For example, consider a WLS domain configured with two partitions, cokePartition and pepsiPartition, and an application that registers one MBean, e.g., testDomain:type=testType, during application deployment.

The application is deployed to the WLS domain, to cokePartition, and to pepsiPartition. Since a single WLS MBeanServer instance is shared by the domain, cokePartition, and pepsiPartition, there are three application MBeans registered on the same MBeanServer after the three deployments:

  • An MBean that belongs to the domain: testDomain:type=testType
  • An MBean that belongs to cokePartition: testDomain:Partition=cokePartition,type=testType
  • An MBean that belongs to pepsiPartition: testDomain:Partition=pepsiPartition,type=testType

The MBeans that belong to the partitions contain a Partition key property in their ObjectNames.

1.3 Query Application MBeans

JMX clients (e.g., WebLogic WLST, JConsole) connect to a global/domain URL or a partition-specific URL and then run a query on the WebLogic MBeanServer. The query results differ:

  • When connecting to a global/domain URL, the application MBeans that belong to the partitions are visible to those JMX clients.
  • When connecting to a partition-specific URL, WLS filters the query results. Only the application MBeans that belong to that partition are returned; MBeans belonging to the domain and to other partitions are filtered out.

1.4 Access Application MBeans

JMX clients (e.g., WebLogic WLST, JConsole) connect to a global/domain URL or a partition-specific URL and perform a JMX operation, e.g., getAttribute(<MBean ObjectName>, <attributeName>). The JMX operation is actually performed on different MBeans:

  • When connecting to a global/domain URL, the getAttribute() is called on the MBean that belongs to the domain. (The MBean without the Partition key property on the MBean ObjectName.)
  • When connecting to a partition specific URL, the getAttribute() is called on the MBean that belongs to that partition. (The MBean with the Partition key property on the MBean ObjectName.)

2. Use case

Now I will demonstrate how MBean visibility works in WebLogic Server MT in 12.2.1 to support partition isolation. 

2.1 Domain with Partitions

In the article "Create WebLogic Server Domain with Partitions using WLST in 12.2.1", a domain with 2 partitions: coke and pepsi is created. This domain is also used for the use case in this article. Here is the summary of the domain topology:

  • A domain is configured with one AdminServer named "admin", one partition named "coke", and one partition named "pepsi".
  • The "coke" partition contains one resource group named "coke-rg1", targeted to a virtual target named "coke-vt".
  • The "pepsi" partition contains one resource group named "pepsi-rg1", targeted to a virtual target named "pepsi-vt".

More specifically, each domain/partition has the following configuration values:

|                 | Name        | User Name | Password |
| Domain          | base_domain | weblogic  | welcome1 |
| Coke Partition  | coke        | mtadmin1  | welcome1 |
| Pepsi Partition | pepsi       | mtadmin2  | welcome2 |

Please see details in the article "Create Oracle WebLogic Server Domain with Partitions using WLST in 12.2.1" on how to create this domain.

2.2 Application deployment

When the domain is set up and started, an application, "helloTenant.ear", is deployed to the domain. It is also deployed to "coke-rg1" in the "coke" partition and to "pepsi-rg1" in the "pepsi" partition. The deployment can be done using different WLS tools, such as the FMW Console, WLST, etc. Below are the WLST commands that deploy the application to the domain and the partitions:

startEdit()
deploy(appName='helloTenant',target='admin',path='${path-to-the-ear-file}/helloTenant.ear')
deploy(appName='helloTenant-coke',partition='coke',resourceGroup='coke-rg1',path='${path-to-the-ear-file}/helloTenant.ear')
deploy(appName='helloTenant-pepsi',partition='pepsi',resourceGroup='pepsi-rg1',path='${path-to-the-ear-file}/helloTenant.ear')
save()
activate()

For other WLS deployment tools, please see the "Reference" section.

2.3 Access Application MBeans

During the application deployment, application MBeans are registered onto the WebLogic Server MBeanServer. As mentioned in the previous section 1.2 Application MBean Registration, multiple MBeans are registered, even though there is only one application.

There are multiple ways to access application MBeans:

  • WLST
  • JConsole
  • JSR 160 APIs

2.3.1 WLST

The WebLogic Scripting Tool (WLST) is a command-line scripting interface that system administrators and operators use to monitor and manage WebLogic Server instances and domains. To start WLST:

$MW_HOME/oracle_common/common/bin/wlst.sh

Once WLST is started, a user can connect to the server by providing a connection URL. The sections below show the different values of an application MBean attribute seen by the WLS administrator or a partition administrator when different connection URLs are provided.

2.3.1.1 WLS administrator

WLS administrator 'weblogic' connects to the domain using the following connect command:

connect("weblogic", "welcome1", "t3://localhost:7001")

There are three MBeans registered on the WebLogic Server MBeanServer whose JMX domain is "test.domain"; the value of the "PartitionName" attribute on each MBean is listed below.

  • test.domain:Partition=coke,type=testType,name=testName
    • belongs to the coke partition. The value of the PartitionName attribute is "coke"
  • test.domain:Partition=pepsi,type=testType,name=testName
    • belongs to the pepsi partition. The value of the PartitionName attribute is "pepsi"
  • test.domain:type=testType,name=testName
    • belongs to the domain. No Partition key property in the ObjectName. The value of the PartitionName attribute is "DOMAIN"

MBeans belonging to a partition contain a Partition key property in the ObjectName; this key property is added internally by WLS when the MBeans are registered in a partition context. A WLST sketch of this query is shown below.
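If you prefer to script this check rather than browse interactively, the query can also be run from WLST. The following is a minimal sketch using the same credentials and example ObjectName as above; after connect(), WLST exposes the MBeanServerConnection of the current tree through the mbs variable, and serverRuntime() switches that tree to the WLS Runtime MBeanServer.

from javax.management import ObjectName

connect("weblogic", "welcome1", "t3://localhost:7001")
serverRuntime()
pattern = ObjectName("test.domain:type=testType,name=testName,*")
for name in mbs.queryNames(pattern, None):
    print name, "->", mbs.getAttribute(name, "PartitionName")

Connecting as a partition administrator with the partition-specific URL (for example, t3://localhost:7001/coke) and running the same query returns only that partition's MBean, matching the results shown in the following sections.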

2.3.1.2 Partition administrator for coke

Similarly, the partition administrator 'mtadmin1' for coke can connect to the coke partition. The connection URL uses "/coke", which is the URI prefix defined in the virtual target coke-vt (check config/config.xml in the domain).

connect("mtadmin1", "welcome1", "t3://localhost:7001/coke")

When connecting to the coke partition, there is only one MBean listed:

test.domain:type=testType,name=testName

Even though there is no Partition key property in the ObjectName, this MBean still belongs to the coke partition. The value of the PartitionName attribute is "coke".

2.3.1.3 Partition administrator for pepsi

Similarly, the partition administrator 'mtadmin2' for pepsi can connect to the pepsi partition. The connection URL uses "/pepsi", which is the URI prefix defined in the virtual target pepsi-vt.

connect("mtadmin2", "welcome2", "t3://localhost:7001/pepsi")

When connecting to the pepsi partition, there is only one MBean listed:

test.domain:type=testType,name=testName

Even though there is no Partition key property in the ObjectName, just like the one seen by the partition administrator for coke, this MBean still belongs to the pepsi partition. The value of the PartitionName attribute is "pepsi".

2.3.2 JConsole

The JConsole graphical user interface is a built-in tool in the JDK. It is a monitoring tool that complies with the Java Management Extensions (JMX) specification. Using JConsole, you can get an overview of the MBeans registered on the MBeanServer.

To start JConsole, do this:

$JAVA_HOME/bin/jconsole \
  -J-Djava.class.path=$JAVA_HOME/lib/jconsole.jar:$JAVA_HOME/lib/tools.jar:$MW_HOME/wlserver/server/lib/wljmxclient.jar \
  -J-Djmx.remote.protocol.provider.pkgs=weblogic.management.remote

where $MW_HOME is the location where WebLogic Server is installed.

Once JConsole is started, the WLS administrator and partition administrators can use it to browse the MBeans, given the appropriate credentials and JMX service URL.

2.3.2.1 WLS administrator

The WLS administrator "weblogic" provides an JMX service URL to connect to the WLS Runtime MBeanServer like below:

service:jmx:t3://localhost:7001/jndi/weblogic.management.mbeanservers.runtime

When connected as a WLS administrator, the MBean tree in JConsole shows three MBeans with "test.domain" in the ObjectName:

  • the MBean that belongs to the coke partition, with the key property Partition=coke
  • the MBean that belongs to the pepsi partition, with the key property Partition=pepsi
  • the MBean that belongs to the domain, with no Partition key property

The result here is consistent with what we have seen in WLST for the WLS administrator.

2.3.2.2 Partition administrator for coke

The partition administrator "mtadmin1" provides a different JMX service URL to JConsole:

service:jmx:t3://localhost:7001/coke/jndi/weblogic.management.mbeanservers.runtime

When connected via the partition-specific JMX service URL, the partition administrator can see only one MBean:

test.domain:type=testType,name=testName

This MBean belongs to the coke partition and the value of the PartitionName attribute is "coke"; however, there is no Partition key property in the ObjectName.

2.3.2.3 Partition administrator for pepsi

The partition administrator "mtadmin2" provides a different JMX service URL to JConsole:

service:jmx:t3://localhost:7001/pepsi/jndi/weblogic.management.mbeanservers.runtime

When connected via the partition-specific JMX service URL, the partition administrator "mtadmin2" can see only one MBean:

test.domain:type=testType,name=testName

This MBean belongs to the pepsi partition and the value of the PartitionName attribute is pepsi, as shown in the picture below.

2.3.3 JSR 160 APIs

JMX clients can use JSR 160 APIs to access the MBeans registered on the MBeanServer. For example, the code below shows how to get a JMXConnector by providing a service URL and an environment map, and then read an MBean attribute:

import javax.management.*;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import java.util.*;

public class TestJMXConnection {
    public static void main(String[] args) throws Exception {
        JMXConnector jmxCon = null;
        try {
            // Connect to the WLS Runtime MBean Server through a JMXConnector
            JMXServiceURL serviceUrl = new JMXServiceURL(
                "service:jmx:t3://localhost:7001/jndi/weblogic.management.mbeanservers.runtime");
            System.out.println("Connecting to: " + serviceUrl);
            Hashtable<String, String> env = new Hashtable<String, String>();
            env.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES, "weblogic.management.remote");
            env.put(javax.naming.Context.SECURITY_PRINCIPAL, "weblogic");
            env.put(javax.naming.Context.SECURITY_CREDENTIALS, "welcome1");
            jmxCon = JMXConnectorFactory.newJMXConnector(serviceUrl, env);
            jmxCon.connect();

            // Query the MBeans and print the PartitionName attribute of each match
            MBeanServerConnection con = jmxCon.getMBeanServerConnection();
            ObjectName oname = new ObjectName("test.domain:type=testType,name=testName,*");
            Set<ObjectName> queryResults = con.queryNames(oname, null);
            for (ObjectName theName : queryResults) {
                System.out.print("queryNames(): " + theName);
                String partitionName = (String) con.getAttribute(theName, "PartitionName");
                System.out.println(", Attribute PartitionName: " + partitionName);
            }
        } finally {
            if (jmxCon != null)
                jmxCon.close();
        }
    }
}

To compile and run this code, provide wljmxclient.jar on the classpath, for example:

$JAVA_HOME/bin/java -classpath $MW_HOME/wlserver/server/lib/wljmxclient.jar:. TestJMXConnection

You will get the results below:

Connecting to: service:jmx:t3://localhost:7001/jndi/weblogic.management.mbeanservers.runtime
queryNames(): test.domain:Partition=pepsi,type=testType,name=testName, Attribute PartitionName: pepsi
queryNames(): test.domain:Partition=coke,type=testType,name=testName, Attribute PartitionName: coke
queryNames(): test.domain:type=testType,name=testName, Attribute PartitionName: DOMAIN

When the code is changed to use the partition administrator "mtadmin1":

JMXServiceURL serviceUrl = new JMXServiceURL(
"service:jmx:t3://localhost:7001/coke/jndi/weblogic.management.mbeanservers.runtime");
env.put(javax.naming.Context.SECURITY_PRINCIPAL, "mtadmin1");
env.put(javax.naming.Context.SECURITY_CREDENTIALS, "welcome1");

Running the code will return only one MBean:

Connecting to: service:jmx:t3://localhost:7001/coke/jndi/weblogic.management.mbeanservers.runtime
queryNames(): test.domain:type=testType,name=testName, Attribute PartitionName: coke

Similar results would be seen for the partition administrator for pepsi. If a pepsi-specific JMX service URL is provided, only the MBean that belongs to the pepsi partition is returned.

2.4 Enable logging/debugging flags

If it appears that an MBean is not behaving correctly in WebLogic Server 12.2.1, for example:

  • A partition administrator can see MBeans from the global domain or from other partitions when querying MBeans, or
  • JMX exceptions, e.g., javax.management.InstanceNotFoundException, are thrown when accessing an MBean,

try the following to triage the errors:

  • If it is a connection problem in JConsole, add -debug to the JConsole command line when starting JConsole.
  • If a partition administrator can see MBeans from the global domain or other partitions when querying MBeans:
    • When connecting with JMX clients, e.g., WLST, JConsole, or JSR 160 APIs, make sure the host name in the service URL matches the host name defined in the virtual target in the domain's config/config.xml.
    • Make sure the URI prefix in the service URL matches the URI prefix defined in the virtual target in the domain's config/config.xml.
  • If JMX exceptions, e.g., javax.management.InstanceNotFoundException, occur when accessing an MBean:
    • When the MBean belongs to a partition, make sure the partition is started. Application deployment only happens when the partition is started.
    • Enable the debug flags during the server startup, like this:
      • -Dweblogic.StdoutDebugEnabled=true -Dweblogic.log.LogSeverity=Debug -Dweblogic.log.LoggerSeverity=Debug -Dweblogic.debug.DebugPartitionJMX=true -Dweblogic.debug.DebugCIC=false
    • Search the server logs for the specific MBean ObjectName you are interested in. Make sure the MBean you are debugging is registered in the correct partition context and that the MBean operation is called in the correct partition context.

Here are sample debug messages for the MBean "test.domain:type=testType,name=testName" related to the MBean registration, queryNames() invocation, and getAttribute() invocation.

<Oct 21, 2015 11:36:43 PM PDT> <Debug> <PartitionJMX> <BEA-000000> <Calling register MBean test.domain:type=testType,name=testName in partition DOMAIN>
<Oct 21, 2015 11:36:44 PM PDT> <Debug> <PartitionJMX> <BEA-000000> <Calling register MBean test.domain:Partition=coke,type=testType,name=testName in partition coke>
<Oct 21, 2015 11:36:45 PM PDT> <Debug> <PartitionJMX> <BEA-000000> <Calling register MBean test.domain:Partition=pepsi,type=testType,name=testName in partition pepsi>
<Oct 21, 2015 11:36:56 PM PDT> <Debug> <PartitionJMX> <BEA-000000> <queryNames on MBean test.domain:Partition=coke,type=testType,name=testName,* in partition coke>
<Oct 21, 2015 11:36:56 PM PDT> <Debug> <MBeanCIC> <BEA-000000> <getAttribute: MBean: test.domain:Partition=coke,type=testType,name=testName, CIC: (pId = 2d044835-3ca9-4928-915f-6bd1d158f490, pName = coke, appId = helloTenant$coke, appName = helloTenant, appVersion = null, mId = null, compName = null)>

 

    • To check why the partition context is not right, turn on this debug flag, in addition to the debug flags mentioned above, when starting the WLS servers:
      • -Dweblogic.debug.DebugCIC=true. Once this flag is used, many messages are logged in the server log. Search for the messages logged by the DebugCIC logger, like
        ExecuteThread: '<thread id #>' for queue: 'weblogic.kernel.Default (self-tuning)'): Pushed 

        and the messages logged by the DebugPartitionJMX logger.

<Oct 21, 2015, 23:59:34 PDT> INVCTXT (24-[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'): Pushed [(pId = 0, pName = DOMAIN, appId = null, appName = null, appVersion = null, mId = null, compName = null)] on top of [(pId = 0, pName = DOMAIN, appId = null, appName = null, appVersion = null, mId = null, compName = null)]. New size is [2]. Pushed by [weblogic.application.ComponentInvocationContextManagerImpl.pushComponentInvocationContext(ComponentInvocationContextManagerImpl.java:173)
...
<Oct 21, 2015 11:59:34 PM PDT> <Debug> <PartitionJMX> <BEA-000000> <Calling register MBean test.domain:type=testType,name=testName in partition DOMAIN>
...
<Oct 21, 2015, 23:59:37 PDT> INVCTXT (29-[STANDBY] ExecuteThread: '2' for queue: 'weblogic.kernel.Default (self-tuning)'): Pushed [(pId = 2d044835-3ca9-4928-915f-6bd1d158f490, pName = coke, appId = helloTenant$coke, appName = helloTenant, appVersion = null, mId = null, compName = null)] on top of [(pId = 2d044835-3ca9-4928-915f-6bd1d158f490, pName = coke, appId = null, appName = null, appVersion = null, mId = null, compName = null)]. New size is [3]. Pushed by
[weblogic.application.ComponentInvocationContextManagerImpl.pushComponentInvocationContext(ComponentInvocationContextManagerImpl.java:173)
...
<Oct 21, 2015 11:59:37 PM PDT> <Debug> <PartitionJMX> <BEA-000000> <Calling register MBean test.domain:Partition=coke,type=testType,name=testName in partition coke>

3. Conclusion

WebLogic Server 12.2.1 provides a new feature: Multi-Tenancy (MT). With this feature, partition isolation is enforced. Applications can be deployed to the domain and to partitions. Users in one partition cannot see the resources in other partitions, including MBeans registered by applications. This article used a simple use case to demonstrate how application MBeans are affected by partition isolation with regard to MBean visibility. For more detailed information, see the "References" section.

4. References

WebLogic Server domain

Config Wizard

WLST command reference  

JConsole

Managing WebLogic Server with JConsole

JSR 160: Java Management Extensions (JMX) Remote API

WebLogic Server Security

WebLogic Server Deployment

 

 
