
Proactive insights, news and tips from Oracle WebLogic Server Support. Learn Oracle from Oracle.

Recent Posts

Make WebLogic Domain Provisioning and Deployment Easy!

The Oracle WebLogic Deploy Tooling (WDT) makes the automation of WebLogic Server domain provisioning and application deployment easy. Instead of writing WLST scripts that need to be maintained, WDT creates a declarative metadata model that describes the domain, the applications, and the resources used by the applications. This metadata model makes it easy to provision, deploy, and perform domain lifecycle operations in a repeatable fashion, which makes it perfect for the continuous delivery of applications. The WebLogic Deploy Tooling provides maximum flexibility by supporting a wide range of WebLogic Server versions, from 10.3.6 to 12.2.1.3. WDT supports both Windows and UNIX operating systems, and provides the following benefits:

- Introspects a WebLogic domain into a metadata model (JSON or YAML).
- Creates a new WebLogic Server domain using a metadata model and allows version control of the domain configuration.
- Updates the configuration of an existing WebLogic Server domain, and deploys applications and resources into the domain.
- Allows runtime alterations to the metadata model (also referred to as the model) before applying it.
- Allows the same model to apply to multiple environments by accepting value placeholders provided in a separate property file.
- Allows passwords to be encrypted directly in the model or property file.
- Supports a sparse model, so that the model only needs to describe what is required for the specific operation, without describing other artifacts.
- Provides easy validation of the model content and verification that its related artifacts are well formed.
- Allows automation and continuous delivery of deployments.
- Facilitates lift and shift of the domain into other environments, such as Docker images and Kubernetes.

Currently, the project provides six single-purpose tools, all exposed as shell scripts:

- The Create Domain Tool (createDomain) understands how to create a domain and populate it with all the resources and applications specified in the model.
- The Update Domain Tool (updateDomain) understands how to update an existing domain and populate it with all the resources and applications specified in the model, in either offline or online mode.
- The Deploy Applications Tool (deployApps) understands how to add resources and applications to an existing domain, in either offline or online mode.
- The Discover Domain Tool (discoverDomain) introspects an existing domain and creates a model file describing the domain and an archive file of the binaries deployed to the domain.
- The Encrypt Model Tool (encryptModel) encrypts the passwords in a model (or its variable file) using a user-provided passphrase.
- The Validate Model Tool (validateModel) provides both standalone validation of a model and model usage information to help users write or edit their models.

The WebLogic on Docker and Kubernetes projects take advantage of WDT to provision WebLogic domains and deploy applications inside of a Docker image or in a Kubernetes persistent volume (PV). The Discover Domain and Create Domain Tools enable us to take a domain running in a non-Docker/Kubernetes environment and lift and shift it into these environments. Docker/Kubernetes environments require a specific WebLogic configuration (for example, network). The Validate Model Tool provides mechanisms to validate the WebLogic configuration and ensure that it can run in these environments.
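To give a feel for what a model looks like, here is a small, hypothetical sketch of a sparse WDT model in YAML. The section and attribute names are illustrative only, and the @@PROP:...@@ tokens stand in for values supplied from a separate property file; refer to the WDT README in GitHub for the authoritative model format.

    # Hypothetical sparse model: only the pieces needed for one operation are described.
    resources:
        JDBCSystemResource:
            myDataSource:
                Target: 'cluster-1'
                JdbcResource:
                    JDBCDriverParams:
                        URL: '@@PROP:db.url@@'                    # resolved from a property file
                        PasswordEncrypted: '@@PROP:db.password@@' # can be encrypted with encryptModel
    appDeployments:
        Application:
            simple-app:
                SourcePath: 'wlsdeploy/applications/simple-app.war'   # packaged in the archive ZIP
                Target: 'cluster-1'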
We have created a sample in the GitHub WebLogic Docker project, https://github.com/oracle/docker-images/tree/master/OracleWebLogic/samples/12213-domain-wdt, to demonstrate how to provision a WebLogic 12.2.1.3 domain inside of a Docker image. The WebLogic domain is configured with a WebLogic dynamic cluster, a simple deployed application, and a data source that connects to an Oracle database running inside of a container.

This sample includes a basic WDT model, simple-topology.yaml, that describes the intended configuration of the domain within the Docker image. WDT models can be created and modified using a text editor, following the format and rules described in the README file for the WDT project in GitHub. Alternatively, the model can be created using the WDT Discover Domain Tool to introspect an already existing WebLogic domain.

Domain creation may require the deployment of applications and libraries. This is accomplished by creating a ZIP archive with a specific structure, then referencing those items in the model. This sample creates and deploys a simple ZIP archive containing a small application WAR. The archive is built in the sample directory prior to creating the Docker image.

How to Build and Run

The image is based on a WebLogic Server 12.2.1.3 image in the docker-images repository. Follow the README in https://github.com/oracle/docker-images/tree/master/OracleWebLogic/dockerfiles/12.2.1.3 to build the WebLogic Server install image to your local repository. The WebLogic Deploy Tooling installer is used to build this sample WebLogic domain image.

This sample deploys a simple, one-page web application contained in a ZIP archive, archive.zip. This archive needs to be built before building the domain Docker image:

    $ ./build-archive.sh

Before the domain image is built, we also need the WDT model simple-topology.yaml. If you want to customize this WebLogic domain sample, you can either use an editor to change the model simple-topology.yaml or use the WDT Discover Domain Tool to introspect an already existing WebLogic domain. In the sample WDT model simple-topology.yaml, the database password is encrypted and is replaced by the value in the properties file that we supply before running the WebLogic domain containers.

To build this sample, run:

    $ docker build \
      --build-arg WDT_MODEL=simple-topology.yaml \
      --build-arg WDT_ARCHIVE=archive.zip \
      --force-rm=true \
      -t 12213-domain-wdt .

You should now have a WebLogic domain image in your local repository.

How to Run

In this sample, each of the Managed Servers in the WebLogic domain has a data source deployed to it. We want to connect the data source to an Oracle database running in a container. Pull the Oracle database image from the Docker Store or the Oracle Container Registry into your local repository:

    $ docker pull container-registry.oracle.com/database/enterprise:12.2.0.1

Create the Docker network for the WLS and database containers to run in:

    $ docker network create -d bridge SampleNET

Run the Database Container

To create a database container, use the environment file below to set the database name, domain, and feature bundle.
The example environment file, properties/env.txt, is:

    DB_SID=InfraDB
    DB_PDB=InfraPDB1
    DB_DOMAIN=us.oracle.com
    DB_BUNDLE=basic

Run the database container with the following Docker command:

    $ docker run -d --name InfraDB --network=SampleNET \
      -p 1521:1521 -p 5500:5500 \
      --env-file /Users/mriccell/Docker/docker-images/OracleWebLogic/samples/12213-domain-wdt/properties/env.txt \
      -it --shm-size="8g" \
      container-registry.oracle.com/database/enterprise:12.2.0.1

Verify that the database is running and healthy; the STATUS field shows (healthy) in the output of docker ps.

The database is created with the default password 'Oradoc_db1'. To change the database password, you must use sqlplus. To run sqlplus, pull the Oracle Instant Client from the Oracle Container Registry or the Docker Store, and run a sqlplus container with the following command:

    $ docker run -ti --network=SampleNET --rm \
      store/oracle/database-instantclient:12.2.0.1 \
      sqlplus sys/Oradoc_db1@InfraDB:1521/InfraDB.us.oracle.com AS SYSDBA

    SQL> alter user system identified by dbpasswd container=all;

Make sure you add the new database password 'dbpasswd' to the DB_PASSWORD entry in the properties file, properties/domain.properties.

Verify that you can connect to the database:

    $ docker exec -ti InfraDB \
      /u01/app/oracle/product/12.2.0/dbhome_1/bin/sqlplus \
      system/dbpasswd@InfraDB:1521/InfraPDB1.us.oracle.com

    SQL> select * from Dual;

Run the WebLogic Domain

Modify the domain.properties file in properties/domain.properties with all the parameters required to run the WebLogic domain, including the database password.

To start the containerized Administration Server, run:

    $ docker run -d --name wlsadmin --hostname wlsadmin \
      --network=SampleNET -p 7001:7001 \
      -v /Users/mriccell/Docker/docker-images/OracleWebLogic/samples/12213-domain-wdt/properties:/u01/oracle/properties \
      12213-domain-wdt

To start a containerized Managed Server (ms-1) that self-registers with the Administration Server above, run:

    $ docker run -d --name ms-1 --link wlsadmin:wlsadmin \
      --network=SampleNET -p 9001:9001 \
      -v /Users/mriccell/Docker/docker-images/OracleWebLogic/samples/12213-domain-wdt/properties:/u01/oracle/properties \
      -e MS_NAME=ms-1 12213-domain-wdt startManagedServer.sh

To start an additional Managed Server (in this example, ms-2), run:

    $ docker run -d --name ms-2 --link wlsadmin:wlsadmin \
      --network=SampleNET -p 9002:9001 \
      -v /Users/mriccell/Docker/docker-images/OracleWebLogic/samples/12213-domain-wdt/properties:/u01/oracle/properties \
      -e MS_NAME=ms-2 12213-domain-wdt startManagedServer.sh

This scenario gives you a WebLogic domain with a dynamic cluster set up in a single-host environment. Let's verify that the servers are running and that the data source connects to the Oracle database running in the container. Invoke the WLS Administration Console by entering the URL http://localhost:7001/console in your browser, and log in using the credentials you provided in the domain.properties file.

The WebLogic Deploy Tooling simplifies the provisioning of WebLogic domains, the deployment of applications, and the resources these applications need. The WebLogic on Docker/Kubernetes projects take advantage of these tools to simplify the provisioning of domains inside of an image or persisted to a Kubernetes persistent volume.
We have released the General Availability version of the WebLogic Kubernetes Operator, which simplifies the management of WebLogic domains in Kubernetes. Soon we will release WebLogic Kubernetes Operator version 2.0, which will provide further enhancements to the management of WebLogic domains. We continue to provide tooling that makes it simple to provision, deploy, and manage WebLogic domains, with the goal of providing the greatest degree of flexibility for where these domains can run. We hope this sample is helpful to anyone wanting to use the WebLogic Deploy Tooling for provisioning and deploying WebLogic Server domains, and we look forward to your feedback.


Announcement

WebLogic Kubernetes Operator Image Now Available in Docker Hub

To facilitate the management of Oracle WebLogic Server domains in Kubernetes, we have made the WebLogic Server Kubernetes Operator images available in the Docker Hub repository, https://hub.docker.com/r/oracle/weblogic-kubernetes-operator/. In this repository, we provide several WebLogic Kubernetes Operator images:

- Version 1.0 and latest: the general availability version of the operator.
- Version develop: the latest pre-release version of the operator image.

The open source code and documentation for the WebLogic Kubernetes Operator can be found in the GitHub repository, https://github.com/oracle/weblogic-kubernetes-operator. The WebLogic Server Kubernetes Operator uses the standard Oracle WebLogic Server 12.2.1.3 Docker image from the Docker Store. It treats this image as immutable, and all of the state is persisted in a Kubernetes persistent volume. This allows us to treat all of the pods as throwaway and replaceable, and it completely eliminates the need to manage state written into Docker containers.

Get Started

The Oracle WebLogic Server Kubernetes Operator has the following requirements:

- Kubernetes 1.7.5+, 1.8.0+, 1.9.0+, or 1.10.0 (check with kubectl version)
- Flannel networking v0.9.1-amd64 (check with docker images | grep flannel)
- Docker 17.03.1.ce (check with docker version)
- Oracle WebLogic Server 12.2.1.3.0

To obtain the WebLogic Kubernetes Operator image from Docker Hub, run:

    $ docker pull oracle/weblogic-kubernetes-operator:1.0

Customize the operator parameters file

The operator is deployed with the provided installation script, kubernetes/create-weblogic-operator.sh. The input to this script is the file kubernetes/create-operator-inputs.yaml, which needs to be updated to reflect the target environment. Parameters must be provided in the input file. For a description of each parameter, see https://github.com/oracle/weblogic-kubernetes-operator/blob/master/site/installation.md.

Decide which REST configuration to use

The operator provides three REST certificate options:

- none: disables the REST server.
- self-signed-cert: generates self-signed certificates.
- custom-cert: provides a mechanism to supply certificates that were created and signed by some other means.

Decide which options to enable

The operator provides some optional features that can be enabled in the configuration file.

Load balancing with an Ingress controller or a web server

You can choose a load balancer provider for your WebLogic domains running in a Kubernetes cluster. Please refer to Load balancing with Voyager/HAProxy, Load balancing with Traefik, and Load balancing with the Apache HTTP Server for information about the current capabilities and setup instructions for each of the supported load balancers. Note these limitations:

- Only HTTP(S) is supported. Other protocols are not supported.
- A root path rule is created for each cluster. Rules based on the DNS name, or on URL paths other than '/', are not supported.
- No non-default configuration of the load balancer is performed in this release. The default configuration gives round-robin routing, and WebLogic Server provides cookie-based session affinity.

Note that Ingresses are not created for servers that are not part of a WebLogic Server cluster, including the Administration Server. Such servers are exposed externally using NodePort services.

Log integration with Elastic Stack

The operator can install the Elastic Stack and publish its logs to it.
If enabled, Elasticsearch and Kibana will be installed in the default namespace, and a Logstash container will be created in the operator pod. Logstash will be configured to publish the operator’s logs to Elasticsearch, and the log data will be available for visualization and analysis in Kibana. To enable the ELK integration, set the enableELKintegration option to true. 
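As a concrete illustration, the relevant portion of the operator inputs file might look something like the sketch below. The property names are taken from the values shown elsewhere in this blog and from the option names mentioned above, but they may differ between operator releases, so treat this as an assumption and consult the installation documentation linked above for the definitive list.

    # Illustrative excerpt of create-operator-inputs.yaml (property names may vary by release)
    targetNamespaces: default,weblogic                       # namespaces the operator will manage
    weblogicOperatorImage: oracle/weblogic-kubernetes-operator:1.0
    externalRestOption: SELF_SIGNED_CERT                     # one of the three REST certificate options described above
    enableELKintegration: true                               # turns on the Elastic Stack integration
    javaLoggingLevel: WARNING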
Deploying the operator to a Kubernetes cluster

To deploy the operator, run the deployment script and give it the location of your inputs file:

    $ ./create-weblogic-operator.sh -i /path/to/create-operator-inputs.yaml

What the script does

The script carries out the following actions:

- A set of Kubernetes YAML files is created from the inputs provided.
- A namespace is created for the operator.
- A service account is created in that namespace.
- If Elastic Stack integration was enabled, a persistent volume for the Elastic Stack is created.
- A set of RBAC roles and bindings is created.
- The operator is deployed.
- If requested, the load balancer is deployed.
- If requested, the Elastic Stack is deployed and Logstash is configured for the operator's logs.

The script validates each action before it proceeds. This deploys the operator in your Kubernetes cluster. Please refer to the documentation for next steps, including using the REST services, creating a WebLogic Server domain, starting a domain, and so on.

Our future plans include enhancements to the WebLogic Server Kubernetes Operator so that it can manage a WebLogic domain inside a Docker image as well as on a persistent volume, enhancements to add CI/CD of WebLogic deployments on Kubernetes with Oracle Container Pipelines, and new features and enhancements over time. Please stay tuned for more information. We hope this announcement is helpful to those of you seeking to deploy WebLogic Server on Kubernetes, and we look forward to your feedback.


Technical

WebLogic Server JTA in a Kubernetes Environment

This blog post describes WebLogic Server global transactions running in a Kubernetes environment. First, we'll review how the WebLogic Server Transaction Manager (TM) processes distributed transactions. Then, we'll walk through an example transactional application that is deployed to WebLogic Server domains running in a Kubernetes cluster with the WebLogic Kubernetes Operator.

WebLogic Server Transaction Manager Introduction

The WebLogic Server Transaction Manager (TM) is the transaction processing monitor implementation in WebLogic Server that supports the Java Enterprise Edition (Java EE) Java Transaction API (JTA). A Java EE application uses JTA to manage global transactions to ensure that changes to resource managers, such as databases and messaging systems, either complete as a unit or are undone.

This section provides a brief introduction to the WebLogic Server TM, specifically around network communication and related configuration, which will be helpful when we examine transactions in a Kubernetes environment. There are many TM features, optimizations, and configuration options that won't be covered in this article. Refer to the following WebLogic Server documentation for additional details:

- For general information about the WebLogic Server TM, see the WebLogic Server JTA documentation.
- For detailed information regarding the Java Transaction API, see the Java EE JTA Specification.

How Transactions are Processed in WebLogic Server

To get a basic understanding of how the WebLogic Server TM processes transactions, we'll look at a hypothetical application. Consider a web application consisting of a servlet that starts a transaction, inserts a record in a database table, and sends a message to a Java Messaging Service (JMS) queue destination. After updating the JDBC and JMS resources, the servlet commits the transaction. The following diagram shows the server and resource transaction participants.

Transaction Propagation

The transaction context builds up state as it propagates between servers and as resources are accessed by the application. For this application, the transaction context at commit time would look something like the following. Server participants, identified by domain name and server name, have an associated URL that is used for internal TM communication. These URLs are typically derived from the server's default network channel, or default secure network channel. The transaction context also contains information about which server participants have javax.transaction.Synchronization callbacks registered. The JTA synchronization API is a callback mechanism in which the TM invokes the Synchronization.beforeCompletion() method before commencing two-phase commit processing for a transaction. The Synchronization.afterCompletion(int status) method is invoked after transaction processing is complete, with the final status of the transaction (for example, committed or rolled back).

Transaction Completion

When the TM is instructed to commit the transaction, the TM takes over and coordinates the completion of the transaction. One of the server participants is chosen as the transaction coordinator to drive the two-phase commit protocol. The coordinator instructs the remaining subordinate servers to process registered synchronization callbacks, and to prepare, commit, or roll back resources. The TM communication channels used to coordinate the example transaction are illustrated in the following diagram.
The dashed-line arrows represent asynchronous RMI calls between the coordinator and subordinate servers. Note that the Synchronization.beforeCompletion() communication can take place directly between subordinate servers. It is also important to point out that application communication is conceptually separate from the internal TM communication, as the TM may establish network channels that were not used by the application to propagate the transaction. The TM could use different protocols, addresses, and ports depending on how the server default network channels are configured.

Configuration Recommendations

There are a few TM configuration recommendations related to server network addresses, persistent storage, and server naming.

Server Network Addresses

As mentioned previously, server participants locate each other using URLs included in the transaction context. It is important that the network channels used for TM URLs be configured with address names that are resolvable after node, pod, or container restarts where IP addresses may change. Also, because the TM requires direct server-to-server communication, cluster or load-balancer addresses that resolve to multiple IP addresses should not be used.

Transaction Logs

The coordinating server persists state in the transaction log (TLOG) that is used for transaction recovery processing after failure. Because a server instance may relocate to another node, the TLOG needs to reside in a network/replicated file system (for example, NFS, SAN, and such) or in a highly-available database such as Oracle RAC. For additional information, refer to the High Availability Guide.

Cross-Domain Transactions

Transactions that span WebLogic Server domains are referred to as cross-domain transactions. Cross-domain transactions introduce additional configuration requirements, especially when the domains are connected by a public network.

Server Naming

The TM identifies server participants using a combination of the domain name and server name. Therefore, each domain should be named uniquely to prevent name collisions. Server participant name collisions will cause transactions to be rolled back at runtime.

Security

Server participants that are connected by a public network require the use of secure protocols (for example, t3s) and authorization checks to verify that the TM communication is legitimate. For the purpose of this demonstration, we won't cover these topics in detail. For the Kubernetes example application, all TM communication will take place on the private Kubernetes network and will use a non-SSL protocol. For details on configuring security for cross-domain transactions, refer to the Configuring Secure Inter-Domain and Intra-Domain Transaction Communication chapter of the Fusion Middleware Developing JTA Applications for Oracle WebLogic Server documentation.

WebLogic Server on Kubernetes

In an effort to improve WebLogic Server integration with Kubernetes, Oracle has released the open source WebLogic Kubernetes Operator. The WebLogic Kubernetes Operator supports the creation and management of WebLogic Server domains, integration with various load balancers, and additional capabilities. For details, refer to the GitHub project page, https://github.com/oracle/weblogic-kubernetes-operator, and the related blogs at https://blogs.oracle.com/weblogicserver/how-to-weblogic-server-on-kubernetes.
Example Transactional Application Walkthrough

To illustrate running distributed transactions on Kubernetes, we'll step through a simplified transactional application that is deployed to multiple WebLogic Server domains running in a single Kubernetes cluster. The environment that I used for this example is a Mac running Docker Edge v18.05.0-ce, which includes Kubernetes v1.9.6.

After installing and starting Docker Edge, open the Preferences page, increase the memory available to Docker under the Advanced tab (~8 GiB), and enable Kubernetes under the Kubernetes tab. After applying the changes, Docker and Kubernetes will be started. If you are behind a firewall, you may also need to add the appropriate settings under the Proxies tab. Once running, you should be able to list the Kubernetes version information.

    $ kubectl version
    Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-21T15:21:50Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
    Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-21T15:13:31Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

To keep the example file system path names short, the working directory for input files, operator sources and binaries, persistent volumes, and so on is created under $HOME/k8sop. You can reference the directory using the environment variable $K8SOP.

    $ export K8SOP=$HOME/k8sop
    $ mkdir $K8SOP

Install the WebLogic Kubernetes Operator

The next step is to build and install the weblogic-kubernetes-operator image. Refer to the installation procedures at https://github.com/oracle/weblogic-kubernetes-operator/blob/master/site/installation.md. Note that for this example, the weblogic-kubernetes-operator GitHub project is cloned under the $K8SOP/src directory ($K8SOP/src/weblogic-kubernetes-operator). Also note that when building the Docker image, use the tag "local" in place of the "some-tag" that's specified in the installation docs.

    $ mkdir $K8SOP/src
    $ cd $K8SOP/src
    $ git clone https://github.com/oracle/weblogic-kubernetes-operator.git
    $ cd weblogic-kubernetes-operator
    $ mvn clean install
    $ docker login
    $ docker build -t weblogic-kubernetes-operator:local --no-cache=true .

After building the operator image, you should see it in the local registry.

    $ docker images weblogic-kubernetes-operator
    REPOSITORY                     TAG                 IMAGE ID            CREATED             SIZE
    weblogic-kubernetes-operator   local               42a5f70c7287        10 seconds ago      317MB

The next step is to deploy the operator to the Kubernetes cluster. For this example, we modify the create-weblogic-operator-inputs.yaml file to add an additional target namespace (weblogic) and specify the correct operator image name.

    Attribute               Value
    targetNamespaces        default,weblogic
    weblogicOperatorImage   weblogic-kubernetes-operator:local
    javaLoggingLevel        WARNING

Save the modified input file under $K8SOP/create-weblogic-operator-inputs.yaml. Then run the create-weblogic-operator.sh script, specifying the path to the modified input file and the path of the operator output directory.
    $ cd $K8SOP
    $ mkdir weblogic-kubernetes-operator
    $ $K8SOP/src/weblogic-kubernetes-operator/kubernetes/create-weblogic-operator.sh -i $K8SOP/create-weblogic-operator-inputs.yaml -o $K8SOP/weblogic-kubernetes-operator

When the script completes, you will be able to see the operator pod running.

    $ kubectl get po -n weblogic-operator
    NAME                                 READY     STATUS    RESTARTS   AGE
    weblogic-operator-6dbf8bf9c9-prhwd   1/1       Running   0          44s

WebLogic Domain Creation

The procedures for creating a WebLogic Server domain are documented at https://github.com/oracle/weblogic-kubernetes-operator/blob/master/site/creating-domain.md. Follow the instructions for pulling the WebLogic Server image from the Docker Store into the local registry. You'll be able to pull the image after accepting the license agreement on the Docker Store.

    $ docker login
    $ docker pull store/oracle/weblogic:12.2.1.3

Next, we'll create a Kubernetes secret to hold the administrative credentials for our domain (weblogic/weblogic1).

    $ kubectl -n weblogic create secret generic domain1-weblogic-credentials --from-literal=username=weblogic --from-literal=password=weblogic1

The persistent volume location for the domain will be under $K8SOP/volumes/domain1.

    $ mkdir -m 777 -p $K8SOP/volumes/domain1

Then we'll customize the $K8SOP/src/weblogic-kubernetes-operator/kubernetes/create-weblogic-domain-inputs.yaml example input file, modifying the following attributes:

    Attribute                    Value
    weblogicDomainStoragePath    {full path of $HOME}/k8sop/volumes/domain1
    domainName                   domain1
    domainUID                    domain1
    t3PublicAddress              {your-local-hostname}
    exposeAdminT3Channel         true
    exposeAdminNodePort          true
    namespace                    weblogic

After saving the updated input file to $K8SOP/create-domain1.yaml, invoke the create-weblogic-domain.sh script as follows.

    $ $K8SOP/src/weblogic-kubernetes-operator/kubernetes/create-weblogic-domain.sh -i $K8SOP/create-domain1.yaml -o $K8SOP/weblogic-kubernetes-operator

After the create-weblogic-domain.sh script completes, Kubernetes will start up the Administration Server and the clustered Managed Server instances. After a while, you can see the running pods.

    $ kubectl get po -n weblogic
    NAME                                        READY     STATUS    RESTARTS   AGE
    domain1-admin-server                        1/1       Running   0          5m
    domain1-cluster-1-traefik-9985d9594-gw2jr   1/1       Running   0          5m
    domain1-managed-server1                     1/1       Running   0          3m
    domain1-managed-server2                     1/1       Running   0          3m

Now we will access the running Administration Server using the WebLogic Server Administration Console to check the state of the domain, using the URL http://localhost:30701/console with the credentials weblogic/weblogic1. The following screen shot shows the Servers page.

The Administration Console Servers page shows all of the servers in domain1. Note that each server has a listen address that corresponds to a Kubernetes service name that is defined for the specific server instance. The service name is derived from the domainUID (domain1) and the server name. These address names are resolvable within the Kubernetes namespace and, along with the listen port, are used to define each server's default network channel. As mentioned previously, the default network channel URLs are propagated with the transaction context and are used internally by the TM for distributed transaction coordination.
Example Application

Now that we have a WebLogic Server domain running under Kubernetes, we will look at an example application that can be used to verify distributed transaction processing. To make the example as simple as possible, it will be limited in scope to transaction propagation between servers and synchronization callback processing. This will allow us to verify inter-server transaction communication without the need for resource manager configuration and the added complexity of writing JDBC or JMS client code.

The application consists of two main components: a servlet front end and an RMI remote object. The servlet processes a GET request that contains a list of URLs. It starts a global transaction and then invokes the remote object at each of the URLs. The remote object simply registers a synchronization callback that prints a message to stdout in the beforeCompletion and afterCompletion callback methods. Finally, the servlet commits the transaction and sends a response containing information about each of the RMI calls and the outcome of the global transaction.

The following diagram illustrates running the example application on the domain1 servers in the Kubernetes cluster. The servlet is invoked using the Administration Server's external port. The servlet starts the transaction, registers a local synchronization object, and invokes the register operation on the Managed Servers using their Kubernetes internal URLs: t3://domain1-managed-server1:8001 and t3://domain1-managed-server2:8001.

TxPropagate Servlet

As mentioned above, the servlet starts a transaction and then invokes the RemoteSync.register() remote method on each of the server URLs specified. Then the transaction is committed and the results are returned to the caller.
package example;

import java.io.IOException;
import java.io.PrintWriter;

import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.transaction.HeuristicMixedException;
import javax.transaction.HeuristicRollbackException;
import javax.transaction.NotSupportedException;
import javax.transaction.RollbackException;
import javax.transaction.SystemException;

import weblogic.transaction.Transaction;
import weblogic.transaction.TransactionHelper;
import weblogic.transaction.TransactionManager;

@WebServlet("/TxPropagate")
public class TxPropagate extends HttpServlet {

  private static final long serialVersionUID = 7100799641719523029L;

  private TransactionManager tm = (TransactionManager)
      TransactionHelper.getTransactionHelper().getTransactionManager();

  protected void doGet(HttpServletRequest request,
      HttpServletResponse response) throws ServletException, IOException {
    PrintWriter out = response.getWriter();

    String urlsParam = request.getParameter("urls");
    if (urlsParam == null) return;
    String[] urls = urlsParam.split(",");

    try {
      RemoteSync forward = (RemoteSync)
          new InitialContext().lookup(RemoteSync.JNDINAME);
      tm.begin();
      Transaction tx = (Transaction) tm.getTransaction();
      out.println("<pre>");
      out.println(Utils.getLocalServerID() + " started " +
          tx.getXid().toString());
      out.println(forward.register());
      for (int i = 0; i < urls.length; i++) {
        out.println(Utils.getLocalServerID() + " " + tx.getXid().toString() +
            " registering Synchronization on " + urls[i]);
        Context ctx = Utils.getContext(urls[i]);
        forward = (RemoteSync) ctx.lookup(RemoteSync.JNDINAME);
        out.println(forward.register());
      }
      tm.commit();
      out.println(Utils.getLocalServerID() + " committed " + tx);
    } catch (NamingException | NotSupportedException | SystemException |
        SecurityException | IllegalStateException | RollbackException |
        HeuristicMixedException | HeuristicRollbackException e) {
      throw new ServletException(e);
    }
  }
}

Remote Object

The RemoteSync remote object contains a single method, register, that registers a javax.transaction.Synchronization callback with the propagated transaction context.

RemoteSync Interface

The following is the example.RemoteSync remote interface definition.

package example;

import java.rmi.Remote;
import java.rmi.RemoteException;

public interface RemoteSync extends Remote {
  public static final String JNDINAME = "propagate.RemoteSync";
  String register() throws RemoteException;
}

RemoteSyncImpl Implementation

The example.RemoteSyncImpl class implements the example.RemoteSync remote interface and contains an inner synchronization implementation class named SynchronizationImpl. The beforeCompletion and afterCompletion methods simply write a message to stdout containing the server ID (domain name and server name) and the Xid string representation of the propagated transaction. The static main method instantiates a RemoteSyncImpl object and binds it into the server's local JNDI context. The main method is invoked when the application is deployed using the ApplicationLifecycleListener, as described below.
package example;

import java.rmi.RemoteException;

import javax.naming.Context;
import javax.transaction.RollbackException;
import javax.transaction.Synchronization;
import javax.transaction.SystemException;

import weblogic.jndi.Environment;
import weblogic.transaction.Transaction;
import weblogic.transaction.TransactionHelper;

public class RemoteSyncImpl implements RemoteSync {

  public String register() throws RemoteException {
    Transaction tx = (Transaction)
        TransactionHelper.getTransactionHelper().getTransaction();
    if (tx == null) return Utils.getLocalServerID() +
        " no transaction, Synchronization not registered";
    try {
      Synchronization sync = new SynchronizationImpl(tx);
      tx.registerSynchronization(sync);
      return Utils.getLocalServerID() + " " + tx.getXid().toString() +
          " registered " + sync;
    } catch (IllegalStateException | RollbackException |
        SystemException e) {
      throw new RemoteException(
          "error registering Synchronization callback with " +
          tx.getXid().toString(), e);
    }
  }

  class SynchronizationImpl implements Synchronization {
    Transaction tx;

    SynchronizationImpl(Transaction tx) {
      this.tx = tx;
    }

    public void afterCompletion(int arg0) {
      System.out.println(Utils.getLocalServerID() + " " +
          tx.getXid().toString() + " afterCompletion()");
    }

    public void beforeCompletion() {
      System.out.println(Utils.getLocalServerID() + " " +
          tx.getXid().toString() + " beforeCompletion()");
    }
  }

  // create and bind remote object in local JNDI
  public static void main(String[] args) throws Exception {
    RemoteSyncImpl remoteSync = new RemoteSyncImpl();
    Environment env = new Environment();
    env.setCreateIntermediateContexts(true);
    env.setReplicateBindings(false);
    Context ctx = env.getInitialContext();
    ctx.rebind(JNDINAME, remoteSync);
    System.out.println("bound " + remoteSync);
  }
}

Utility Methods

The Utils class contains a couple of static methods: one to get the local server ID and another to perform an initial context lookup given a URL. The initial context lookup is invoked under the anonymous user. These methods are used by both the servlet and the remote object.

package example;

import java.util.Hashtable;

import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class Utils {

  public static Context getContext(String url) throws NamingException {
    Hashtable env = new Hashtable();
    env.put(Context.INITIAL_CONTEXT_FACTORY,
        "weblogic.jndi.WLInitialContextFactory");
    env.put(Context.PROVIDER_URL, url);
    return new InitialContext(env);
  }

  public static String getLocalServerID() {
    return "[" + getDomainName() + "+"
        + System.getProperty("weblogic.Name") + "]";
  }

  private static String getDomainName() {
    String domainName = System.getProperty("weblogic.Domain");
    if (domainName == null) domainName = System.getenv("DOMAIN_NAME");
    return domainName;
  }
}

ApplicationLifecycleListener

When the application is deployed to a WebLogic Server instance, the lifecycle listener preStart method is invoked to initialize and bind the RemoteSync remote object.
package example;

import weblogic.application.ApplicationException;
import weblogic.application.ApplicationLifecycleEvent;
import weblogic.application.ApplicationLifecycleListener;

public class LifecycleListenerImpl extends ApplicationLifecycleListener {

  public void preStart(ApplicationLifecycleEvent evt)
      throws ApplicationException {
    super.preStart(evt);
    try {
      RemoteSyncImpl.main(null);
    } catch (Exception e) {
      throw new ApplicationException(e);
    }
  }
}

Application Deployment Descriptor

The application archive contains the following weblogic-application.xml deployment descriptor to register the ApplicationLifecycleListener object.

<?xml version = '1.0' ?>
<weblogic-application xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.bea.com/ns/weblogic/weblogic-application http://www.bea.com/ns/weblogic/weblogic-application/1.0/weblogic-application.xsd"
    xmlns="http://www.bea.com/ns/weblogic/weblogic-application">
  <listener>
    <listener-class>example.LifecycleListenerImpl</listener-class>
    <listener-uri>lib/remotesync.jar</listener-uri>
  </listener>
</weblogic-application>

Deploying the Application

The example application can be deployed using a number of supported deployment mechanisms (refer to https://blogs.oracle.com/weblogicserver/best-practices-for-application-deployment-on-weblogic-server-running-on-kubernetes-v2). For this example, we'll deploy the application using the WebLogic Server Administration Console.

Assume that the application is packaged in an application archive named txpropagate.ear. First, we'll copy txpropagate.ear to the applications directory under the domain1 persistent volume location ($K8SOP/volumes/domain1/applications). Then we can deploy the application from the Administration Console's Deployment page. Note that the path of the EAR file is /shared/applications/txpropagate.ear within the Administration Server's container, where /shared is mapped to the persistent volume that we created at $K8SOP/volumes/domain1.

Deploy the EAR as an application and then target it to the Administration Server and the cluster. On the next page, click Finish to deploy the application. After the application is deployed, you'll see its entry in the Deployments table.

Running the Application

Now that we have the application deployed to the servers in domain1, we can run a distributed transaction test. The following curl operation invokes the servlet using the load balancer port 30305 for the clustered Managed Servers and specifies the URL of managed-server1.
    $ curl http://localhost:30305/TxPropagate/TxPropagate?urls=t3://domain1-managed-server1:8001
    <pre>
    [domain1+managed-server2] started BEA1-0001DE85D4EE
    [domain1+managed-server2] BEA1-0001DE85D4EEC47AE630 registered example.RemoteSyncImpl$SynchronizationImpl@562a85bd
    [domain1+managed-server2] BEA1-0001DE85D4EEC47AE630 registering Synchronization on t3://domain1-managed-server1:8001
    [domain1+managed-server1] BEA1-0001DE85D4EEC47AE630 registered example.RemoteSyncImpl$SynchronizationImpl@585ff41b
    [domain1+managed-server2] committed Xid=BEA1-0001DE85D4EEC47AE630(844351585),Status=Committed,numRepliesOwedMe=0,numRepliesOwedOthers=0,seconds since begin=0,seconds left=120,useSecure=false,SCInfo[domain1+managed-server2]=(state=committed),SCInfo[domain1+managed-server1]=(state=committed),properties=({ackCommitSCs={managed-server1+domain1-managed-server1:8001+domain1+t3+=true}, weblogic.transaction.partitionName=DOMAIN}),OwnerTransactionManager=ServerTM[ServerCoordinatorDescriptor=(CoordinatorURL=managed-server2+domain1-managed-server2:8001+domain1+t3+ CoordinatorNonSecureURL=managed-server2+domain1-managed-server2:8001+domain1+t3+ coordinatorSecureURL=null, XAResources={WSATGatewayRM_managed-server2_domain1},NonXAResources={})],CoordinatorURL=managed-server2+domain1-managed-server2:8001+domain1+t3+)

The following diagram shows the application flow.

Looking at the output, we see that the servlet request was dispatched on managed-server2, where it started the transaction BEA1-0001DE85D4EE.

    [domain1+managed-server2] started BEA1-0001DE85D4EE

The local RemoteSync.register() method was invoked, which registered the callback object SynchronizationImpl@562a85bd.

    [domain1+managed-server2] BEA1-0001DE85D4EEC47AE630 registered example.RemoteSyncImpl$SynchronizationImpl@562a85bd

The servlet then invoked the register method on the RemoteSync object on managed-server1, which registered the synchronization object SynchronizationImpl@585ff41b.

    [domain1+managed-server2] BEA1-0001DE85D4EEC47AE630 registering Synchronization on t3://domain1-managed-server1:8001
    [domain1+managed-server1] BEA1-0001DE85D4EEC47AE630 registered example.RemoteSyncImpl$SynchronizationImpl@585ff41b

Finally, the servlet committed the transaction and returned the transaction's string representation (typically used for TM debug logging).

    [domain1+managed-server2] committed Xid=BEA1-0001DE85D4EEC47AE630(844351585),Status=Committed,numRepliesOwedMe=0,numRepliesOwedOthers=0,seconds since begin=0,seconds left=120,useSecure=false,SCInfo[domain1+managed-server2]=(state=committed),SCInfo[domain1+managed-server1]=(state=committed),properties=({ackCommitSCs={managed-server1+domain1-managed-server1:8001+domain1+t3+=true}, weblogic.transaction.partitionName=DOMAIN}),OwnerTransactionManager=ServerTM[ServerCoordinatorDescriptor=(CoordinatorURL=managed-server2+domain1-managed-server2:8001+domain1+t3+ CoordinatorNonSecureURL=managed-server2+domain1-managed-server2:8001+domain1+t3+ coordinatorSecureURL=null, XAResources={WSATGatewayRM_managed-server2_domain1},NonXAResources={})],CoordinatorURL=managed-server2+domain1-managed-server2:8001+domain1+t3+)

The output shows that the transaction was committed, that it has two server participants (managed-server1 and managed-server2), and that the coordinating server (managed-server2) is accessible using t3://domain1-managed-server2:8001. We can also verify that the registered synchronization callbacks were invoked by looking at the output of managed-server1 and managed-server2.
The .out files for the servers can be found under the persistent volume of the domain.

    $ cd $K8SOP/volumes/domain1/domain/domain1/servers
    $ find . -name '*.out' -exec grep -H BEA1-0001DE85D4EE {} ';'
    ./managed-server1/logs/managed-server1.out:[domain1+managed-server1] BEA1-0001DE85D4EEC47AE630 beforeCompletion()
    ./managed-server1/logs/managed-server1.out:[domain1+managed-server1] BEA1-0001DE85D4EEC47AE630 afterCompletion()
    ./managed-server2/logs/managed-server2.out:[domain1+managed-server2] BEA1-0001DE85D4EEC47AE630 beforeCompletion()
    ./managed-server2/logs/managed-server2.out:[domain1+managed-server2] BEA1-0001DE85D4EEC47AE630 afterCompletion()

To summarize, we were able to process distributed transactions within a WebLogic Server domain running in a Kubernetes cluster without having to make any changes. The WebLogic Kubernetes Operator domain creation process provided all of the Kubernetes networking and WebLogic Server configuration necessary to make it possible. The following command lists the Kubernetes services defined in the weblogic namespace.

    $ kubectl get svc -n weblogic
    NAME                                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)           AGE
    domain1-admin-server                        NodePort    10.102.156.32    <none>        7001:30701/TCP    11m
    domain1-admin-server-extchannel-t3channel   NodePort    10.99.21.154     <none>        30012:30012/TCP   9m
    domain1-cluster-1-traefik                   NodePort    10.100.211.213   <none>        80:30305/TCP      11m
    domain1-cluster-1-traefik-dashboard         NodePort    10.108.229.66    <none>        8080:30315/TCP    11m
    domain1-cluster-cluster-1                   ClusterIP   10.106.58.103    <none>        8001/TCP          9m
    domain1-managed-server1                     ClusterIP   10.108.85.130    <none>        8001/TCP          9m
    domain1-managed-server2                     ClusterIP   10.108.130.92    <none>        8001/TCP

We were able to access the servlet through the Traefik NodePort service using port 30305 on localhost. From inside the Kubernetes cluster, the servlet is able to access other WebLogic Server instances using their service names and ports. Because each server's listen address is set to its corresponding Kubernetes service name, the addresses are resolvable from within the Kubernetes namespace even if a server's pod is restarted and assigned a different IP address.

Cross-Domain Transactions

Now we'll look at extending the example to run across two WebLogic Server domains. As mentioned in the TM overview section, cross-domain transactions can require additional configuration to properly secure TM communication. However, for our example, we will keep the configuration as simple as possible. We'll continue to use a non-secure protocol (t3), and the anonymous user, for both application and internal TM communication.

First, we'll need to create a new domain (domain2) in the same Kubernetes namespace as domain1 (weblogic). Before generating domain2, we need to create a secret for the domain2 credentials (domain2-weblogic-credentials) in the weblogic namespace and a directory for the persistent volume ($K8SOP/volumes/domain2).

Next, modify the create-domain1.yaml file, changing the following attribute values, and save the changes to a new file named create-domain2.yaml.
    Attribute                        Value
    domainName                       domain2
    domainUID                        domain2
    weblogicDomainStoragePath        {full path of $HOME}/k8sop/volumes/domain2
    weblogicCredentialsSecretName    domain2-weblogic-credentials
    t3ChannelPort                    32012
    adminNodePort                    32701
    loadBalancerWebPort              32305
    loadBalancerDashboardPort        32315

Now we're ready to invoke the create-weblogic-domain.sh script with the create-domain2.yaml input file.

    $ $K8SOP/src/weblogic-kubernetes-operator/kubernetes/create-weblogic-domain.sh -i $K8SOP/create-domain2.yaml -o $K8SOP/weblogic-kubernetes-operator

After the create script completes successfully, the servers in domain2 will start and, using the readiness probe, report that they have reached the RUNNING state.

    $ kubectl get po -n weblogic
    NAME                                         READY     STATUS    RESTARTS   AGE
    domain1-admin-server                         1/1       Running   0          27m
    domain1-cluster-1-traefik-9985d9594-gw2jr    1/1       Running   0          27m
    domain1-managed-server1                      1/1       Running   0          25m
    domain1-managed-server2                      1/1       Running   0          25m
    domain2-admin-server                         1/1       Running   0          5m
    domain2-cluster-1-traefik-5c49f54689-9fzzr   1/1       Running   0          5m
    domain2-managed-server1                      1/1       Running   0          3m
    domain2-managed-server2                      1/1       Running   0          3m

After deploying the application to the servers in domain2, we can invoke the application and include the URLs for the domain2 Managed Servers.

    $ curl http://localhost:30305/TxPropagate/TxPropagate?urls=t3://domain2-managed-server1:8001,t3://domain2-managed-server2:8001
    <pre>
    [domain1+managed-server1] started BEA1-0001144553CC
    [domain1+managed-server1] BEA1-0001144553CC5D73B78A registered example.RemoteSyncImpl$SynchronizationImpl@2e13aa23
    [domain1+managed-server1] BEA1-0001144553CC5D73B78A registering Synchronization on t3://domain2-managed-server1:8001
    [domain2+managed-server1] BEA1-0001144553CC5D73B78A registered example.RemoteSyncImpl$SynchronizationImpl@68d4c2d6
    [domain1+managed-server1] BEA1-0001144553CC5D73B78A registering Synchronization on t3://domain2-managed-server2:8001
    [domain2+managed-server2] BEA1-0001144553CC5D73B78A registered example.RemoteSyncImpl$SynchronizationImpl@1ae87d94
    [domain1+managed-server1] committed Xid=BEA1-0001144553CC5D73B78A(1749245151),Status=Committed,numRepliesOwedMe=0,numRepliesOwedOthers=0,seconds since begin=0,seconds left=120,useSecure=false,SCInfo[domain1+managed-server1]=(state=committed),SCInfo[domain2+managed-server1]=(state=committed),SCInfo[domain2+managed-server2]=(state=committed),properties=({ackCommitSCs={managed-server2+domain2-managed-server2:8001+domain2+t3+=true, managed-server1+domain2-managed-server1:8001+domain2+t3+=true}, weblogic.transaction.partitionName=DOMAIN}),OwnerTransactionManager=ServerTM[ServerCoordinatorDescriptor=(CoordinatorURL=managed-server1+domain1-managed-server1:8001+domain1+t3+ CoordinatorNonSecureURL=managed-server1+domain1-managed-server1:8001+domain1+t3+ coordinatorSecureURL=null, XAResources={WSATGatewayRM_managed-server1_domain1},NonXAResources={})],CoordinatorURL=managed-server1+domain1-managed-server1:8001+domain1+t3+)

The application flow is shown in the following diagram. In this example, the transaction includes server participants from both domain1 and domain2, and we can verify that the synchronization callbacks were processed on all participating servers.

    $ cd $K8SOP/volumes
    $ find . -name '*.out' -exec grep -H BEA1-0001144553CC {} ';'
    ./domain1/domain/domain1/servers/managed-server1/logs/managed-server1.out:[domain1+managed-server1] BEA1-0001144553CC5D73B78A beforeCompletion()
    ./domain1/domain/domain1/servers/managed-server1/logs/managed-server1.out:[domain1+managed-server1] BEA1-0001144553CC5D73B78A afterCompletion()
    ./domain2/domain/domain2/servers/managed-server1/logs/managed-server1.out:[domain2+managed-server1] BEA1-0001144553CC5D73B78A beforeCompletion()
    ./domain2/domain/domain2/servers/managed-server1/logs/managed-server1.out:[domain2+managed-server1] BEA1-0001144553CC5D73B78A afterCompletion()
    ./domain2/domain/domain2/servers/managed-server2/logs/managed-server2.out:[domain2+managed-server2] BEA1-0001144553CC5D73B78A beforeCompletion()
    ./domain2/domain/domain2/servers/managed-server2/logs/managed-server2.out:[domain2+managed-server2] BEA1-0001144553CC5D73B78A afterCompletion()

Summary

In this article we reviewed, at a high level, how the WebLogic Server Transaction Manager processes global transactions and discussed some of the basic configuration requirements. We then looked at an example application to illustrate how cross-domain transactions are processed in a Kubernetes cluster. In future articles we'll look at more complex transactional use cases, such as multi-node, cross-Kubernetes-cluster transactions, failover, and so on.


Announcement

Announcing WebLogic Server Certification on Oracle Cloud Infrastructure Container Engine for Kubernetes

On May 7th we announced the General Availability (GA) version of the WebLogic Server Kubernetes Operator, including certification of WebLogic Server and Operator configurations running on Oracle Cloud Infrastructure (OCI). In that initial announcement, WebLogic Server and Operator OCI certification was provided on Kubernetes clusters created on OCI using the Terraform Kubernetes Installer. For more details, please refer to the blog Announcing General Availability version of the WebLogic Server Kubernetes Operator.

Today we are announcing the additional certification of WebLogic Server and Operator configurations on the Oracle Container Engine for Kubernetes running on OCI; see the blog Kubernetes: A Cloud (and Data Center) Operating System?. The Oracle Container Engine for Kubernetes is a fully managed, scalable, and highly available service that you can use to deploy your containerized applications to the cloud. In the blog How to run WebLogic clusters on the Oracle Cloud Infrastructure Container Engine for Kubernetes, we describe the steps to run a WebLogic domain/cluster managed by the WebLogic Kubernetes Operator on OCI Container Engine for Kubernetes, with the WebLogic and Operator images stored in the OCI Registry.

Very soon, we hope to provide an easy way to migrate existing WebLogic Server domains to Kubernetes using the WebLogic Deploy Tooling, add CI/CD of WebLogic deployments on Kubernetes with Oracle Container Pipelines, and add new features and enhancements over time. The WebLogic Server and Operator capabilities described are supported on standard Kubernetes infrastructure, with full compatibility between OCI and other private and public cloud platforms that use Kubernetes. The Operator, Prometheus Exporter, and WebLogic Deploy Tooling are all being developed in open source. We are open to your feedback; thanks!

Safe Harbor Statement

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle.


The WebLogic Server

How to run WebLogic clusters on the Oracle Cloud Infrastructure Container Engine for Kubernetes

There are various options for setting up a Kubernetes environment in which to run WebLogic clusters. Oracle supports customers who want to run WebLogic clusters in production or development mode, on Kubernetes clusters on-premises or in the cloud. In this blog, we describe the steps to run a WebLogic cluster using the Oracle Cloud Infrastructure (OCI) Container Engine for Kubernetes. This Kubernetes managed service is fully integrated with the underlying Oracle Cloud Infrastructure, making it easy to provision a Kubernetes cluster and to provide the required services, such as a load balancer, volumes, and network fabric.

Prerequisites

- Docker images: WebLogic Server (weblogic-12.2.1.3:latest), WebLogic Kubernetes Operator (weblogic-operator:latest), and Traefik Load Balancer (traefik:1.4.5).
- A workstation with Docker and kubectl installed and configured.
- The Oracle Container Engine for Kubernetes on OCI. To set up a Kubernetes managed service on OCI, follow the documentation Overview of Container Engine for Kubernetes.
- OCI Container Engine for Kubernetes nodes that are accessible using ssh.
- The Oracle Cloud Infrastructure Registry, to push the WebLogic Server, Operator, and Load Balancer images.

Prepare the WebLogic Kubernetes Operator environment

To prepare the environment, we need to:

- Test accessibility and set up the RBAC policy for the OCI Container Engine for Kubernetes cluster
- Set up the NFS server
- Upload the Docker images to the OCI Registry (OCIR)
- Modify the configuration YAML files to reflect the Docker image names in the OCIR

Test accessibility and set up the RBAC policy for the OKE cluster

To check the accessibility to the OCI Container Engine for Kubernetes nodes, enter the command:

    kubectl get nodes

The output of the command will display the nodes, similar to the following:

    NAME              STATUS    ROLES     AGE       VERSION
    129.146.109.106   Ready     node      5h        v1.9.4
    129.146.22.123    Ready     node      5h        v1.9.4
    129.146.66.11     Ready     node      5h        v1.9.4

In order to have permission to access the Kubernetes cluster, you need to authorize your OCI account as a cluster-admin on the OCI Container Engine for Kubernetes cluster. This requires your OCID, which is available on the OCI console page, under your user settings. For example, if your user OCID is ocid1.user.oc1..aaaaaaaac26kw7qvuij7i6fadabklqfb7svyuhpitedmguspv6ht67i5l32q, the command would be:

    kubectl create clusterrolebinding my-cluster-admin-binding --clusterrole=cluster-admin --user=ocid1.user.oc1..aaaaaaaac26kw7qvuij7i6fadabklqfb7svyuhpitedmguspv6ht67i5l32q

Set up the NFS server

In the current GA version, the OCI Container Engine for Kubernetes supports network block storage that can be shared across nodes, but only with access permission RWOnce (meaning that only one node can write, and the others can only read). At this time, the WebLogic on Kubernetes domain created by the WebLogic Server Kubernetes Operator requires a shared file system to store the WebLogic domain configuration, which MUST be accessible from all the pods across the nodes. As a workaround, you need to install an NFS server on one node and share the file system across all the nodes.

Note: Currently, we recommend that you use NFS version 3.0 for running WebLogic Server on OCI Container Engine for Kubernetes. During certification, we found that when using NFS 4.0, the servers in the WebLogic domain went into a failed state intermittently.
Because multiple threads use NFS (default store, diagnostics store, Node Manager, logging, and domain_home), there are issues when accessing the file store; these issues are removed by changing NFS to version 3.0.

In this demo, the Kubernetes cluster is using nodes with these IP addresses:

Node1: 129.146.109.106
Node2: 129.146.22.123
Node3: 129.146.66.11

In this case, let's install the NFS server on Node1 (IP: 129.146.109.106), and use Node2 (IP: 129.146.22.123) and Node3 (IP: 129.146.66.11) as clients. Log in to each of the nodes using ssh to retrieve the private IP address, by executing the command:

ssh -i ~/.ssh/id_rsa opc@[Public IP of Node] ip addr | grep ens3

~/.ssh/id_rsa is the path to the private ssh RSA key. For example, for Node1:

ssh -i ~/.ssh/id_rsa opc@129.146.109.106 ip addr | grep ens3

Retrieve the inet value for each node. For this demo, here is the collected information:

Node1 (NFS Server): Public IP 129.146.109.106, Private IP 10.0.11.3
Node2: Public IP 129.146.22.123, Private IP 10.0.11.1
Node3: Public IP 129.146.66.11, Private IP 10.0.11.2

Log in using ssh to Node1, and install and set up NFS on Node1 (the NFS server):

sudo su -
yum install -y nfs-utils
mkdir /scratch
chown -R opc:opc /scratch

Edit the /etc/exports file to add the internal IP addresses of Node2 and Node3:

vi /etc/exports
/scratch 10.0.11.1(rw)
/scratch 10.0.11.2(rw)

systemctl restart nfs
exit

Log in using ssh to Node2:

ssh -i ~/.ssh/id_rsa opc@129.146.22.123
sudo su -
yum install -y nfs-utils
mkdir /scratch

Edit the /etc/fstab file to add the internal IP address of Node1:

vi /etc/fstab
10.0.11.3:/scratch /scratch nfs nfsvers=3 0 0

mount /scratch
exit

Repeat the same steps for Node3:

ssh -i ~/.ssh/id_rsa opc@129.146.66.11
sudo su -
yum install -y nfs-utils
mkdir /scratch

Edit the /etc/fstab file to add the internal IP address of Node1:

vi /etc/fstab
10.0.11.3:/scratch /scratch nfs nfsvers=3 0 0

mount /scratch
exit

Upload the Docker images to the OCI Registry

Build the required Docker images for WebLogic Server 12.2.1.3 and the WebLogic Kubernetes Operator. Pull the Traefik Docker image from the Docker Hub repository, for example:

docker login
docker pull traefik:1.4.5

Tag the Docker images, as follows:

docker tag [Name Of Your Image For Operator] phx.ocir.io/weblogicondocker/weblogic-kubernetes-operator:latest
docker tag [Name Of Your Image For WebLogic Domain] phx.ocir.io/weblogicondocker/weblogic:12.2.1.3
docker tag traefik:1.4.5 phx.ocir.io/weblogicondocker/traefik:1.4.5

Generate an authentication token to log in to the phx.ocir.io OCIR Docker repository:

Log in to your OCI dashboard.
Click 'User Settings', then 'Auth Tokens' on the left-side menu.
Save the generated password in a secure place.

Log in to the OCIR Docker registry by entering this command:

docker login phx.ocir.io

When prompted for your username, enter your OCI tenancy name/OCI username. For example:

docker login phx.ocir.io
Username: weblogicondocker/myusername
Password:
Login Succeeded

Create a Docker registry secret.
The secret name must consist of lower case alphanumeric characters:

kubectl create secret docker-registry <secret_name> --docker-server=<region>.ocir.io --docker-username=<oci_tenancyname>/<oci_username> --docker-password=<auth_token> --docker-email=example_email

For example, for the PHX registry, create the Docker secret ocisecret:

kubectl create secret docker-registry ocisecret --docker-server=phx.ocir.io --docker-username=weblogicondocker/myusername --docker-password=_b5HiYcRzscbC48e1AZa --docker-email=myusername@oracle.com

Push the Docker images into the OCIR:

docker push phx.ocir.io/weblogicondocker/weblogic-kubernetes-operator:latest
docker push phx.ocir.io/weblogicondocker/weblogic:12.2.1.3
docker push phx.ocir.io/weblogicondocker/traefik:1.4.5

Log in to the OCI console and verify the images:

Log in to the OCI console.
Verify that you are using the correct region, for example, us-phoenix-1.
Under Containers, select Registry. The image should be visible on the Registry page.
Click the image name, then select 'Actions' and make the image 'Public'.

Modify the configuration YAML files to reflect the Docker image names in the OCIR

Our final steps are to customize the parameters in the input files and generate the deployment YAML files for the WebLogic cluster, the WebLogic Operator, and the Traefik load balancer, so that they reflect the image changes and the local configuration. We will use the provided open source scripts: create-weblogic-operator.sh and create-weblogic-domain.sh.

Use Git to download the WebLogic Kubernetes Operator project:

git clone https://github.com/oracle/weblogic-kubernetes-operator.git

Modify the YAML inputs to reflect the image names:

cd $SRC/weblogic-kubernetes-operator/kubernetes

Change the 'image' field to the corresponding Docker repository image name in the OCIR:

./internal/create-weblogic-domain-job-template.yaml:   image: phx.ocir.io/weblogicondocker/weblogic:12.2.1.3
./internal/weblogic-domain-traefik-template.yaml:      image: phx.ocir.io/weblogicondocker/traefik:1.4.5
./internal/domain-custom-resource-template.yaml:       image: phx.ocir.io/weblogicondocker/weblogic:12.2.1.3
./create-weblogic-operator-inputs.yaml:                weblogicOperatorImage: phx.ocir.io/weblogicondocker/weblogic-kubernetes-operator:latest

Review and customize the other parameters in the create-weblogic-operator-inputs.yaml and create-weblogic-domain-inputs.yaml files. Check all the available options and descriptions in the installation instructions for the Operator and the WebLogic domain.

Here is the list of customized values in the create-weblogic-operator-inputs.yaml file for this demo:

targetNamespaces: domain1
weblogicOperatorImage: phx.ocir.io/weblogicondocker/weblogic-kubernetes-operator:latest
externalRestOption: SELF_SIGNED_CERT
externalSans: IP:129.146.109.106

Here is the list of customized values in the create-weblogic-domain-inputs.yaml file for this demo:

domainUID: domain1
t3PublicAddress: 0.0.0.0
exposeAdminNodePort: true
namespace: domain1
loadBalancer: TRAEFIK
exposeAdminT3Channel: true
weblogicDomainStoragePath: /scratch/external-domain-home/pv001

Note: Currently, we recommend that you use the Traefik and Apache HTTP Server load balancers for running WebLogic Server on the OCI Container Engine for Kubernetes. At this time we cannot certify the Voyager HAProxy Ingress Controller due to a lack of support in OKE.

The WebLogic domain will use the persistent volume mapped to the path specified by the weblogicDomainStoragePath parameter; a sketch of what such a persistent volume definition might look like follows below.
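The create-weblogic-domain.sh script generates the persistent volume and persistent volume claim definitions for you, so you normally do not write them by hand. Purely as an illustration of how weblogicDomainStoragePath maps to Kubernetes storage (the resource name and size below are assumptions, not values taken from the generated files), a hostPath persistent volume over the NFS-mounted /scratch directory could be sketched like this:

# Illustrative only: the domain-creation script normally generates an equivalent PV for you.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: domain1-weblogic-domain-pv            # hypothetical name
  labels:
    weblogic.domainUID: domain1
spec:
  capacity:
    storage: 10Gi                             # assumed size
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /scratch/external-domain-home/pv001 # matches weblogicDomainStoragePath
EOF

Because /scratch is NFS-mounted on every worker node, a hostPath volume at this location is visible to pods on all nodes, which is what the WebLogic domain home requires.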
Let's create the persistent volume directory on the NFS server, Node1, using the command:

ssh -i ~/.ssh/id_rsa opc@129.146.109.106 "mkdir -m 777 -p /scratch/external-domain-home/pv001"

Our demo domain is configured to run in the namespace domain1. To create the namespace domain1, execute this command:

kubectl create namespace domain1

The username and password credentials for access to the Administration Server must be stored in a Kubernetes secret in the same namespace that the domain will run in. The script does not create the secret, in order to avoid storing the credentials in a file. Oracle recommends that this command be executed in a secure shell and that the appropriate measures be taken to protect the security of the credentials. To create the secret, issue the following command:

kubectl -n NAMESPACE create secret generic SECRET_NAME --from-literal=username=ADMIN-USERNAME --from-literal=password=ADMIN-PASSWORD

For our demo values:

kubectl -n domain1 create secret generic domain1-weblogic-credentials --from-literal=username=weblogic --from-literal=password=welcome1

Finally, run the create script, pointing it at your inputs file and the output directory:

./create-weblogic-operator.sh -i create-weblogic-operator-inputs.yaml -o /path/to/weblogic-operator-output-directory

It will create and start all the related operator deployments. Run a kubectl command to check the operator pod status (an example is sketched after this section). Execute the same script pattern for the WebLogic domain creation:

./create-weblogic-domain.sh -i create-weblogic-domain-inputs.yaml -o /path/to/weblogic-domain-output-directory

To check the status of the WebLogic cluster, run this command:

bash-4.2$ kubectl get pods -n domain1

Let's see how the load balancer works. For that, let's access the WebLogic Server Administration Console and deploy the testwebapp.war application. In the customized inputs for the WebLogic domain, we specified that the Administration Server NodePort should be exposed; to review the port number, describe the Administration Server service (see the example commands after this section). Let's use one of the node's external IP addresses to access the Administration Console. In our demo, it is http://129.146.109.106:30701/console.

Log in to the WebLogic Server Administration Console using the credentials weblogic/welcome1. Click 'Deployments', then 'Lock & Edit', and upload the testwebapp.war application. Select cluster-1 as a target and click 'Finish', then 'Release Configuration'. Select the 'Control' tab and click 'Start serving all requests'. The status of the deployment should change to 'active'.

Let's demonstrate load balancing of HTTP requests using Traefik as an Ingress controller on the Kubernetes cluster. Check the NodePort number for the load balancer (again, see the example commands after this section). In this demo, the Traefik load balancer is running on port 30305. Every time we access the testwebapp application link, http://129.146.22.123:30305/testwebapp/, the application displays information about the Managed Server that served the request; another load of the same URL displays the information about a different Managed Server, showing that requests are balanced across the cluster.

Because the WebLogic cluster is exposed to the external world and accessible using the external IP addresses of the nodes, an authorized WebLogic user can use the T3 protocol to access all the available WebLogic resources by using WLST commands. With a firewall, you have to run T3 using tunneling with a proxy (use T3 over HTTP: turn on tunneling in the WebLogic Server and then use the "HTTP" protocol instead of "T3"). See this blog for more details. If you are outside of the corporate network, you can use T3 with no limitations.
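The resource names in the commands below are assumptions based on the default naming used by the operator samples for this demo (operator namespace weblogic-operator, domain UID domain1, Administration Server name admin-server); adjust them to match your own inputs files. With those assumptions, the status and port checks mentioned above might look like this:

# Check the operator pod status
kubectl get pods -n weblogic-operator

# Review the NodePort exposed for the Administration Server (service name assumed)
kubectl -n domain1 describe service domain1-admin-server | grep NodePort

# Find the NodePort used by the Traefik load balancer (service name assumed)
kubectl get services -n domain1 | grep -i traefik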
Summary

In this blog, we demonstrated all the steps required to set up a WebLogic cluster on the OCI Container Engine for Kubernetes, which runs on the Oracle Cloud Infrastructure, and to load balance requests to a web application deployed on that WebLogic cluster. Running WebLogic Server on the OCI Container Engine for Kubernetes enables users to leverage WebLogic Server applications in a managed Kubernetes environment, to integrate WebLogic Server applications with other cloud applications, and to evolve their usage of WebLogic Server and expand their usage of Kubernetes. We are also publishing a series of blog entries that describe in detail how to run the operator, how to stand up one or more WebLogic domains in Kubernetes, how to scale a WebLogic cluster up or down manually or automatically using the WebLogic Diagnostics Framework (WLDF) or Prometheus, how the operator manages load balancing for web applications deployed in WebLogic clusters, and how to provide integration for managing operator logs through Elasticsearch, Logstash, and Kibana.


Announcement

Announcing General Availability version of the WebLogic Server Kubernetes Operator

We are very pleased to announce the release of our General Availability (GA) version of the WebLogic Server Kubernetes Operator.  The Operator, first released in February as a Technology Preview version, simplifies the creation and management of WebLogic Server 12.2.1.3 domains on Kubernetes.  The GA operator supports additional WebLogic features, and is certified and supported for use in development and production.  Certification includes support for the Operator and WebLogic Server configurations running on the Oracle Cloud Infrastructure (OCI), on Kubernetes clusters created using the Terraform Kubernetes Installer for OCI, and using the Oracle Cloud Infrastructure Registry (OCIR) for storing Operator and WebLogic Server domain images. For additional information about WebLogic on Kubernetes  certification and WebLogic Server Kubernetes Operator, see Support Doc ID 2349228.1, and reference the announcement blog, WebLogic on Kubernetes Certification. We have developed the Operator to integrate WebLogic Server and Kubernetes, allowing Kubernetes to serve as a container infrastructure hosting WebLogic Server instances. The WebLogic Server Kubernetes Operator extends Kubernetes to create, configure, and manage a WebLogic domain. Read our prior announcement blog, Announcing WebLogic Server Kubernetes Operator, and find the WebLogic Server Kubernetes Operator GitHub project at https://github.com/oracle/weblogic-kubernetes-operator.   Running WebLogic Server on Kubernetes enables users to leverage WebLogic Server applications in Kubernetes environments, to integrate WebLogic Server applications with other cloud applications, and to evolve their usage of WebLogic Server and expand their usage of Kubernetes. The WebLogic Server Kubernetes Operator allows users to: Simplify WebLogic management in Kubernetes Ensure Kubernetes resources are allocated for WebLogic domains Manage the overall environment, including load balancers, Ingress controllers, network fabric, and security, through Kubernetes APIs Simplify and automate patching and scaling operations Ensure that WebLogic best practices are followed Run WebLogic domains well and securely In this version of the WebLogic Server Kubernetes Operator and the WebLogic Server Kubernetes certification, we have added the following functionality and support: Support for Kubernetes versions 1.7.5, 1.8.0, 1.9.0, 1.10.0 In our Operator GitHub project, we provide instructions for how to build, test, and publish the Docker image for the Operator directly from Oracle Container Pipelines using the wercker.yml . Support for dynamic clusters, and auto-scaling of a WebLogic Server cluster with dynamic clusters. Please read the blog for details WebLogic Dynamic Cluster on Kubernetes. Support for the Apache HTTP Server and Voyager (HAProxy-backed) Ingress controller running within the Kubernetes cluster for load balancing HTTP requests across WebLogic Server Managed Servers running in clustered configurations. Integration with the Operator automates the configuration of these load balancers.  Find documentation for the Apache HTTP Server and Voyager Ingress Controller. Support for Persistent Volumes (PV) in NFS storage for multi-node environments. In our project, we provide a cheat sheet to configure the NFS volume on OCI, and some important notes about NFS volumes and the WebLogic Server domain in Kubernetes. The  Delete WebLogic domain resources script, which permanently removes the Kubernetes resources for a domain or domains, from a Kubernetes cluster. 
Please see "Removing a domain" in the README of the Operator project.
Improved Prometheus support. See Using Prometheus to Automatically Scale WebLogic Clusters on Kubernetes.
Integration tests posted on our WebLogic Server Kubernetes Operator GitHub project.

Our future plans include certification of WebLogic Server on Kubernetes running on the OCI Container Engine for Kubernetes, providing an easy way to reprovision and redeploy existing WebLogic Server domains in Kubernetes using the WebLogic Deploy Tooling, adding CI/CD of WebLogic deployments on Kubernetes with Oracle Container Pipelines, and new features and enhancements over time. Please stay tuned for more information. We hope this announcement is helpful to those of you seeking to deploy WebLogic Server on Kubernetes, and we look forward to your feedback.


Technical

WebLogic Dynamic Clusters on Kubernetes

Overview

A WebLogic Server cluster consists of multiple Managed Server instances running simultaneously and working together to provide increased scalability and reliability. WebLogic Server supports two types of clustering configurations: configured and dynamic clustering. Configured clusters are created by manually configuring each individual Managed Server instance. In dynamic clusters, the Managed Server configurations are generated from a single, shared template. Using a template greatly simplifies the configuration of clustered Managed Servers and allows servers to be dynamically assigned to Machine resources, thereby providing greater utilization of resources with minimal configuration. With dynamic clusters, when additional server capacity is needed, new server instances can be added to the cluster without having to configure them individually. Also, unlike configured clusters, scaling up of dynamic clusters is not restricted to the set of servers defined for the cluster, but can be increased based on runtime demands. For more information on how to create, configure, and use dynamic clusters in WebLogic Server, see Dynamic Clusters.

Support for Dynamic Clusters by the Oracle WebLogic Server Kubernetes Operator

Previously, the WebLogic Server Kubernetes Operator supported configured clusters only. That is, the operator could only manage and scale Managed Servers defined for a configured cluster. Now, this limitation has been removed. By supporting dynamic clusters, the operator can easily scale the number of Managed Server instances based on a server template, instead of requiring that you first manually configure them.

Creating a Dynamic Cluster in a WebLogic Domain in Kubernetes

The WebLogic Server team has been actively working to integrate WebLogic Server with Kubernetes (see WebLogic Server Certification on Kubernetes). The Oracle WebLogic Server Kubernetes Operator provides a mechanism for creating and managing any number of WebLogic domains, automates domain startup, allows scaling of WebLogic clusters, manages load balancing for web applications deployed in WebLogic clusters, and provides integration with Elasticsearch, Logstash, and Kibana. The operator is currently available as an open source project at https://oracle.github.io/weblogic-kubernetes-operator. To create a WebLogic domain, the recommended approach is to use the provided create-weblogic-domain.sh script, which automates the creation of a WebLogic domain within a Kubernetes cluster. The create-weblogic-domain.sh script takes an input file, create-weblogic-domain-inputs.yaml, which specifies the configuration properties for the WebLogic domain. The following parameters of the input file are used when creating a dynamic cluster:

clusterName: The name of the WebLogic cluster instance to generate for the domain. (Default: cluster-1)
clusterType: The type of WebLogic cluster. Legal values are "CONFIGURED" or "DYNAMIC". (Default: CONFIGURED)
configuredManagedServerCount: The number of Managed Server instances to generate for the domain. (Default: 2)
initialManagedServerReplicas: The number of Managed Servers to start initially for the domain. (Default: 2)
managedServerNameBase: Base string used to generate Managed Server names. Used as the server name prefix in a server template for dynamic clusters. (Default: managed-server)

The following example configuration will create a dynamic cluster named 'cluster-1' with four defined Managed Servers (managed-server1 ... managed-server4), of which the operator will initially start two Managed Server instances, managed-server1 and managed-server2:

# Type of WebLogic Cluster
# Legal values are "CONFIGURED" or "DYNAMIC"
clusterType: DYNAMIC

# Cluster name
clusterName: cluster-1

# Number of Managed Servers to generate for the domain
configuredManagedServerCount: 4

# Number of Managed Servers to initially start for the domain
initialManagedServerReplicas: 2

# Base string used to generate Managed Server names
managedServerNameBase: managed-server

To create the WebLogic domain, you simply run the create-weblogic-domain.sh script, specifying your input file and an output directory for any generated configuration files:

#> create-weblogic-domain.sh -i create-weblogic-domain-inputs.yaml -o /path/to/weblogic-domain-output-directory

There are some limitations when creating WebLogic clusters using the create domain script:

The script creates the specified number of Managed Server instances and places them all in one cluster.
The script always creates one cluster.

Alternatively, you can create a WebLogic domain manually, as outlined in Manually Creating a WebLogic Domain.

How the WebLogic Kubernetes Operator Manages a Dynamic Cluster

A Kubernetes Operator is "an application-specific controller that extends the Kubernetes API to create, configure, and manage instances of complex applications." For more information on operators, see Introducing Operators: Putting Operational Knowledge into Software. The Oracle WebLogic Server Kubernetes Operator extends Kubernetes to create, configure, and manage any number of WebLogic domains running in a Kubernetes environment. It provides a mechanism to create domains, automate domain startup, and allow scaling of both configured and dynamic WebLogic clusters. For more details about the WebLogic Server Kubernetes Operator, see the blog, Announcing the Oracle WebLogic Server Kubernetes Operator.

Because the WebLogic Kubernetes Operator manages the life cycle of Managed Servers in a Kubernetes cluster, it provides the ability to start up and scale (up or down) WebLogic dynamic clusters. The operator manages the startup of a WebLogic domain based on the settings defined in a Custom Resource Domain (CRD). The number of WLS pods/Managed Server instances running in a Kubernetes cluster for a dynamic cluster is represented by the 'replicas' attribute value of the clusterStartup entry in the domain custom resource YAML file:

clusterStartup:
  - desiredState: "RUNNING"
    clusterName: "cluster-1"
    replicas: 2
    env:
    - name: JAVA_OPTIONS
      value: "-Dweblogic.StdoutDebugEnabled=false"
    - name: USER_MEM_ARGS
      value: "-Xms64m -Xmx256m"

For the above example entry, during WebLogic domain startup the operator would start two pod/Managed Server instances for the dynamic cluster 'cluster-1'. Details of a domain custom resource YAML file can be found in Starting a WebLogic Domain.

Scaling of WebLogic Dynamic Clusters on Kubernetes

There are several ways to initiate scaling through the operator, including:

On-demand, updating the Custom Resource Domain specification directly (using kubectl).
Calling the operator's REST scale API, for example, from curl.
Using a WLDF policy rule and script action to call the operator's REST scale API.
Using a Prometheus alert action to call the operator's REST scale API.

On-Demand, Updating the Custom Resource Domain Directly

Scaling a dynamic cluster can be achieved by editing the Custom Resource Domain directly, using the 'kubectl edit' command and modifying the 'replicas' attribute value:

#> kubectl edit domain domain1 -n [namespace]

This command will open an editor that allows you to edit the defined Custom Resource Domain specification. Once the change is committed, the operator is notified of the change and will immediately attempt to scale the corresponding dynamic cluster by reconciling the number of running pods/Managed Server instances with the 'replicas' value in the specification.

Calling the Operator's REST Scale API

Alternatively, the WebLogic Server Kubernetes Operator exposes a REST endpoint, with the following URL format, that allows an authorized actor to request scaling of a WebLogic cluster:

http(s)://${OPERATOR_ENDPOINT}/operator/<version>/domains/<domainUID>/clusters/<clusterName>/scale

<version> denotes the version of the REST resource.
<domainUID> is the unique ID that is used to identify this particular domain. This ID must be unique across all domains in a Kubernetes cluster.
<clusterName> is the name of the WebLogic cluster instance to be scaled.

For example:

http(s)://${OPERATOR_ENDPOINT}/operator/v1/domains/domain1/clusters/cluster-1/scale

The /scale REST endpoint:

Accepts an HTTP POST request.
Supports the JSON "application/json" media type for the request body.
Expects a request body containing a single name-value item named managedServerCount:

{
  "managedServerCount": 3
}

The managedServerCount value designates the number of WebLogic Server instances to scale to. Note: An example use of the REST API, using the curl command, can be found in scalingAction.sh.

Using a WLDF Policy Rule and Script Action to Call the Operator's REST Scale API

A WebLogic Server dynamic cluster can be automatically scaled by increasing (or decreasing) the number of pods based on resource metrics provided by the WebLogic Diagnostic Framework (WLDF). WLDF is a suite of services and APIs that collect and surface metrics that provide visibility into server and application performance. WLDF provides a Policies and Actions component to support the automatic scaling of dynamic clusters. There are two types of scaling supported by WLDF:

Calendar-based scaling: scaling operations on a dynamic cluster that are executed on a particular date and time.
Policy-based scaling: scaling operations on a dynamic cluster that are executed in response to changes in demand.

In this blog, we will focus on policy-based scaling, which lets you write policy expressions for automatically executing configured actions when the policy expression rule is satisfied. These policies monitor one or more types of WebLogic Server metrics, such as memory, idle threads, and CPU load. When the configured threshold in a policy is met, the policy is triggered, and the corresponding scaling action is executed. Before looking at the WLDF policy example, a rough sketch of what such a REST scaling request can look like with curl is shown below.
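This sketch assumes the request is issued from a pod inside the Kubernetes cluster, that the operator's internal REST service and port keep the sample defaults (internal-weblogic-operator-service in the weblogic-operator namespace, port 8082), and that a service account token is available at the usual mount path; none of these values come from this blog, so treat them as placeholders and consult scalingAction.sh for the reference implementation:

# Hypothetical endpoint and credentials; adjust to your environment
OPERATOR_ENDPOINT=internal-weblogic-operator-service.weblogic-operator.svc.cluster.local:8082
ACCESS_TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)

# Ask the operator to scale cluster-1 of domain1 to 3 Managed Servers
curl -k -X POST \
  -H "Authorization: Bearer $ACCESS_TOKEN" \
  -H "X-Requested-By: MyClient" \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  -d '{"managedServerCount": 3}' \
  "https://$OPERATOR_ENDPOINT/operator/v1/domains/domain1/clusters/cluster-1/scale"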
Example Policy Expression Rule

The following is an example policy expression rule that was used in Automatic Scaling of WebLogic Clusters on Kubernetes:

wls:ClusterGenericMetricRule("cluster-1","com.bea:Type=WebAppComponentRuntime,ApplicationRuntime=OpenSessionApp,*","OpenSessionsCurrentCount",">=",0.01,5,"1 seconds","10 seconds")

This 'ClusterGenericMetricRule' smart rule is used to observe trends in JMX metrics that are published through the Server Runtime MBean Server and can be read as follows: for the cluster 'cluster-1', WLDF will monitor the OpenSessionsCurrentCount attribute of the WebAppComponentRuntime MBean for the OpenSessionApp application. If the OpenSessionsCurrentCount is greater than or equal to 0.01 for 5% of the servers in the cluster, then the policy evaluates to true. Metrics are collected at a sampling rate of 1 second, and the sample data is averaged out over the specified 10-second retention window.

You can use any of the following tools to configure policies for diagnostic system modules:

WebLogic Server Administration Console
WLST
REST
JMX application

Below is an example configuration of a policy, named 'myScaleUpPolicy', shown as it would appear in the WebLogic Server Administration Console:

Example Action

An action is an operation that is executed when a policy expression rule evaluates to true. WLDF supports the following types of diagnostic actions:

Java Management Extensions (JMX)
Java Message Service (JMS)
Simple Network Management Protocol (SNMP)
Simple Mail Transfer Protocol (SMTP)
Diagnostic image capture
Elasticity framework
REST
WebLogic logging system
Script

The WebLogic Server team has an example shell script, scalingAction.sh, for use as a Script Action, which illustrates how to issue a request to the operator's REST endpoint. Below is an example screenshot of the Script Action configuration page from the WebLogic Server Administration Console:

Important notes about the configuration properties for the Script Action:

The Working Directory and Path to Script configuration entries specify the volume mount path (/shared) used to access the WebLogic domain home.
The scalingAction.sh script requires access to the SSL certificate of the operator's endpoint; this is provided through the environment variable 'INTERNAL_OPERATOR_CERT'. The operator's SSL certificate can be found in the 'internalOperatorCert' entry of the operator's ConfigMap weblogic-operator-cm. For example:

#> kubectl describe configmap weblogic-operator-cm -n weblogic-operator

Name:         weblogic-operator-cm
Namespace:    weblogic-operator
Labels:       weblogic.operatorName=weblogic-operator
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","data":{"externalOperatorCert":"","internalOperatorCert":"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0t...

Data
====
internalOperatorCert:
----
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR3akNDQXFxZ0F3SUJBZ0lFRzhYT1N6QU...
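If you prefer not to copy the certificate out of the describe output by hand, a jsonpath query can print just that entry (the ConfigMap name and namespace here are the ones shown above):

# Print only the base64-encoded internal operator certificate
kubectl get configmap weblogic-operator-cm -n weblogic-operator \
  -o jsonpath='{.data.internalOperatorCert}'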
The scalingAction.sh script accepts a number of customizable parameters: •       action - scaleUp or scaleDown (Required) •       domain_uid - WebLogic domain unique identifier (Required) •       cluster_name - WebLogic cluster name (Required) •       kubernetes_master - Kubernetes master URL, default=https://kubernetes •       access_token - Service Account Bearer token for authentication and authorization for access to REST Resources •       wls_domain_namespace - Kubernetes namespace in which the WebLogic domain is defined, default=default •       operator_service_name - WebLogic Operator Service name of the REST endpoint, default=internal-weblogic-operator-service •       operator_service_account - Kubernetes Service Account name for the WebLogic Operator, default=weblogic-operator •       operator_namespace – Namespace in which the WebLogic Operator is deployed, default=weblogic-operator •       scaling_size – Incremental number of WebLogic Server instances by which to scale up or down, default=1   For more information about WLDF and diagnostic policies and actions, see Configuring Policies and Actions in Configuring and Using the Diagnostics Framework for Oracle WebLogic Server. Note: A more detailed description of automatic scaling using WLDF can be found in WebLogic on Kubernetes, Try It! and Automatic Scaling of WebLogic Clusters on Kubernetes.   There are a few key differences between the automatic scaling of WebLogic clusters described in this blog and my previous blog, Automatic Scaling of WebLogic Clusters on Kubernetes:   In the previous blog, as in the earlier release, only scaling of configured clusters was supported. In this blog: To scale the dynamic cluster, we use the WebLogic Server Kubernetes Operator instead of using a Webhook. To scale the dynamic cluster, we use a Script Action, instead of a REST action. To scale pods, scaling actions invoke requests to the operator’s REST endpoint, instead of the Kubernetes API server. Using a Prometheus Alert Action to Call the Operator's REST Scale API   In addition to using the WebLogic Diagnostic Framework, for automatic scaling of a dynamic cluster, you can use a third party monitoring application like Prometheus.  Please read the following blog for details about Using Prometheus to Automatically Scale WebLogic Clusters on Kubernetes.   What Does the Operator Do in Response to a REST Scaling Request?   When the WebLogic Server Kubernetes Operator receives a scaling request through its scale REST endpoint, it performs the following actions: Performs an authentication and authorization check to verify that the specified user is allowed to perform the specified operation on the specified resource. Validates that the specified domain, identified by the domainUID, exists. The domainUID is the unique ID that will be used to identify this particular domain. This ID must be unique across all domains in a Kubernetes cluster. Validates that the WebLogic cluster, identified by the clusterName, exists. The clusterName is the name of the WebLogic cluster instance to be scaled. Verifies that the scaling request’s ‘managedServerCount’ value does not exceed the configured maximum cluster size for the specified WebLogic cluster.  For dynamic clusters, ‘MaxDynamicClusterSize’ is a WebLogic attribute that specifies the maximum number of running Managed Server instances allowed for scale up operations.  See Configuring Dynamic Clusters for more information on attributes used to configure dynamic clusters. 
Initiates scaling by setting the ‘Replicas’ property within the corresponding domain custom resource, which can be done in either:   A clusterStartup entry, if defined for the specified WebLogic cluster. For example:   Spec:   …   Cluster Startup:     Cluster Name:   cluster-1     Desired State:  RUNNING     Env:       Name:     JAVA_OPTIONS       Value:    -Dweblogic.StdoutDebugEnabled=false       Name:     USER_MEM_ARGS       Value:    -Xms64m -Xmx256m     Replicas:   2    …   At the domain level, if a clusterStartup entry is not defined for the specified WebLogic cluster and the startupControl property is set to AUTO For example:     Spec:     Domain Name:  base_domain     Domain UID:   domain1     Export T 3 Channels:     Image:              store/oracle/weblogic:12.2.1.3     Image Pull Policy:  IfNotPresent     Replicas:           2     Server Startup:       Desired State:  RUNNING       Env:         Name:         JAVA_OPTIONS         Value:        -Dweblogic.StdoutDebugEnabled=false         Name:         USER_MEM_ARGS         Value:        -Xms64m -Xmx256m       Server Name:    admin-server     Startup Control:  AUTO   Note: You can view the full WebLogic Kubernetes domain resource with the following command: #> kubectl describe domain <domain resource name> In response to a change to the ‘Replicas’ property in the Custom Resource Domain, the operator will increase or decrease the number of pods (Managed Servers) to match the desired replica count. Wrap Up The WebLogic Server team has developed an Oracle WebLogic Server Kubernetes Operator, based on the Kubernetes Operator pattern, for integrating WebLogic Server in a Kubernetes environment.  The operator is used to manage the life cycle of a WebLogic domain and, more specifically, to scale a dynamic cluster.  Scaling a WebLogic dynamic cluster can be done, either on-demand or automatically, using either the WebLogic Diagnostic Framework or third party monitoring applications, such as Prometheus.  In summary, the advantages of using WebLogic dynamic clusters over configured clusters in a Kubernetes cluster are:   Managed Server configuration is based on a single server template. When additional server capacity is needed, new server instances can be added to the cluster without having to manually configure them individually. Unlike configured clusters, scaling up of dynamic clusters is not restricted to the set of servers defined in the cluster but can be increased based on runtime demands. I hope you’ll take the time to download and take the Oracle WebLogic Server Kubernetes Operator for a spin and experiment with the automatic scaling feature for dynamic clusters. Stay tuned for more blogs on future features that are being added to enhance the Oracle WebLogic Server Kubernetes Operator.


Using Prometheus to Automatically Scale WebLogic Clusters on Kubernetes

Elasticity (scaling up or scaling down) of a WebLogic Server cluster provides the benefits of being able to manage resources based on demand and enhances the reliability of customer applications while managing resource costs. There are different ways to trigger the automatic scaling of WebLogic Server clusters in a Kubernetes environment. The architecture of the WebLogic Server Elasticity component as well as a detailed explanation of how to scale up a WebLogic cluster using a WebLogic Diagnostic Framework (WLDF) policy can be found in the Automatic Scaling of WebLogic Clusters on Kubernetes blog. In this demo, we demonstrate another way to automatically scale a WebLogic cluster on Kubernetes, by using Prometheus. Since Prometheus has access to all available WebLogic metrics data, user has flexibility to use any of it to specify the rules for scaling. Based on collected metrics data and configured alert rule conditions, Prometheus’s Alert Manager will send an alert to trigger the desired scaling action and change the number of running Managed Servers in the WebLogic Server cluster. We use the WebLogic Monitoring Exporter to scrape runtime metrics for specific WebLogic Server instances and feed them to Prometheus. We also implement a custom notification integration using the webhook receiver, a user-defined REST service that is triggered when a scaling alert event occurs. After the alert rule matches the specified conditions, the Prometheus Alert Manager sends an HTTP request to the URL specified as a webhook to request the scaling action. For more information about the webhook used in the sample demo, see adnanh/webhook/. In this blog, you will learn how to configure Prometheus, Prometheus Alert Manager, and a webhook to perform automatic scaling of WebLogic Server instances running in Kubernetes clusters. This picture shows all the components running in the pods in the Kubernetes environment: The WebLogic domain, running in a Kubernetes cluster, consists of: An Administration Server (AS) instance, running in a Docker container, in its own pod (POD 1). A WebLogic Server cluster, composed of a set of Managed Server instances, in which each instance is running in a Docker container in its own pod (POD 2 to POD 5). The WebLogic Monitoring Exporter web application, deployed on a WebLogic Server cluster. Additional components, running in a Docker container, in their own pod are: Prometheus Prometheus Alert Manager WebLogic Kubernetes Operator Webhook server   Installation and Deployment of the Components in the Kubernetes Cluster Follow the installation instructions to create the WebLogic Kubernetes Operator and domain deployments. In this blog, we will be using the following parameters to create the WebLogic Kubernetes Operator and WebLogic domain: 1. Deploy the WebLogic Kubernetes Operator (create-weblogic-operator.sh) In create-operator-inputs.yaml: serviceAccount: weblogic-operator targetNamespaces: domain1 namespace: weblogic-operator weblogicOperatorImage: container-registry.oracle.com/middleware/weblogic-kubernetes-operator:latest weblogicOperatorImagePullPolicy: IfNotPresent externalRestOption: SELF_SIGNED_CERT externalRestHttpsPort: 31001 externalSans: DNS:slc13kef externalOperatorCert: externalOperatorKey: remoteDebugNodePortEnabled: false internalDebugHttpPort: 30999 externalDebugHttpPort: 30999 javaLoggingLevel: INFO 2. 
Create and start a domain (create-domain-job.sh)

In create-domain-job-inputs.yaml:

domainUid: domain1
managedServerCount: 4
managedServerStartCount: 2
namespace: weblogic-domain
adminPort: 7001
adminServerName: adminserver
startupControl: AUTO
managedServerNameBase: managed-server
managedServerPort: 8001
weblogicDomainStorageType: HOST_PATH
weblogicDomainStoragePath: /scratch/external-domain-home/pv001
weblogicDomainStorageReclaimPolicy: Retain
weblogicDomainStorageSize: 10Gi
productionModeEnabled: true
weblogicCredentialsSecretName: domain1-weblogic-credentials
exposeAdminT3Channel: true
adminNodePort: 30701
exposeAdminNodePort: true
loadBalancer: TRAEFIK
loadBalancerWebPort: 30305
loadBalancerDashboardPort: 30315

3. Run this command to identify the admin NodePort used to access the console:

kubectl -n weblogic-domain describe service domain1-adminserver

weblogic-domain is the namespace where the WebLogic domain pods are deployed. Previous blog entries have described how to start and run WebLogic Server instances in Kubernetes with the WebLogic Monitoring Exporter deployed on Managed Servers running in the cluster. Access the WebLogic Server Administration Console at this URL, http://[hostname]:30701/console, using the WebLogic credentials, weblogic/welcome1. In our example, we set up an alert rule based on the number of open sessions for the testwebapp.war web application. Deploy the testwebapp.war application and the WebLogic Monitoring Exporter, wls-exporter.war, to DockerCluster. Review the DockerCluster NodePort for external access:

kubectl -n weblogic-domain describe service domain1-dockercluster-traefik

To make sure that the WebLogic Monitoring Exporter is deployed and running, access the application with a URL like the following:

http://[hostname]:30305/wls-exporter/metrics

You will be prompted for the WebLogic user credentials that are required to access the metrics data, weblogic/welcome1. The metrics page will show the metrics configured for the WebLogic Monitoring Exporter. Make sure that the alert rule you want to set up in the Prometheus Alert Manager matches the metrics configured for the WebLogic Monitoring Exporter. Here is an example of the alert rule condition we used:

sum(webapp_config_open_sessions_current_count{webapp="testwebapp"}) > 15

The metric used, 'webapp_config_open_sessions_current_count', should be listed on the metrics web page.

Setting Up the Webhook for the Alert Manager

We used this webhook application in our example. To build the Docker image, create this directory structure:

apps
scripts
webhooks

1. Copy the webhook application executable file to the 'apps' directory, and copy the scalingAction.sh script to the 'scripts' directory. Create a scaleUpAction.sh file in the 'scripts' directory and edit it with the code listed below:

#!/bin/bash
echo scale up action >> scaleup.log
MASTER=https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT
echo Kubernetes master is $MASTER
source /var/scripts/scalingAction.sh --action=scaleUp --domain_uid=domain1 --cluster_name=DockerCluster --kubernetes_master=$MASTER --wls_domain_namespace=domain1

2. Create a Docker file for the webhook, Dockerfile.webhook, as suggested:

FROM store/oracle/serverjre:8
COPY apps/webhook /bin/webhook
COPY webhooks/hooks.json /etc/webhook/
COPY scripts/scaleUpAction.sh /var/scripts/
COPY scripts/scalingAction.sh /var/scripts/
CMD ["-verbose", "-hooks=/etc/webhook/hooks.json", "-hotreload"]
ENTRYPOINT ["/bin/webhook"]

3. Create the hooks.json file in the 'webhooks' directory, for example:

[
  {
    "id": "scaleup",
    "execute-command": "/var/scripts/scaleUpAction.sh",
    "command-working-directory": "/var/scripts",
    "response-message": "scale-up call ok\n"
  }
]

4. Build the 'webhook' Docker image:

docker rmi webhook:latest
docker build -t webhook:latest -f Dockerfile.webhook .

Deploying Prometheus, the Alert Manager, and the Webhook

We will run the Prometheus, Alert Manager, and webhook pods under the namespace 'monitoring'. Execute the following command to create the 'monitoring' namespace:

kubectl create namespace monitoring

To deploy a Prometheus instance in Kubernetes, create a Prometheus configuration file, prometheus-kubernetes.yml. A sample file is provided here. This example Prometheus configuration file specifies:

weblogic/welcome1 as the user credentials
Five seconds as the interval between updates of WebLogic Server metrics
32000 as the external port used to access the Prometheus dashboard
The scaling rule:

ALERT scaleup
  IF sum(webapp_config_open_sessions_current_count{webapp="testwebapp"}) > 15
  ANNOTATIONS {
    summary = "Scale up when current sessions is greater than 15",
    description = "Firing when total sessions active greater than 15"
  }

The Alert Manager configured to listen on port 9093

As required, you can change these values to reflect your specific environment and configuration. You can also change the alert rule by constructing Prometheus-defined queries matching your elasticity needs. To generate alerts, we need to deploy the Prometheus Alert Manager as a separate pod, running in a Docker container. In our provided sample Prometheus Alert Manager configuration file, we use the webhook. Update the 'INTERNAL_OPERATOR_CERT' property in the webhook-deployment.yaml file with the value of the 'internalOperatorCert' property from the generated weblogic-operator.yaml file used for the WebLogic Kubernetes Operator deployment, for example:

Start the webhook, Prometheus, and the Alert Manager to monitor the Managed Server instances:

kubectl apply -f alertmanager-deployment.yaml
kubectl apply -f prometheus-deployment.yaml
kubectl apply -f webhook-deployment.yaml

Verify that all the pods are started. Check that Prometheus is monitoring all Managed Server instances by browsing to http://[hostname]:32000. Examine the 'Insert metric at cursor' pull-down menu; it should list the metric names based on the current configuration of the WebLogic Monitoring Exporter web application. You can check the Prometheus alert settings by accessing this URL, http://[hostname]:32000/alerts. It should show the configured rule, listed in the prometheus-deployment.yaml configuration file.

Auto Scaling of WebLogic Clusters in K8s

In this demo, we configured the WebLogic Server cluster to have two running Managed Server instances, with the total number of Managed Servers equal to four. You can modify the values of these parameters, configuredManagedServerCount and initialManagedServerReplicas, in the create-domain-job-inputs.yaml file, to reflect your desired number of Managed Servers running in the cluster and the maximum limit of allowed replicas. Per our sample file configuration, initially we have only two Managed Server pods started. Let's check all the running pods now. Per our configuration in the alert rule, the scale up will happen when the number of open sessions for the 'testwebapp' application on the cluster is more than 15.

Let's invoke the application URL 17 times using curl.sh:

#!/bin/bash
COUNTER=0
MAXCURL=17
while [ $COUNTER -lt $MAXCURL ]; do
  OUTPUT="$(curl http://$1:30305/testwebapp/)"
  if [ "$OUTPUT" != "404 page not found" ]; then
    echo $OUTPUT
    let COUNTER=COUNTER+1
    sleep 1
  fi
done

Issue the command:

. ./curl.sh [hostname]

When the sum of open sessions for the "testwebapp" application becomes more than 15, Prometheus will fire an alert via the Alert Manager. We can check the current alert status by accessing this URL, http://[hostname]:32000/alerts. To verify that the Alert Manager sent the HTTP POST request to the webhook, check the webhook pod log. When the hook endpoint is invoked, the command specified by the "execute-command" property is executed, which in this case is the shell script /var/scripts/scaleUpAction.sh. The scaleUpAction.sh script passes the parameters to the scalingAction.sh script, provided by the WebLogic Kubernetes Operator. The scalingAction.sh script issues a request to the operator's REST URL for scaling. To verify the scale up operation, let's check the number of running Managed Server pods. It should be increased to a total of three running pods.

Summary

In this blog, we demonstrated how to use the Prometheus integration with WebLogic Server to trigger the automatic scaling of WebLogic Server clusters in a Kubernetes environment. The WebLogic Server cluster can be automatically scaled by increasing (or decreasing) the number of pods based on a very comprehensive set of WebLogic domain-specific (custom) metrics monitored and analyzed by Prometheus. Our sample demonstrates that, in addition to being a great monitoring tool, Prometheus can easily be configured to drive WebLogic Server cluster scaling decisions.


Add category

Ensuring high level of performance with WebLogic JDBC

Written by Joseph Weinstein.   In this post, you will find some common best practices aimed at ensuring high levels of performance with WebLogic JDBC.   Use WebLogic DataSources for connection pooling of JDBC connections Making a real DBMS connection is expensive and slow, so you should use our datasources to retain and re-use connections. The ideal mode for using pooled connections is to use them as quickly and briefly as possible, getting them just when needed, and closing them (returning them to the pool) as soon as possible. This maximizes concurrency.  (It is crucial that the connection is a method-level object, not shared between application threads, and that it *is* closed, no matter what exit path the application code takes, else the pool could be leaked dry, all connections taken out and abandoned. See a best-practices code example farther down in this article. The long-term hardening and integration of WebLogic Datasources with applications and other WebLogic APIs make them much the preferred choice to UCP or third-party options.)   Use Oracle JDBC thin driver (Type 4) rather than OCI driver (Type 2) The Oracle JDBC thin driver is lightweight (easy to install and administrate), platform-independent (entirely written in Java), and provides slightly higher performance than the JDBC OCI (Oracle Call Interface) driver.  The thin driver does not require any additional software on the client side.  Oracle JDBC FAQ stipulates that the performance benefit with the thin driver is not consistent and that the OCI driver can even deliver better performances in some scenarios. Using OCI in WebLogic carries the danger that any bug in the native library can take down an entire WebLogic server.WebLogic officially no longer supports using the driver in the OCI mode.   Use PreparedStatements objects rather than plain Statements  With PreparedStatements, the compiled SQL query plans will be kept in the DBMS cache, only parsed once and re-used thereafter.   Use/configure the WebLogic Datasource’s statement cache size wisely. The datasource can actually cache and allow you to transparently re-use a Prepared/CallableStatement made from a given pooled connection. The pool’s statement cache size (default=10) determines how many. This may take some memory but is usually worth the performance gain. Note well though that the cache size is purged in the least recently used policy so if your app(s) that use a datasource typically make 30 distinct prepared statements, each next request would put the new one in the cache and kick out one used 10 statements ago, and this would thrash the cache, with no statement ever surviving long enough to be re-used. The console makes several statement cache statistics available to allow you to size the cache to service all your statements, but if memory becomes a huge issue, it may be better to set the cache size to zero. When using the Oracle JDBC driver, also consider using its statement caching as a lower-level alternative to WebLogic caching. There have been times when the driver uses significant memory per open statement, such as if cached by WebLogic, but if cached at the driver level instead, the driver knows it can share and minimize this memory. To use driver-level statement caching instead, make sure the WebLogic statement cache size is zero, and add these properties to the list of driver properties for the datasource: implicitCachingEnabled=true     and    maxStatements=XXX  where XXX is ideally a number of statements enough to cover all your common calls. 
Similarly to the WebLogic cache size, a too-small number might be useless or worse. Observe your memory usage after the server has run under full load for a while.

Close all JDBC resources ASAP, inline, and for safety, verify so in a finally block
This includes Lob, ResultSet, Statement, and Connection objects, to maximize available memory and avoid certain DBMS-side resource issues. By spec, Connection.close() should close all sub-objects obtained from it, and the WebLogic version of close() intends to do that while putting the actual connection back into the pool, but some objects may have different implementations in different drivers that won't allow WebLogic to release everything. JDBC objects like Lobs that are not properly closed can lead to this error: java.sql.SQLException: ORA-01000: maximum open cursors exceeded. If you don't explicitly close Statements and ResultSets right away, cursors may accumulate and exceed the maximum number allowed in your DB before the Connection is closed.

Here is a code example for WebLogic JDBC best practices:

public void myTopLevelJDBCMethod() {
    Connection c = null; // a method-level object, not accessible or kept where other threads can use it

    // ... do all pre-JDBC stuff ...

    // The try block, in which all JDBC for this method (and sub-methods) will be done
    try {
        // Get the connection directly, fresh from a WLS datasource
        c = myDatasource.getConnection();

        // ... do all your JDBC ... You can pass the connection to sub-methods, but they should not keep it,
        // or expect it or any of the objects gotten from it to be open/viable after the end of the method ...
        doMyJDBCSubTaskWith(c);

        c.close(); // close the connection as soon as all JDBC is done
        c = null;  // so the finally block knows it's been closed if it was ever obtained

        // ... do whatever else may remain that doesn't need JDBC. I have seen *huge* concurrency improvements
        // by closing the connection ASAP before doing any non-JDBC post-processing of the data, etc.
    } catch (Exception e) {
        // ... do what you want/need, if you need a catch block, but *always* have the finally block:
    } finally {
        // If we got here somehow without closing c, do it now, without fail,
        // as the first thing in the finally block, so it always happens
        if (c != null) try { c.close(); } catch (Exception ignore) {}
        // ... do whatever else you want in the finally block
    }
}

Set the Datasource Shrink Frequency to 0 for fastest connection availability
A datasource can be configured to vary its count of real connections, closing an unneeded portion (above the minimum capacity) when there is currently insufficient load, and it will repopulate itself as and when needed. This imposes slowness on applications during an uptick in load, while new replacement connections are made. By setting the shrink frequency to zero, the datasource will keep all working connections indefinitely, ready. This is sometimes a tradeoff in the DBMS, if there are too many idle sessions.

Set the datasource test frequency to something infrequent or zero
The datasource can be configured to periodically test any connections that are currently unused, idle in the pool, replacing bad ones, independently of any application load. This has some benefits, such as keeping the connections looking busy enough for firewalls and DBMSes that might otherwise silently kill them for inactivity.
However, it is overhead in WLS, and it is mostly superfluous if you have test-connections-on-reserve enabled, as you should.

Consider skipping the SQL-query connection test on reserve sometimes
You should always explicitly enable 'test connections on reserve' because, even with Active GridLink information about DBMS health, individual connections may go bad unnoticed. The only way to ensure a connection you're getting is good is to have the datasource test it just before you get it. However, there may be cases where this connection test every time is too expensive, either because it adds too much time to the short user use-case, or because it burdens the DBMS too much. In these cases, if it is somewhat tolerable that an application occasionally gets a bad connection, there is a datasource option, 'seconds to trust an idle connection' (default 10 seconds), which means that if a connection in the pool has been tested successfully, or previously used by an application successfully, within that number of seconds, we will trust the connection and give it to the requester without testing it. In a heavy-load, quick-turnover environment this can safely and completely avoid the explicit overhead of testing. For maximal safety, however, set 'seconds to trust an idle connection' explicitly to zero.

Consider making the test as lightweight as possible
If the datasource's 'Test Table' parameter is set, the pool will test a connection by doing a 'select count(*)' query against that table. DUAL is the traditional choice for Oracle. There are options to use the JDBC isValid() call instead, which for *some* drivers is faster. When using the Oracle driver you can set the 'test table' to SQL ISVALID. The Oracle dbping() is another option, enabled by setting the 'test table' to SQL PINGDATABASE, which checks the network connectivity to the DBMS without actually invoking any user-level DBMS functionality. These are faster, but there are rare cases where the user session functionality is broken even though the network connectivity is still good. For XA connections, there is a heavier tradeoff. A test table query will be done in its own XA transaction, which is more overhead, but this is sometimes useful because it catches and works around some session state problems that would otherwise cause the next user XA transaction to fail. For maximal safety, do a quick real query, such as by setting the test table to SQL SELECT 1 FROM DUAL.
If your applications can run concurrently, unbounded in number except for this WebLogic limit, the maximum capacity of the datasource should match this thread count so that none of your application threads has to wait at an empty pool until some other thread returns a connection.

Visit Tuning Data Source Connection Pools and Tuning Data Sources for additional parameter tuning of JDBC data sources and connection pools to improve system performance with WebLogic Server, and Performance Tuning Your JDBC Application for application-specific design and configuration.
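As a concrete illustration of the pool settings discussed above (test connections on reserve, test frequency, shrink frequency, seconds to trust an idle connection, and the test table), here is a hedged sketch that applies them through the WebLogic RESTful management API, following the same curl pattern used elsewhere on this blog. The datasource name ds1, the credentials, and the exact REST resource and attribute names are assumptions; verify them against the REST reference for your WebLogic Server version before using this.

#!/bin/sh
# Hedged sketch: tune the connection pool parameters of datasource "ds1" via the REST management API.
# HOST, PORT, USER, and PASSWORD identify the Administration Server.
HOST=$1
PORT=$2
USER=$3
PASSWORD=$4

# Start an edit session
curl -j --noproxy '*' --silent \
--user $USER:$PASSWORD \
-H X-Requested-By:MyClient \
-H Accept:application/json \
-H Content-Type:application/json \
-d "{}" \
-X POST http://$HOST:$PORT/management/weblogic/latest/edit/changeManager/startEdit

# Pool settings discussed above: always test on reserve, no periodic test, no shrinking,
# a short "trust an idle connection" window, and a cheap-but-real test query.
curl -j --noproxy '*' --silent \
--user $USER:$PASSWORD \
-H X-Requested-By:MyClient \
-H Accept:application/json \
-H Content-Type:application/json \
-d "{ testConnectionsOnReserve: true, testFrequencySeconds: 0, shrinkFrequencySeconds: 0, secondsToTrustAnIdlePoolConnection: 10, testTableName: 'SQL SELECT 1 FROM DUAL' }" \
-X POST http://$HOST:$PORT/management/weblogic/latest/edit/JDBCSystemResources/ds1/JDBCResource/JDBCConnectionPoolParams

# Activate the changes
curl -j --noproxy '*' --silent \
--user $USER:$PASSWORD \
-H X-Requested-By:MyClient \
-H Accept:application/json \
-H Content-Type:application/json \
-d "{}" \
-X POST http://$HOST:$PORT/management/weblogic/latest/edit/changeManager/activate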

Written by Joseph Weinstein.

Processing the Oracle WebLogic Server Kubernetes Operator Logs using Elastic Stack

  Oracle has been working with the WebLogic community to find ways to make it as easy as possible for organizations using WebLogic Server to run important workloads and to move those workloads into the cloud. One aspect of that effort is the delivery of the Oracle WebLogic Server Kubernetes Operator. In this article we will demonstrate a key feature that assists with the management of WebLogic domains in a Kubernetes environment: the ability to publish and analyze logs from the operator using products from the Elastic Stack.  What Is the Elastic Stack? The Elastic Stack (ELK) consists of several open source products, including Elasticsearch, Logstash, and Kibana. Using the Elastic Stack with your log data, you can gain insight about your application's performance in near real time. Elasticsearch is a scalable, distributed and RESTful search and analytics engine based on Lucene. It provides a flexible way to control indexing and fast search over various sets of data. Logstash is a server-side data processing pipeline that can consume data from several sources simultaneously, transform it, and route it to a destination of your choice. Kibana is a browser-based plug-in for Elasticsearch that you use to visualize and explore data that has been collected. It includes numerous capabilities for navigating, selecting, and arranging data in dashboards. A customer who uses the operator to run a WebLogic Server cluster in a Kubernetes environment will need to monitor the operator and servers. Elasticsearch and Kibana provide a great way to do it. The following steps explain how to set this up. Processing Logs Using ELK In this example, the operator and the Logstash agent are deployed in one pod, and Elasticsearch and Kibana are deployed as two independent pods in the default namespace. We will use a memory-backed volume that is shared between the operator and Logstash containers and that is used to store the logs. The operator instance places the logs into the shared volume, /logs. Logstash collects the logs from the volume and transfers the filtered logs to Elasticsearch. Finally, we will use Kibana and its browser-based UI to analyze and visualize the logs. Operator and ELK integration To enable ELK integration with the operator, first we need to set the elkIntegrationEnabled parameter in the create-operator-inputs.yaml file to true. This causes Elasticsearch, Logstash and Kibana to be installed, and Logstash to be configured to export the operator's logs to Elasticsearch. Then simply follow the installation instructions to install and start the operator. To verify that ELK integration is activated, check the output produced by the following command: $ . ./create-weblogic-operator.sh -i create-operator-inputs.yaml This command should print the following information for ELK: Deploy ELK... deployment "elasticsearch" configured service "elasticsearch" configured deployment "kibana" configured service "kibana" configured To ensure that all three deployments are up and running, perform these steps: Check that the Elasticsearch and Kibana pods are deployed and started (note that they run in the default Kubernetes namespace): $ kubectl get pods The following output is expected: Verify that the operator pod is deployed and running. 
Note that it runs in the weblogic-operator namespace: $ kubectl -n weblogic-operator get pods The following output is expected: Check that the operator and Logstash containers are running inside the operator's pod: $ kubectl get pods -n weblogic-operator --output json | jq '.items[].spec.containers[].name' The following output is expected: Verify that the Elasticsearch pod has started: $ kubectl exec -it elasticsearch-3938946127-4cb2s /bin/bash $ curl "http://localhost:9200" $ curl "http://localhost:9200/_cat/indices?v" We get the following indices if Elasticsearch was successfully started: If the logstash index is not listed, check the Logstash log output: $ kubectl logs weblogic-operator-501749275-nhjs0 -c logstash -n weblogic-operator If there are no errors in the Logstash log, then it is possible that the Elasticsearch pod started after the Logstash container. If that is the case, simply restart Logstash to fix it. Using Kibana Kibana provides a web application for viewing logs. Its Kubernetes service configuration includes a NodePort so that the application can be accessed outside of the Kubernetes cluster. To find its port number, run the following command: $ kubectl describe service kibana This should print the service NodePort information, similar to this: From the description of the service in our example, the NodePort value is 30911. Kibana's web application can be accessed at the address http://[NODE_IP_ADDRESS]:30911. To verify that Kibana is installed correctly and to check its status, connect to the web page at http://[NODE_IP_ADDRESS]:30911/status. The status should be Green. The next step is to define a Kibana index pattern. To do this, click Discover in the left panel. Notice that the default index pattern is logstash-*, and that the default time filter field name is @timestamp. Click Create. The Management page displays the fields for the logstash-* index: The next step is to customize how the operator logs are presented. To configure the time interval and auto-refresh settings, click the upper-right corner of the Discover page, click the Auto-refresh tab, and select the desired interval. For example, 10 seconds. You can also set the time range to limit the log messages to those generated during a particular interval: Logstash is configured to split the operator log records into separate fields. For example:

method: dispatchDomainWatch
level: INFO
log: Watch event triggered for WebLogic Domain with UID: domain1
thread: 39
timeInMillis: 1518372147324
type: weblogic-operator
path: /logs/operator.log
@timestamp: February 11th 2018, 10:02:27.324
@version: 1
host: weblogic-operator-501749275-nhjs0
class: oracle.kubernetes.operator.Main
_id: AWGGCFGulCyEnuJh-Gq8
_type: weblogic-operator
_index: logstash-2018.02.11
_score:

You can limit the fields that are displayed. For example, select the level, method, and log fields, then click Add. Now only those fields will be shown. You can also use filters to display only those log messages whose fields match an expression. Click Add a filter at the top of the Discover page to create a filter expression. For example, choose method, is one of, and onFailure. Kibana will display all log messages from the onFailure methods: Kibana is now configured to collect the operator logs. You can use its browser-based viewer to easily view and analyze the data in those logs.
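To double-check from the command line that operator log records are actually reaching Elasticsearch, you can search the logstash indices directly from inside the Elasticsearch pod. This is a minimal sketch; the pod name is the one from the example above, and the query parameters are standard Elasticsearch URI-search options.

# Hedged sketch: search the logstash-* indices for a few operator log records at level INFO.
# Replace the pod name with the actual Elasticsearch pod name in your cluster.
kubectl exec -it elasticsearch-3938946127-4cb2s -- \
  curl "http://localhost:9200/logstash-*/_search?q=level:INFO&size=5&pretty"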
Summary In this blog, you learned about the Elastic Stack and the Oracle WebLogic Server Kubernetes Operator integration architecture, followed by a detailed explanation of how to set up and configure Kibana for interacting with the operator logs. You will find its capabilities, flexibility, and rich feature set to be an extremely valuable asset for monitoring WebLogic domains in a Kubernetes environment.    


Announcement

Announcing the New WebLogic Server Kubernetes Operator

We are pleased to announce the release and open sourcing of the Technology Preview version of the Oracle WebLogic Server Kubernetes Operator! We are releasing this Operator to GitHub for creating and managing a WebLogic Server 12.2.1.3 domain on Kubernetes. We are also publishing a blog that describes in detail how to run the Operator, how to stand up one or more WebLogic domains in Kubernetes, how to scale a WebLogic cluster up or down manually or automatically using the WebLogic Diagnostics Framework (WLDF) or Prometheus, how the Operator manages load balancing for web applications deployed in WebLogic clusters, and how to provide integration for managing Operator logs through Elasticsearch, Logstash, and Kibana. A Kubernetes Operator is "an application specific controller that extends the Kubernetes API to create, configure, and manage instances of complex applications". We are adopting the Operator pattern and using it to provide an adapter to integrate WebLogic Server and Kubernetes, allowing Kubernetes to serve as a container infrastructure hosting WebLogic Server instances. And so the WebLogic Server Kubernetes Operator is an operator that extends Kubernetes to create, configure, and manage a WebLogic domain. The Operator uses the standard Oracle WebLogic Server 12.2.1.3 Docker image, which can be found in the Docker Store or in the Oracle Container Registry. It treats this image as immutable, and all application and product runtime state is persisted in a Kubernetes persistent volume. This allows us to treat all of the pods as throwaway and replaceable, and it completely eliminates the need to manage state written into Docker containers at run time (because there is none).

The Oracle WebLogic Server Kubernetes Operator has the following requirements:

- Kubernetes 1.7.5+, 1.8.0+ (check with kubectl version)
- Flannel networking v0.9.1-amd64 (check with docker images | grep flannel)
- Docker 17.03.1.ce (check with docker version)
- Oracle WebLogic Server 12.2.1.3

WebLogic Server 12.2.1.3 domains on Kubernetes are certified and supported, as described in detail in My Oracle Support Doc ID 2349228.1. The WebLogic Server Kubernetes Operator is a Technology Preview version and is not yet supported by Oracle Support. If users encounter problems related to the WebLogic Kubernetes Operator, they should open an issue in the GitHub project https://github.com/oracle/weblogic-kubernetes-operator. GitHub project members will respond to the issues and resolve them in a timely fashion.

A series of video demonstrations of the operator is available here:

- Installing the operator shows the installation and also shows using the operator's REST API.
- Creating a WebLogic domain with the operator shows creation of two WebLogic domains, including accessing the WebLogic Server Administration Console and looking at the various resources created in Kubernetes: services, Ingresses, pods, load balancers, and so on.
- Deploying a web application, scaling a WebLogic cluster with the operator and verifying load balancing.
- Using WLST against a domain running in Kubernetes shows how to create a data source for an Oracle database that is also running in Kubernetes.
- Scaling a WebLogic cluster with WLDF.
- Prometheus integration shows exporting WebLogic Server metrics to Prometheus and creating a Prometheus alert to trigger scaling.

The overall process of installing and configuring the Operator and using it to manage WebLogic domains consists of the following steps.
The provided scripts will perform most of these steps, but some must be performed manually:

- Registering for access to the Oracle Container Registry
- Setting up secrets to access the Oracle Container Registry
- Customizing the Operator parameters file
- Deploying the Operator to a Kubernetes cluster
- Setting up secrets for the Administration Server credentials
- Creating a persistent volume for a WebLogic domain
- Customizing the domain parameters file
- Creating a WebLogic domain

Full, up-to-date instructions are available at https://github.com/oracle/weblogic-kubernetes-operator/blob/master/site/installation.md. We hope to provide formal support for the WebLogic Server Kubernetes Operator soon, and intend to add new features and enhancements over time. Please stay tuned for more information. We hope this announcement is helpful to those of you seeking to deploy WebLogic Server on Kubernetes, and look forward to your feedback.
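As a quick sanity check before installing the operator, the prerequisite versions listed in the requirements section above can be verified from a shell. This is a minimal sketch that simply collects the check commands already mentioned there:

# Verify the Kubernetes, Flannel, and Docker versions required by the operator.
kubectl version
docker images | grep flannel
docker version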


T3 RMI Communication for WebLogic Server Running on Kubernetes

Overview

Oracle WebLogic Server supports Java EE and includes several vendor-specific enhancements. It has two RMI implementations and, beyond the standard Java EE-based IIOP RMI, WebLogic Server has a proprietary RMI protocol called T3. This blog describes the configuration aspects of generic RMI that also apply to T3, and also some T3-specific aspects of running WebLogic RMI on Kubernetes.

Background

T3 RMI is a proprietary, high-performance WebLogic Server RMI protocol and is a major communication component for WebLogic Server internally, and also externally for services like JMS, EJB, OAM, and many others. WebLogic Server T3 RMI configuration has evolved. It starts with a single multi-protocol listen port and listen address on WebLogic Server known as the default channel. We enhanced the default channel by adding a network access point layer, which allows users to configure multiple ports, as well as different protocols for each port, known as custom channels. When WebLogic Server is running on Kubernetes, the listen port number of WebLogic Server may or may not be the same as the Kubernetes exposed port number. For WebLogic Server running on Kubernetes, a custom channel allows us to map these two port numbers. The following list defines key terms that are used in this blog and provides links to documentation that gives more details.

TERMINOLOGY

- Listen port: The TCP/IP port that WebLogic Server physically binds to.
- Public port: The port number that the caller uses to define the T3 URL. Usually it is the same as the listen port, unless the connection goes through "port mapping".
- Port mapping: An application of network address translation (NAT) that redirects a communication request from one address and port number combination to another. See port mapping.
- Default channel: Every WebLogic Server domain has a default channel that is generated automatically by WebLogic Server. See definition.
- Custom channel: Used for segregating different types of network traffic.
- ServerTemplateMBean: Also known as the default channel. Learn more.
- NetworkAccessPointMBean: Also known as a custom channel. Learn more.
- WebLogic cluster communication: WebLogic Server instances in a cluster communicate with one another using either of two basic network technologies: multicast and unicast. Learn more about multicast and unicast.
- WebLogic transaction coordinator: The WebLogic Server transaction manager that serves as coordinator of the transaction.
- Kubernetes service: See the Kubernetes concepts at https://kubernetes.io/docs/concepts/services-networking/service/, which define the Kubernetes back end and NodePort.
- Kubernetes pod IP address: Kubernetes assumes that pods can communicate with other pods, regardless of which host they land on. Every pod gets its own cluster-private IP address, so you do not need to explicitly create links between pods or map container ports to host ports. Learn more.
- ClusterMBean: See ClusterBroadcastChannel.

WebLogic Server Listen Address

WebLogic Server supports two cluster messaging protocols: multicast and unicast. The WebLogic Server on Kubernetes certification was done using the Flannel network fabric. Currently, we only certify unicast communication. By default, WebLogic Server will use the default channel for unicast communication. Users can override it by setting a custom channel on the associated WebLogic Server ClusterMBean.
As part of the unicast configuration in a WebLogic cluster, a designated listen address and port is required for each WebLogic cluster member so that they can locate each other. By default, the default channel or custom channel has a null listen address, which is assigned at run time as 0.0.0.0. In a multinode Kubernetes cluster environment, neither 0.0.0.0 nor localhost will allow cluster members on different nodes to discover each other. Instead, users can use the IP address of the Kubernetes pod that the WebLogic Server instance is running on.

TCP Load Balancing

In general, WebLogic T3 is TCP/IP-based, so it can support TCP load balancing when services are homogeneous, such as in a Kubernetes service with multiple back ends. In WebLogic Server some subsystems are homogeneous, such as JMS and EJB. For example, a JMS front-end subsystem can be configured in a WebLogic cluster in which remote JMS clients can connect to any cluster member. By contrast, a JTA subsystem cannot safely use TCP load balancing in transactions that span multiple WebLogic domains that, in turn, extend beyond a single Kubernetes cluster. The JTA transaction coordinator must establish a direct RMI connection to the server instance that is chosen as the subcoordinator of the transaction when that transaction is either committed or rolled back. The following figure shows a WebLogic transaction coordinator using the T3 protocol to connect to a subcoordinator. The WebLogic transaction coordinator cannot connect to the chosen subcoordinator because of the TCP load balancing.

Figure 1: Kubernetes Cluster with Load Balancing Service

To support cluster communication between the WebLogic transaction coordinator and the transaction subcoordinator across a Kubernetes environment, the recommended configuration is to have an individual NodePort service defined for each default channel and custom channel.

Figure 2: Kubernetes Cluster with One-on-One Service

Depending on the application requirements and the WebLogic subsystem used, TCP load balancing might or might not be suitable.

Port Mapping and Address Mapping

WebLogic Server supports two styles of T3 RMI configuration. One is defined by means of the default channel (see ServerTemplateMBean), and the other is defined by means of the custom channel (see NetworkAccessPointMBean). When running WebLogic Server in Kubernetes, we need to give special attention to the port mapping. When we use NodePort to expose the WebLogic T3 RMI service outside the Kubernetes cluster, we need to map the NodePort to the WebLogic Server listen port. If the NodePort is the same as the WebLogic Server listen port, then users can use the WebLogic Server default channel. Otherwise, users must configure a custom channel that defines a "public port" that matches the NodePort nodePort value, and a "listen port" that matches the NodePort port value. The following figure shows a nonworking NodePort/default channel configuration and a working NodePort/custom channel configuration:

Figure 3: T3 External Clients in K8S

The following list compares the properties of the default channel (ServerTemplateMBean) with the corresponding ones in the custom channel (NetworkAccessPointMBean):

- Multiple protocol support (T3, HTTP, SNMP, LDAP, and more): default channel Yes; custom channel No
- RMI over HTTP tunneling: default channel Yes (disabled by default); custom channel Yes (disabled by default)
- Port mapping: default channel No; custom channel Yes
- Address: default channel Yes; custom channel Yes

Examples of WebLogic T3 RMI configurations

WebLogic Server supports several ways to configure T3 RMI.
The following examples show the common ones.

Using the WebLogic Server Administration Console

The following console page shows a WebLogic Server instance called AdminServer with a listen port of 9001 on a null listen address and with no SSL port. Because this server instance is configured with the default channel, port 9001 will support the T3, HTTP, IIOP, SNMP, and LDAP protocols.

Figure 4: T3 RMI via ServerTemplateMBean on WebLogic Console

The following console page shows a custom channel with a listen port value of 7010, a null listen address, and a mapping to public port 30010. By default, the custom channel supports the T3 protocol.

Figure 5: T3 RMI via NetworkAccessPointMBean on WebLogic Console

Using WebLogic RESTful management services

The following shell script will create a custom channel with listen port ${CHANNEL_PORT} and a paired public port ${CHANNEL_PUBLIC_PORT}.

#!/bin/sh
HOST=$1
PORT=$2
USER=$3
PASSWORD=$4
CHANNEL=$5
CHANNEL_PORT=$6
CHANNEL_PUBLIC_PORT=$7
echo "Rest EndPoint URL http://${HOST}:${PORT}/management/weblogic/latest/edit"
if [ $# -eq 0 ]; then
  echo "Please specify HOST, PORT, USER, PASSWORD CHANNEL CHANNEL_PORT CHANNEL_PUBLIC_PORT"
  exit 1
fi
# Start edit
curl -j --noproxy '*' --silent \
--user $USER:$PASSWORD \
-H X-Requested-By:MyClient \
-H Accept:application/json \
-H Content-Type:application/json \
-d "{}" \
-X POST http://$HOST:$PORT/management/weblogic/latest/edit/changeManager/startEdit
# Create the channel
curl -j --noproxy '*' --silent \
--user $USER:$PASSWORD \
-H X-Requested-By:MyClient \
-H Accept:application/json \
-H Content-Type:application/json \
-d "{ name: '${CHANNEL}' }" \
-X POST http://$HOST:$PORT/management/weblogic/latest/edit/Servers/myServer/networkAccessPoints
# Set the listen port and the paired public port on the channel
curl -j --noproxy '*' --silent \
--user $USER:$PASSWORD \
-H X-Requested-By:MyClient \
-H Accept:application/json \
-H Content-Type:application/json \
-d "{ listenPort: ${CHANNEL_PORT}, publicPort: ${CHANNEL_PUBLIC_PORT} }" \
-X POST http://$HOST:$PORT/management/weblogic/latest/edit/Servers/myServer/networkAccessPoints/${CHANNEL}
# Activate the changes
curl -j --noproxy '*' --silent \
--user $USER:$PASSWORD \
-H X-Requested-By:MyClient \
-H Accept:application/json \
-H Content-Type:application/json \
-d "{}" \
-X POST http://$HOST:$PORT/management/weblogic/latest/edit/changeManager/activate

Using a WLST script

The following WLST script creates a custom T3 channel named t3Channel that has a listen port listen_port and a paired public port public_port.

import sys

host = sys.argv[1]
port = sys.argv[2]
user_name = sys.argv[3]
password = sys.argv[4]
listen_port = sys.argv[5]
public_port = sys.argv[6]
print('custom host : [%s]' % host);
print('custom port : [%s]' % port);
print('custom user_name : [%s]' % user_name);
print('custom password : ********');
print('channel listen port : [%s]' % listen_port);
print('channel public listen port : [%s]' % public_port);
connect(user_name, password, 't3://' + host + ':' + port)
edit()
startEdit()
ls()
cd('/')
cd('Servers')
cd('myServer')
create('t3Channel','NetworkAccessPoint')
cd('NetworkAccessPoints/t3Channel')
set('Protocol','t3')
set('ListenPort',int(listen_port))
set('PublicPort',int(public_port))
print('Channel t3Channel added ...')
activate()
disconnect()

Summary

WebLogic Server uses RMI over the T3 protocol to communicate between WebLogic Server instances and with other Java programs and clients.
When WebLogic Server runs in a Kubernetes cluster, there are special considerations and configuration requirements that need to be taken into account to make the RMI communication work. This blog describes how to configure WebLogic Server and Kubernetes so that RMI communication from outside the Kubernetes cluster can successfully reach the WebLogic Server instances running inside the Kubernetes cluster. For many WebLogic Server features that use T3 RMI, such as EJBs, JMS, JTA, and WLST, we support clients inside and outside the Kubernetes cluster. In addition, we support both a single WebLogic domain in a multinode Kubernetes cluster and multiple WebLogic domains in a multinode Kubernetes cluster.
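To make the custom channel from the examples above reachable from outside the Kubernetes cluster, a NodePort service must expose node port 30010 and route it to the channel's listen port 7010 on the WebLogic Server pod. The following is a hedged sketch only: the service name and the pod selector label (app: wls-server) are assumptions and must be replaced with the labels actually used by your WebLogic Server pods.

# Hedged sketch: NodePort service for the t3Channel example (listen port 7010, public port 30010).
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: wls-t3-channel
spec:
  type: NodePort
  selector:
    app: wls-server          # assumption: label of the WebLogic Server pod(s)
  ports:
  - name: t3channel
    protocol: TCP
    port: 7010               # matches the custom channel "listen port"
    targetPort: 7010
    nodePort: 30010          # matches the custom channel "public port"
EOF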


Messaging

WebLogic Server Certification on Kubernetes

We are pleased to announce the certification of Oracle WebLogic Server on Kubernetes! As part of this certification, we are releasing a sample on GitHub to create an Oracle WebLogic Server 12.2.1.3 domain image running on Kubernetes. We are also publishing a series of blogs that describe in detail the WebLogic Server configuration and feature support, as well as best practices. A video of a WebLogic Server domain running in Kubernetes can be seen at WebLogic Server on Kubernetes Video. Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It supports a range of container tools, including Docker. Oracle WebLogic Server configurations running in Docker containers can be deployed and orchestrated on Kubernetes platforms. The following list identifies the versions of Oracle WebLogic Server, JDK, Linux, Kubernetes, Docker, and network fabric that are certified for running WebLogic Server configurations on Kubernetes:

- WebLogic Server: 12.2.1.3
- JDK: 8
- Host OS: Oracle Linux 7 UEK 4
- Kubernetes: 1.7.5 and 1.8.0
- Docker: 17.03-ce
- Network Fabric: Flannel v0.9.1-amd64

For additional information about Docker certification with Oracle WebLogic Server, see My Oracle Support Doc ID 2017945.1. For support for running WebLogic Server domains on Kubernetes platforms other than Oracle Linux, or with a network fabric other than Flannel, see My Oracle Support Doc ID 2349228.1. For the most current information on supported configurations, see the Oracle Fusion Middleware Supported System Configurations page on Oracle Technology Network. This certification enables users to create clustered and nonclustered Oracle WebLogic Server domain configurations, including both development and production modes, running on Kubernetes clusters.

This certification includes support for the following:

- Running one or more WebLogic domains in a Kubernetes cluster
- Single or multiple node Kubernetes clusters
- WebLogic Managed Servers in clustered and nonclustered configurations
- WebLogic Server configured clusters (vs. dynamic clusters). See documentation for details.
- Unicast WebLogic Server cluster messaging protocol
- Load balancing HTTP requests using Træfik as an Ingress controller on Kubernetes clusters
- HTTP session replication
- JDBC communication with external database systems
- JMS
- JTA
- JDBC store and file store using persistent volumes
- Inter-domain communication (JMS, transactions, EJBs, and so on)
- Auto scaling of a WebLogic cluster
- Integration with Prometheus monitoring using the WebLogic Monitoring Exporter
- RMI communication from outside and inside the Kubernetes cluster
- Upgrading applications
- Patching WebLogic domains
- Service migration of singleton services
- Database leasing

In this certification of WebLogic Server on Kubernetes, the following configurations and features are not supported:

- WebLogic domains spanning Kubernetes clusters
- Whole server migration
- Use of Node Manager for WebLogic Server lifecycle management (start/stop)
- Consensus leasing
- Dynamic clusters (we will add certification of dynamic clusters at a future date)
- Multicast WebLogic Server cluster messaging protocol
- Multitenancy
- Production redeployment
- Flannel with portmap

We have released a sample to GitHub (https://github.com/oracle/docker-images/tree/master/OracleWebLogic/samples/wls-k8s-domain) that shows how to create and run a WebLogic Server domain on Kubernetes. The README.md in the sample provides all the steps.
This sample extends the certified Oracle WebLogic Server 12.2.1.3 developer install image by creating a sample domain and cluster that runs on Kubernetes. The WebLogic domain consists of an Administration Server and several Managed Servers running in a WebLogic cluster. All WebLogic Server instances share the same domain home, which is mapped to an external volume. The persistent volumes must have the correct read/write permissions so that all WebLogic Server instances have access to the files in the domain home. Check out the best practices in the blog WebLogic Server on Kubernetes Data Volume Usage, which explains the WebLogic Server services and files that are typically configured to leverage shared storage, and provides full end-to-end samples that show mounting shared storage for a WebLogic domain that is orchestrated by Kubernetes. After you have this domain up and running, you can deploy JMS and JDBC resources. The blog Run a WebLogic JMS Sample on Kubernetes provides a step-by-step guide to configure and run a sample WebLogic JMS application in a Kubernetes cluster. This blog also describes how to deploy WebLogic JMS and JDBC resources, deploy an application, and then run the application. This application is based on a sample application named 'Classic API - Using Distributed Destination' that is included in the WebLogic Server sample applications. The application implements a scenario in which employees submit their names when they are at work, and a supervisor monitors employee arrival time. Employees choose whether to send their check-in messages to a distributed queue or a distributed topic. These destinations are configured on a cluster with two active Managed Servers. Two message-driven beans (MDBs), corresponding to these two destinations, are deployed to handle the check-in messages and store them in a database. A supervisor can then scan all of the check-in messages by querying the database. The follow-up blog, Run Standalone WebLogic JMS Clients on Kubernetes, expands on the previous blog and demonstrates running standalone JMS clients communicating with each other through WebLogic JMS services, and database-based message persistence. WebLogic Server and Kubernetes each provide a rich set of features to support application deployment. As part of the process of certifying WebLogic Server on Kubernetes, we have identified a set of best practices for deploying Java EE applications on WebLogic Server instances that run in Kubernetes and Docker environments. The blog Best Practices for Application Deployment on WebLogic Server Running on Kubernetes describes these best practices. They include the general recommendations described in Deploying Applications to Oracle WebLogic Server, and also include the application deployment features provided in Kubernetes. One of the most important tasks in providing optimal performance and security of any software system is to make sure that the latest software updates are installed, tested, and rolled out promptly and efficiently with minimal disruption to system availability. Oracle provides different types of patches for WebLogic Server, such as Patch Set Updates and One-Off patches. The patches you install, and the way in which you install them, depend upon your custom needs and environment. In Kubernetes, Docker, and on-premises environments, we use the same OPatch tool to patch WebLogic Server.
However, with Kubernetes orchestrating the cluster, we can leverage the update strategy options in the StatefulSet controller to roll out the patch from an updated WebLogic Server image. The blog Patching WebLogic Server in a Kubernetes Environment explains how. And, of course, a very important aspect of certification is security. We have identified best practices for securing Docker and Kubernetes environments when running WebLogic Server, explained in the blog Security Best Practices for WebLogic Server Running in Docker and Kubernetes. These best practices are in addition to the general WebLogic Server recommendations documented in the Securing a Production Environment for Oracle WebLogic Server 12c documentation. In the area of monitoring and diagnostics, we have developed a new open source tool, the WebLogic Monitoring Exporter. WebLogic Server generates a rich set of metrics and runtime state information that provides detailed performance and diagnostic data about the servers, clusters, applications, and other resources that are running in a WebLogic domain. This tool leverages WebLogic's monitoring and diagnostics capabilities when running in Docker/Kubernetes environments. The blog Announcing The New Open Source WebLogic Monitoring Exporter on GitHub describes how to build the exporter from a Dockerfile and source code in the GitHub project https://github.com/oracle/weblogic-monitoring-exporter. The exporter is implemented as a web application that is deployed to the WebLogic Server Managed Servers in the WebLogic cluster that will be monitored. For detailed information about the design and implementation of the exporter, see Exporting Metrics from WebLogic Server. After the exporter has been deployed to the running Managed Servers in the cluster and is gathering metrics and statistics, the data is ready to be collected and displayed via Prometheus and Grafana. Follow the blog entry Using Prometheus and Grafana to Monitor WebLogic Server on Kubernetes, which steps you through collecting metrics in Prometheus and displaying them in Grafana dashboards. Elasticity (scaling up or scaling down) of a WebLogic Server cluster provides increased reliability of customer applications as well as optimization of resource usage. The WebLogic Server cluster can be automatically scaled by increasing (or decreasing) the number of pods based on resource metrics provided by the WebLogic Diagnostic Framework (WLDF). When the WebLogic cluster scales up or down, WebLogic Server capabilities like HTTP session replication and service migration of singleton services are leveraged to provide the highest possible availability. Refer to the blog entry Automatic Scaling of WebLogic Clusters on Kubernetes for an illustration of automatic scaling of a WebLogic Server cluster in a Kubernetes cloud environment. In addition to certifying WebLogic Server on Kubernetes, the WebLogic Server team is developing a WebLogic Server Kubernetes Operator that will be released in the near future. A Kubernetes Operator is "an application specific controller that extends the Kubernetes API to create, configure, and manage instances of complex applications". Please stay tuned for information on the release of the WebLogic Server Kubernetes Operator. The certification of WebLogic Server on Kubernetes encompasses all the various WebLogic configurations and capabilities described in this blog.
Our intent is to enable you to run WebLogic Server in Kubernetes, to run WebLogic Server in the Kubernetes-based Oracle Container Engine that Oracle intends to release shortly, and to enable integration of WebLogic Server applications with applications developed on our Kubernetes-based Container Native Application Development Platform. We hope this information is helpful to customers seeking to deploy WebLogic Server on Kubernetes, and look forward to your feedback.


Run Standalone WebLogic JMS Clients on Kubernetes

Overview

JMS applications are applications that use JMS services to send and receive messages. There are two main types of WebLogic JMS applications: server-side JMS applications and standalone JMS clients. Server-side applications run on WebLogic servers or clusters and are usually Java EE applications like MDBs, servlets, and so on. Standalone JMS clients can be applications running on a foreign EE server, desktop applications, or microservices. In my last blog, Run a WebLogic JMS Sample on Kubernetes, we demonstrated WebLogic JMS communication between Java EE applications on Kubernetes, and we used file-based message persistence. In this blog, we expand on the previous blog to demonstrate running standalone JMS clients communicating with each other through WebLogic JMS services, and we use database-based message persistence. First we create a WebLogic domain based on the sample WebLogic domain on GitHub, with an Administration Server and a WebLogic cluster. Then we deploy a data source, a JDBC store, and JMS resources to the WebLogic domain on a Kubernetes cluster. After the WebLogic JMS services are ready and running, we create and deploy a Java microservice to the same Kubernetes cluster to send/receive messages to/from the WebLogic JMS destinations. We use the REST API and run scripts against the Administration Server pod to deploy the resources, which are targeted to the cluster.

Creating WebLogic JMS Services on Kubernetes

Preparing the WebLogic Base Domain and Data Source

If you completed the steps to create the domain, set up the MySQL database, and create the data source as described in the blog Run a WebLogic JMS Sample on Kubernetes, you can go directly to the next section. Otherwise, you need to finish the steps in the following sections of the blog Run a WebLogic JMS Sample on Kubernetes:

- Section "Creating the WebLogic Base Domain"
- Section "Setting Up and Running MySQL Server in Kubernetes"
- Section "Creating a Data Source for the WebLogic Server Domain"

Now you should have a WebLogic base domain running on a Kubernetes cluster and a data source which connects to a MySQL database running in the same Kubernetes cluster.

Deploying the JMS Resources with a JDBC Store

First, prepare a JSON data file that contains definitions for one database store, one JMS server, and one JMS module. The file will be processed by a Python script to create the resources, one by one, using the WebLogic Server REST API.

File jms2.json:

{"resources": {
  "jdbc1": {
    "url": "JDBCStores",
    "data": {
      "name": "jdbcStore1",
      "dataSource": [ "JDBCSystemResources", "ds1" ],
      "targets": [{ "identity": ["clusters", "myCluster"] }]
    }
  },
  "jms2": {
    "url": "JMSServers",
    "data": {
      "messagesThresholdHigh": -1,
      "targets": [{ "identity": ["clusters", "myCluster"] }],
      "persistentStore": [ "JDBCStores", "jdbcStore1" ],
      "name": "jmsserver2"
    }
  },
  "module": {
    "url": "JMSSystemResources",
    "data": {
      "name": "module2",
      "targets": [{ "identity": [ "clusters", "myCluster" ] }]
    }
  },
  "sub2": {
    "url": "JMSSystemResources/module2/subDeployments",
    "data": {
      "name": "sub2",
      "targets": [{ "identity": [ "JMSServers", "jmsserver2" ] }]
    }
  }
}}

Second, prepare the JMS module file, which contains a connection factory, a distributed queue, and a distributed topic.
File module2-jms.xml:

<?xml version='1.0' encoding='UTF-8'?>
<weblogic-jms xmlns="http://xmlns.oracle.com/weblogic/weblogic-jms"
              xmlns:sec="http://xmlns.oracle.com/weblogic/security"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xmlns:wls="http://xmlns.oracle.com/weblogic/security/wls"
              xsi:schemaLocation="http://xmlns.oracle.com/weblogic/weblogic-jms http://xmlns.oracle.com/weblogic/weblogic-jms/1.1/weblogic-jms.xsd">
  <connection-factory name="cf2">
    <default-targeting-enabled>true</default-targeting-enabled>
    <jndi-name>cf2</jndi-name>
    <transaction-params>
      <xa-connection-factory-enabled>true</xa-connection-factory-enabled>
    </transaction-params>
    <load-balancing-params>
      <load-balancing-enabled>true</load-balancing-enabled>
      <server-affinity-enabled>false</server-affinity-enabled>
    </load-balancing-params>
  </connection-factory>
  <uniform-distributed-queue name="dq2">
    <sub-deployment-name>sub2</sub-deployment-name>
    <jndi-name>dq2</jndi-name>
  </uniform-distributed-queue>
  <uniform-distributed-topic name="dt2">
    <sub-deployment-name>sub2</sub-deployment-name>
    <jndi-name>dt2</jndi-name>
    <forwarding-policy>Partitioned</forwarding-policy>
  </uniform-distributed-topic>
</weblogic-jms>

Third, copy these two files to the Administration Server pod. Then, in the Administration Server pod, run the Python script to create all the JMS resources:

$ kubectl exec $adminPod -- mkdir /u01/wlsdomain/config/jms/
$ kubectl cp ./module2-jms.xml $adminPod:/u01/wlsdomain/config/jms/
$ kubectl cp ./jms2.json $adminPod:/u01/oracle/
$ kubectl exec $adminPod -- python /u01/oracle/run.py createRes /u01/oracle/jms2.json

Launch the WebLogic Server Administration Console by going to your browser and entering the URL http://<hostIP>:30007/console in the address bar. Make sure that all the JMS resources are running successfully. Visit the monitoring page of the destination dq2 to check whether it has two members, jmsserver2@managed-server-0@dq2 and jmsserver2@managed-server-1@dq2. Now that the WebLogic JMS services are ready, JMS messages sent to this service will be stored in the MySQL database.

Running the WebLogic JMS Client

The JMS client pod is a Java microservice which is based on the openjdk8 image packaged with the WebLogic client JAR file. The client-related scripts are on GitHub and include the Dockerfile, the JMS client Java files, and the YAML files. NOTE: You need to get wlthint3client.jar from the installed WebLogic directory $WL_HOME/server/lib and put it in the folder jms-client/container-scripts/lib.

Step 1: Build the Docker image for the JMS clients. The image contains the compiled JMS client classes, which can be run directly.

$ cd jms-client
$ docker build -t jms-client .

Step 2: Create the JMS client pod.

$ kubectl create -f jmsclient.yml

Run the Java programs to send and receive messages from the WebLogic JMS destinations. Please replace $clientPod with the actual client pod name. Run the sender program to send messages to the destination dq2:

$ kubectl exec -it $clientPod java samples.JMSSender

By default, the sender sends 10 messages on each run and these messages are distributed to the two members of dq2. Check the Administration Console to verify this. Run the receiver program to receive messages from destination dq2:

$ kubectl exec -it $clientPod java samples.JMSReceiver dq2

The receiver uses the WebLogic JMSDestinationAvailabilityHelper API to get notifications about the distributed queue's membership changes, so the receiver can receive messages from both members of dq2.
Please refer to the WebLogic document, "Advanced Programming with Distributed Destinations Using the JMS Destination Availability Helper API", for detailed usage.

Summary

In this blog, we expanded the sample from Run a WebLogic JMS Sample on Kubernetes to demonstrate using external JMS clients to communicate with WebLogic JMS services running inside a Kubernetes cluster. We leveraged basic Kubernetes facilities to manage WebLogic Server life cycles and used database-based message persistence to persist data beyond the life cycle of the pods. In future blogs, we'll explore hosting a WebLogic JMS cluster in Oracle's upcoming, fully certified, operator-based WebLogic Kubernetes environment. In addition, we'll also explore using WebLogic JMS automatic service migration to migrate JMS instances from shutdown pods to running pods.
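One convenient way to confirm that the sender's messages actually arrived is to ask the WebLogic RESTful management API for the runtime state of jmsserver2 on one of the Managed Servers. The following is a hedged sketch only: the credentials, the reuse of the console NodePort 30007 for the management URL, and the exact runtime resource path are assumptions that should be verified against the REST reference for your WebLogic Server version.

# Hedged sketch: inspect the JMS server runtime (including current message counts) on managed-server-0.
curl --user weblogic:welcome1 \
  -H Accept:application/json \
  "http://<hostIP>:30007/management/weblogic/latest/domainRuntime/serverRuntimes/managed-server-0/JMSRuntime/JMSServers/jmsserver2?links=none"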


Patching WebLogic Server in a Kubernetes Environment

Of course, one of the most important tasks in providing optimal performance and security of any software system is to make sure that the latest software updates are installed, tested, and rolled out promptly and efficiently with minimal disruption to system availability. Oracle provides different types of patches for WebLogic Server, such as Patch Set Updates, Security Patch Updates, and One-Off patches. The patches you install, and the way in which you install them, depend upon your custom needs and environment. For WebLogic Server running on Kubernetes, we recently shared on GitHub the steps for creating a WebLogic Server instance with a shared domain home directory that is mapped to a Kubernetes volume. In Kubernetes, Docker, and on-premises environments, we use the same OPatch tool to patch WebLogic Server. However, with Kubernetes orchestrating the cluster, we can leverage the update strategy options in the StatefulSet controller to roll out the patch from an updated WebLogic Server image. In this blog, I explain how.

Prerequisites

Create the WebLogic Server environment on Kubernetes based on the instructions provided at https://github.com/oracle/docker-images/tree/master/OracleWebLogic/samples/wls-k8s-domain. The patching processes described below are based on the environment created. Make sure that the WebLogic Server One-Off patches or Patch Set Updates are accessible from the environment created in the preceding step.

Patch Set Updates and One-Off Patches

Patch Set Updates are cumulative patches that include security fixes and critical fixes. They are used to patch Oracle WebLogic Server only and are released on a regular basis. For additional details related to Patch Set Updates, see Fusion Middleware Patching with OPatch. One-Off patches are targeted to solve known issues or to add feature enhancements. For information about how to download patches, see My Oracle Support.

Kubernetes Update Strategies for StatefulSets

There are three different update strategy options available for StatefulSets that you can use for the following tasks:

- To configure and disable automated rolling updates for container images
- To configure resource requests or limits, or both
- To configure labels and annotations of the pods

For details about the Kubernetes StatefulSet update strategies, see Update StatefulSets at the following location: https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#updating-statefulsets. These update strategies are as follows:

On Delete: Manually delete the pods in any sequence suited to your environment configuration. When Kubernetes detects that a pod is deleted, it creates a new pod based on the specification defined for that StatefulSet. This is the default update strategy when a StatefulSet is created.

Rolling Updates: Perform a rolling update of all the pods in the cluster. Kubernetes will delete and re-create all the pods defined in the StatefulSet controller one at a time, in reverse ordinal order.

Rolling Update + Partition: A rolling update can also be partitioned, which is determined by the partition value in the specification defined for the StatefulSet. Kubernetes updates only the pods whose ordinal is greater than or equal to the partition value. For example, if there are four pods in the cluster and the partition value is 2, only the two pods with the highest ordinals are updated. The other two pods are updated only when the partition value is lowered to 0 or the partition attribute is removed from the specification.
Methods for Updating the StatefulSet Controller

In a StatefulSet, there are three attributes that need to be verified prior to roll out: image, imagePullPolicy, and updateStrategy. Any one of these attributes can be updated by using an in-place update. The in-place options are:

- kubectl apply: Update the yaml file that is used for creating the StatefulSet, and execute kubectl apply -f <statefulset yml file> to roll out the new configuration to the Kubernetes cluster. Sample updated yaml file:
- kubectl edit: Directly update the attribute value using kubectl edit statefulset <statefulset name>. Sample edit of a StatefulSet:
- kubectl patch: Directly update the attribute using kubectl patch. An example command that updates updateStrategy to RollingUpdate: An example command that updates updateStrategy to RollingUpdate with the partition option: An example command that uses JSON format to update the image attribute to wls-k8s-domain-v1:
- Kubernetes Dashboard: Drill down to the specific StatefulSet from the menu path and update the value of image, imagePullPolicy, and updateStrategy.

Steps to Apply One-Off Patches and Patch Set Updates with an External Domain Home

To create a patched WebLogic image with a new One-Off patch and apply it to all the pods:

1. Complete the steps in Example of Image with WLS Domain on GitHub to create a patched WebLogic image.
2. If the Kubernetes cluster is configured on multiple nodes, and the newly created image is not available in the Docker registry, complete the steps provided in docker save and docker load to copy the image to all the nodes in the cluster.
3. Update the controller definition using one of the methods described in Methods for Updating the StatefulSet Controller. If you want Kubernetes to automatically apply the new image to all the pods for the StatefulSet, you can set the updateStrategy value to RollingUpdate.
4. Apply the new image on the admin-server pod. Because there is only one Administration Server in the cluster, the preferred option is to use the RollingUpdate update strategy. After the change is committed to the StatefulSet controller, Kubernetes will delete and re-create the Administration Server pod automatically.
5. Apply the new image to all the pods defined in the Managed Server StatefulSet:
   a) For the OnDelete option, get the list of the pods in the cluster and change the updateStrategy value to OnDelete. You need to manually delete all the pods in the cluster to roll out the new image, using the following commands:
   b) For the RollingUpdate option, after you change the updateStrategy value to RollingUpdate, Kubernetes will delete and re-create the pods created for the Managed Server instances in a rolling fashion, in reverse ordinal order, as shown in Figure 1 below.
   c) If the partition attribute is added to the RollingUpdate value, the rolling update order depends on the partition value. Kubernetes will roll out the new image only to the pods whose ordinal is greater than or equal to the partition value.

Figure 1: Before and After Patching the Oracle Home Image Using the Kubernetes StatefulSet RollingUpdate Strategy

Roll Back

If you need to roll back the patch, use the same steps as when applying a new image; that is, change the image value back to the original image. You should retain at least two or three versions of the images in the registry.
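For reference, the kubectl patch variants described in the "Methods for Updating the StatefulSet Controller" section above typically look like the following. This is a hedged sketch: the StatefulSet name ms is an assumption and should be replaced with the name actually used in your wls-k8s-domain environment.

# Switch the update strategy to RollingUpdate.
kubectl patch statefulset ms -p '{"spec":{"updateStrategy":{"type":"RollingUpdate"}}}'

# RollingUpdate with a partition value, so only pods with ordinal >= 1 are updated.
kubectl patch statefulset ms -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":1}}}}'

# Use a JSON patch to point the StatefulSet at the patched image wls-k8s-domain-v1.
kubectl patch statefulset ms --type='json' \
  -p='[{"op":"replace","path":"/spec/template/spec/containers/0/image","value":"wls-k8s-domain-v1"}]'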
Monitoring

There are several ways you can monitor the rolling update progress of your WebLogic domain:

1. Use the kubectl command to check the pod status. For example, the following output is produced when doing a rolling update of two Managed Server pods:
   kubectl get pod -o wide
   kubectl rollout status statefulset
2. Use the REST API to query the Administration Server and monitor the status of the Managed Servers during the rolling update. For information about how to use the REST API to monitor WebLogic Server, see Oracle WebLogic RESTful Management Services: From Command Line to JavaFX. The following example command queries the status of the Administration Server: The preceding command generates output similar to the following:
3. Use the WebLogic Server Administration Console to monitor the status of the update. The server instance wls-domain-ms-1 is stopped: The update is done on wls-domain-ms-1, then switched to wls-domain-ms-0:
4. Use the Kubernetes Dashboard. From your browser, enter the URL https://<hostname>:<nodePort>.

Summary

The process for applying a One-Off patch or Patch Set Updates to WebLogic Server on Kubernetes is the same as when running in a bare metal environment. When you manage the WebLogic Server pods with a Kubernetes StatefulSet, we recommend that you create a new patched image by extending the previous image version, and then use the update strategy (OnDelete, RollingUpdate, or "RollingUpdate + partition") that is best suited for your environment. In a future blog, we will explore the patch options available with the Kubernetes Operator. You might be able to integrate some of the manual steps shared above with the operator to further simplify the overall WebLogic Server patching process when running on Kubernetes.
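The REST query mentioned in the monitoring section above can be approximated as follows. This is a hedged sketch: the admin URL, the credentials, and the fields filter are assumptions to adapt to your own domain.

# Hedged sketch: list the name and state of all running server instances during the rolling update.
curl --user weblogic:welcome1 \
  -H Accept:application/json \
  "http://<adminHost>:<adminPort>/management/weblogic/latest/domainRuntime/serverRuntimes?links=none&fields=name,state"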


Best Practices for Application Deployment on WebLogic Server Running on Kubernetes

Overview

WebLogic Server and Kubernetes each provide a rich set of features to support application deployment. As part of the process of certifying WebLogic Server on Kubernetes, we have identified a set of best practices for deploying Java EE applications on WebLogic Server instances that run in Kubernetes and Docker environments. This blog describes those best practices. They include the general recommendations described in Deploying Applications to Oracle WebLogic Server, and also include the application deployment features provided in Kubernetes.

Application Deployment Terminology

Both WebLogic Server and Kubernetes use similar terms for resources they manage, but with different meanings. For example, the notion of application or deployment has slightly different meanings, which can create confusion. The list below defines key terms that are used in this blog and how they are defined differently in WebLogic Server and Kubernetes. See the Kubernetes Reference Documentation for a standardized glossary of Kubernetes terminology.

Table 1 Application Deployment Terminology

Application
- WebLogic Server: A Java EE application (an enterprise application or web application) or a standalone Java EE module (such as an EJB or resource adapter) that has been organized according to the Java EE specification. An application unit includes a web application, enterprise application, Enterprise JavaBean, resource adapter, web service, Java EE library, or an optional package. An application unit may also include JDBC, JMS, or WLDF modules, or a client application archive.
- Kubernetes: Software that is containerized and managed in a cluster environment by Kubernetes. WebLogic Server is an example of a Kubernetes application.

Application Deployment
- WebLogic Server: The process of making a Java Enterprise Edition (Java EE) application or module available for processing client requests in WebLogic Server.
- Kubernetes: A way of packaging, instantiating, running, and communicating with containerized applications in a cluster environment. Kubernetes also has an API object called a Deployment that manages a replicated application.

Deployment Tool
- WebLogic Server: the weblogic.Deployer utility, the Administration Console, the WebLogic Scripting Tool (WLST), the wldeploy Ant task, the weblogic-maven-plugin Maven plug-in, the WebLogic Deployment API, and the auto-deployment feature.
- Kubernetes: kubeadm, kubectl, minikube, Helm Chart, kops.

Cluster
- WebLogic Server: A WebLogic cluster consists of multiple WebLogic Server instances running simultaneously and working together to provide increased scalability and reliability. A cluster appears to clients to be a single WebLogic Server instance. The server instances that constitute a cluster can run on the same machine, or be located on different machines. You can increase a cluster's capacity by adding additional server instances to the cluster on an existing machine, or you can add machines to the cluster to host the incremental server instances. Each server instance in a cluster must run the same version of WebLogic Server.
- Kubernetes: A Kubernetes cluster consists of a master node and a set of worker nodes. In a production environment these run in a distributed setup on multiple nodes. For testing purposes, all the components can run on the same node (either a physical host or a virtual machine).

Within the context of this blog, the following definitions are used: The application mentioned in this page is the Java EE application. The application deployment in this page is the Java EE application deployment on WebLogic Server. A Kubernetes application is the software managed by Kubernetes.
For example, a WebLogic Server.

Summary of Best Practices for Application Deployment in Kubernetes

In this blog, the best practices for application deployment on WebLogic Server running in Kubernetes include several parts:

- Distributing Java EE application deployment files to a Kubernetes environment so the WebLogic Server containers in pods can access the deployment files.
- Deploying Java EE applications in a Kubernetes environment so the applications are available for the WebLogic Server containers in pods to process the client requests.
- Integrating Kubernetes applications with the ReadyApp framework to check the Kubernetes applications' readiness status.

General Java EE Application Deployment Best Practices Overview

Before drilling down into the best practices details, let's briefly review the general Java EE application deployment best practices, which are described in Deploying Applications to Oracle WebLogic Server. The general Java EE application deployment process involves multiple parts, mainly:

- Preparing the Java EE application or module. See Preparing Applications and Modules for Deployment, including Best Practices for Preparing Deployment Files.
- Configuring the Java EE application or module for deployment. See Configuring Applications for Production Deployment, including Best Practices for Managing Application Configuration.
- Exporting the Java EE application or module for deployment to a new environment. See Exporting an Application for Deployment to New Environments, including Best Practices for Exporting a Deployment Configuration.
- Deploying the Java EE application or module. See Deploying Applications and Modules with weblogic.Deployer, including Best Practices for Deploying Applications.
- Redeploying the Java EE application or module. See Redeploying Applications in a Production Environment.

Distributing Java EE Application Deployment Files in Kubernetes

Assume the WebLogic Server instances have been deployed into Kubernetes and Docker environments. Before you deploy the Java EE applications on WebLogic Server instances, the Java EE application deployment files, for example, the EAR, WAR, and RAR files, need to be distributed to locations that can be accessed by the WebLogic Server instances in the pods. In Kubernetes, the deployment files can be distributed by means of Docker images, or manually by an administrator.

Pre-distribution of Java EE Applications in Docker Images

A Docker image can contain a pre-built WebLogic Server domain home directory that has one or more Java EE applications deployed to it. When the containers in the pods are created and started using the same Docker image, all containers have the same Java EE applications deployed to them. If the Java EE applications in the Docker image are updated to a newer version, a new Docker image can be created on top of the current existing Docker image, as shown in Figure 1. However, as newer application versions are introduced, additional layers are needed in the image, which consumes more resources, such as disk space. Consequently, having an excessive number of layers in the Docker image is not recommended.

Figure 1 Pre-distribution of Java EE Application in Layered Docker Images

Using Volumes in a Kubernetes Cluster

Application files can be shared among all the containers in all the pods by mapping the application volume directory in the pods to an external directory on the host. This makes the application files accessible to all the containers in the pods.
When using volumes, the application files need to be copied only once to the directory on the host. There is no need to copy the files to each pod. This saves disk space and deployment time, especially for large applications. Using volumes is recommended for distributing the Java EE applications to WebLogic Server instances running in Kubernetes. Figure 2 Mounting Volumes to an External Directory As shown in Figure 2, every container in each of the three pods has an application volume directory /shared/applications. Each of these directories is mapped to the same external directory on the host: /host/apps. After the administrator puts the application file simpleApp.war in the /host/apps directory on the host, this file can then be accessed by the containers in each pod from the /shared/applications directory. Note that Kubernetes supports different volume types. For information about determining the volume type to use, creating the volume directory, determining the medium that backs it, and identifying the contents of the volume, see Volumes in the Kubernetes documentation. Best Practices for Distributing Java EE Application Deployment Files in Kubernetes Use volumes to persist and share the application files across the containers in all pods. On-disk files in a container are ephemeral. When using a pre-built WebLogic Server domain home in a Docker image, use a volume to store the domain home directory on the host. A sample WebLogic domain wls-k8s-domain that includes a pre-built WebLogic Server domain home directory is available from GitHub at https://github.com/oracle/docker-images/tree/master/OracleWebLogic/samples/wls-k8s-domain Store the application files in a volume whose location is separate from the domain home volume directory on the host. A deployment plan generated for an existing Java EE web application that is deployed to WebLogic Server can be stored in a volume as well. For more details about using the deployment plan, see the tutorial at http://www.oracle.com/webfolder/technetwork/tutorials/obe/fmw/wls/12c/09-DeployPlan--4464/deployplan.htm. By default, all processes in WebLogic Server pods run with user ID 1000 and group ID 1000. Make sure that the proper access permissions are set on the application volume directory so that user ID 1000 or group ID 1000 has read and write access to it. Java EE Application Deployment in Kubernetes After the application deployment files are distributed throughout the Kubernetes cluster, you have several WebLogic Server deployment tools to choose from for deploying the Java EE applications to the containers in the pods. WebLogic Server supports the following deployment tools for deploying, undeploying, and redeploying Java EE applications: the WebLogic Administration Console, the WebLogic Scripting Tool (WLST), the weblogic.Deployer utility, the REST API, the wldeploy Ant task, the WebLogic Deployment API (which allows you to perform deployment tasks programmatically using Java classes), and the auto-deployment feature. When auto-deployment is enabled, copying an application into the /autodeploy directory of the Administration Server causes that application to be deployed automatically; auto-deployment is intended for evaluation or testing purposes in a development environment only. For more details about using these deployment tools, see Deploying Applications to Oracle WebLogic Server. These tools can also be used in Kubernetes.
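Before running any of these tools, it is worth confirming that the shared application volume is actually visible from inside the WebLogic Server pods. The following is a minimal sketch of such a check; it assumes the /host/apps to /shared/applications mapping described above and borrows a pod name from the wls-k8s-domain sample, so substitute the paths and pod names from your own environment.

# Copy the application to the host directory that backs the volume, and make sure
# that user ID 1000 or group ID 1000 (the WebLogic Server process identity) can read it.
$ cp simpleApp.war /host/apps/
$ chmod 644 /host/apps/simpleApp.war

# Confirm the file is visible from inside a WebLogic Server pod
# (replace the pod name with one returned by 'kubectl get pods').
$ kubectl exec admin-server-1238998015-f932w -- ls -l /shared/applications/simpleApp.war

If the file does not show up inside the pod, fix the volume mapping before moving on to the deployment samples that follow.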
The following samples show multiple ways to deploy and undeploy an application simpleApp.war in a WebLogic cluster myCluster: using WLST in a Dockerfile, using the weblogic.Deployer utility, and using the REST API. Note that the environment in which the deployment command is run is created based upon the sample WebLogic domain wls-k8s-domain available on GitHub at https://github.com/oracle/docker-images/tree/master/OracleWebLogic/samples/wls-k8s-domain. In this environment, a sample WLS 12.2.1.3 domain and cluster are created by extending the Oracle WebLogic developer install image and running it in Kubernetes. The WebLogic domain (for example, base_domain) consists of an Administration Server and several Managed Servers running in the WebLogic cluster myCluster. Each WebLogic Server instance is started in a container, and each pod has one WebLogic Server container. For details about the wls-k8s-domain sample, see the GitHub page. Each pod has one domain home volume directory (for example, /u01/wlsdomain). This domain home volume directory is mapped to an external directory (for example, /host/domain). The sample WLS 12.2.1.3 domain is created under this external directory. Each pod can have an application volume directory (for example, /shared/applications) created in the same way as the domain home volume directory. This application volume directory is mapped to an external directory (for example, /host/apps). The Java EE applications can be distributed to this external directory. Sample of Using Offline WLST in a Dockerfile to Deploy a Java EE Application In this sample, a Dockerfile is used for building an application Docker image. This application Docker image extends a wls-k8s-domain image that creates the sample wls-k8s-domain domain. This Dockerfile also calls WLST with a py script to update the sample wls-k8s-domain domain configuration with a new application deployment in offline mode.
# Dockerfile
# Extends wls-k8s-domain
FROM wls-k8s-domain
# Copy the script files and call a WLST script.
COPY container-scripts/* /u01/oracle/
# Run a py script to add a new application deployment into the domain configuration
RUN wlst /u01/oracle/app-deploy.py
The script app-deploy.py is called to deploy the application simpleApp.war using the offline WLST APIs:
# app-deploy.py
# Read the domain
readDomain(domainhome)
# Create application
# ==================
cd('/')
app = create('simpleApp', 'AppDeployment')
app.setSourcePath('/shared/applications/simpleApp.war')
app.setStagingMode('nostage')
# Assign application to cluster
# =================================
assign('AppDeployment', 'simpleApp', 'Target', 'myCluster')
# Update domain. Close It. Exit
# =================================
updateDomain()
closeDomain()
exit()
The application is deployed during the application Docker image build phase. When a WebLogic Server container is started, the simpleApp application is started and is ready to service client requests. Sample of Using weblogic.Deployer to Deploy and Undeploy a Java EE Application in Kubernetes In this sample, the application simpleApp.war exists in the external directory /host/apps on the host, as described in the prior section, Using Volumes in a Kubernetes Cluster.
The following commands show running the weblogic.Deployer utility in the Administration Server pod:
# Find the pod id for the Admin Server pod: admin-server-1238998015-f932w
> kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
admin-server-1238998015-f932w   1/1     Running   0          11m
managed-server-0                1/1     Running   0          11m
managed-server-1                1/1     Running   0          8m
# Find the Admin Server service name that can be connected to from the deployment command.
# Here the Admin Server service name is admin-server, which has port 8001.
> kubectl get services
NAME            CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
admin-server    10.102.160.123   <nodes>       8001:30007/TCP   11m
kubernetes      10.96.0.1        <none>        443/TCP          39d
wls-service     10.96.37.152     <nodes>       8011:30009/TCP   11m
wls-subdomain   None             <none>        8011/TCP         11m
# Execute /bin/bash in the Admin Server pod
> kubectl exec -it admin-server-1238998015-f932w /bin/bash
# Once in the Admin Server pod, set up the WebLogic environment, then run weblogic.Deployer
# to deploy the simpleApp.war located in the /shared/applications directory to
# the cluster "myCluster"
]$ cd /u01/wlsdomain/base_domain/bin
]$ . setDomainEnv.sh
]$ java weblogic.Deployer -adminurl t3://admin-server:8001 -user weblogic -password weblogic1 -name simpleApp -targets myCluster -deploy /shared/applications/simpleApp.war
Accessing the following URL verifies that the Java EE application deployment to the WebLogic cluster completed successfully:
# Kubernetes routes the traffic to both managed-server-0 and managed-server-1 via the wls-service port 30009.
http://<hostIP>:30009/simpleApp/Scrabble.jsp
The following commands use the weblogic.Deployer utility to undeploy the application. Note their similarity to the steps for deployment:
# Execute /bin/bash in the Admin Server pod
> kubectl exec -it admin-server-1238998015-f932w /bin/bash
# Undeploy the simpleApp
]$ cd /u01/wlsdomain/base_domain/bin
]$ . setDomainEnv.sh
]$ java weblogic.Deployer -adminurl t3://admin-server:8001 -user weblogic -password weblogic1 -undeploy -name simpleApp
Sample of Using REST APIs to Deploy and Undeploy a Java EE Application in Kubernetes In this sample, the application simpleApp.war has already been distributed to the host directory /host/apps. This host directory, in turn, mounts to the application volume directory /shared/applications, which is in the pod admin-server-1238998015-f932w. The following example shows executing a curl command in the pod admin-server-1238998015-f932w. This curl command sends a REST request to the Administration Server using NodePort 30007 to deploy the simpleApp to the WebLogic cluster myCluster.
# deploy simpleApp.war file to the WebLogic cluster
> kubectl exec admin-server-1238998015-f932w -- curl -v --user weblogic:weblogic1 \
          -H X-Requested-By:MyClient \
          -H Content-Type:application/json \
          -d "{ name: 'simpleApp', \
                sourcePath: '/shared/applications/simpleApp.war', \
                targets: [ { identity: [ 'clusters', 'myCluster' ] } ] }" \
          -X POST http://<hostIP>:30007/management/weblogic/latest/edit/appDeployments
The following command uses the REST API to undeploy the application:
# undeploy simpleApp.war file from the WebLogic cluster
> kubectl exec admin-server-1238998015-f932w -- curl -v --user weblogic:weblogic1 \
          -H X-Requested-By:MyClient \
          -H Accept:application/json \
          -X DELETE http://<hostIP>:30007/management/wls/latest/deployments/application/id/simpleApp
Best Practices for Deploying Java EE Applications in Kubernetes Deploy Java EE applications or modules to a WebLogic cluster instead of to individual WebLogic Server instances. This simplifies scaling the WebLogic cluster later because changes to the deployment strategy are not necessary. WebLogic Server deployment tools can be used in the Kubernetes environment. When updating an application, follow the same steps as described above to distribute and deploy the application. When using a pre-built WebLogic Server domain home in a Docker image, deploying applications to the domain automatically updates the domain configuration. However, deploying applications this way causes the domain configuration in the pods to become out of sync with the domain configuration in the Docker image. You can avoid this synchronization issue whenever possible by including the required applications in the pre-built domain home in the Docker image. This way you can avoid extra deployment steps later on. Integrating ReadyApp Framework in Kubernetes Readiness Probe Kubernetes provides a flexible approach to configuring load balancers and frontends in a way that isolates clients from the details of how services are deployed. As part of this approach, Kubernetes performs and reacts to a readiness probe to determine when a container is ready to accept traffic. By contrast, WebLogic Server provides the ReadyApp framework, which reports whether WebLogic Server instance startup has completed and the instance is ready to service client requests. The ReadyApp framework uses two states: READY and NOT READY. The READY state means that not only is a WebLogic Server instance in the RUNNING state, but also that all applications deployed on the WebLogic Server instance are ready to service requests. When in the NOT READY state, the WebLogic Server instance startup is incomplete and the instance is unable to accept traffic. When starting a WebLogic Server container in a Kubernetes environment, you can use a Kubernetes readiness probe to access the ReadyApp framework on WebLogic Server. When the ReadyApp framework reports that a WebLogic Server container is in the READY state, the readiness probe notifies Kubernetes that traffic to the WebLogic Server container may begin. The following example shows how to use the ReadyApp framework integrated in a readiness probe to determine whether a WebLogic Server container running on port 8011 is ready to accept traffic.
apiVersion: apps/v1beta1
kind: StatefulSet
metadata: [...]
spec:
  [...]
  template:
    [...]
    spec:
      containers:
        [...]
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /weblogic/ready
            port: 8011
            scheme: HTTP
[...]
The ReadyApp framework on WebLogic Server can be accessed from the URL http://<hostIP>:<port>/weblogic/ready. When WebLogic Server is running, this URL returns a page with either a status 200 (READY) or 503 (NOT READY). When WebLogic Server is not running, an Error 404 page appears. Similar to WebLogic Server, other Kubernetes applications can register with the ReadyApp framework and use a readiness probe to check the state of the ReadyApp framework on those Kubernetes applications. See Using the ReadyApp Framework for information about how to register an application with the ReadyApp framework. Best Practices for Integrating ReadyApp Framework in Kubernetes Readiness Probe It is recommended that you register Kubernetes applications with the ReadyApp framework and use a readinessProbe to check the status of the ReadyApp framework and determine whether the applications are ready to service requests. Kubernetes routes traffic to those Kubernetes applications only when the ReadyApp framework reports the READY state. Conclusion When integrating WebLogic Server in Kubernetes and Docker environments, customers can use the existing powerful WebLogic Server deployment tools to deploy their Java EE applications onto WebLogic Server instances running in Kubernetes. Customers can also use Kubernetes features to manage WebLogic Server: they can use volumes to share the application files with all the containers across all pods in a Kubernetes cluster, use the readinessProbe to monitor WebLogic Server startup state, and more. This integration not only allows customers to support flexible deployment scenarios that fit into their company's business practices, but also provides ways to quickly deploy WebLogic Server in a cloud environment, to autoscale it on the fly, and to update it seamlessly.


WebLogic Server on Kubernetes Data Volume Usage

As part of certifying WebLogic Server on Kubernetes, we have identified best practices for sharing file data among WebLogic Server pods that are running in a Kubernetes environment. In this blog, I review the WebLogic Server services and files that are typically configured to leverage shared storage, and I provide full end-to-end samples, which you can download and run, that show mounting shared storage for a WebLogic domain that is orchestrated by Kubernetes. WebLogic Server Persistence in Volumes When running WebLogic Server on Kubernetes, refer to the blog Docker Volumes in WebLogic for information about the advantages of using data volumes. That blog also identifies the WebLogic Server artifacts that are good candidates for being persisted in those data volumes. Kubernetes Solutions In a Kubernetes environment, pods are ephemeral. To persist data, Kubernetes provides the Volume abstraction, and the PersistentVolume (PV) and PersistentVolumeClaim (PVC) API resources. Based on the official Kubernetes definitions [Kubernetes Volumes and Kubernetes Persistent Volumes and Claims], a PV is a piece of storage in the cluster that has been provisioned by an administrator, and a PVC is a request for storage by a user. Therefore, PVs and PVCs are independent entities outside of pods. They can be easily referenced by a pod for file persistence and file sharing among pods inside a Kubernetes cluster. When running WebLogic Server on Kubernetes, using PVs and PVCs to handle shared storage is recommended for the following reasons: Usually WebLogic Server instances run in pods on multiple nodes that require access to a shared PV. The life cycle of a WebLogic Server instance is not limited to a single pod. PVs and PVCs can provide more control, for example, the ability to specify access modes for concurrent read/write management, mount options provided by volume plugins, storage capacity requirements, reclaim policies for resources, and more. Use Cases of Kubernetes Volumes for WebLogic Server To see the details about the samples, or to run them locally, please download the examples and follow the steps provided below. Software Versions Host machine: Oracle Linux 7u3 UEK4 (x86-64) Kubernetes v1.7.8 Docker 17.03 CE Prepare Dependencies Build the oracle/weblogic:12.2.1.3-developer image locally based on the Dockerfile and scripts at https://github.com/oracle/docker-images/tree/master/OracleWebLogic/dockerfiles/12.2.1.3/. Download the WebLogic Kubernetes domain sample source code from https://github.com/oracle/docker-images/tree/master/OracleWebLogic/samples/wls-k8s-domain. Put the sample source code into a local folder named wls-k8s-domain. Build the WebLogic domain image locally based on the Dockerfile and scripts. $ cd wls-k8s-domain $ docker build -t wls-k8s-domain . For Use Case 2, below, prepare an NFS server and a shared directory by entering the following commands (in this example I use machine 10.232.128.232). Note that Use Case 1 uses a host path instead of NFS and does not require this step. # systemctl start rpcbind.service # systemctl start nfs.service # systemctl start nfslock.service $ mkdir -p /scratch/nfsdata $ chmod o+rw /scratch/nfsdata # echo "/scratch/nfsdata *(rw,fsid=root,no_root_squash,no_subtree_check)" >> /etc/exports By default, in the WebLogic domain wls-k8s-domain, all processes in pods that contain WebLogic Server instances run with user ID 1000 and group ID 1000.
Proper permissions need to be set to the external NFS shared directory to make sure that user ID 1000 and group ID 1000 have read and write permission to the NFS volume. To simplify the permissions management in the examples, we grant read and write permission to others to the shared directory as well. Use Case 1: Host Path Mapping at Individual Machine with a Kubernetes Volume The WebLogic domain consists of an Administration Server and multiple Managed Servers, each running inside its own pod. All pods have volumes directly mounted to a folder on the physical machine. The domain home is created in a shared folder when the Administration Server pod is first started. At runtime, all WebLogic Server instances, including the Administration Server, share the same domain home directory via a mounted volume. Note: This example runs on a single machine, or node, but this approach also works when running the WebLogic domain across multiple machines. When running on multiple machines, each WebLogic Server instance must share the same directory. In turn, the host path can refer to this directory, thus access to the volume is controlled by the underlying shared directory. Given a set of machines that are already set up with a shared directory, this approach is simpler than setting up an NFS client (although maybe not as portable). To run this example, complete the following steps: Prepare the yml file for the WebLogic Administration Server. Edit wls-admin.yml to mount the host folder /scratch/data to /u01/wlsdomain in the Administration Server pod: apiVersion: apps/v1beta1 kind: Deployment metadata: name: admin-server spec: replicas: 1 template: metadata: labels: app: admin-server spec: containers: - name: admin-server image: wls-k8s-domain imagePullPolicy: Never command: ["sh"] args: ["/u01/oracle/startadmin.sh"] readinessProbe: httpGet: path: /weblogic/ready port: 8001 initialDelaySeconds: 15 timeoutSeconds: 5 ports: - containerPort: 8001 env: - name: WLUSER valueFrom: secretKeyRef: name: wlsecret key: username - name: WLPASSWORD valueFrom: secretKeyRef: name: wlsecret key: password volumeMounts: # name must match the volume name below - name: domain-home mountPath: "/u01/wlsdomain" volumes: - name: domain-home hostPath: path: /scratch/data type: Directory Prepare the yml file for the Managed Servers. 
Edit wls-stateful.yml to mount the host folder /scratch/data to /u01/wlsdomain in the Managed Server pods: apiVersion: apps/v1beta1 # for versions before 1.6.0 use extensions/v1beta1 kind: StatefulSet metadata: name: managed-server spec: serviceName: wls-subdomain replicas: 2 template: metadata: name: ms labels: app: managed-server spec: subdomain: wls-subdomain containers: - name: managed-server image: wls-k8s-domain imagePullPolicy: Never command: ["sh"] args: ["/u01/oracle/startms.sh"] readinessProbe: httpGet: path: /weblogic/ready port: 8011 initialDelaySeconds: 15 timeoutSeconds: 5 ports: - containerPort: 8011 env: - name: JAVA_OPTIONS value: "-Dweblogic.StdoutDebugEnabled=true" - name: USER_MEM_ARGS value: "-Xms64m -Xmx256m " - name: MY_POD_IP valueFrom: fieldRef: fieldPath: status.podIP - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: DNS_DOMAIN_NAME value: "wls-subdomain" - name: WLUSER valueFrom: secretKeyRef: name: wlsecret key: username - name: WLPASSWORD valueFrom: secretKeyRef: name: wlsecret key: password volumeMounts: # name must match the volume name below - name: domain-home mountPath: "/u01/wlsdomain" volumes: - name: domain-home hostPath: path: /scratch/data type: Directory Create the Administration Server and Managed Server pods with the shared volume. These WebLogic Server instances will start from the mounted domain location. $ kubectl create -f wls-admin.yml $ kubectl create -f wls-stateful.yml Use Case 2: NFS Sharing with Kubernetes PV and PVC This example shows a WebLogic Server cluster with one Administration Server and several Managed Server instances, each server residing in a dedicated pod. All the pods have volumes mounted to a central NFS server that is located in a physical machine that the pods can reach. The first time the Administration Server pod is started, the WebLogic domain is created in the shared NFS folder. At runtime, all WebLogic Server instances, including the Administration Server, share the same domain home directory via a mounted volume by PV and PVC. In this sample we have the NFS server on host 10.232.128.232, which has a read/write export to all external hosts on /scratch/nfsdata. Prepare the PV. Edit pv.yml to make sure each WebLogic Server instance has read and write access to the NFS shared folder: kind: PersistentVolume apiVersion: v1 metadata: name: pv1 labels: app: wls-domain spec: storageClassName: manual capacity: storage: 10Gi accessModes: - ReadWriteMany persistentVolumeReclaimPolicy: Recycle # Retain, Recycle, Delete nfs: # Please use the correct NFS server host name or IP address server: 10.232.128.232 path: "/scratch/nfsdata" Prepare the PVC. Edit pvc.yml: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: wlserver-pvc-1 labels: app: wls-server spec: storageClassName: manual accessModes: - ReadWriteMany resources: requests: storage: 10Gi Kubernetes will find the matching PV for the PVC, and bind them together [Kubernetes Persistent Volumes and Claims]. Create the PV and PVC: $ kubectl create -f pv.yml $ kubectl create -f pvc.yml Then check the PVC status to make sure it binds to the PV: $ kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE wlserver-pvc-1 Bound pv1 10Gi RWX manual 7s Prepare the yml file for the Administration Server. It has a reference to the PVC wlserver-pvc-1. 
Edit wls-admin.yml to mount the NFS shared folder to /u01/wlsdomain in the WebLogic Server Administration Server pod: apiVersion: apps/v1beta1 kind: Deployment metadata: name: admin-server spec: replicas: 1 template: metadata: labels: app: admin-server spec: containers: - name: admin-server image: wls-k8s-domain imagePullPolicy: Never command: ["sh"] args: ["/u01/oracle/startadmin.sh"] readinessProbe: httpGet: path: /weblogic/ready port: 8001 initialDelaySeconds: 15 timeoutSeconds: 5 ports: - containerPort: 8001 env: - name: WLUSER valueFrom: secretKeyRef: name: wlsecret key: username - name: WLPASSWORD valueFrom: secretKeyRef: name: wlsecret key: password volumeMounts: # name must match the volume name below - name: domain-home mountPath: "/u01/wlsdomain" volumes: - name: domain-home persistentVolumeClaim: claimName: wlserver-pvc-1 Prepare the yml file for the Managed Servers. It has a reference to the PVC wlserver-pvc-1. Edit wls-stateful.yml to mount the NFS shared folder to /u01/wlsdomain in each Managed Server pod: apiVersion: apps/v1beta1 # for versions before 1.6.0 use extensions/v1beta1 kind: StatefulSet metadata: name: managed-server spec: serviceName: wls-subdomain replicas: 2 template: metadata: name: ms labels: app: managed-server spec: subdomain: wls-subdomain containers: - name: managed-server image: wls-k8s-domain imagePullPolicy: Never command: ["sh"] args: ["/u01/oracle/startms.sh"] readinessProbe: httpGet: path: /weblogic/ready port: 8011 initialDelaySeconds: 15 timeoutSeconds: 5 ports: - containerPort: 8011 env: - name: JAVA_OPTIONS value: "-Dweblogic.StdoutDebugEnabled=true" - name: USER_MEM_ARGS value: "-Xms64m -Xmx256m " - name: MY_POD_IP valueFrom: fieldRef: fieldPath: status.podIP - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: DNS_DOMAIN_NAME value: "wls-subdomain" - name: WLUSER valueFrom: secretKeyRef: name: wlsecret key: username - name: WLPASSWORD valueFrom: secretKeyRef: name: wlsecret key: password volumeMounts: - name: domain-home mountPath: "/u01/wlsdomain" volumes: - name: domain-home persistentVolumeClaim: claimName: wlserver-pvc-1 Create the Administration Server and Managed Server pods with the NFS shared volume. Each WebLogic Server instance will start from the mounted domain location: $ kubectl create -f wls-admin.yml $ kubectl create -f wls-stateful.yml Summary This blog describes the best practices of setting Kubernetes data volumes when running a WebLogic domain in a Kubernetes environment. Because Kubernetes pods are ephemeral, it is a best practice to persist the WebLogic domain to volumes, as well as files such as logs, stores, and so on. Kubernetes provides persistent volumes and persistent volume claims to simplify externalizing state and persisting important data to volumes. We provide two use cases: the first describes how to map the volume to a host machine where the Kubernetes nodes are running; and the second describes how to use an NFS shared volume. In both use cases, all WebLogic Server instances must have access to the files that are mapped to these volumes.
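A quick way to confirm that the volume wiring in either use case behaves as described is to write a file into the domain home from one pod and read it back from another. The following is only a sketch; it assumes the app=admin-server and app=managed-server pod labels and the /u01/wlsdomain mount path used in the samples above, so adjust the names for your environment.

# Look up one Administration Server pod and one Managed Server pod by label.
$ ADMIN_POD=$(kubectl get pod -l app=admin-server -o jsonpath='{.items[0].metadata.name}')
$ MS_POD=$(kubectl get pod -l app=managed-server -o jsonpath='{.items[0].metadata.name}')
# Write a marker file from the Administration Server pod ...
$ kubectl exec $ADMIN_POD -- touch /u01/wlsdomain/volume-check
# ... verify that a Managed Server pod sees the same file, then clean up.
$ kubectl exec $MS_POD -- ls -l /u01/wlsdomain/volume-check
$ kubectl exec $ADMIN_POD -- rm /u01/wlsdomain/volume-check

If the marker file is not visible from the second pod, revisit the hostPath or PV/PVC definitions before starting the WebLogic domain.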


Exporting Metrics from WebLogic Server

As it runs, WebLogic Server generates a rich set of metrics and runtime state information. Several thousand individual metrics are available to capture performance data, such as invocation counts, session activity, work manager threads, and so forth. These metrics are very useful for tracking activity, diagnosing problems, and ensuring sufficient resources are available. Exposed through both JMX and web services, these metrics are supported by Oracle administration tools, such as Enterprise Manager and the WebLogic Server Administration Console, as well as third-party clients.  One of those third-party clients is Prometheus. Prometheus is an open source monitoring toolkit that is commonly used in cloud environments as a framework for gathering, storing, and querying time series data. A number of exporters have been written to scrape information from various services and feed that information into a Prometheus server. Once there, this data can be retrieved using Prometheus itself or other tools that can process Prometheus data, such as Grafana. Oracle customers have been using the generic Prometheus JMX Exporter to scrape information from WebLogic Server instances, but this solution is hampered by usability issues and scalability at larger sites. Consider the following portion of an MBean tree: In this tree, ServerRuntime represents the top of the MBean tree and has several ApplicationRuntime MBeans, each of which has multiple ComponentRuntime MBeans. Some of those are of type WebAppComponentRuntime, which has multiple Servlet MBeans. We can configure the JMX Exporter as follows: jmxUrl: service:jmx:t3://@HOST@:@PORT@/jndi/weblogic.management.mbeanservers.runtime  username: system  password: gumby1234  lowercaseOutputName: false  lowercaseOutputLabelNames: false  whitelistObjectNames:    - "com.bea:ServerRuntime=*,Type=ApplicationRuntime,*"    - "com.bea:Type=WebAppComponentRuntime,*"    - "com.bea:Type=ServletRuntime,*"    rules:    - pattern: "^com.bea<ServerRuntime=.+, Name=(.+), ApplicationRuntime=(.+), Type=ServletRuntime, WebAppComponentRuntime=(.+)><>(.+): (.+)"      attrNameSnakeCase: true      name: weblogic_servlet_$4      value: $5      labels:        name: $3        app: $2        servletName: $1      - pattern: "^com.bea<ServerRuntime=(.+), Name=(.+), ApplicationRuntime=(.+), Type=WebAppComponentRuntime><>(.+): (.+)$"      attrNameSnakeCase: true      name: webapp_config_$4      value: $5      labels:        app: $3        name: $2 This selects the appropriate MBeans and allows the exporter to generate metrics such as: webapp_config_open_sessions_current_count{app="receivables",name="accounting"} 3  webapp_config_open_sessions_current_count{app="receivables",name="inventory"} 7  weblogic_servlet_invocations_total_count{app="receivables",name="accounting",servletName="Balance"} 0  weblogic_servlet_invocations_total_count{app="receivables",name="accounting",servletName="Login"} 0  weblogic_servlet_invocations_total_count{app="receivables",name="inventory",servletName="Count"} 0  weblogic_servlet_invocations_total_count{app="receivables",name="inventory",servletName="Reorder"} 0 However, this approach has challenges. The JMX Exporter can be difficult to set up because it must run as a Java agent. In addition, because JMX is built on top of RMI, and JMX over RMI/IIOP has been removed from the JRE as of Java SE 9, the exporter must be packaged with a platform-specific RMI implementation. The JMX Exporter is also somewhat processor-intensive. 
It requires a separate invocation of JMX to obtain each bean in the tree, which adds to the processing that must be done by the server. And configuring the exporter can be difficult because it relies on MBean names and regular expressions. While it is theoretically possible to select a subset of the attributes for a given MBean, in practice that adds further complexity to the regular expressions, thereby making it impractical. As a result, it is common to scrape everything and incur the transport and storage costs, and then to apply filtering only when the data is eventually viewed. The WebLogic Monitoring Exporter Along with JMX, Oracle WebLogic Server 12.2.1 and later provides a RESTful Management Interface for accessing runtime state and metrics. Included in this interface is a powerful bulk access capability that allows a client to POST a query that describes exactly what information is desired and to retrieve a single response that includes only that information. Oracle has now created the WebLogic Monitoring Exporter, which takes advantage of this interface. This exporter is implemented as a web application that is deployed to the WebLogic Server instance being monitored. Its configuration explicitly follows the MBean tree, starting below the ServerRuntime MBean. To obtain the same result as in the previous example, we could use the following: metricsNameSnakeCase: true   queries:     - applicationRuntimes:       key: name       keyName: app       componentRuntimes:         type: WebAppComponentRuntime         prefix: webapp_config_         key: name         values: [openSessionsCurrentCount, openSessionsHighCount]         servlets:           prefix: weblogic_servlet_           key: servletName This exporter can scrape the desired metrics with a single HTTP query rather than multiple JMX queries, requires no special setup, and provides an easy way to select the metrics that should be produced for an MBean, while defaulting to using all available fields. Note that the exporter does not need to specify a URL because it always connects to the server on which it is deployed, and does not specify username and password, but rather requires its clients to specify them when attempting to read the metrics. Managing the Application Because the exporter is a web application, it includes a landing page: Not only does the landing page include the link to the metrics, but it also displays the current configuration. When the app is first loaded, the configuration that’s used is the one embedded in the WAR file. However, the landing page contains a form that allows you to change the configuration by selecting a new yaml file. Only the queries from the new file are used, and we can combine queries by selecting the Append button before submitting. For example, we could add some JVM metrics: The new metrics will be reported the next time a client accesses the metrics URL. The new elements above will produce metrics such as: jvm_heap_free_current{name="myserver"} 285027752 jvm_heap_free_percent{name="myserver"} 71 jvm_heap_size_current{name="myserver"} 422051840 Metrics in a WebLogic Server Cluster In a WebLogic Server cluster, of course, it is of little value to change the metrics collected by a single server instance; because all cluster members are serving the same applications, we want them to report the same metrics. To do this, we need a way to have all the servers respond to the changes made to any one of them. 
The exporter does this by using a separate config_coordinator process to track changes. To use it, we need to add a new top-level element to the initial configuration that describes the query synchronization: query_sync:    url: http://coordinator:8099    refreshInterval: 10   This specifies the URL of the config_coordinator process, which runs in its own Docker container. When the exporter first starts, and its configuration contains this element, it will contact the coordinator to see if it already has a new configuration. Thereafter, it will do so every time either the landing page or the metrics page is queried. The optional refreshInterval element limits how often the exporter looks for a configuration update. When it finds one, it will load it immediately without requiring a server restart. When you update the configuration in an exporter that is configured to use the coordinator, the new queries are sent to the coordinator where other exporters can load them. In this fashion, an entire cluster of Managed Servers can have its metrics configurations kept in sync. Summary The WebLogic Monitoring Exporter greatly simplifies the process of exporting metrics from clusters of WebLogic Server instances in a Docker/Kubernetes environment. It does away with the need to figure out MBean names and work with regular expressions. It also allows metric labels to be defined explicitly from field names, and then automatically uses those definitions for metrics from subordinate MBeans, ensuring consistency. In our testing, we have found enormous improvements in performance using it versus the JMX Exporter. It uses less CPU and responds more quickly. In the graphs below, the green lines represent the JMX Exporter, and the yellow lines represent the WebLogic Monitoring Exporter. We expect users who wish to monitor WebLogic Server performance will gain great benefits from our efforts. See Using Prometheus and Grafana to Monitor WebLogic Server on Kubernetes for more information.
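If you want to see the raw scrape output before wiring the exporter into Prometheus, you can read its metrics page directly with curl. The following is a sketch that assumes the exporter web application is deployed with the context root wls-exporter and exposed on NodePort 30011, as in the Kubernetes sample discussed later in this document; adjust the host, port, and credentials for your environment.

# The exporter does not store credentials; the client supplies the WebLogic user and password.
$ curl -s --user weblogic:weblogic1 http://[hostname]:30011/wls-exporter/metrics | grep weblogic_servlet | head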


Announcement

Announcing the New WebLogic Monitoring Exporter 

Very soon we will be announcing certification of WebLogic Server on Kubernetes.  To give our users the best possible experience when running WebLogic domains in Docker/Kubernetes environments, we have developed the WebLogic Monitoring Exporter.  This new tool exposes WebLogic Server metrics that can be read and collected by monitoring tools such as Prometheus, and displayed in Grafana. We are also making the WebLogic Monitoring Exporter tool available in open source here, which will allow our community to contribute to this project and be part of enhancing it.  As it runs, WebLogic Server generates a rich set of metrics and runtime state information that provides detailed performance and diagnostic data about the servers, clusters, applications, and other resources that are running in a WebLogic domain.  The WebLogic Monitoring Exporter enables administrators of Kubernetes environments to easily monitor this data using tools like Prometheus and Grafana, tools that are commonly used for monitoring Kubernetes environments. For more information on the design and implementation of the WebLogic Monitoring Exporter, see Exporting Metrics from WebLogic Server. For more information on Using Prometheus and Grafana to Monitor WebLogic Server on Kubernetes see WebLogic on Kubernetes monitoring using Prometheus and Grafana. Stay tuned for more information about WebLogic Server certification on Kubernetes. Our intent is to enable you to run WebLogic Server in Kubernetes, to run WebLogic Server in the Kubernetes-based Oracle Container Engine, and to enable integration of WebLogic Server applications with applications developed on our Kubernetes-based Container Native Application Development Platform.


Run a WebLogic JMS Sample on Kubernetes

Overview This blog is a step-by-step guide to configuring and running a sample WebLogic JMS application in a Kubernetes cluster. First we explain how to create a WebLogic domain that has an Administration Server, and a WebLogic cluster. Next we add WebLogic JMS resources and a data source, deploy an application, and finally run the application. This application is based on a sample application named 'Classic API - Using Distributed Destination' that is included in the WebLogic Server sample applications. This application implements a scenario in which employees submit their names when they arrive, and a supervisor monitors employee arrival time. Employees choose whether to send their check-in messages to a distributed queue or a distributed topic. These destinations are configured on a cluster with two active Managed Servers. Two message-driven beans (MDBs), corresponding to these two destinations, are deployed to handle the check-in messages and store them in a database. A supervisor can then scan all of the check-in messages by querying the database. The two main approaches for automating WebLogic configuration changes are WLST and the REST API. To run the scripts, WLST or REST API, in a WebLogic domain on Kubernetes, you have two options: Running the scripts inside Kubernetes cluster pods — If you use this option,  use 'localhost', NodePort service name, or Statefulset's headless service name,  pod IP,  Cluster IP, and the internal ports. The instructions in this blog use 'localhost'. Running the scripts outside the Kubernetes cluster — If you use this option, use hostname/IP and the NodePort. In this blog we use the REST API and run the scripts within the Administration Server pod to deploy all the resources. All the resources are targeted to the whole cluster which is the recommended approach for WebLogic Server on Kubernetes because it works well when the cluster scales up or scales down. Creating the WebLogic Base Domain We use the sample WebLogic domain in GitHub to create the base domain. In this WebLogic sample you will find a Dockerfile, scripts, and yaml files to build and run the WebLogic Server instances and cluster in the WebLogic domain on Kubernetes. The sample domain contains an Administration Server named AdminServer and a WebLogic cluster with four Managed Servers named managed-server-0 through managed-server-3. We configure four Managed Servers but we start only the first two: managed-server-0 and managed-server-1.  One feature that distinguishes a JMS service from others is that it's highly stateful and most of its data needs to be kept in a persistent store, such as persistent messages, durable subscriptions, and so on. A persistent store can be a database store or a file store, and in this sample we demonstrate how to use external volumes to store this data in file stores. In this WebLogic domain we configure three persistent volumes for the following: The domain home folder – This volume is shared by all the WebLogic Server instances in the domain; that is, the Administration Server and all Managed Server instances in the WebLogic cluster. The file stores – This volume is shared by the Managed Server instances in the WebLogic cluster. A MySQL database – The use of this volume is explained later in this blog. Note that by default a domain home folder contains configuration files, log files, diagnostic files, application binaries, and the default file store files for each WebLogic Server instance in the domain. 
Custom file store files are also placed in the domain home folder by default, but we customize the configuration in this sample to place these files in a separate, dedicated persistent volume. The two persistent volumes – one for the domain home, and one for the customer file stores – are shared by multiple WebLogic Servers instances. Consequently, if the Kubernetes cluster is running on more than one machine, these volumes must be in a shared storage. Complete the steps in the README.md file to create and run the base domain. Wait until all WebLogic Server instances are running; that is, the Administration Server and two Managed Servers. This may take a short while because Managed Servers are started in sequence after the Administration Server is running and the provisioning of the initial domain is complete. $ kubectl get pod NAME READY    STATUS    RESTARTS    AGE admin-server-1238998015-kmbt9  1/1      Running   0           5m managed-server-0               1/1      Running   0          3m managed-server-1               1/1      Running   0   3m Note that in the commands used in this blog you need to replace $adminPod and $mysqlPod with the actual pod names. Deploying the JMS Resources with a File Store When the domain is up and running, we can deploy the JMS resources. First, prepare a JSON data file that contains definitions for one file store, one JMS server, and one JMS module. The file will be processed by a Python script to create the resources, one-by-one, using the WebLogic Server REST API. file jms1.json: {"resources": { "filestore1": { "url": "fileStores", "data": { "name": "filestore1", "directory": "/u01/filestores/filestore1", "targets": [{ "identity":["clusters", "myCluster"] }] } }, "jms1": { "url": "JMSServers", "data": { "messagesThresholdHigh": -1, "targets": [{ "identity":["clusters", "myCluster"] }], "persistentStore": [ "fileStores", "filestore1" ], "name": "jmsserver1" } }, "module": { "url": "JMSSystemResources", "data": { "name": "module1", "targets":[{ "identity": [ "clusters", "myCluster" ] }] } }, "sub1": { "url": "JMSSystemResources/module1/subDeployments", "data": { "name": "sub1", "targets":[{ "identity": [ "JMSServers", "jmsserver1" ] }] } } }} Second, prepare the JMS module file, which contains a connection factory, a distributed queue, and a distributed topic. 
file module1-jms.xml: <?xml version='1.0' encoding='UTF-8'?> <weblogic-jms xmlns="http://xmlns.oracle.com/weblogic/weblogic-jms" xmlns:sec="http://xmlns.oracle.com/weblogic/security" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:wls="http://xmlns.oracle.com/weblogic/security/wls" xsi:schemaLocation="http://xmlns.oracle.com/weblogic/weblogic-jms http://xmlns.oracle.com/weblogic/weblogic-jms/1.1/weblogic-jms.xsd"> <connection-factory name="cf1"> <default-targeting-enabled>true</default-targeting-enabled> <jndi-name>cf1</jndi-name> <transaction-params> <xa-connection-factory-enabled>true</xa-connection-factory-enabled> </transaction-params> <load-balancing-params> <load-balancing-enabled>true</load-balancing-enabled> <server-affinity-enabled>false</server-affinity-enabled> </load-balancing-params> </connection-factory> <uniform-distributed-queue name="dq1"> <sub-deployment-name>sub1</sub-deployment-name> <jndi-name>dq1</jndi-name> </uniform-distributed-queue> <uniform-distributed-topic name="dt1"> <sub-deployment-name>sub1</sub-deployment-name> <jndi-name>dt1</jndi-name> <forwarding-policy>Partitioned</forwarding-policy> </uniform-distributed-topic> </weblogic-jms> Third, copy these two files to the Administration Server pod, then run the Python script to create the JMS resources within the Administration Server pod: $ kubectl exec $adminPod -- mkdir /u01/wlsdomain/config/jms/ $ kubectl cp ./module1-jms.xml $adminPod:/u01/wlsdomain/config/jms/ $ kubectl cp ./jms1.json $adminPod:/u01/oracle/ $ kubectl exec $adminPod -- python /u01/oracle/run.py createRes /u01/oracle/jms1.json Launch the WebLogic Server Administration Console, by going to your browser and entering the URL http://<hostIP>:30007/console in the address bar, and make sure that all the JMS resources are running successfully. Deploying the Data Source Setting Up and Running MySQL Server in Kubernetes This sample stores the check-in messages in a database. So let's set up MySQL Server and get it running in Kubernetes. First, let's prepare the mysql.yml file, which defines a secret to store encrypted username and password credentials, a persistent volume claim (PVC) to store database data in an external directory, and a MySQL Server deployment and service. In the base domain, one persistent volume is reserved and available so that it can be used by the PVC that is defined in mysql.yml. 
file mysql.yml: apiVersion: v1 kind: Secret metadata: name: dbsecret type: Opaque data: username: bXlzcWw= password: bXlzcWw= rootpwd: MTIzNHF3ZXI= --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: mysql-pv-claim labels: app: mysql-server spec: storageClassName: manual accessModes: - ReadWriteOnce resources: requests: storage: 10Gi --- apiVersion: apps/v1beta1 kind: Deployment metadata: name: mysql-server spec: replicas: 1 template: metadata: labels: app: mysql-server spec: containers: - name: mysql-server image: mysql:5.7 imagePullPolicy: IfNotPresent ports: - containerPort: 3306 env: - name: MYSQL_ROOT_PASSWORD valueFrom: secretKeyRef: name: dbsecret key: rootpwd - name: MYSQL_USER valueFrom: secretKeyRef: name: dbsecret key: username - name: MYSQL_PASSWORD valueFrom: secretKeyRef: name: dbsecret key: password - name: MYSQL_DATABASE value: "wlsdb" volumeMounts: - mountPath: /var/lib/mysql name: db-volume volumes: - name: db-volume persistentVolumeClaim: claimName: mysql-pv-claim --- apiVersion: v1 kind: Service metadata: name: mysql-server labels: app: mysql-server spec: ports: - name: client port: 3306 protocol: TCP targetPort: 3306 clusterIP: None selector: app: mysql-server Next, deploy MySQL Server to the Kubernetes cluster: $ kubectl create -f mysql.yml Creating the Sample Application Table First, prepare the DDL file for the sample application table: file sampleTable.ddl: create table jms_signin ( name varchar(255) not null, time varchar(255) not null, webServer varchar(255) not null, mdbServer varchar(255) not null); Next, create the table in MySQL Server: $ kubectl exec -it $mysqlPod -- mysql -h localhost -u mysql -pmysql wlsdb < sampleTable.ddl Creating a Data Source for the WebLogic Server Domain We need to configure a data source so that the sample application can communicate with MySQL Server. First, prepare the ds1-jdbc.xml module file. 
file ds1-jdbc.xml: <?xml version='1.0' encoding='UTF-8'?> <jdbc-data-source xmlns="http://xmlns.oracle.com/weblogic/jdbc-data-source" xmlns:sec="http://xmlns.oracle.com/weblogic/security" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:wls="http://xmlns.oracle.com/weblogic/security/wls" xsi:schemaLocation="http://xmlns.oracle.com/weblogic/jdbc-data-source http://xmlns.oracle.com/weblogic/jdbc-data-source/1.0/jdbc-data-source.xsd"> <name>ds1</name> <datasource-type>GENERIC</datasource-type> <jdbc-driver-params> <url>jdbc:mysql://mysql-server:3306/wlsdb</url> <driver-name>com.mysql.jdbc.Driver</driver-name> <properties> <property> <name>user</name> <value>mysql</value> </property> </properties> <password-encrypted>mysql</password-encrypted> <use-xa-data-source-interface>true</use-xa-data-source-interface> </jdbc-driver-params> <jdbc-connection-pool-params> <capacity-increment>10</capacity-increment> <test-table-name>ACTIVE</test-table-name> </jdbc-connection-pool-params> <jdbc-data-source-params> <jndi-name>jndi/ds1</jndi-name> <algorithm-type>Load-Balancing</algorithm-type> <global-transactions-protocol>EmulateTwoPhaseCommit</global-transactions-protocol> </jdbc-data-source-params> <jdbc-xa-params> <xa-transaction-timeout>50</xa-transaction-timeout> </jdbc-xa-params> </jdbc-data-source> Then deploy the data source module to the WebLogic Server domain: $ kubectl cp ./ds1-jdbc.xml $adminPod:/u01/wlsdomain/config/jdbc/ $ kubectl exec $adminPod -- curl -v \ --user weblogic:weblogic1 \ -H X-Requested-By:MyClient \ -H Accept:application/json \ -H Content-Type:application/json \ -d '{ "name": "ds1", "descriptorFileName": "jdbc/ds1-jdbc.xml", "targets":[{ "identity":["clusters", "myCluster"] }] }' -X POST http://localhost:8001/management/weblogic/latest/edit/JDBCSystemResources Deploying the Servlet and MDB Applications First, download the two application archives: signin.war and signinmdb.jar. Enter the commands below to deploy these two applications using REST APIs within the pod running the WebLogic Administration Server. # copy the two app files to admin pod $ kubectl cp signin.war $adminPod:/u01/wlsdomain/signin.war $ kubectl cp signinmdb.jar $adminPod:/u01/wlsdomain/signinmdb.jar # deploy the two app via REST api $ kubectl exec $adminPod -- curl -v \ --user weblogic:weblogic1 \ -H X-Requested-By:MyClient \ -H Content-Type:application/json \ -d "{ name: 'webapp', sourcePath: '/u01/wlsdomain/signin.war', targets: [ { identity: [ 'clusters', 'myCluster' ] } ] }" \ -X POST http://localhost:8001/management/weblogic/latest/edit/appDeployments $ kubectl exec $adminPod -- curl -v \ --user weblogic:weblogic1 \ -H X-Requested-By:MyClient \ -H Content-Type:application/json \ -d "{ name: 'mdb', sourcePath: '/u01/wlsdomain/signinmdb.jar', targets: [ { identity: [ 'clusters', 'myCluster' ] } ] }" \ -X POST http://localhost:8001/management/weblogic/latest/edit/appDeployments Next, go to the WebLogic Server Administration Console (http://<hostIP>:30007/console) to verify the applications have been successfully deployed and running. Running the Sample Invoke the application on the Managed Server by going to a browser and entering the URL http://<hostIP>:30009/signIn/. Using a number of different browsers and machines to simulate multiple web clients, submit several unique employee names. Then check the result by entering the URL http://<hostIP>:30009/signIn/response.jsp. 
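Besides viewing response.jsp, you can also query the backing table directly to confirm that the MDBs are persisting the check-in messages. The following is a sketch that reuses the MySQL credentials, database, and table created earlier; as with the other commands in this blog, replace $mysqlPod with the actual pod name.

# List the stored check-in rows straight from the MySQL Server pod.
$ kubectl exec -it $mysqlPod -- mysql -u mysql -pmysql wlsdb -e "select * from jms_signin;"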
You can see that there are two different levels of load balancing taking place: HTTP requests are load balanced among Managed Servers within the cluster. Notice the entries beneath the column labeled Web Server Name. For each employee check-in, this column identifies the name of the WebLogic Server instance that contains the servlet instance that is processing the corresponding HTTP request. JMS messages that are sent to a distributed destination are load balanced among the MDB instances within the cluster. Notice the entries beneath the column labeled MDB Server Name. This column identifies the name of the WebLogic Server instance that contains the MDB instance that is processing the message. Restarting All Pods Restart the MySQL pod, the WebLogic Administration Server pod, and the WebLogic Managed Server pods. This will demonstrate that the data in your external volumes is indeed preserved independently of your pod life cycles. First, gracefully shut down the MySQL Server pod: $ kubectl exec -it $mysqlPod -- /etc/init.d/mysql stop After the MySQL Server pod is stopped, the Kubernetes control plane will restart it automatically. Next, follow the section "Restart Pods" in the README.md in order to restart all WebLogic Server pods. $ kubectl get pod NAME     READY   STATUS   RESTARTS   AGE admin-server-1238998015-kmbt9   1/1     Running  1         7d managed-server-0                1/1     Running  1          7d managed-server-1                1/1     Running  1     7d mysql-server-3736789149-n2s2l   1/1     Running  1          3h You will see that the restart count for each pod has increased from 0 to 1. After all pods are running again, access the WebLogic Server Administration Console to verify that the servers are in the RUNNING state. After the servers restart, all messages are recovered. You'll get the same results as you did prior to the restart because all data is persisted in the external data volumes and therefore can be recovered after the pods are restarted. Cleanup Enter the following command to clean up the resources used by the MySQL Server instance: $ kubectl delete -f mysql.yml Next, follow the steps in the "Cleanup" section of the README.md to remove the base domain and delete all other resources used by this example. Summary and Futures This blog helped demonstrate using Kubernetes as a flexible and scalable environment for hosting WebLogic Server JMS cluster deployments. We leveraged basic Kubernetes facilities to manage WebLogic Server life cycles, used file-based message persistence, and demonstrated intra-cluster JMS communication between Java EE applications. We also demonstrated that file-based JMS persistence works well when externalizing files to a shared data volume outside the Kubernetes pods, as this persists data beyond the life cycle of the pods. In future blogs, we'll explore hosting a WebLogic JMS cluster in Oracle's upcoming, fully certified, operator-based Kubernetes environment for WebLogic Server. In addition, we'll also explore using external JMS clients to communicate with WebLogic JMS services running inside a Kubernetes cluster, using database persistence instead of file persistence, and using WebLogic JMS automatic service migration to automatically migrate JMS instances from shut-down pods to running pods.


Technical

Using Prometheus and Grafana to Monitor WebLogic Server on Kubernetes

As part of certifying Weblogic Server on Kubernetes, the WebLogic team has created a sample that demonstrates orchestrating a WebLogic Server cluster in a Kubernetes environment. This sample includes the WebLogic Monitoring Exporter, which was implemented to scrape runtime metrics for specific WebLogic Server instances and feed them to the Prometheus and Grafana tools. The Weblogic Monitoring Exporter is a web application that you can deploy on a WebLogic Server instance that you want to monitor. The exporter uses the WebLogic Server 12.2.1.x RESTful Management Interface for accessing runtime state and metrics. For a detailed description of WebLogic Monitoring Exporter configuration and usage, see The WebLogic Monitoring Exporter. In this blog you will learn how to configure Prometheus and Grafana to monitor WebLogic Server instances that are running in Kubernetes clusters. Monitoring Using Prometheus We’ll be using the WebLogic Monitoring Exporter to scrape WebLogic Server metrics and feed them to Prometheus. Previous blog entries have described how to start and run WebLogic Server instances in Kubernetes with the WebLogic Monitoring Exporter deployed on Managed Servers running in the cluster. To make sure that the WebLogic Monitoring Exporter is deployed and running, click the link: http://[hostname]:30011/wls-exporter/metrics You will be prompted for the WebLogic user credentials that are required to access the metrics data, which are weblogic/weblogic1. The metrics page should show the metrics configured for the WebLogic Monitoring Exporter: To create a Prometheus instance in Kubernetes, create a Prometheus configuration file, prometheus-kubernetes.yml. A sample file is provided in our sample, which may be modified as required to match your environment: apiVersion: extensions/v1beta1 kind: Deployment metadata:   name: prometheus   labels:     app: prometheus spec:   replicas: 1   strategy:     type: Recreate   template:     metadata:       labels:         app: prometheus     spec:       containers:       - name: prometheus         image: prom/prometheus:v1.7.1         ports:         - containerPort: 9090         args:         - -config.file=/etc/prometheus/prometheus.yml         volumeMounts:         - mountPath: /etc/prometheus/           name: config-volume       restartPolicy: Always       volumes:       - name: config-volume         configMap:           name: prometheus-configuration --- apiVersion: v1 kind: ConfigMap metadata:   name: prometheus-configuration data:   prometheus.yml: |-     global:       scrape_interval:     5s       external_labels:         monitor: 'my-monitor'     scrape_configs:     - job_name: 'kubernetes-pods'       kubernetes_sd_configs:       - role: pod       relabel_configs:       - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]         action: keep         regex: true       - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]         action: replace         target_label: __metrics_path__         regex: (.+)       - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]         action: replace         regex: ([^:]+)(?::\d+)?;(\d+)         replacement: $1:$2         target_label: __address__       - action: labelmap         regex: __meta_kubernetes_pod_label_(.+)       - source_labels: [__meta_kubernetes_pod_name]         action: replace         target_label: pod_name       - regex: '(controller_revision_hash|job)'         action: labeldrop       - source_labels: [name]         regex: 
'.*/(.*)$'         replacement: $1         target_label: webapp       basic_auth:        username: weblogic        password: weblogic1 --- apiVersion: v1 kind: PersistentVolumeClaim metadata:   name: prometheus-storage spec:   accessModes:   - ReadWriteOnce   resources:     requests:       storage: 100Mi status: {} --- apiVersion: v1 kind: Service metadata:   name: prometheus spec:   type: NodePort   ports:   - port: 9090     targetPort: 9090     nodePort: 32000   selector:     app: prometheus   The above example of the Prometheus configuration file specifies: -    weblogic/weblogic1 as the user credentials -    5 seconds as the interval between updates of WebLogic Server metrics -    Use of 32000 as the external port to access the Prometheus dashboard You can change these values as required to reflect your specific environment and configuration. Start Prometheus to monitor the Managed Server instances:    $ kubectl create -f prometheus-kubernetes.yml Verify that Prometheus is monitoring all Managed Servers by browsing to http://[hostname]:32000. Examine the Insert metric at cursor pull-down. It should list metric names based on the current configuration of the WebLogic Monitoring Exporter web application.   To check that the WebLogic Monitoring Exporter is configured correctly, connect to the web page at http://[hostname]:30011/wls-exporter. The current configuration will be listed there. Below is the corresponding WebLogic Monitoring Exporter configuration yml file: metricsNameSnakeCase: true queries: - applicationRuntimes:     key: name     keyName: app     componentRuntimes:       type: WebAppComponentRuntime       prefix: webapp_config_       key: name       values: [deploymentState, contextRoot, sourceInfo, openSessionsHighCount, openSessionsCurrentCount, sessionsOpenedTotalCount, sessionCookieMaxAgeSecs, sessionInvalidationIntervalSecs, sessionTimeoutSecs, singleThreadedServletPoolSize, sessionIDLength, servletReloadCheckSecs, jSPPageCheckSecs]       servlets:         prefix: weblogic_servlet_         key: servletName         values: [invocationTotalCount, reloadTotal, executionTimeAverage, poolMaxCapacity, executionTimeTotal, reloadTotalCount, executionTimeHigh, executionTimeLow] - JVMRuntime:     key: name     values: [heapFreeCurrent, heapFreePercent, heapSizeCurrent, heapSizeMax, uptime, processCpuLoad] The configuration listed above was embedded into the WebLogic Monitoring Exporter WAR file. To change or add more metrics data, simply connect to the landing page at http://[hostname]:30011/wls-exporter and use the Append or Replace buttons to load the configuration file in yml format. For example, workmanager.yml: metricsNameSnakeCase: true queries: - applicationRuntimes:     key: name     workManagerRuntimes:       prefix: workmanager_       key: applicationName       values: [pendingRequests, completedRequests, stuckThreadCount] By constructing Prometheus-defined queries, you can generate any data output you require to monitor and diagnose the servers, applications, and resources that are running in your WebLogic domain. For example, you can enter the following into the query box, and Prometheus will return current data from all running Managed Servers in the WebLogic cluster: weblogic_servlet_execution_time_average > 1 Prometheus also generates graphs that are based on provided data.
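These checks can also be scripted. The sketch below assumes the ports and credentials used in this sample (the exporter on node port 30011 with weblogic/weblogic1, Prometheus on node port 32000); replace [hostname] with your node's address:

# Confirm the exporter is serving metrics
$ curl -s -u weblogic:weblogic1 http://[hostname]:30011/wls-exporter/metrics | head

# Run the same query shown above through the Prometheus HTTP API
$ curl -s -G http://[hostname]:32000/api/v1/query --data-urlencode 'query=weblogic_servlet_execution_time_average > 1'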
In the Prometheus web UI, if you click on the Graph tab, Prometheus will generate a graph of the servlets whose average execution time exceeds the threshold of 1.   Monitoring Using Grafana For better visual presentation and dashboards with multiple graphs, use Grafana. Here is an example configuration file, grafana-kubernetes.yml, which can be used to start Grafana in the Kubernetes environment: apiVersion: extensions/v1beta1 kind: Deployment metadata:   name: grafana   labels:     app: grafana spec:   replicas: 1   strategy:     type: Recreate   template:     metadata:       labels:         app: grafana     spec:       containers:       - name: grafana         image: grafana/grafana:4.4.3         ports:         - containerPort: 3000         env:         - name: GF_SECURITY_ADMIN_PASSWORD           value: pass       restartPolicy: Always       volumes: --- apiVersion: v1 kind: PersistentVolumeClaim metadata:   creationTimestamp: null   name: grafana-data spec:   accessModes:   - ReadWriteOnce   resources:     requests:       storage: 100Mi --- apiVersion: v1 kind: Service metadata:   labels:     app: grafana   name: grafana spec:   type: NodePort   ports:   - port: 3000     targetPort: 3000     nodePort: 31000   selector:     app: grafana To start Grafana to monitor the Managed Servers, use the following kubectl command:  $ kubectl create -f grafana-kubernetes.yml Connect to Grafana at http://[hostname]:31000. Log in to the home page with the username admin, and the password pass. The Grafana home page will be displayed. To connect Grafana to Prometheus, select Add Data Source and then enter the following values:        Name:    Prometheus        Type:      Prometheus        Url:         http://prometheus:9090        Access:   Proxy Select the Dashboards tab and click Import: Now we are ready to generate a dashboard to monitor WebLogic Server. Complete the following steps: Click the Grafana symbol in the upper left corner of the home page, and select Dashboards → Add New. Select Graph and pull it into the empty space. It will generate an empty graph panel. Click on the panel and select the edit option. It will open an editable panel where you can customize how the metrics graph will be presented. In the Graph panel, select the General tab, and enter WebLogic Servlet Execution Average Time in Info → Title. Select the Metrics tab, then select the Prometheus option in the Panel Data Source pull-down menu. If you click in the empty Metric lookup field, all metrics configured in the WebLogic Monitoring Exporter will be pulled in, the same way as in Prometheus. Let’s enter the same query we used in the Prometheus example, weblogic_servlet_execution_time_average > 1. The generated graphs will show data for all available servlets with an average execution time greater than 1, on all Managed Servers in the cluster. Each color represents a specific pod and servlet combination. To show data for a particular pod, simply click on the corresponding legend. This will remove all other pods’ data from the graph, and their legends will no longer be highlighted. To add more data, just press the shift key and click on any desired legend. To reset, just click the same legend again, and all others will be redisplayed on the graph. To customize the legend, click the desired values in the Legend Format field. For example: {{pod_name}} :appName={{webapp}} : servletName={{servletName}} Grafana will begin to display your customized legend.
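If you prefer to script this setup (for example, so the data source is recreated automatically whenever the Grafana pod is recreated), the same data source can be added through Grafana's HTTP API instead of the UI. This is only a sketch and assumes the admin/pass credentials and node port 31000 used above:

$ curl -s -u admin:pass -H "Content-Type: application/json" -X POST http://[hostname]:31000/api/datasources -d '{"name":"Prometheus","type":"prometheus","url":"http://prometheus:9090","access":"proxy"}'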
If you click the graph, you can see all values for the selected time: Select the Graph → Legend tab to obtain more options for customizing the legend view. For example, you can move the placement of the legend, show the minimum, maximum, or average values, and more. By selecting the Graph → Axes tab, you can switch the units to match the metrics data; in this example it is time (millisecs): Grafana also provides alerting tools. For example, we can configure an alert for specified conditions. In the example below, Grafana will fire an alert if the average servlet execution time is greater than 100 msec. It will also send an email to the administrator: Last, we want our graph to be refreshed every 5 seconds, the same refresh interval as the Prometheus scrape interval. We can also customize the time range for monitoring the data. To do that, we need to click the upper right corner of the created dashboard. By default, it is configured to show metrics for the prior 6 hours up to the current time. Perform the desired changes. For example, switch to refresh every 5 seconds and click Apply: When you are done, simply click the ‘save’ icon in the upper left corner of the window, and enter a name for the dashboard. Summary WebLogic Server today has a rich set of metrics that can be monitored using well-known tools such as the WebLogic Server Administration Console and the Monitoring Dashboard. These tools are used to monitor the WebLogic Server instances, applications, and resources running in a WebLogic deployment in Kubernetes. In this container ecosystem, tools like Prometheus and Grafana offer an alternative way of exporting and monitoring the metrics from clusters of WebLogic Server instances running in Kubernetes. They also make monitored data easy to collect, access, present, and customize in real time without restarting the domain. In addition, they provide a simple way to create alerts and send notifications to any interested parties. Start using them; you will love it!



How to... WebLogic Server on Kubernetes

The WebLogic Server team is working on certifying WebLogic domains being orchestrated in Kubernetes.  As part of this work we are releasing a series of blogs that answer questions our users might have, and describe best practices for running WebLogic Server on Kubernetes. These blogs cover topics such as security best practices, monitoring, logging, messaging, transactions, scaling clusters, externalizing state in volumes, patching, updating applications, and much more.  Our first blog walks you through a sample on GitHub that lets you jump right in and try it!  We will continue to update this list of blogs to make it easy for you to follow them, so stay tuned. WebLogic on Kubernetes, Try It! Security Best Practices for WebLogic Server Running in Docker and Kubernetes Automatic Scaling of WebLogic Clusters on Kubernetes Let WebLogic work with Elastic Stack in Kubernetes Docker Volumes in WebLogic Exporting Metrics from WebLogic Server Using Prometheus and Grafana to Monitor WebLogic Server on Kubernetes Run a WebLogic JMS Sample on Kubernetes Run Standalone WebLogic JMS Clients on Kubernetes Best Practices for Application Deployment on WebLogic Server Running on Kubernetes Patching WebLogic Server in a Kubernetes Environment WebLogic Server on Kubernetes Data Volume Usage T3 RMI Communication for WebLogic Server Running on Kubernetes Processing the Oracle WebLogic Server Kubernetes Operator Logs using Elastic Stack Using Prometheus to Automatically Scale WebLogic Clusters on Kubernetes WebLogic Dynamic Clusters on Kubernetes How to run WebLogic clusters on the Oracle Cloud Infrastructure Container Engine for Kubernetes WebLogic Server JTA in a Kubernetes Environment Voyager/HAProxy as Load Balancer to Weblogic Domains in Kubernetes



Migrating from Multi Data Source to Active GridLink - Take 2

In the original blog article on this topic at this link, I proposed that you delete the multi data source (MDS) and create a replacement Active GridLink (AGL) data source.  In the real world, the multi data source is likely referenced by other objects, such as a JDBC store, and deleting the MDS will create an invalid configuration.  Further, those objects using connections from the MDS will fail during and after this re-configuration.  That implies that for this type of operation the related server needs to be shut down, the configuration updated with offline WLST, and the server restarted.  The administration console cannot be used for this type of migration.  Except for the section that describes using the console, the other information in the earlier blog article still applies to this process.  No changes should be required in the application, only in the configuration, because we preserve the JNDI name. The following is a sample of what the offline WLST script might look like.  You could parameterize it and make it more flexible in handling multiple datasources. # java weblogic.WLST file.py import sys, socket, os # Application values dsName='myds' memberds1='ds1' memberds2='ds2' domain='/domaindir' onsNodeList='host1:6200,host2:6200' url='jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=' \  + '(ADDRESS=(PROTOCOL=TCP)(HOST=host1)(PORT=1521))' \  + '(ADDRESS=(PROTOCOL=TCP)(HOST=host2)(PORT=1521)))' \  + '(CONNECT_DATA=(SERVICE_NAME=servicename)))' user='user1' password='password1' readDomain(domain) # change type from MDS to AGL # The following is for WLS 12.1.2 and 12.1.3 if not setting # FanEnabled true, which is not recommended # set('ActiveGridlink','true') # The following is for WLS 12.2.1 and later #cd('/JDBCSystemResources/' + dsName + '/JdbcResource/' + dsName ) #set('DatasourceType', 'AGL') # set the AGL parameters cd('/JDBCSystemResources/' + dsName + '/JdbcResource/' + dsName) create('myJdbcOracleParams','JDBCOracleParams') cd('JDBCOracleParams/NO_NAME_0') set('FanEnabled','true') set('OnsNodeList',onsNodeList) # Set the data source parameters cd('/JDBCSystemResources/' + dsName + '/JdbcResource/' + dsName + '/JDBCDataSourceParams/NO_NAME_0') set('GlobalTransactionsProtocol','None') unSet('DataSourceList') unSet('AlgorithmType') # Set the driver parameters cd('/JDBCSystemResources/' + dsName + '/JdbcResource/' + dsName) create('myJdbcDriverParams','JDBCDriverParams') cd('JDBCDriverParams/NO_NAME_0') set('Url',url) set('DriverName','oracle.jdbc.OracleDriver') set('PasswordEncrypted',password) create('myProps','Properties') cd('Properties/NO_NAME_0') create('user', 'Property') cd('Property') cd('user') set('Value', user) # Set the connection pool parameters cd('/JDBCSystemResources/' + dsName + '/JdbcResource/' + dsName) create('myJdbcConnectionPoolParams','JDBCConnectionPoolParams') cd('/JDBCSystemResources/' + dsName + '/JdbcResource/' + dsName + '/JDBCConnectionPoolParams/NO_NAME_0') set('TestTableName','SQL ISVALID') # remove member data sources if they are not needed cd('/') delete(memberds1, 'JDBCSystemResource') delete(memberds2, 'JDBCSystemResource') updateDomain() closeDomain() exit()   In WLS 12.1.2 and 12.1.3, there is an explicit ActiveGridlink flag that can be used to identify an AGL datasource, even if FanEnabled is not set to true (which is not recommended) and auto-ONS is used (12.1.2 is the first release in which auto-ONS is supported). In the script above, uncomment the line to set it only if FanEnabled and OnsNodeList are not set. 
Starting in WLS 12.2.1, there is an explicit datasource type at the JDBCResource level. If that is set to GENERIC, you must re-set it using set("DatasourceType", "AGL"). In the script above, uncomment the lines to set it.  In this case, the ActiveGridlink flag is not necessary. If you are using an Oracle wallet for ONS, that needs to be added to the JDBCOracleParams object as well.  Prior to WLS 12.2.1, the ONS information needs to be explicitly specified.  In WLS 12.2.1 and later, the ONS information can be excluded and the database will try to determine the correct information.  For more complex ONS topologies, the configuration can be specified using the format described in http://docs.oracle.com/middleware/1221/wls/JDBCA/gridlink_datasources.htm#BABGJBIC.   Note: the unSet() method was not added to offline WLST until WLS 12.2.1.2.0.  There is a related patch to add this feature to WLS 12.1.3 at Patch 25695948.  For earlier releases, one option is to edit the MDS descriptor file, delete the lines for "data-source-list" and "algorithm-type", and comment out the "unSet()" calls before running the offline WLST script.  Another option is to run the following online WLST script, which does support the unSet() method.  However, the server will need to be restarted after the update and before the member datasources can be deleted. # java weblogic.WLST file.py import sys, socket, os # Application values dsName='myds' memberds1='ds1' memberds2='ds2' onsNodeList='host1:6200,host2:6200' url='jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=' \  + '(ADDRESS=(PROTOCOL=TCP)(HOST=host1)(PORT=1521))' \  + '(ADDRESS=(PROTOCOL=TCP)(HOST=host2)(PORT=1521)))' \  + '(CONNECT_DATA=(SERVICE_NAME=otrade)))' user='user1' password='password1' hostname='localhost' admin='weblogic' adminpw='welcome1' connect(admin,adminpw,"t3://"+hostname+":7001") edit() startEdit() # change type from MDS to AGL # The following is for WLS 12.1.2 and 12.1.3 if not setting # FanEnabled to true.  It is recommended to always set FanEnabled to true. # cd('/JDBCSystemResources/' + dsName) # set('ActiveGridlink','true') # The following is for WLS 12.2.1 and later # cd('/JDBCSystemResources/' + dsName + '/JDBCResource/' + dsName ) # set('DatasourceType', 'AGL') # set the AGL parameters cd('/JDBCSystemResources/' + dsName + '/JDBCResource/' + dsName + '/JDBCOracleParams/' + dsName) set('FanEnabled','true') set('OnsNodeList',onsNodeList) # Set the data source parameters cd('/JDBCSystemResources/' + dsName + '/JDBCResource/' + dsName + '/JDBCDataSourceParams/' + dsName) set('GlobalTransactionsProtocol','None') cmo.unSet('DataSourceList') cmo.unSet('AlgorithmType') # Set the driver parameters cd('/JDBCSystemResources/' + dsName + '/JDBCResource/' + dsName + '/JDBCDriverParams/' + dsName) set('Url',url) set('DriverName','oracle.jdbc.OracleDriver') set('PasswordEncrypted',password) cd('Properties/' + dsName) userprop=cmo.createProperty('user') userprop.setValue(user) # Set the connection pool parameters cd('/JDBCSystemResources/' + dsName + '/JDBCResource/' + dsName + '/JDBCConnectionPoolParams/' + dsName) set('TestTableName','SQL PINGDATABASE') # cannot remove member data sources until server restarted #cd('/') #delete(memberds1, 'JDBCSystemResource') #delete(memberds2, 'JDBCSystemResource') save() activate() exit() A customer was having problems setting multiple targets in a WLST script.  It's not limited to this topic but here's how it is done. 
from javax.management import ObjectName from jarray import array targetlist = [] targetlist.append(ObjectName('com.bea:Name=myserver1,Type=Server')) targetlist.append(ObjectName('com.bea:Name=myserver2,Type=Server')) targets = array(targetlist, ObjectName) cd('/JDBCSystemResources/' + dsName) set('Targets',targets)
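As a rough sketch of how the offline approach might be run end to end (the script file name and MW_HOME location are placeholders): shut down the servers that use the MDS, set up the WLST environment, run the script, and then restart the servers.

# set the CLASSPATH for weblogic.WLST, then run the saved offline script
$ . $MW_HOME/wlserver/server/bin/setWLSEnv.sh
$ java weblogic.WLST migrateToAGL.py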


Docker Volumes in WebLogic

Background Information In the Docker world, containers are ephemeral; they can be destroyed and replaced. After a container is destroyed, it is gone and all the changes made to the container are gone. If you want to persist data which is independent of the container's lifecycle, you need to use volumes. Volumes are directories that exist outside of the container file system. Docker Data Volume Introduction This blog provides a generic introduction to Docker data volumes and is based on a WebLogic Server 12.2.1.3 image. You can build the image using scripts in GitHub. In this blog, this base image is used only to demonstrate the usage of data volumes; no WebLogic Server instance is actually running. Instead, it uses the 'sleep 3600' command to keep the container running for an hour (3600 seconds) and then stop. Local Data Volumes Anonymous Data Volumes For an anonymous data volume, a unique name is auto-generated internally. Two ways to create anonymous data volumes are: Create or run a container with '-v /container/fs/path' in docker create or docker run Use the VOLUME instruction in Dockerfile: VOLUME ["/container/fs/path"]   $ docker run --name c1 -v /mydata -d weblogic-12.2.1.3-developer 'sleep 3600' $ docker inspect c1 | grep Mounts -A 10 "Mounts": [ { "Name": "625672972c137c2e29b85f1e6ae70d6d4eb756002062792721a5ac0e9d0f1421", "Source": "/scratch/docker/volumes/625672972c137c2e29b85f1e6ae70d6d4eb756002062792721a5ac0e9d0f1421/_data", "Destination": "/mydata", "Driver": "local", "Mode": "", "RW": true, "Propagation": "" } ],   # now we know that the volume has a randomly generated name 625672972c137c2e29b85f1e6ae70d6d4eb756002062792721a5ac0e9d0f1421 $ docker volume inspect 625672972c137c2e29b85f1e6ae70d6d4eb756002062792721a5ac0e9d0f1421 [ { "Name": "625672972c137c2e29b85f1e6ae70d6d4eb756002062792721a5ac0e9d0f1421", "Driver": "local", "Mountpoint": "/scratch/docker/volumes/625672972c137c2e29b85f1e6ae70d6d4eb756002062792721a5ac0e9d0f1421/_data", "Labels": null, "Scope": "local" } ] Named Data Volumes Named data volumes are available in Docker 1.9.0 and later. Two ways to create named data volumes are: Use docker volume create --name volume_name Create or run a container with '-v volume_name:/container/fs/path' in docker create or docker run $ docker volume create --name testv1 $ docker volume inspect testv1 [ { "Name": "testv1", "Driver": "local", "Mountpoint": "/scratch/docker/volumes/testv1/_data", "Labels": {}, "Scope": "local" } ] Mount Host Directory or File You can mount an existing host directory to a container directly.  To mount a host directory when running a container: Create or run a container with '-v /host/path:/container/path' in docker create or docker run You can mount an individual host file in the same way: Create or run a container with '-v /host/file:/container/file' in docker create or docker run Note that the mounted host directory or file is not an actual data volume managed by Docker so it is not shown when running docker volume ls. Also, you cannot mount a host directory or file in Dockerfile. $ docker run --name c2 -v /home/data:/mydata -d weblogic-12.2.1.3-developer 'sleep 3600' $ docker inspect c2 | grep Mounts -A 8 "Mounts": [ { "Source": "/home/data", "Destination": "/mydata", "Mode": "", "RW": true, "Propagation": "rprivate" } ], Data Volume Containers Data volume containers are data-only containers. After a data volume container is created, it doesn't need to be started. Other containers can access the shared data using --volumes-from. 
# step 1: create a data volume container 'vdata' with two anonymous volumes $ docker create -v /vv/v1 -v /vv/v2 --name vdata weblogic-12.2.1.3-developer # step 2: run two containers c3 and c4 with reference to the data volume container vdata $ docker run --name c3 --volumes-from vdata -d weblogic-12.2.1.3-developer 'sleep 3600' $ docker run --name c4 --volumes-from vdata -d weblogic-12.2.1.3-developer 'sleep 3600' Data Volume Plugins Docker 1.8 and later support a volume plugin which can extend Docker with new volume drivers. You can use volume plugins to mount remote folders in a shared storage server directly, such as iSCSI, NFS, or FC. The same storage can be accessed, in the same manner, from another container running in another host. Containers in different hosts can share the same data. There are plugins available for different storage types. Refer to the Docker documentation for volume plugins: https://docs.docker.com/engine/extend/legacy_plugins/#volume-plugins.  WebLogic Persistence in Volumes When running WebLogic Server in Docker, there are basically two use cases for using data volumes: To separate data from the WebLogic Server lifecycle, so you can reuse the data even after the WebLogic Server container is destroyed and later restarted or moved. To share data among different WebLogic Server instances, so they can recover each other's data, if needed (service migration). The following WebLogic Server artifacts are candidates for using data volumes: Domain home folders Server logs Persistent file stores for JMS, JTA, and such. Application deployments Refer to the following table for the data stored by WebLogic Server subsystems. Subsystem or Service What It Stores More Information Diagnostic Service Log records, data events, and harvested metrics Understanding WLDF Configuration in Configuring and Using the Diagnostics Framework for Oracle WebLogic Server JMS Messages Persistent messages and durable subscribers Understanding the Messaging Models in Developing JMS Applications for Oracle WebLogic Server JMS Paging Store One per JMS server. Paged persistent and non-persistent messages. Main Steps for Configuring Basic JMS System Resources in Administering JMS Resources for Oracle WebLogic Server  JTA Transaction Log (TLOG) Information about committed transactions, coordinated by the server, that may not have been completed. TLOGs can be stored in the default persistent store or in a JDBC TLOG store.    Managing Transactions in Developing JTA Applications for Oracle WebLogic Server     Using a JDBC TLog Store in Developing JTA Applications for Oracle WebLogic Server  Path Service The mapping of a group of messages to a messaging resource Using the WebLogic Path Service in Administering JMS Resources for Oracle WebLogic Server Store-and-Forward (SAF) Service Agents Messages from a sending SAF agent for re-transmission to a receiving SAF agent Understanding the Store-and-Forward Service in Administering the Store-and-Forward Service for Oracle WebLogic Server  Web Services Request and response SOAP messages from an invocation of a reliable WebLogic Web Service Using Reliable SOAP Messaging in Programming Advanced Features of JAX-RPC Web Services for Oracle WebLogic Server  EJB Timer Services EJB Timer objects Understanding Enterprise JavaBeans in Developing Enterprise JavaBeans, Version 2.1, for Oracle WebLogic Server A best practice is to run each WebLogic Server instance in its own container and share domain configuration in a data volume. 
This is the basic usage scenario for data volumes in WebLogic Server. When the domain home is in an external volume, server logs are also in the external volume, by default. But, you can explicitly configure server logs to be located in a different volume because server logs may contain more sensitive data than other files in the domain home and need more permission control.  File stores for JMS and JTA etc should be located in an external volume and use shared directories. This is required for service migration to work. It’s fine for all default and custom stores in the same domain to use the same shared directory, as different instances automatically, uniquely decorate their file names. But different domains must never share the same directory location, as the file names can collide. Similarly, two running, duplicate domains must never share the same directory location. File collisions usually result in file locking errors, and may corrupt data.  File stores create a number of files for different purposes. Cache and paging files can be stored in the container file system locally. Refer to following table for detailed information about the different files and locations. Store Type Directory Configuration Store Path Not Configured   Relative Store Path   Absolute Store Path   File Name   default The directory configured on a WebLogic Server default store. See Using the Default Persistent Store. <domainRoot>/servers/<serverName>/data/store/default <domainRoot>/<relPath> <absPath> _WLS_<serverName>NNNNNN.DAT custom file The directory configured on a custom file store. See Using Custom File Stores. <domainRoot>/servers/<serverName>/data/store/<storeName> <domainRoot>/<relPath> <absPath> <storeName>NNNNNN.DAT cache The cache directory configured on a custom or default file store that has a DirectWriteWithCache synchronous write policy. See Tuning the WebLogic Persistent Store in Tuning Performance of Oracle WebLogic Server. ${java.io.tmpdir}/WLStoreCache/${domainName}/<storeUuid> <domainRoot>/<relPath> <absPath> <storeName>NNNNNN.CACHE paging The paging directory configured on a SAF agent or JMS server. See Paging Out Messages To Free Up Memory in Tuning Performance of Oracle WebLogic Server. <domainRoot>/servers/<serverName>/tmp <domainRoot>/<relPath> <absPath> <jmsServerName>NNNNNN.TMP <safAgentName>NNNNNN.TMP   In order to properly secure data in external volumes, it is an administrator's responsibility to set the appropriate permissions on those directories. To allow the WebLogic Server process to access data in a volume, the user running the container needs to have the proper permission to the volume folder.  Summary Use local data volumes: Docker 1.8.x and earlier recommends that you use data volume containers (with anonymous data volumes). Docker 1.9.0 and later recommends that you use named data volumes.  If you have multiple volumes, to be shared among multiple containers, we recommend that you use a data volume container with named data volumes. To share data among containers in different hosts, first mount the folder in a shared storage server, and then choose one volume plugin to mount it to Docker. We recommend that the WebLogic Server domain home be externalized to a data volume. The externalized domain home must be shared by the Admin server and Managed servers, each running in their own container. For high availability all Managed Servers need to read and write to the stores in the shared data volume. 
Choose the kind of data volume with the persistence of the stores, logs, and diagnostic files in mind.
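As a closing sketch of the recommended setup (not taken from the post itself): a named volume holding the domain home, mounted by an Administration Server container and a Managed Server container. The image name and the /u01/oracle/user_projects domain home location are assumptions, and the commands that actually start the servers are left as placeholders; adjust them to match how your image was built.

$ docker volume create --name domain-home
$ docker run -d --name wlsadmin -v domain-home:/u01/oracle/user_projects -p 7001:7001 weblogic-12.2.1.3-developer <admin-server-start-command>
$ docker run -d --name ms1 -v domain-home:/u01/oracle/user_projects weblogic-12.2.1.3-developer <managed-server-start-command>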



WebLogic Server in Eclipse IDE for Java EE Developers

This article describes how to integrate WebLogic Server in the latest supported version of Eclipse IDE for Java EE Developers. You need to start by getting all of the pieces - Java SE Development Kit, WebLogic Server, and Eclipse IDE. Go to http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html to download the Java SE Development Kit (neither WLS nor Eclipse comes with it).  Accept the license agreement, download the binary file, and install it on your computer.  Set your JAVA_HOME to the installation directory and add $JAVA_HOME/bin to your PATH. Next get and install a copy of WebLogic Server (WLS).  The latest version of WLS that is currently supported in Eclipse is WLS 12.2.1.3.0. The standard WLS download page at http://www.oracle.com/technetwork/middleware/weblogic/downloads/wls-main-097127.html has 12.2.1.3.0 at the top.  If you are running on Windows, your command window will need to be running as the Administrator.  (Note that the graphics below show 12.2.1.2.0 because they haven't been updated.) unzip fmw_12.2.1.3.0_wls_Disk1_1of1.zip java -jar fmw_12.2.1.3.0_wls.jar You can take all of the default values.  You will need to specify an installation directory.  You might want to install the examples.  On the last page, make sure the “Automatically Launch the Quickstart Configuration Wizard” option is selected to create the domain.  In the Configuration Wizard, take the defaults (you may want to change the domain directory), enter a password, and click on Create. Download “Eclipse IDE for Java EE Developers” from http://www.eclipse.org/downloads/eclipse-packages/ and unzip it.  The latest version is Oxygen.2 (4.7.2) and is now supported by OEPE.  Change to the Eclipse installation directory and run eclipse. Select a directory as a work space and optionally select to use this as the default.  You can close the Welcome screen so we can get to work. Click on the Window menu and select the Servers view.     Then click on the link “No servers are available.  Click this link to create a new server”.  Expand Oracle, select Oracle WebLogic Server Tools, and click on Next.   It will then go off and get a bunch of files including the Oracle Enterprise Pack for Eclipse (OEPE).  You will need to accept the OEPE license agreement to finish the installation.  Eclipse needs to restart to adopt the changes.  You will need to again go into the Server view, click on the link to create a server, and select Oracle WebLogic. If you want to access the server remotely, you will need to enter the computer name; otherwise using localhost is sufficient for local access.  Click Next. On the next screen, browse to the directory where you installed WLS 12.2.1.3.0 and then select the wlserver subdirectory (i.e., WebLogic home is always the wlserver subdirectory of the installation directory).  Eclipse will automatically put in the value of JAVA_HOME for the “Java home” value.  Click Next. On the next screen, browse to the directory where you created the domain using the WLS Configuration Wizard.  Click Next and Finish. You can double-click on the new server entry that you just created to bring up a window for the server.  Click on “Open Launch configuration” to configure any options that you want while running the server and then click OK.     Back in the server view, right click on the server entry and select start to boot the server. A Console window will open to display the server log output and eventually you will see that the server is in RUNNING mode.   
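For reference, the command-line part of the walkthrough above boils down to a few steps. The JDK installation path shown here is only an example; use your own location:

$ export JAVA_HOME=/opt/jdk1.8.0_161        # wherever you installed the JDK
$ export PATH=$JAVA_HOME/bin:$PATH
$ unzip fmw_12.2.1.3.0_wls_Disk1_1of1.zip
$ java -jar fmw_12.2.1.3.0_wls.jar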
That covers the logistics of getting everything installed to run WLS in Eclipse.  Now you can create your project and start your development.  Let’s pick a Dynamic Web Project from the new options. The project will automatically be associated with the server runtime that you just set up.  For example, selecting a dynamic web project will display a window like the following where the only value to provide is the name of the project. There are many tutorials on creating projects within Eclipse, once you get the tools set up.  



Automatic Scaling of WebLogic Clusters on Kubernetes

Elasticity (scaling up or scaling down) of a WebLogic Server cluster provides increased reliability of customer applications as well as optimization of resource usage.  Elasticity was introduced in WebLogic Server 12.2.1 and was built on the concepts of the elastic services framework and dynamic clusters:     Elasticity in WebLogic Server is achieved by either:   ·      Manually adding or removing a running server instance in a dynamic WebLogic Server cluster using the WebLogic Server Administration Console, or the WebLogic Scripting Tool (WLST). This is known as on-demand scaling.   ·      Establishing WLDF scaling policies that set the conditions under which a dynamic cluster should be scaled up or down, and actions that define the scaling operations themselves. When the conditions defined in the scaling policy occur, the corresponding scaling action is triggered automatically.   When a scaling action occurs, Managed Server instances are started and stopped through the use of WebLogic Server Node Managers.  Node Manager is a WebLogic Server utility that manages the lifecycle (startup, shutdown, and restart) of Managed Server instances.   The WebLogic Server team is investing in running WebLogic Server in Kubernetes cloud environments.  A WebLogic Server cluster can be automatically scaled by increasing (or decreasing) the number of pods based on resource metrics provided by the WebLogic Diagnostic Framework (WLDF).  We will use the sample demo from WebLogic on Kubernetes, Try It! to illustrate automatic scaling of a WebLogic Server cluster in a Kubernetes cloud environment. There are a few key differences between how elasticity works in the sample demo for a Kubernetes cloud environment versus in traditional WebLogic Server deployment environments:   1.     The sample demo uses statically-configured clusters, whereas elasticity works with dynamic clusters in a traditional deployment.  We’ll discuss elasticity of WebLogic Server clusters in a Kubernetes cloud environment in a future blog. 2.     In the sample demo, scaling actions invoke requests to the Kubernetes API server to scale pods, versus requests to Node Manager in traditional deployments.   In this blog entry, we will show you how a WebLogic Server cluster can be automatically scaled up or down in a Kubernetes environment based on metrics provided by WLDF.    WebLogic on Kubernetes Sample Demo We will use the WebLogic domain running on Kubernetes described in the following blog entry, WebLogic on Kubernetes, Try It!.        The WebLogic domain, running in a Kubernetes cluster, consists of:   1.     An Administration Server (AS) instance, running in a Docker container, in its own pod (POD 1). 2.     A webhook implementation, running in its own Docker container, in the same pod as the Administration Server (POD 1).   What is a webhook? A webhook is a lightweight HTTP server that can be configured with multiple endpoints (hooks) for executing configured commands, such as shell scripts. More information about the webhook used in the sample demo, see adnanh/webhook/.   NOTE: As mentioned in WebLogic on Kubernetes, Try It!, a prerequisite for running  WLDF initiated scaling is building and installing a Webhook Docker image  (oow-demo-webhook).     3.     A WebLogic Server cluster composed of a set of Managed Server instances in which each instance is running in a Docker container in its own pod (POD 2 to POD 6). 
WebLogic Diagnostic Framework The WebLogic Diagnostics Framework (WLDF) is a suite of services and APIs that collect and surface metrics that provide visibility into server and application performance.  To support automatic scaling of a dynamic cluster, WLDF provides the Policies and Actions component, which lets you write policy expressions for automatically executing scaling operations on a dynamic cluster. These policies monitor one or more types of WebLogic Server metrics, such as memory, idle threads, and CPU load.  When the configured threshold in a policy is met, the policy is triggered, and the corresponding scaling action is executed. For more information about WLDF and diagnostic policies and actions, see Configuring Policies and Actions in Configuring and Using the Diagnostics Framework for Oracle WebLogic Server.   Policies can be based on the following types of data: ·       Trends over time, or historical data, such as changes in average values during a specific time interval. For example, a policy can be based on average JVM heap usage above a certain threshold. ·       Runtime metrics relevant to all server instances in a cluster, not just one server instance. ·       Data from multiple services that are considered together. For example, a policy can be based on response-time metrics reported by a load balancer and message-backlog metrics from a message queue. ·       Calendar-based schedules. Policies can identify a specific calendar time, such as time of day or day of week, when a scaling action must be executed. ·       Log rules or event data rules. Automatic Scaling of a WebLogic Server Cluster in Kubernetes Here is how we can achieve automatic scaling of a WebLogic Server cluster in a Kubernetes environment using WLDF.  I’ll be discussing only the relevant configuration changes for automatic scaling. You can find instructions for setting up and running the sample demo in WebLogic on Kubernetes, Try It!.   First, I’ll quickly describe how automatic scaling of a WebLogic Server cluster in Kubernetes works.   In the sample demo, we have a WebLogic Server cluster running in a Kubernetes cluster with a one-to-one mapping of WebLogic Server Managed Server instances to Kubernetes pods. The pods are managed by a StatefulSet controller.  Like ReplicaSets and Deployments, StatefulSets are a type of replication controller that can be scaled by simply increasing or decreasing the desired replica count field.  A policy and scaling action is configured for the WebLogic Server cluster. While the WebLogic Server cluster is running, WLDF collects and monitors various runtime metrics, such as the OpenSessionsCurrentCount attribute of the WebAppComponentRuntimeMBean.  When the conditions defined in the policy occur, the policy is triggered, which causes the corresponding scaling action to be executed. For a WebLogic Server cluster running in a Kubernetes environment, the scaling action is to scale the corresponding StatefulSet by setting the desired replica count field.  In turn, this causes the StatefulSet controller to increase or decrease the number of pods (that is, the WebLogic Server Managed Server instances) to match the desired replica count.   
Because StatefulSets are managing the pods in which the Managed Server instances are running, a WebLogic Server cluster can also be scaled on-demand by using tools such as kubectl:   For example: $ kubectl scale statefulset ms --replicas=3   WLDF Policies and Actions   For information about configuring the WLDF Policies and Actions component, see Configuring Policies and Actions.  For this sample, the policy and action is configured in a WLDF diagnostic system module, whose corresponding resource descriptor file, Module-0-3905.xml, is shown below:     <?xml version='1.0' encoding='UTF-8'?> <wldf-resource xmlns="http://xmlns.oracle.com/weblogic/weblogic-diagnostics" xmlns:sec="http://xmlns.oracle.com/weblogic/security" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:wls="http://xmlns.oracle.com/weblogic/security/wls" xsi:schemaLocation="http://xmlns.oracle.com/weblogic/weblogic-diagnostics http://xmlns.oracle.com/weblogic/weblogic-diagnostics/2.0/weblogic-diagnostics.xsd">   <name>Module-0</name>   <watch-notification>     <watch>       <name>myScaleUpPolicy</name>       <enabled>true</enabled>       <rule-type>Harvester</rule-type>       <rule-expression>wls:ClusterGenericMetricRule("DockerCluster", "com.bea:Type=WebAppComponentRuntime,ApplicationRuntime=OpenSessionApp,*", "OpenSessionsCurrentCount","&gt;=",0.01,5,"1 seconds","10 seconds")       </rule-expression>       <expression-language>EL</expression-language>       <alarm-type>AutomaticReset</alarm-type>       <schedule>         <minute>*</minute>         <second>*/15</second>       </schedule>       <alarm-reset-period>60000</alarm-reset-period>       <notification>RestScaleUpAction</notification>     </watch>     <rest-notification>       <name>RestScaleUpAction</name>       <enabled>true</enabled>       <timeout>0</timeout>       <endpoint-url>http://${OPERATOR_ENDPOINT}/hooks/scale-up</endpoint-url>       <rest-invocation-method-type>PUT</rest-invocation-method-type>       <accepted-response-type>application/json</accepted-response-type>       <http-authentication-mode>None</http-authentication-mode>       <custom-notification-properties></custom-notification-properties>     </rest-notification>   </watch-notification> </wldf-resource>   The base element for defining policies and actions is <watch-notification>. Policies are defined in <watch> elements. Actions are defined in elements whose names correspond to the action type. For example, the element for a REST action is <rest-notification>.   Here are descriptions of key configuration details regarding the policies and actions that are specified in the preceding resource descriptor file.  For information about all the available action types, see Configuring Actions.   Policies:   ·      The sample demo includes a policy named myScaleUpPolicy, which has the policy expression shown below as it would appear in the WebLogic Server Administration Console:       ·      The policy expression for myScaleUpPolicy uses the smart rule, ClusterGenericMetricRule. The configuration of this smart rule can be read as:   For the cluster DockerCluster, WLDF will monitor the OpenSessionsCurrentCount attribute of the WebAppComponentRuntimeMBean for the OpenSessionApp application.  If the OpenSessionsCurrentCount is greater than or equal to 0.01 for 5 per cent of the Managed Server instances in the cluster, then the policy will be evaluated as true. 
Metrics will be collected at a sampling rate of 1 second, and the sample data will be averaged out over the specified 10 second period of time of the retention window.   For more information about smart rules, see Smart Rule Reference.   Actions:   An action is an operation that is executed when a policy expression evaluates to true.  In a traditional WebLogic Server deployment, scaling actions (scale up and scale down) are associated with policies for scaling a dynamic cluster.  Elastic actions scale Managed Server instances in a dynamic cluster by interacting with Node Managers.   WLDF also supports many other types of diagnostic actions:   ·       Java Management Extensions (JMX) ·       Java Message Service (JMS) ·       Simple Network Management Protocol (SNMP) ·       Simple Mail Transfer Protocol (SMTP) ·       Diagnostic image capture ·       REST ·       WebLogic logging system ·       WebLogic Scripting Tool (WLST) ·       Heap dump ·       Thread dump     For our sample, we use a REST action to show invoking a REST endpoint to initiate a scaling operation.  We selected a REST action, instead of an elastic action, because we are not running Node Manager in the Kubernetes environment, and we’re scaling pods by using the Kubernetes API and API server.  For more information about all the diagnostic actions supported in WLDF, see Configuring Actions.   ·      The REST action, associated with the policy myScaleUpPolicy from earlier, was configured in the Actions tab of the policy configuration pages in the WebLogic Server Administration Console:       ·      The REST endpoint URL in which to send the notification is established by the <endpoint-url> element in the diagnostic system module’s resource descriptor file.   By looking at the configuration elements of the REST action, you can see that the REST invocation will send an empty PUT request to the endpoint with no authentication.  If you prefer, you can also send a Basic Authentication REST request by simply setting the <http-authentication-mode> attribute to Basic.   Other WLDF resource descriptor configuration settings worth noting are:   1.     The file name of a WLDF resource descriptor can be anything you like.  For our sample demo, Module-0-3905.xml was generated when we used the WebLogic Server Administration Console to configure the WLDF policy and REST action.   2.     In the demo WebLogic domain, the WLDF diagnostic system module was created using the container-scripts/add-app-to-domain.py script:   # Configure WLDF # ============-= as_name = sys.argv[3]   print('Configuring WLDF system resource'); cd('/')   create('Module-0','WLDFSystemResource') cd('/WLDFSystemResources/Module-0') set('DescriptorFileName', 'diagnostics/Module-0-3905.xml')   cd('/') assign('WLDFSystemResource', 'Module-0', 'Target', as_name)   In the script, you can see that:   ·      A WLDF diagnostic system module is created named Module-0. ·      The WLDF resource descriptor file, Module-0-3905.xml, is associated with Module-0. ·      The diagnostic system module is targeted to the Administration Server, specified as as_name, which is passed in as a system argument.  This diagnostic system module was targeted to the Administration Server because its policy contains the ClusterGenericMetricRule smart rule, which must be executed from the Administration Server so that it can have visibility across the entire cluster. For more information about smart rules and their targets, see Smart Rule Reference. 
Demo Webhook   In the sample demo, a webhook is used to receive the REST notification from WLDF and to scale the StatefulSet and, by extension, the WebLogic Server cluster.  The following hook is defined in webhooks/hooks.json:   [   {     "id": "scale-up",     "execute-command": "/var/scripts/scaleUpAction.sh",     "command-working-directory": "/var/scripts",     "response-message": "scale-up call ok\n"   } ]   This hook named scale-up corresponds to the <endpoint-url> specified in the REST notification:   <endpoint-url>http://${OPERATOR_ENDPOINT}/hooks/scale-up</endpoint-url>   Notice that the endpoint URL contains the environment variable ${OPERATOR_ENDPOINT}. This environment variable will be replaced with the correct host and port of the webhook when the Administration Server is started.   When the hook endpoint is invoked, the command specified by the “execute-command” property is executed, which in this case is the shell script "/var/scripts/scaleUpAction.sh:   #!/bin/sh   echo "called" >> scaleUpAction.log   num_ms=`curl -v --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" -X GET https://kubernetes/apis/apps/v1beta1/namespaces/default/statefulsets/${MS_STATEFULSET_NAME}/status | grep -m 1 replicas| sed 's/.*\://; s/,.*$//'`   echo "current number of servers is $num_ms" >> scaleUpAction.log   new_ms=$(($num_ms + 1))   echo "new_ms is $new_ms" >> scaleUpAction.log   curl -v --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" -X PATCH -H "Content-Type: application/strategic-merge-patch+json" -d '{"spec":{"replicas":'"$new_ms"'}}' https://kubernetes/apis/apps/v1beta1/namespaces/default/statefulsets/${MS_STATEFULSET_NAME}   In the script, we are issuing requests to the Kubernetes API server REST endpoints with ‘curl’ and then parsing the JSON response.  The first request is to retrieve the current replica count for the StatefulSet.  Then we scale up the StatefulSet by incrementing the replica count by one and sending a PATCH request with the new value for the replicas property in the request body.   Wrap Up With a simple configuration to use the Policies and Actions component in WLDF, we can provide automatic scaling functionality for a statically-configured WebLogic Server cluster in a Kubernetes environment.  WLDF’s tight integration with WebLogic Server provides a very comprehensive set of WebLogic domain-specific (custom) metrics to be used for scaling decisions.  Although we used a webhook as our REST endpoint to receive WLDF notifications, we could have just as easily implemented another Kubernetes object or service running in the Kubernetes cluster to scale the WebLogic Server cluster in our sample demo. For example, the WebLogic Server team is also investigating the Kubernetes Operator pattern for integrating WebLogic Server in a Kubernetes environment.  A Kubernetes Operator is "an application-specific controller that extends the Kubernetes API to create, configure, and manage instances of complex applications".  For more information on Operators,  see Introducing Operators: Putting Operational Knowledge into Software. Stay tuned for future blog updates on WebLogic Server and its integration with Kubernetes. The next blog related to WebLogic Server Clustering will be in the area of dynamic clusters for WebLogic Server on Kubernetes.
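If you want to exercise the scale-up path by hand before relying on the WLDF policy, you can call the webhook directly and watch the StatefulSet grow. This sketch uses the names from this sample (StatefulSet ms, hook scale-up); OPERATOR_ENDPOINT is the webhook's host:port:

$ kubectl get statefulset ms -o jsonpath='{.spec.replicas}{"\n"}'
$ curl -X PUT http://${OPERATOR_ENDPOINT}/hooks/scale-up
$ kubectl get pods -w          # watch the additional Managed Server pod start
$ kubectl get statefulset ms -o jsonpath='{.spec.replicas}{"\n"}'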


Let WebLogic work with Elastic Stack in Kubernetes

Over the past decade, there has been a big change in application development, distribution, and deployment. More and more popular tools have become available to meet the requirements. Some of the tools that you may want to use are provided in the Elastic Stack. In this article, we'll show you how to integrate them with WebLogic Server in Kubernetes. Note: You can find the code for this article at https://github.com/xiumliang/Weblogic-ELK. What Is the Elastic Stack? The Elastic Stack consists of several products: Elasticsearch, Logstash, Kibana, and others. Using the Elastic Stack, you can gain insight from your application's log data, in real time. Elasticsearch is a distributed, RESTful search and analytics engine capable of solving a growing number of use cases. As the heart of the Elastic Stack, it stores your data centrally so you can discover the expected and uncover the unexpected. Logstash is an open source, server-side data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to your favorite “stash.” Kibana lets you visualize your Elasticsearch data and navigate the Elastic Stack. It gives you the freedom to select how to shape your data. You don’t always have to know what you're looking for. Let WebLogic Server Work with the Elastic Stack There are several ways to use the Elastic Stack to gather and analyze WebLogic Server logs. In this article, we will introduce two of them. Integrate the Elastic Stack with WebLogic Server by Using Shared Volumes WebLogic Server instances put their logs into a shared volume. The volume could be an NFS or a host path. Logstash collects the logs from the volume and transfers the filtered logs to Elasticsearch. This type of integration requires a shared disk for the logs. The advantage is that the log files are stored and persisted, even after the WebLogic Server and Elastic Stack pods shutdown. Also, by using a shared volume, you do not have to consider the network configuration between Logstash and Elasticsearch. You just need deploy two pods (Elastic Stack and WebLogic Server), and use the shared volume. The disadvantage is that you need to maintain a shared volume for the pods; you must consider the disk space. In a multi-server environment, you need to arrange the logs on the shared volume so that there is no conflict between them. Deploy a WebLogic Server Pod to Kubernetes $ kubectl create -f k8s_weblogic.yaml In this k8s_weblogic.yaml file, we've defined a shared volume of type 'hostPath'. When the pod starts, the WebLogic Server logs are written to the shared volume so Logstash can access them. We can change the volume type to NFS or another type supported by Kubernetes, but we must be careful about permissions. If the permission is not correct, the logs may not be written or read on the shared volume. We can check if the pod is deployed and started: $ kubectl get pods We get the following: NAME READY STATUS RESTARTS AGE -------------------------------------------------------- weblogic-1725565574-fgmsr 1/1 Running 0 31s Deploy an Elastic Stack Pod to Kubernetes $ kubectl create -f k8s_elk.yaml The K8s_elk.yaml file defines the shared volume, which is the same as the definition in the k8s_weblogic.yaml file, because both the WebLogic Server and the Elastic Stack pods mount the same shared volume, so that Logstash can read the logs. Please note that Logstash is not started when the pod starts. We need to further configure Logstash before starting it. 
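Before moving on, you can confirm that the WebLogic Server pod is actually writing its logs to the shared volume. The mount path used below is an assumption (the Logstash configuration later in this post reads from /shared-logs); adjust it to match your k8s_weblogic.yaml:

$ kubectl describe pod weblogic-1725565574-fgmsr | grep -A 5 Volumes
$ kubectl exec weblogic-1725565574-fgmsr -- ls /shared-logs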
After the Elastic Stack pod is started, we have two pods in the Kubernetes node: NAME READY STATUS RESTARTS AGE ---------------------------------------------------- weblogic-1725565574-fgmsr 1/1 Running 0 31s elk-3823852708-zwbfg 1/1 Running 0 6m Connect to the Pod and Verify the Elastic Stack Pods Started on the Pod Machine $ kubectl exec -it elk-3823852708-zwbfg /bin/bash Run the following commands to verify that Elasticsearch has started. $ curl -X GET -i "http://127.0.0.1:9200" $ curl -X GET -i "http://127.0.0.1:9200/_cat/indices?v" We get the following indices if Elasticsearch was started:   Because Kibana is a web application, we verify Kibana by opening the following URL in a browser: http://[NODE_IP_ADDRESS]:31711/app/kibana We get Kibana's welcome page. The port 31711 is the node port defined in the k8s_elk.yaml. Configure Logstash $ vim /opt/logstash/config/logstash.conf In the logstash.conf file, the "input block" defines where Logstash gets the input logs. The "filter block" defines a simple rule for how to filter WebLogic Server logs. The "output block" transfers the Logstash filtered logs to the Elasticsearch address:port. Start Logstash and Verify the Result $ /opt/logstash/bin# ./logstash -f ../config/logstash.conf After Logstash is started, open the browser and point to the Elasticsearch address: http://[NODE_IP_ADDRESS]:31712/_cat/indices?v Compared to the previous result, there is an additional line, logstash-2017.07.28, which indicates that Logstash has started and transferred logs to Elasticsearch.  Also, we can try to access any WebLogic Server applications. Now the Elastic Stack can gather and process the logs. Integrate Elastic Stack with WebLogic Server via the Network In this approach, WebLogic Server and the Logstash agent are deployed in one pod, and Elasticsearch and Kibana are deployed in another pod. Because Logstash and Elasticsearch are not in the same pod, Logstash has to transfer data to Elasticsearch using an outside ip:port. For this type of integration, we need to configure the network for Logstash. The advantage is that we do not have to maintain a shared disk and arrange the log folders when using multiple WebLogic Server instances. The disadvantage is that we must add a Logstash for each WebLogic Server pod so that the logs can be collected. Deploy Elasticsearch and Kibana to Kubernetes $ kubectl create -f k8s_ek.yaml The k8s_ek.yaml file is similar to the k8s_elk.yaml file. They use the same image. The difference is that k8s_ek.yaml sets the env "LOGSTASH_START = 0", which indicates that Logstash does not start when the container starts up. Also, k8s_ek.yaml does not define a port for Logstash. The Logstash port will be defined in the same pod with WebLogic Server. We can verify the ek startup with: http://[NODE_IP_ADDRESS]:31712/_cat/indices?v Generate the Logstash Configuration with EK Pod IP Address $ kubectl describe pod ek-3905776065-4rmdx We get the following information: Name: ek-3905776065-4rmdx Namespace: liangz Node: [NODE_HOST_NAME]/10.245.252.214 Start Time: Thu, 02 Aug 2017 14:37:19 +0800 Labels: k8s-app=ek pod-template-hash=3905776065 Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"liangz-cn","name":"ek-3905776065","uid":"09a30990-7296-11e7-bd24-0021f6e6a769","a... Status: Running IP: 10.34.0.5 The IP address of the ek pod is [10.34.0.5]. We need to define the IP address in the Logstash.conf file. 
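The pod IP can also be pulled out directly, which is handy if you script the generation of the Logstash configuration:

$ kubectl get pod ek-3905776065-4rmdx -o jsonpath='{.status.podIP}{"\n"}'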
Create the logstash.conf file in the shared volume where Logstash expects to find it:

input {
  file {
    path => "/shared-logs/*.log*"
    start_position => beginning
  }
}
filter {
  grok {
    match => [ "message", "<%{DATA:log_timestamp}> <%{WORD:log_level}> <%{WORD:thread}> <%{HOSTNAME:hostname}> <%{HOSTNAME:servername}> <%{DATA:timer}> <<%{DATA:kernel}>> <> <%{DATA:uuid}> <%{NUMBER:timestamp}> <%{DATA:misc}> <%{DATA:log_number}> <%{DATA:log_message}>" ]
  }
}
output {
  elasticsearch {
    hosts => ["10.34.0.5:9200"]
  }
}

We will define two volumeMounts in the Logstash-WebLogic Server pod:

Log path /shared-logs: for the WebLogic Server instance, which shares its logs with Logstash in the same pod.
Conf path /shared-conf: for Logstash, which reads the logstash.conf file from this location.

The logstash.conf file sets the input file path to /shared-logs. Also, it connects to Elasticsearch at 10.34.0.5:9200, the address we discovered previously.

Deploy the Logstash and WebLogic Server Pod to Kubernetes

$ kubectl create -f k8s_logstash_weblogic.yaml

In this k8s_logstash_weblogic.yaml, we add two images (WebLogic Server and Logstash). They share the WebLogic Server logs with a pod-level shared volume, "shared-logs". This is a benefit of defining WebLogic Server and Logstash together: we do not need an NFS. If we want to deploy the pod to more nodes, we just need to modify the replicas value. All the new pods will have their own pod-level shared volume, so we do not have to consider a possible conflict between the logs.

$ kubectl get pods

NAME                          READY   STATUS    RESTARTS   AGE
ek-3905776065-4rmdx           1/1     Running   0          6m
logstash-wls-38554443-n366v   2/2     Running   0          14s

Verify the Result

Open the following URL: http://[NODE_IP_ADDRESS]:31712/_cat/indices?v

The first line shows us that Logstash has collected the logs and transferred them to Elasticsearch.
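If you prefer the command line to a browser, the same index check can be scripted. The sketch below is only an illustrative assumption-level example: it reuses the [NODE_IP_ADDRESS] placeholder and node port 31712 used above, and simply filters the index list for the Logstash index.

# List the Elasticsearch indices and keep only the logstash-* entries
$ curl -s "http://[NODE_IP_ADDRESS]:31712/_cat/indices?v" | grep logstash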


Technical

WebLogic on Kubernetes, Try It!

The WebLogic team is certifying WebLogic domains being orchestrated in Kubernetes. As part of this work, we have created a sample which deploys a WebLogic domain/cluster in a Kubernetes environment and scales/shrinks the cluster through both Kubernetes and WebLogic. WebLogic Server offers a rich feature set to monitor and diagnose servers, applications, and resources. In this sample, we expose those metrics through Prometheus and display them in Grafana. We have written a WebLogic Exporter, which exposes the WebLogic metrics in a format that can be understood by Prometheus.

WebLogic on Kubernetes Sample

This sample configuration demonstrates the orchestration of a WebLogic 12.2.1.3 domain cluster in a Kubernetes environment. The sample shows different scaling actions of the WebLogic cluster in Kubernetes: scale/shrink the cluster by increasing/decreasing the number of ReplicaSets, and define a WLDF policy based on the Open Session Count MBean and allow the WLS Admin Server to initiate the scaling action.

We use StatefulSets to define the WebLogic servers in the domain. This provides a consistent way of assigning managed server names and of giving managed servers in a WebLogic cluster consistent visibility to each other through well-known DNS names. There are two applications deployed to the WebLogic cluster: the Open Session application, which triggers the WLDF policy to scale the cluster by one managed server, and the Memory Load application, which allocates heap memory in the JVM. When the WLDF policy is triggered, it makes a call to a webhook which is running in a container in the same pod as the Admin Server. The webhook invokes a Kubernetes API to trigger Kubernetes to scale the Kubernetes cluster and thus scale the WebLogic cluster. As part of this sample, we have developed a WLS Exporter which formats the WLS Runtime MBean metrics collected from the Managed Servers and exposes them in a format that can be read by Prometheus and displayed in a Grafana UI.

Run Sample

The instructions to run this sample are based on running in Minikube. However, this sample will run on any Kubernetes environment. Make sure you have minikube and kubectl installed, and have built the WebLogic developer install image oracle/weblogic:12.2.1.3-developer. Find the Dockerfile and scripts to build this image in GitHub.

To build the WebLogic 12.2.1.3 domain image in this sample:
     $ cd wls-12213-domain
     $ docker build --build-arg ADMIN_PASS=<Admin Password> --build-arg ADMIN_USER=<Admin Username> -t wls-12213-domain .

We deploy two webapps, the Open Session application and the Memory Load application, to the WebLogic cluster. We extend the wls-12213-domain image and create the wls-12213-oow-demo-domain image:
     $ docker build -t wls-12213-oow-demo-domain -f Dockerfile.adddemoapps .

To create the webhook image (oow-demo-webhook), which is used to scale the cluster when the WLDF policy is triggered, invoke:
     $ docker build -t oow-demo-webhook -f Dockerfile.webhook .
Save the webapp and the webhook docker images to the file system (*.tar) so that we can then load them into the minikube registry:     $ docker save -o wls-12213-oow-demo-domain.tar wls-12213-oow-demo-domain     $ docker save -o oow-demo-webhook.tar oow-demo-webhook Start minikube and set the environment:     $ minikube start     $ eval $(minikube docker-env) Load the saved webapp and the webhook docker images into minikube:     $ minikube ssh "docker load -i $PWD/wls-12213-oow-demo-domain.tar"     $ minikube ssh "docker load -i $PWD/oow-demo-webhook.tar" From the OracleWebLogic/samples/wls-k8s directory, start the WebLogic Admin Server and Managed Servers:     $ kubectl create -f k8s/wls-admin-webhook.yml     $ kubectl create -f k8s/wls-stateful.yml Verify that the instances are running:     $ kubectl get pods You should see one Admin Server and two Managed Servers running:     NAME                              READY   STATUS    RESTARTS   AGE     ms-0                                   1/1        Running           0              3m     ms-1                                   1/1        Running           0              1m     wls-admin-server-0            1/1        Running           0              5m   Verify that all three servers have reached the Running state before continuing. The servers are accessible on the minikube IP address, usually 192.168.99.100. To verify that address:     $ minikube ip     192.168.99.100 To log into the admin console, from your browser enter http://192.168.99.100:30001/console. Log in using the credentials passed in as build arguments when the WLS domain image (wls-12213-domain) was built.       Start Prometheus to monitor the managed servers:     $ kubectl create -f prometheus/prometheus-kubernetes.yml Verify that Prometheus is monitoring both managed servers by browsing to http://192.168.99.100:32000.  Click on the metrics pull down menu and select 'wls_scrape_cpu_seconds' and click 'execute'. You should see a metric being collected from each Managed Server. Start Grafana to monitor the managed servers:     $ kubectl create -f prometheus/grafana-kubernetes.yml Connect to Grafana at: http://192.168.99.100:31000         Log in with admin/pass         Click "Add Data Source" and then connect Grafana to Prometheus by entering:         Name:    Prometheus         Type:      Prometheus         Url:         http://prometheus:9090         Access:  Proxy         Click the leftmost menu on the menu bar, and select Dashboards > Import. Upload and Import the file prometheus/grafana-config.json and select the data source you added in the previous step ("Prometheus"). It should generate a dashboard named "WLS_Prometheus". You will see three graphs, the first shows “Managed Servers Up Time”, the second shows the webapp “Open Session Count”, and the third shows “Used Heap Current Size”. To run the “Memory Load” webapp, in the browser invoke http://192.168.99.100:30011/memoryloadapp.  When you click on button "Run Memory Load", a spike in memory will be displayed on the Grafana graph labeled "Used Heap Current Size". Scale the Kubernetes cluster by invoking kubectl and increasing the number of replicas in the cluster     $ kubectl scale statefulset ms --replicas=3 The Kubernetes cluster should show one pod running the WLS Admin Server and webhook, and three pods each running a single managed server.  Running “kubectl get pods” shows ms-0, ms-1, ms-2 and wls-admin-server-0.     
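For reference, the webhook used by the WLDF policy ultimately performs the same kind of scaling operation against the Kubernetes API. The following is only a hedged sketch of such a call, not the sample's actual webhook code; the API group version (apps/v1beta1), the default namespace, and the in-cluster service account token path are assumptions that depend on your Kubernetes release and configuration.

# Hypothetical sketch: patch the "ms" StatefulSet to 3 replicas through the Kubernetes API,
# roughly what a scaling webhook would do when the WLDF policy fires.
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -k -X PATCH \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/strategic-merge-patch+json" \
  -d '{"spec":{"replicas":3}}' \
  "https://kubernetes.default.svc/apis/apps/v1beta1/namespaces/default/statefulsets/ms"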
The WebLogic Admin Console will also show that ms-0, ms-1, ms-2 are RUNNING and ms-3 and ms-4 are SHUTDOWN. Prometheus should show metrics for all three running managed servers, and the Grafana graph "Servers Up Time" should display three lines, one for each running managed server.

We will now scale down the Kubernetes cluster to two replicas:
    $ kubectl scale statefulset ms --replicas=2

You can see the scale down with the command "kubectl get pods" or through the Admin Console.

In the Admin Console we have defined a WLDF rule to trigger a scaling event. We have defined a WLDF smart rule configured to monitor the OpenSessionsCurrentCount of the WebAppComponentRuntimeMBean for the ApplicationRuntime called "OpenSessionApp". A REST action is triggered when the average number of opened sessions >= 0.01 on 5% or more of the servers in a WebLogic cluster called "DockerCluster", computed over the last ten seconds, sampled every second. The REST action invokes a webhook that scales up the StatefulSet named "ms" by one. The WLDF rule is configured with a 1-minute alarm, so it will not trigger another action within 1 minute.

Invoke the Open Session webapp by entering in your browser http://192.168.99.100:30011/opensessionapp. Within a minute or two, a new managed server instance will be created and will show up in the Admin Console, through the kubectl command, or in Grafana. Connect to Grafana to check the collected metrics for OpenSessionCount in the graph "Open Sessions Current Count".

When you are ready to clean up your environment, shut down the services. Invoke the following commands:
    $ kubectl delete -f prometheus/grafana-kubernetes.yml
    $ kubectl delete -f prometheus/prometheus-kubernetes.yml
    $ kubectl delete -f k8s/wls-stateful.yml
    $ kubectl delete -f k8s/wls-admin-webhook.yml

Finally, stop minikube:
    $ minikube stop

Summary

The sample described above shows running a WebLogic domain in a Kubernetes environment. The WebLogic team is working to certify WebLogic domains/clusters running on Kubernetes. As part of the certification effort, we will release guidelines in the areas of management of WebLogic domains/clusters, applications, upgrades to applications, patching, and secure WebLogic environments in Kubernetes. See the guideline blog "Security Best Practices for WebLogic Server Running in Docker and Kubernetes".

As part of the certification effort, the WebLogic team intends to release a WebLogic Kubernetes Operator, which extends the Kubernetes API to create, configure, and manage WebLogic Server instances in a domain/cluster and applications on behalf of a Kubernetes user. The Operator knows about Kubernetes configuration and builds upon the Kubernetes resources and controllers, but also includes WebLogic domain and application-specific knowledge to automate and manage WebLogic deployments. Stay tuned for future announcements of the release of the WebLogic Kubernetes Operator and guidelines on how to deploy and build WebLogic environments in Kubernetes!

Safe Harbor Statement
The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle.


Technical

WebLogic Server and Java SE 9

The latest WebLogic Server release 12.2.1.3.0 is now available as of August 30, 2017 and you can download it at http://www.oracle.com/technetwork/middleware/fusion-middleware/downloads/index.html. See https://blogs.oracle.com/weblogicserver/weblogic-server-12213-is-available for more information on the new release. Java SE 9 became available on September 21, 2017 and it's available at www.oracle.com/javadownload. Details about the features included in this release can be found on the OpenJDK JDK 9 page: http://openjdk.java.net/projects/jdk9/. While we have been working on WLS support for JDK 9 for 3.5 years, it was clear early on that the schedules would not align. Although it is not certified, you should get pretty far running 12.2.1.3 on JDK 9.

Start by installing JDK 9 somewhere convenient and setting your environment appropriately.

export JAVA_HOME=/dir/jdk-9
export PATH="$JAVA_HOME/bin:$PATH"

Then install the new WLS release and ignore the following warnings.

jar xf fmw_12.2.1.3.0_wls_Disk1_1of1.zip fmw_12.2.1.3.0_wls.jar
java -jar fmw_12.2.1.3.0_wls.jar

WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.sun.xml.bind.v2.runtime.reflect.opt.Injector (file:/home/user/OraInstall2017-09-11_11-27-52AM/oracle_common/modules/com.sun.xml.bind.jaxb-impl.jar) to method java.lang.ClassLoader.defineClass(java.lang.String,byte[],int,int)
WARNING: Please consider reporting this to the maintainers of com.sun.xml.bind.v2.runtime.reflect.opt.Injector
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release

Problem: This JDK version was not certified at the time it was made generally available. It may have been certified following general availability.
Recommendation: Check the Supported System Configurations Guide (http://www.oracle.com/technetwork/middleware/ias/downloads/fusion-certification-100350.html) for further details. Press "Next" if you wish to continue.
Expected result: 1.8.0_131
Actual result: 9

The illegal reflective access warnings are coming from code that has been updated to work on JDK 9, but the logic checks if the JDK 8 approach works first. The warnings will go away when the default is (or you run explicitly with) --illegal-access=deny. You will see these benign warnings from the following known list:

org.python.core.PyJavaClass
com.oracle.classloader.PolicyClassLoader$3
weblogic.utils.StackTraceUtilsClient
com.sun.xml.bind.v2.runtime.reflect.opt.Injector
com.sun.xml.ws.model.Injector
net.sf.cglib.core.ReflectUtils$
com.oracle.common.internal.net.InterruptibleChannels
com.oracle.common.internal.net.WrapperSelector
com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.ClusterService
com.tangosol.util.ClassHelper
weblogic.utils.io.ObjectStreamClass

If you see other warnings, they might need to be reported to the owner. There are a lot of behavior changes in JDK 9. We have done as much as possible to hide them. In particular, command lines should continue to run without any changes. You can set the environment by sourcing the standard script:

. wlserver/server/bin/setWLSEnv.sh

The most popular commands are:

java weblogic.Server (create a domain in an empty directory and start the server)
java weblogic.WLST myscript.py (run a WLST script using Jython)

WLS 12.2.1.3.0 now provides the Java EE classes that are hidden by default in JDK 9.
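Putting the pieces together, here is a hedged sketch of starting a server on JDK 9 with illegal reflective access denied, which suppresses the benign warnings listed above. The installation paths and the empty domain directory are placeholders.

# Placeholders: /dir/jdk-9 and /dir/wls12213 are wherever you installed JDK 9 and WLS 12.2.1.3
export JAVA_HOME=/dir/jdk-9
export PATH="$JAVA_HOME/bin:$PATH"
. /dir/wls12213/wlserver/server/bin/setWLSEnv.sh
mkdir /tmp/empty-domain && cd /tmp/empty-domain
# Deny illegal reflective access so the known benign warnings are not printed
java --illegal-access=deny weblogic.Server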
Do not use the JDK9 command line option --add-modules java.se.ee or add any of the individual modules.  WLS is not doing anything to make use of or integrate with the new JDK9 module features.  There are many changes in JDK9 and it’s likely that you will require some changes in your application.  See https://docs.oracle.com/javase/9/migrate/toc.htm as a good starting place to review potential changes. Having said all of that, JDK9 is not a Long Term Support release.  Support ends in March 2018 when 18.3 is released and the next Long Term Support release isn't until 18.9 in September 2018.  That means that WebLogic Server won't be certified on JDK9 and the next release to be certified won't be until after September 2018.  Customer support won't be taking bug reports on JDK9.  This article indicates progress on the JDK upgrade front and gives you a hint that you can try playing with WLS and JDK9 starting with release 12.2.1.3.0 (and not earlier releases). Remember that JDK 8 will continue to be supported until 2025. Notes: The WLS installer (GUI or CLI) must be launched from a bash (or bash compatible like ksh) shell. WLS will not boot on JDK 10 or later, due to libraries not recognizing the new version number.


Support

Customize your ZDT Patching Workflows Using “Hooks”

In the 12.2.1.3.0 release of WebLogic Server, ZDT Patching offers a cool new feature that allows you to extend the logic of your existing patching workflows by adding user-defined scripts that can be "hooked" at predefined locations (known as extension points). These user-defined scripts, known as extensions, are executed in conjunction with the workflow. With the custom hooks feature, you can have your workflow perform any additional task that is specific to a business need but is not appropriate to include in the base patching workflow. You can add checks on each node in the workflow to ensure that there is enough disk space for the rollout of the Oracle home, have your workflow perform admin tasks such as deploying or redeploying additional applications, or define custom logic to send out email notifications about the status of the upgrade, and so on. This feature provides seven extension points for multitenant and non-multitenant workflows where you can insert your custom scripts and resources.

Implement custom hooks to modify any patching workflow in five simple steps:

1. From the list of available predefined extension points, determine the point in the workflow where you want to implement the extension (which is simply your extended logic). Extension points are available at different stages of the workflow. For instance, ep_OnlineBeforeUpdate can be used to execute any logic before the patching operation starts on each node. This is typically the point where prerequisite checks can be performed. You can find the complete list of extension points available for multitenant and non-multitenant workflows at About Extension Points in the Administering Zero Downtime Patching Workflows guide.
2. Create extension scripts to define your custom logic. The scripts can be UNIX-specific shell scripts or Windows-specific batch scripts. Optionally, your scripts can use the predefined environment variables that this feature provides.
3. Specify the extensions in the extensionConfiguration.json file. This feature allows more than one way of specifying extensions, to give you the flexibility to override or customize parameters at different levels. Learn more about these options at Specifying Extensions to Modify the Workflow in the Administering Zero Downtime Patching Workflows guide.
4. Once you have created the extensionConfiguration.json file and defined your custom logic in extension scripts, you must package them into a JAR file that has a specific directory structure (an illustrative packaging sketch follows these steps). You can find the directory structure of the extension JAR file in our docs. At this point, you must remember to place the extension JAR on all nodes, similar to how you place the patched Oracle home on all nodes before the rollout.
5. Configure the workflow using either WLST or the WebLogic Server Administration Console, and specify the name of the JAR file that you created for the update.

You will find complete details about available extension points and how to use the custom hooks feature in Modifying Workflows Using Custom Hooks in the Administering Zero Downtime Patching Workflows guide.
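To make step 4 concrete, here is an illustrative packaging sketch. The file names, node names, and staging path are assumptions; the JAR's required directory layout and the extensionConfiguration.json schema are the ones documented in the Administering Zero Downtime Patching Workflows guide and are not reproduced here.

# Package the extension descriptor and scripts into a JAR (layout per the ZDT Patching docs)
jar cvf zdtExtension.jar extensionConfiguration.json checkDiskSpace.sh
# Stage the extension JAR on every node, alongside the patched Oracle home (hosts and path assumed)
for node in node1 node2 node3; do
  scp zdtExtension.jar ${node}:/u01/zdt/extensions/
done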


The WebLogic Server

Security Best Practices for WebLogic Server Running in Docker and Kubernetes

Overview The WebLogic Server (WLS) team is investing in new integration capabilities for running WLS in Kubernetes and Docker cloud environments. As part of this effort, we have identified best practices for securing Docker and Kubernetes environments when running WebLogic Server. These best practices are in addition to the general WebLogic Server recommendations found in the Oracle® Fusion Middleware Securing a Production Environment for Oracle WebLogic Server 12c documentation.     Ensuring the Security of WebLogic Server Running in Your Docker Environment References to Docker Security Resources These recommendations are based on documentation and white papers from a variety of sources. These include: Docker Security –  https://docs.docker.com/engine/security/security/ Center for Internet Security (CIS) Docker Benchmarks - https://www.cisecurity.org/benchmark/docker/ CIS Linux Benchmarks – https://www.cisecurity.org/benchmark/oracle_linux/ NCC Group: Hardening Linux Containers - https://www.nccgroup.trust/us/our-research/understanding-and-hardening-linux-containers/ Seccomp profiles - https://docs.docker.com/engine/security/seccomp/ Best Practices for Writing Docker Files:  https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/ Understanding Docker Security and Best Practices: https://blog.docker.com/2015/05/understanding-docker-security-and-best-practices/   Summary of Recommendations The recommendations to secure your production environment are: Validate your Docker configuration using the CIS Docker benchmark. This can be done manually or automatically using a 3rd party tool. Validate your Host Operating System configuration using the appropriate CIS Operating System benchmark. Evaluate additional host hardening recommendations not already covered by the CIS Benchmarks.  Evaluate additional container runtime recommendations not already covered by the CIS Benchmarks. These are described in more detail in the following sections. Validate Your Docker Configuration Using the Center for Internet Security (CIS) Docker Benchmark The Center for Internet Security (CIS) produces a benchmark for both Docker Community Edition and multiple Docker EE versions. The latest benchmark is for Docker EE 1.13 and new benchmarks are added after new Docker EE versions are released. These benchmarks contain ~100 detailed recommendations for securely configuring Docker. The recommendations apply to both the host and the Docker components and are organized around the following topics: Host configuration Docker daemon configuration Docker daemon configuration files Container Images and Build File Container Runtime Docker Security Operations Docker Swarm Configuration For more information, refer to detailed benchmark document  - https://www.cisecurity.org/benchmark/docker/ You should validate your configuration against each CIS Docker Benchmark recommendation either manually or via an automated tool. You can find a list of CIS partner benchmark tools that can validate your Docker configuration on http://www.cisecurity.org. 
These include (in alphabetical order):
Cavirin: https://www.cisecurity.org/partner/cavirin/
Docker Bench for Security: https://github.com/docker/docker-bench-security
Qualys: https://www.cisecurity.org/partner/qualys/
Symantec: https://www.cisecurity.org/partner/symantec/
Tenable: https://www.cisecurity.org/partner/tenable/
Tripwire: https://www.cisecurity.org/partner/tripwire/
TwistLock: https://www.twistlock.com
VMWare: https://blogs.vmware.com/security/2015/05/vmware-releases-security-compliance-solution-docker-containers.html

Note: the CIS Benchmarks require a license to use commercially. For more information, refer to https://www.cisecurity.org/cis-securesuite/cis-securesuite-membership-terms-of-use/.

Validate Your Host Operating System (OS) Using the CIS Benchmark

The Center for Internet Security (CIS) also produces benchmarks for various operating systems, including different Linux flavors, AIX, Solaris, Microsoft Windows, OSX, etc. These benchmarks contain a set of detailed recommendations for securely configuring your host OS. For example, the CIS Oracle Linux 7 Benchmark contains over 200 recommendations and over 300 pages of instructions. The recommendations apply to all aspects of Linux configuration. For more information, refer to the detailed benchmark documents at https://www.cisecurity.org/cis-benchmarks/.

You should validate your configuration against the appropriate CIS Operating System Benchmark, either manually or via an automated tool. You can find a list of CIS partner benchmark tools that can validate your Operating System configuration on http://www.cisecurity.org. For example, for CIS Oracle Linux 7, this includes (in alphabetical order):
NNT: https://www.cisecurity.org/partner/nnt/
Qualys: https://www.cisecurity.org/partner/qualys/
Tenable: https://www.cisecurity.org/partner/tenable/
Tripwire: https://www.cisecurity.org/partner/tripwire/

Additional Host Hardening Recommendations

Beyond the CIS benchmarks, there are additional host hardening recommendations and information that should be considered. These include:
Use grsecurity and PaX.
NCC Group host hardening recommendations.

Grsecurity and PaX

The grsecurity project provides various patches to the Linux kernel that enhance a system's overall security. This includes address space protection, enhanced auditing, and process control. PaX flags data memory, such as the stack, as non-executable and program memory as non-writable. PaX also provides address space layout randomization. Grsecurity and PaX can be run on the kernel used for Docker without requiring changes in the Docker configuration. The security features will apply to the entire host and therefore to all containers. You may want to investigate grsecurity and PaX to determine if they can be used in your production environment. For more information, refer to http://grsecurity.net.

NCC Group Host Hardening Recommendations

The NCC Group white paper (Understanding and Hardening Linux Containers) contains additional recommendations for hardening Linux containers. There is overlap between these recommendations and the CIS Docker Benchmarks. The recommendations in the two sections, 10.1 General Container Recommendations and 10.3 Docker Specific Recommendations, include additional host hardening items. These include:
Keep the kernel as up to date as possible.
Typical sysctl hardening should be applied.
Isolate storage for containers and ensure appropriate security.
Control device access and limit resource usage using control groups (cgroups).

You may want to investigate these recommendations further; for more information, refer to the NCC Group white paper Understanding and Hardening Linux Containers.

Additional Container Runtime Recommendations

Beyond the CIS Docker recommendations, there are additional container runtime recommendations that should be considered. These include:
Use a custom seccomp profile, either created manually or with help from a tool such as Docker Slim.
Utilize the Docker Security Enhanced Linux (SELinux) profile.
Specify additional restricted Linux kernel capabilities when starting Docker.
Improve isolation and reduce attack surface by running only Docker on the host server.
Additional container hardening based on NCC Group recommendations.

Run with a Custom seccomp Profile

Secure computing mode (seccomp) is a Linux kernel feature that can be used to restrict the actions available within the container. This feature is available only if Docker has been built with seccomp and the kernel is configured with CONFIG_SECCOMP enabled. Oracle Linux 7 supports seccomp, and Docker runs with a seccomp profile by default. The default seccomp profile provides a good default for running containers with seccomp and disables around 44 system calls out of 300+. It is not recommended to change the default seccomp profile, but you can run with a custom profile using the --security-opt seccomp=/path/to/seccomp/profile.json option (a hedged docker run sketch appears at the end of this article). For more information on the seccomp default profile, refer to https://docs.docker.com/engine/security/seccomp/#passing-a-profile-for-a-container. If the default seccomp profile is not sufficient for your Docker production environment, then you can optionally create a custom profile and run with it. A tool such as Docker Slim (https://github.com/docker-slim/docker-slim) may be useful in generating a custom seccomp profile via static and dynamic analysis.

Run with SELinux

Security-Enhanced Linux (SELinux) is a Linux kernel security module that provides a mechanism for supporting access control security policies, including mandatory access controls (MAC). You can enable SELinux when starting the Docker container. To enable SELinux, ensure that SELinux is installed:

yum install selinux-policy

Then enable the docker SELinux module:

semodule -v -e docker

Then specify that SELinux is enabled when starting Docker:

# vi /etc/sysconfig/docker
OPTIONS='--selinux-enabled --group=docker -g /scratch/docker'

Run with Restricted Capabilities

Docker runs containers with a restricted set of Linux kernel capabilities. You can specify additional capabilities to remove based on the requirements of your environment. For more information, refer to https://docs.docker.com/engine/security/security/#linux-kernel-capabilities.

Run Only Docker on the Server

In order to ensure isolation of resources and reduce the attack surface, it is recommended to run only Docker on the server. Other services should be moved to Docker containers.

NCC Group Docker Container Hardening Recommendations

The NCC Group white paper Understanding and Hardening Linux Containers contains additional recommendations for hardening Linux containers. There is overlap between these recommendations and those listed in prior sections and in the CIS Benchmarks.
The recommendations in the two sections, 10.1 General Container Recommendations and 10.3 Docker Specific Recommendations, include additional Docker container hardening items. These include:
Control device access and limit resource usage using control groups (cgroups).
Isolate containers based on trust and ownership.
Have one application per container if feasible.
Use layer two and layer three firewall rules to limit container-to-host and guest-to-guest communication.
Use Docker container auditing tools such as Clair, drydock, and Project Nautilus.

For more information on these recommendations, refer to the NCC Group white paper Understanding and Hardening Linux Containers.

Ensuring the Security of WebLogic Server Running in Your Kubernetes Production Environment

References to Kubernetes Security Resources

These recommendations are based on documentation and whitepapers from a variety of sources. These include:
Security Best Practices for Kubernetes Deployment: http://blog.kubernetes.io/2016/08/security-best-practices-kubernetes-deployment.html
CIS Kubernetes Benchmarks: https://www.cisecurity.org/benchmark/kubernetes/
Kubelet Authentication and Authorization: https://kubernetes.io/docs/admin/kubelet-authentication-authorization/
RBAC Authorization: https://kubernetes.io/docs/admin/authorization/rbac/
Pod Security Policies RBAC: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#working-with-rbac
Auditing: https://kubernetes.io/docs/tasks/debug-application-cluster/audit/
etcd Security Model: https://coreos.com/etcd/docs/latest/op-guide/security.html

Summary of Recommendations

The recommendations to secure your environment are:
Validate your Kubernetes configuration using the CIS Kubernetes benchmark. This can be done manually or automatically using a 3rd party tool.
Evaluate additional Kubernetes runtime recommendations not already covered by the CIS Benchmarks.
These are described in more detail in the following sections.

Validate Your Kubernetes Configuration Using the CIS Kubernetes Benchmark

The Center for Internet Security (CIS) produces a benchmark for Kubernetes. This benchmark contains a set of detailed recommendations (roughly 250 pages) for securely configuring Kubernetes. The recommendations apply to the various Kubernetes components and are organized around the following topics:
Master Node Security Configuration, including API Server, Scheduler, Controller Manager, configuration files, and etcd.
Worker Node Security Configuration, including Kubelet and configuration files.
Federated Deployments, including Federation API Server and Federation Controller Manager.

For more information, refer to https://www.cisecurity.org/benchmark/kubernetes/. You should validate your configuration against each CIS Kubernetes Benchmark recommendation, either manually or via an automated tool. You can find a list of CIS partner benchmark tools that can validate your Kubernetes configuration on http://www.cisecurity.org. These include (in alphabetical order):
Kube Bench: https://github.com/aquasecurity/kube-bench
NeuVector: http://neuvector.com/blog/open-source-kubernetes-cis-benchmark-tool-for-security/
Twistlock: www.twistlock.com

Note: the CIS Benchmarks require a license to use commercially. For more information, refer to https://www.cisecurity.org/cis-securesuite/cis-securesuite-membership-terms-of-use/.

Additional Container Runtime Recommendations

Beyond the CIS Kubernetes recommendations, there are additional container runtime recommendations that should be considered.
Images

Images should be free of vulnerabilities and should be retrieved either from a trusted registry or from a private registry. Image scanning should be performed to ensure there are no security vulnerabilities. You should use third-party tools to perform CVE scanning, and you should integrate these into the build process. For more information, refer to http://blog.kubernetes.io/2016/08/security-best-practices-kubernetes-deployment.html.

Security Context and Pod Security Policy

Security policies can be set at the pod or container level via the security context. The security context can be used to: make the container run as a non-root user; control the capabilities granted to the container; make the root file system read-only; and prevent a container whose user runs as root from being admitted to the pod. For more information, refer to https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ and http://blog.kubernetes.io/2016/08/security-best-practices-kubernetes-deployment.html.

Secure Access to Nodes

You should avoid SSH access to Kubernetes nodes, reducing the risk of unauthorized access to host resources. Use kubectl exec instead of SSH. If debugging of issues is required, create a separate staging environment that allows SSH access.

Separate Resources

A Kubernetes namespace allows you to partition created resources into logically named groups. Resources created in one namespace can be hidden from other namespaces. You can create additional namespaces and attach resources and users to them. You can utilize resource quotas attached to a namespace for memory, CPU, and pods. For more information, refer to http://blog.kubernetes.io/2016/08/security-best-practices-kubernetes-deployment.html.

Manage Secrets

A secret contains sensitive data such as a password, token, or key. Secrets can be mounted as data volumes or exposed as environment variables to be used by a container in a pod. They can also be used by other parts of the system without being directly exposed to the pod. You should manage user and pod access to secrets, store secrets securely, and avoid passing secrets around in plain files or environment variables. For more information, refer to https://kubernetes.io/docs/concepts/configuration/secret/.

Networking Segmentation

Segment the network so different applications do not run on the same Kubernetes cluster. This reduces the risk of one compromised application attacking a neighboring application. Network segmentation ensures that containers cannot communicate with other containers unless authorized. For more information, refer to Network Segmentation in http://blog.kubernetes.io/2016/08/security-best-practices-kubernetes-deployment.html
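To make a couple of the recommendations in this article concrete, here are two hedged examples. The image name, seccomp profile path, and credential values are placeholders, and neither command is part of the referenced benchmarks; adapt them to your own environment.

# Docker: run a WebLogic container with a custom seccomp profile and an unneeded capability dropped
docker run -d \
  --security-opt seccomp=/path/to/seccomp/profile.json \
  --cap-drop=NET_RAW \
  oracle/weblogic:12.2.1.3-developer

# Kubernetes: keep WebLogic admin credentials in a secret instead of plain environment variables
kubectl create secret generic wls-admin-credentials \
  --from-literal=username=weblogic --from-literal=password=welcome1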


The WebLogic Server

How to Use Java EL to Write WLDF Policy Expressions

WLDF provides specialized functions, called Smart Rules that encapsulate complex logic for looking at metric trends in servers and clusters over a recent time interval. If these prove insufficient, you have the option to write policy expressions directly using the beans and functions provided by WLDF and Java Expression Language (Java EL). Java EL is the recommended language for creating policy expressions in Oracle WebLogic Server 12c. Java EL has many powerful capabilities built into it, but they can make it more complex to work with. However, to make it easier, WLDF provides a set of EL extensions consisting of beans and functions that you can use in your policy expressions that access WebLogic data and events directly. You can write simple or complex policy expressions using the beans and functions. However, you must have good programming skills and experience using Java EL. For example, a relatively simple policy expression to check if the average HeapFreePercent over a 5-minute window is less than 20 can be written as: wls:extract("wls.runtime.serverRuntime.JVMRuntime.heapFreePercent", "30s", "5m").tableAverages().stream().anyMatch(hfp -> hfp < 20) A more complex policy expression to check the average value of the attribute PendingUserRequestCount across all servers in "cluster1" over a 5 minute interval and trigger if 75% of the nodes exceed an average of 100 pending requests can be written as: wls:extract(wls.domainRuntime.query({"cluster1"},"com.bea:Type=ThreadPoolRuntime,*", "PendingUserRequestCount"), "30s", "2m").tableAverages().stream().percentMatch(pendingCount -> pendingCount > 100) > 0.75 For more information and examples of using Java EL in policy expressions, see Creating Complex Policy Expressions Using WLDF Java EL Extensions in Configuring and Using the Diagnostics Framework for Oracle WebLogic Server.


WebLogic Server 12.2.1.3 is Available

We are pleased to announce that Oracle WebLogic Server 12.2.1.3.0 is now available for download on the Oracle Technology Network (OTN), and will be available on Oracle Software Delivery Cloud (OSDC) soon. WebLogic Server 12.2.1.3.0 is also referred to as Patch Set 3 (PS3) for WebLogic Server 12.2.1.0.0, originally released in October 2015, and is being delivered as part of the overall Oracle Fusion Middleware 12.2.1.3.0 release.

WebLogic Server 12.2.1 Patch Sets include product maintenance and bug fixes for issues found since the initial 12.2.1 release, and we generally encourage WebLogic Server 12.2.1 users to adopt the latest Patch Set releases as they become available. PS3 contains all of the features supported in WebLogic Server 12.2.1.0.0, 12.2.1.1.0 (PS1), and 12.2.1.2.0 (PS2). Users running on prior 12.2.1 releases should consider moving to PS3 as soon as possible, and as makes sense within the context of their project plans. Users planning or considering upgrades from prior versions of WebLogic Server to WebLogic Server 12.2.1 should retarget their migration plans, as possible and as makes sense, to PS3. The Fusion Middleware 12.2.1.3.0 supported configurations matrix has been updated here.

Although WebLogic Server Patch Sets are intended primarily as maintenance vehicles, these releases also include a limited set of new feature capabilities that deliver incremental value without disrupting customers. These features are summarized in the WebLogic Server 12.2.1.3 documentation under What's New?. We will be describing some of these capabilities in detail in future blogs, but I'd like to mention two new security-related features here.

Secured Production Mode is a new configuration option to help secure production environments. As indicated in our security documentation, Secured Production Mode helps apply secure WebLogic configuration settings by default, or warns users if settings that should typically be used in secured environments are not being used. Oracle Identity Cloud Integrator is a new, optionally configurable, WebLogic Server security provider which allows you to use the Oracle Identity Cloud Service. Users running WebLogic Server 12.2.1.3, either on premises or in Oracle Cloud, can now use Identity Cloud Integrator to support integrated authentication and identity assertion with the same Identity Cloud Service used by Oracle Cloud PaaS and SaaS Services. We hope these features help you respond to evolving security requirements, and we've implemented these features such that they will not affect you unless you choose to enable them.

You should expect the Oracle Java Cloud Service, and other Oracle Cloud Services using WebLogic Server, to support 12.2.1.3 in the future. We are also updating our Docker images to support 12.2.1.3, and we hope to have more to say about WebLogic Server Docker support at Oracle Open World. And as noted in my prior post, we're working on a new WebLogic Server version that will support the upcoming Java EE 8 release. So please start targeting your WebLogic Server 12.2.1 projects to WebLogic Server 12.2.1.3 to take advantage of the latest release with the latest features and maintenance. We hope to have more updates for you soon!


The WebLogic Server

WebLogic Server and Opening Up Java EE

Oracle has just announced that it is exploring moving Java Enterprise Edition (Java EE) technologies to an open source foundation, following the delivery of Java EE 8.  The intention is to adopt more agile processes, implement more flexible licensing, and change the governance process to better respond to changing industry and technology demands.   We will keep you updated on developments in this area.  WebLogic Server users may be wondering what this announcement may mean for them, because WebLogic Server supports Java EE standards.  The short answer is that there is no immediate impact.   We will continue to support existing WebLogic Server releases, deliver Oracle Cloud services based on WebLogic Server, and deliver new releases of WebLogic Server in the future. Some WebLogic Server customers are using older product versions, either 10.3.X (Java EE 5) or 12.1.X (Java EE 6).   We will continue to support these customers, and they have an upgrade path to newer WebLogic Server and Java EE versions. Some WebLogic Server customers have adopted WebLogic Server 12.2.1.X, (Java EE 7), with differentiated capabilities we have discussed in earlier blogs.   We’re expecting to release a new 12.2.1.X patch set release, WebLogic Server 12.2.1.3, in the near future.   Stay posted for more information on this.   We will continue to leverage WebLogic Server in Oracle Cloud through the Java Cloud Service, and other PaaS and SaaS offerings.    We are also investing in new integration capabilities for running WebLogic Server in Kubernetes/Docker cloud environments.  Finally, we are planning a new release of WebLogic Server for next calendar year (CY2018) that will support the new capabilities in Java EE 8, including HTTP/2 support, JSON processing and REST support improvements.  See the Aquarium blog for more information on new Java EE 8 capabilities.  In summary, there’s a lot for WebLogic Server customers to leverage going forward, and we have a strong track record of supporting WebLogic Server customers. As to what happens to potential future releases based on future evolutions of Java EE technologies beyond Java EE 8, that will be dependent on the exploration that we as a community are about to begin, and hopefully to a robust community-driven evolution of these technologies with Oracle’s support. Stay tuned for more updates on this topic.    Safe Harbor Statement The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle.


The WebLogic Server

Using REST to Create an AGL Data Source

A recent question was raised by a customer on how to create an Active GridLink (AGL) data source using the RESTful APIs in WebLogic Server (WLS). First, you can't do it with the APIs provided in WLS release 12.1.3. New APIs were provided starting in WLS 12.2.1 that provide much more complete functionality. These APIs mirror the MBeans and are more like using WLST.

The following shell script creates an Active GridLink data source using minimal parameters. You can add more parameters as necessary. It explicitly sets the data source type, which was new in WLS 12.2.1. It uses the long-format URL, which is required for AGL. It sets up the SQL query using "ISVALID" to be used for test-connections-on-reserve, which is recommended. It assumes that auto-ONS is used, so no ONS node list is specified. FAN-enabled must be explicitly set.

c="curl -v --user weblogic:welcome1 -H X-Requested-By:MyClient -H Accept:application/json -H Content-Type:application/json"
localhost=localhost
editurl=http://${localhost}:7001/management/weblogic/latest/edit
name="JDBCGridLinkDataSource"

$c -d "{}" \
  -X POST "${editurl}/changeManager/startEdit"

$c -d "{
    'name': '${name}',
    'targets': [ { identity: [ 'servers', 'myserver' ] } ]
}" \
  -X POST "${editurl}/JDBCSystemResources?saveChanges=false"

$c -d "{
    'name': '${name}',
    'datasourceType': 'AGL'
}" \
  -X POST "${editurl}/JDBCSystemResources/${name}/JDBCResource"

$c -d "{
    'JNDINames': [ 'jndiName' ]
}" \
  -X POST "${editurl}/JDBCSystemResources/${name}/JDBCResource/JDBCDataSourceParams"

$c -d "{
    'password': 'dbpassword',
    'driverName': 'oracle.jdbc.OracleDriver',
    'url': 'jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dbhost)(PORT=dbport))(CONNECT_DATA=(SERVICE_NAME=dbservice)))'
}" \
  -X POST "${editurl}/JDBCSystemResources/${name}/JDBCResource/JDBCDriverParams"

$c -d "{
    name: 'user',
    value: 'dbuser'
}" \
  -X POST "${editurl}/JDBCSystemResources/${name}/JDBCResource/JDBCDriverParams/properties/properties"

$c -d "{
    'testTableName': 'SQL ISVALID'
}" \
  -X POST "${editurl}/JDBCSystemResources/${name}/JDBCResource/JDBCConnectionPoolParams"

$c -d "{
    'fanEnabled': true
}" \
  -X POST "${editurl}/JDBCSystemResources/${name}/JDBCResource/JDBCOracleParams"

$c -d "{}" \
  -X POST "${editurl}/changeManager/activate"
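After the activate call, you can optionally read the configuration back to confirm that the data source was created. This follows the same GET pattern used elsewhere on this blog and reuses the $c, ${editurl}, and ${name} variables defined above.

# Read back the new data source definition to verify it exists
$c -X GET "${editurl}/JDBCSystemResources/${name}?links=none"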


The WebLogic Server

Oracle Database 12.2 Feature Support with WebLogic Server

It's finally available - you can download the Oracle 12.2 database! Integration of WebLogic Server (WLS) with Oracle 12.2 has been in progress for two years. This article provides information on how Oracle 12.2 database features are supported in WebLogic Server releases.

Using Older Drivers with the 12.2 Database Server

The simplest integration of WebLogic Server with a 12.2 database is to use the Oracle driver jar files included in your WebLogic Server installation. There are no known problems or upgrade issues when using 11.2.0.3, 11.2.0.4, 12.1.0.1, or 12.1.0.2 drivers with a 12.2 database. See the Oracle JDBC FAQ for more information on driver support and features of the Oracle 12.2 database.

Using the Oracle 12.2 Drivers with the 12.2 Database Server

To use many of the new 12.2 database features, it is necessary to use the 12.2 database driver jar files. Note that the Oracle 12.2 database driver jar files are compiled for JDK 8. The earliest release of WLS that supports JDK 8 is WLS 12.1.3. The Oracle 12.2 database driver jar files cannot work with earlier versions of WLS. In earlier versions of WLS, you can use the drivers that come with the WLS installation to connect to the 12.2 DB, as explained above. This article does not apply to Fusion Middleware (FMW) deployments of WLS. It's likely that the next released version, FMW 12.2.1.3, will ship and support the Oracle 12.2 database driver jar files out of the box.

Required Oracle 12.2 Driver Files

The 12.2 Oracle database jar files are not shipped with WLS 12.1.3, 12.2.1, 12.2.1.1, and 12.2.1.2. This section lists the files required to use an Oracle 12.2 driver with these releases of WebLogic Server. These files are installed under the 12.2 database $ORACLE_HOME directory. Note: These jar files must be added at the head of the CLASSPATH used for running WebLogic Server. They must come before all of the 12.1.0.2 client jar files.

Select one of the following ojdbc files (note that these have "8" in the name instead of "7" from the earlier release). The _g jar files are used for debugging and are required if you want to enable driver-level logging. If you are using FMW, you must use the "dms" version of the jar file. WLS uses the non-"dms" version of the jar by default.

jdbc/lib/ojdbc8.jar
jdbc/lib/ojdbc8_g.jar
jdbc/lib/ojdbc8dms.jar
jdbc/lib/ojdbc8dms_g.jar

The following additional driver files are required:

jdbc/lib/simplefan.jar: Fast Application Notification (new)
ucp/lib/ucp.jar: Universal Connection Pool
opmn/lib/ons.jar: Oracle Network Server client
jlib/orai18n.jar: Internationalization support
jlib/orai18n-mapping.jar: Internationalization support
jlib/orai18n-collation.jar: Internationalization support
jlib/oraclepki.jar: Oracle Wallet support
jlib/osdt_cert.jar: Oracle Wallet support
jlib/osdt_core.jar: Oracle Wallet support
rdbms/jlib/aqapi.jar: AQ JMS support
lib/xmlparserv2_sans_jaxp_services.jar: SQLXML support
rdbms/jlib/xdb.jar: SQLXML support

Download Oracle 12.2 Database Files

If you want to run one of these releases with the 12.2 jar files, Oracle recommends that you do a custom install of the Oracle Database client kit for a minimal installation. Select the Database entry from http://www.oracle.com/technetwork/index.html. Under Oracle Database 12.2 Release 1, select the "See All" link for your OS platform. For a minimal install, under the Oracle Database 12.2 Release 1 Client heading, select the proper zip file and download it. Unzip the file and run the installer.
Select Custom, then select the Oracle JDBC/Thin interfaces, Oracle Net listener, and Oracle Advanced Security check boxes. You can also use an Administrator package client installation or a full database installation to get the jar files. The jar files are identical on all platforms.

Update the WebLogic Server CLASSPATH or PRE_CLASSPATH

To use an Oracle 12.2 database and Oracle 12.2 JDBC driver, you must update the CLASSPATH in your WebLogic Server environment. Prepend the required files specified in Required Oracle 12.2 Driver Files listed above to the CLASSPATH (before the 12.1.0.2 driver jar files). If you are using startWebLogic.sh, you also need to set the PRE_CLASSPATH. The following code sample outlines a simple shell script that updates the CLASSPATH of your WebLogic environment. Make sure ORACLE_HOME is set appropriately (e.g., something like /somedir/app/myid/product/12.2.0/client_1).

#!/bin/sh
# Source this file to add the new 12.2 jar files at the beginning of the CLASSPATH
case "`uname`" in
*CYGWIN*) SEP=";" ;;
Windows_NT) SEP=";" ;;
*) SEP=":" ;;
esac
dir=${ORACLE_HOME:?}
# We need one of the following:
# jdbc/lib/ojdbc8.jar
# jdbc/lib/ojdbc8_g.jar
# jdbc/lib/ojdbc8dms.jar
# jdbc/lib/ojdbc8dms_g.jar
if [ "$1" = "" ]
then
  ojdbc=ojdbc8.jar
else
  ojdbc="$1"
fi
case "$ojdbc" in
ojdbc8.jar|ojdbc8_g.jar|ojdbc8dms.jar|ojdbc8dms_g.jar) ojdbc=jdbc/lib/$ojdbc ;;
*) echo "Invalid argument - must be ojdbc8.jar|ojdbc8_g.jar|ojdbc8dms.jar|ojdbc8dms_g.jar"
   exit 1 ;;
esac
CLASSPATH="${dir}/${ojdbc}${SEP}$CLASSPATH"
CLASSPATH="${dir}/jdbc/lib/simplefan.jar${SEP}$CLASSPATH"
CLASSPATH="${dir}/ucp/lib/ucp.jar${SEP}$CLASSPATH"
CLASSPATH="${dir}/opmn/lib/ons.jar${SEP}$CLASSPATH"
CLASSPATH="${dir}/jlib/orai18n.jar${SEP}$CLASSPATH"
CLASSPATH="${dir}/jlib/orai18n-mapping.jar${SEP}$CLASSPATH"
CLASSPATH="${dir}/jlib/oraclepki.jar${SEP}$CLASSPATH"
CLASSPATH="${dir}/jlib/osdt_cert.jar${SEP}$CLASSPATH"
CLASSPATH="${dir}/jlib/osdt_core.jar${SEP}$CLASSPATH"
CLASSPATH="${dir}/rdbms/jlib/aqapi.jar${SEP}$CLASSPATH"
CLASSPATH="${dir}/lib/xmlparserv2_sans_jaxp_services.jar${SEP}$CLASSPATH"
CLASSPATH="${dir}/jlib/orai18n-collation.jar${SEP}$CLASSPATH"
CLASSPATH="${dir}/rdbms/jlib/xdb.jar${SEP}$CLASSPATH"

For example, save this script in your environment with the name setdb122_jars.sh. Then source the script with ojdbc8.jar:

. ./setdb122_jars.sh ojdbc8.jar
export PRE_CLASSPATH="$CLASSPATH"

WebLogic Server Integration with Oracle Database 12.2

Several Oracle Database 12.2 features have been in WLS for many releases, just waiting for the release of the new database version to start working. The following list shows these features, the WLS release that introduced them, and the Oracle Database release they require. All of these features require the 12.2 driver jar files and a 12.2 database server.

JDBC 4.2 support: WLS 12.1.3 (Oracle Database 12.2)
Service Switching: WLS 12.2.1 (Oracle Database 12.2)
XA Replay Driver: WLS 12.2.1 (Oracle Database 12.2)
Gradual Draining: WLS 12.2.1.2.0 (Oracle Database 12.1, with 12.2 enhancements)
UCP MT Shared Pool support: WLS 12.2.1.1.0 (Oracle Database 12.2)
AGL support for URL with @alias or @ldap: WLS 12.2.1.2.0 (Oracle Database 12.2)
Sharding APIs: not directly available in a WLS data source, but usable via the WLS UCP native data source type added in 12.2.1 (Oracle Database 12.2)

You should expect other articles to describe these features in more detail and additional integration in future WLS releases.
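Circling back to the CLASSPATH setup above, a small optional sanity check can confirm that the 12.2 ojdbc jar really ended up at the head of the CLASSPATH before you start WebLogic Server. This is only a sketch; setdb122_jars.sh is the script shown earlier, and SEP is the separator variable it sets.

# Source the script, set PRE_CLASSPATH, and print the first CLASSPATH entry
. ./setdb122_jars.sh ojdbc8.jar
export PRE_CLASSPATH="$CLASSPATH"
echo "$PRE_CLASSPATH" | tr "${SEP}" "\n" | head -1    # should print .../jdbc/lib/ojdbc8.jar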



The Oracle Container Registry has gone live!

We are pleased to announce that the Oracle Container Registry is now available. The Container Registry is designed to provide simple access to Oracle products for use in Docker containers. The Oracle WebLogic Server 12.2.1.1 and 12.2.1.2 images are now available on the Oracle Container Registry. Currently, access to the Oracle Container Registry is limited to customers in the United States, United Kingdom, and Australia.

How Do I Log In to the Oracle Container Registry?

Point your browser at https://container-registry.oracle.com. If this is the first time you're visiting the Container Registry, you will need to associate your existing Oracle SSO credentials or create a new account. Click the "Register" button and select either "I Already Have an Oracle Single Sign On Account" to associate your existing account, or "I Don't Have an Oracle Single Sign On Account" to create a new account. Once you have an account, click the login button to log into the Container Registry. You will be prompted to read and accept the license agreement. Note that acceptance of the license agreement is required to download images using the Docker command-line tool and that acceptance only persists for eight (8) hours. After accepting the license, you can browse the available business areas and images to review which images you'd like to pull from the registry using the Docker client.

Pull the WebLogic Server Images

The Oracle WebLogic Server images in the registry are install/empty domain images for WebLogic Server 12.2.1.1 and 12.2.1.2. For every version of WebLogic Server there are two install images: one created with the generic installer and one with the quick installer. To pull the image from the registry, run the command:

# docker pull container-registry.oracle.com/middleware/weblogic

Get Started

To create an empty domain with an Admin Server running, you simply call:

# docker run -d container-registry.oracle.com/middleware/weblogic:12.2.1.2

The WebLogic Server image will invoke createAndStartEmptyDomain.sh as the default CMD, and the Admin Server will be running on port 7001. When running multiple containers, map port 7001 to a different port on the host:

# docker run -d -p 7001:7001 container-registry.oracle.com/middleware/weblogic:12.2.1.2

To run a second container on port 7002:

# docker run -d -p 7002:7001 container-registry.oracle.com/middleware/weblogic:12.2.1.2

Now you can access the Admin Server Web Console at http://localhost:7001/console.

Customize Your WebLogic Server Domains

You might want to customize your own WebLogic Server domain by extending this image. The best way to create your own domain is by writing your own Dockerfiles and using the WebLogic Scripting Tool (WLST) to create clusters, data sources, JMS servers, and security realms, and to deploy applications. In your Dockerfile you will extend the WebLogic Server image with the FROM container-registry.oracle.com/middleware/weblogic:12.2.1.2 directive. We provide a variety of examples (Dockerfiles, shell scripts, and WLST scripts) to create domains, configure resources, deploy applications, and use a load balancer in GitHub.
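Because the images are license-gated, you will typically also need to authenticate the Docker client with the same Oracle SSO account before pulling. A short sketch follows; the tag is one of the versions listed above, and the login step assumes you have already accepted the license in the web UI.

# Log in with your Oracle SSO credentials, then pull and run a specific tag
docker login container-registry.oracle.com
docker pull container-registry.oracle.com/middleware/weblogic:12.2.1.2
docker run -d -p 7001:7001 container-registry.oracle.com/middleware/weblogic:12.2.1.2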


Configuring Datasource Fatal Error Codes

There are well-known error codes on JDBC operations that can always be interpreted as the database shutting down, already down, or a configuration problem. In this case, we don't want to keep the connection around, because we know that subsequent operations will fail and they might hang or take a long time to complete. These error codes can be configured in the datasource configuration using the "fatal-error-codes" value on the Connection Pool Parameters. The value is a comma-separated list of error codes. If a SQLException is seen on a JDBC operation and sqlException.getErrorCode() matches one of the configured codes, the connection is closed instead of being returned to the connection pool. Note that the earlier OC4J application server closed all connections in the pool when one of these errors occurred on any connection. In the WLS implementation, we chose to close only the connection that got the fatal error. This allows you to add error codes that are specific to a connection going bad, in addition to codes that mean the database is unavailable.

The following error codes are pre-configured and cannot be disabled. You can provide additional error codes for these or other drivers on individual datasources.

Driver Type | Default Fatal Error Codes
Oracle Thin Driver | 3113, 3114, 1033, 1034, 1089, 1090, 17002
WebLogic or IBM DB2 driver | -4498, -4499, -1776, -30108, -30081, -30080, -6036, -1229, -1224, -1035, -1034, -1015, -924, -923, -906, -518, -514, 58004
WebLogic or IBM Informix driver | -79735, -79716, -43207, -27002, -25580, -4499, -908, -710, 43012

The following is a WLST script to add a fatal error code string to an existing datasource. The script untargets the datasource, sets the codes, and then retargets it.

# java weblogic.WLST fatalerrorcodes.py
import sys, socket, os
import jarray
from javax.management import ObjectName
hostname = socket.gethostname()
datasource = "JDBC GridLink Data Source-0"
connect("weblogic", "welcome1", "t3://" + hostname + ":7001")
edit()
startEdit()
cd("/JDBCSystemResources/" + datasource)
targets = get("Targets")
set("Targets", jarray.array([], ObjectName))
save()
activate()
startEdit()
cd("/JDBCSystemResources/" + datasource + "/JDBCResource/" + datasource + "/JDBCConnectionPoolParams/" + datasource)
set("FatalErrorCodes", "1111,2222")
save()
activate()
startEdit()
cd("/JDBCSystemResources/" + datasource)
set("Targets", targets)
save()
activate()

As an experiment, I tried this with REST.
localhost=localhost
editurl=http://${localhost}:7001/management/weblogic/latest/edit
name="JDBC%20GridLink%20Data%20Source%2D0"

curl -v \
--user weblogic:welcome1 \
-H X-Requested-By:MyClient \
-H Accept:application/json \
-X GET \
"${editurl}/JDBCSystemResources/${name}/JDBCResource/JDBCConnectionPoolParams?links=none"

curl -v \
--user weblogic:welcome1 \
-H X-Requested-By:MyClient \
-H Accept:application/json \
-H Content-Type:application/json \
-d "{}" \
-X POST "${editurl}/changeManager/startEdit"

curl -v \
--user weblogic:welcome1 \
-H X-Requested-By:MyClient \
-H Accept:application/json \
-H Content-Type:application/json \
-d "{ targets: []}" \
-X POST "${editurl}/JDBCSystemResources/${name}"

curl -v \
--user weblogic:welcome1 \
-H X-Requested-By:MyClient \
-H Accept:application/json \
-H Content-Type:application/json \
-d "{ fatalErrorCodes: '1111,2222'}" \
-X POST \
"${editurl}/JDBCSystemResources/${name}/JDBCResource/JDBCConnectionPoolParams"

curl -v \
--user weblogic:welcome1 \
-H X-Requested-By:MyClient \
-H Accept:application/json \
-H Content-Type:application/json \
-d "{ targets: [ { identity: [ servers, 'myserver' ] } ]}" \
-X POST "${editurl}/JDBCSystemResources/${name}"

curl -v \
--user weblogic:welcome1 \
-H X-Requested-By:MyClient \
-H Accept:application/json \
-H Content-Type:application/json \
-d "{}" \
-X POST "${editurl}/changeManager/activate"

curl -v \
--user weblogic:welcome1 \
-H X-Requested-By:MyClient \
-H Accept:application/json \
-X GET "${editurl}/JDBCSystemResources/${name}?links=none"


AGL Datasource Support for URL with @alias or @LDAP

The Oracle driver has the ability to have an @alias string in the connection string URL so that information like the host, port, and service name can be kept in an external tnsnames.ora file that is shared across many datasources. My perception is that this has grown in popularity in recent years to make management of the connection information easier (one place per computer). In an effort to centralize the information further, it's possible to use an @LDAP format in the URL to get the connection information from a Lightweight Directory Access Protocol (LDAP) server. See the Database JDBC Developer's Guide, https://docs.oracle.com/database/121/JJDBC/urls.htm#JJDBC28267, for more information.

While this format of URL was supported for Generic and Multi Data Sources, it was not supported for Active GridLink (AGL) datasources. An AGL datasource URL was required to have a (SERVICE_NAME=value) as part of the long-format URL. Starting in WebLogic Server 12.2.1.2.0 (also known as PS2), the URL may also use an @alias or @ldap format. The short format without an @alias or @LDAP is still not supported and will generate an error (and not work). It is highly recommended that you use a database service name in the stored alias or LDAP entry; do not use a SID. To optimize AGL performance, the long-format URL stored in the alias or LDAP entry should include features like load balancing, retry count and delay, and so on (a long-format example is sketched at the end of this article).

Alias example:

1. Create a tnsnames.ora file with

tns_entry=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=RAC-scan-address)(PORT=port))(CONNECT_DATA=(SERVICE_NAME=service)))

Normally, it is created in $ORACLE_HOME/network/admin.

2. Create your WLS datasource descriptor using a URL like "jdbc:oracle:thin:/@tns_entry".

3. Add the following system property to the WebLogic Server command line:

-Doracle.net.tns_admin=$ORACLE_HOME/network/admin

LDAP example:

1. Create your WLS datasource descriptor for LDAP or LDAPS using a URL like "jdbc:oracle:thin:@ldap://ldap.example.com:7777/sales,cn=OracleContext,dc=com".

JDBC Driver Requirement

Here's the catch: you need to use a smarter ucp.jar file to support this functionality. There are two options:

- Get a WLS patch to the 12.1.0.2 ucp.jar file based on Bug 23190035 - UCP DOESN'T SUPPORT ALIAS URL FOR RAC CLUSTER
- Wait to run on an Oracle Database 12.2 ucp.jar file. I'll be writing a blog about that when it's available.
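As a reference point, a long-format tnsnames.ora entry along the lines recommended above might look like the following. This is a sketch only; the host, port, service name, and the timeout/retry values are placeholders that should be tuned for your environment:

tns_entry =
  (DESCRIPTION =
    (CONNECT_TIMEOUT = 90)(RETRY_COUNT = 20)(RETRY_DELAY = 3)(TRANSPORT_CONNECT_TIMEOUT = 3)
    (ADDRESS_LIST =
      (LOAD_BALANCE = on)
      (ADDRESS = (PROTOCOL = TCP)(HOST = RAC-scan-address)(PORT = 1521)))
    (CONNECT_DATA = (SERVICE_NAME = myservice)))

The same string can be stored in the LDAP entry instead of tnsnames.ora; in either case the AGL datasource URL stays the simple @alias (or @ldap) reference shown above.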


WebLogic Server 12.2.1.2 Datasource Gradual Draining

In October 2015, we delivered Oracle WebLogic Server 12.2.1 as part of the overall Oracle Fusion Middleware 12.2.1 release, and earlier in 2016 we delivered the first patch set release, 12.2.1.1. This week, the second patch set, 12.2.1.2, is available. New WebLogic Server 12.2.1.2 installers are now posted on the Oracle Technology Network and Oracle Software Delivery Cloud, and new documentation has been made available. There are a couple of new datasource features hidden there. One of them is called "gradual draining."

When planned maintenance occurs on an Oracle RAC configuration, a planned down-service event is processed by an Active GridLink datasource using that database. By default, all unreserved connections in the pool are closed, and borrowed connections are closed when returned to the pool. This can cause uneven performance because:

- New connections need to be created on the alternative instances.
- A logon storm on the other instances can occur.

It is desirable to gradually drain connections instead of closing them all immediately. The application can define the length of the draining period during which connections are closed. It is configured using the weblogic.jdbc.drainTimeout value in the connection properties for the datasource. As usual, it can be set in the console, EM, or WLST (a short WLST sketch appears at the end of this article). The following figure shows the administration console.

The result is that connections are closed in a step-wise fashion every 5 seconds. If the application is actively using connections, then they will be created on the alternative instances at a similar rate. The following figure shows a perfect demonstration of draining and creating new connections over a 60-second period, using a sample application that generates constant load. Without gradual draining, the current capacity on the down instance would drop off immediately, similar to the LBA percentages, and connections would be created on the alternative instance as quickly as possible.

There are quite a few details about the interaction with the RAC service life-cycle, datasource suspension and shutdown, connection gravitation, etc. For more details, see Gradual Draining in Administering JDBC Data Sources for Oracle WebLogic Server. Like several other areas in WLS datasource support, this feature will be automatically enhanced when running with the Oracle Database 12.2 driver and server. More about that when the 12.2 release ships.
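For completeness, here is a minimal WLST sketch of setting the drainTimeout connection property on an existing Active GridLink datasource. The datasource name, credentials, and the 60-second value are placeholders for illustration, not values taken from the article:

# java weblogic.WLST set_drain_timeout.py
import socket
hostname = socket.gethostname()
datasource = "myAGLDataSource"
connect("weblogic", "welcome1", "t3://" + hostname + ":7001")
edit()
startEdit()
# Connection properties live under JDBCDriverParams/Properties in the JDBC descriptor
cd("/JDBCSystemResources/" + datasource + "/JDBCResource/" + datasource + "/JDBCDriverParams/" + datasource + "/Properties/" + datasource)
prop = create("weblogic.jdbc.drainTimeout", "Property")
prop.setValue("60")
save()
activate()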


Uploading AppToCloud Export Files to Oracle Storage Cloud Service

Before you can provision a new JCS instance using AppToCloud, the files that are generated from the on-premise healthcheck and export operations need to be uploaded to a container on the Oracle Storage Cloud. There are several options available to perform this task.

Using a2c-export options

Probably the simplest option is to perform the upload task as part of the operation of the a2c-export utility. The a2c-export utility provides a mechanism to automatically upload the generated files as part of its normal operation when the relevant parameters are passed that indicate the Oracle Storage Cloud container to use and a username that has the privileges to perform the task.

Usage: a2c-export.sh [-help] -oh <oracle-home> -domainDir <domain-dir>
          -archiveFile <archive-file> [-clusterToExport <cluster-name>]
          [-clusterNonClusteredServers <cluster-name>] [-force]
          [-cloudStorageContainer <cs-container>]
          [-cloudStorageUser <cs-user>]

Below is an example of the output from an execution of a2c-export that shows how to use the upload option:

$ ./oracle_jcs_app2cloud/bin/a2c-export.sh \
  -oh /Users/sbutton/Desktop/AppToCloudDemo/wls_1213 \
  -domainDir /Users/sbutton/Desktop/AppToCloudDemo/wls_1213/user_projects/domains/demo_domain \
  -archiveFile /tmp/demo_domain_export/demo_domain.zip \
  -cloudStorageContainer Storage-paas123/a2csc \
  -cloudStorageUser fred.bloggs@demo.com

JDK version is 1.8.0_60-b27
A2C_HOME is /Users/sbutton/Desktop/AppToCloudDemo/oracle_jcs_app2cloud
/Library/Java/JavaVirtualMachines/jdk1.8.0_60.jdk/Contents/Home/bin/java -Xmx512m -DUseSunHttpHandler=true -cp /Users/sbutton/Desktop/AppToCloudDemo/oracle_jcs_app2cloud/jcs_a2c/modules/features/jcsa2c_lib.jar -Djava.util.logging.config.class=oracle.jcs.lifecycle.util.JCSLifecycleLoggingConfig oracle.jcs.lifecycle.discovery.AppToCloudExport -oh /Users/sbutton/Desktop/AppToCloudDemo/wls_1213 -domainDir /Users/sbutton/Desktop/AppToCloudDemo/wls_1213/user_projects/domains/demo_domain -archiveFile /tmp/demo_domain_export/demo_domain.zip -cloudStorageContainer Storage-paas123/a2csc -cloudStorageUser fred.bloggs@demo.com
The a2c-export program will write its log to /Users/sbutton/Desktop/AppToCloudDemo/oracle_jcs_app2cloud/logs/jcsa2c-export.log
Enter Storage Cloud password:
####<07/09/2016 3:11:07 PM> <INFO> <AppToCloudExport> <getModel> <JCSLCM-02005> <Creating new model for domain /Users/sbutton/Desktop/AppToCloudDemo/wls_1213/user_projects/domains/demo_domain>
####<07/09/2016 3:11:07 PM> <INFO> <EnvironmentModelBuilder> <populateOrRefreshFromEnvironment> <FMWPLATFRM-08552> <Try to discover a WebLogic Domain in offline mode>
####<07/09/2016 3:11:16 PM> <INFO> <EnvironmentModelBuilder> <populateOrRefreshFromEnvironment> <FMWPLATFRM-08550> <End of the Environment discovery>
####<07/09/2016 3:11:16 PM> <WARNING> <ModelNotYetImplementedFeaturesScrubber> <transform> <JCSLCM-00579> <Export for Security configuration is not currently implemented and must be manually configured on the target domain.>
####<07/09/2016 3:11:16 PM> <INFO> <AppToCloudExport> <archiveApplications> <JCSLCM-02003> <Adding application to the archive: ConferencePlanner from /Users/sbutton/Desktop/AppToCloudDemo/ConferencePlanner.war>
####<07/09/2016 3:11:17 PM> <INFO> <AppToCloudExport> <run> <JCSLCM-02009> <Successfully exported model and artifacts to /tmp/demo_domain_export/demo_domain.zip. Overrides file written to /tmp/demo_domain_export/demo_domain.json>
####<07/09/2016 3:11:17 PM> <INFO> <AppToCloudExport> <run> <JCSLCM-02028> <Uploading override file to cloud storage from /tmp/demo_domain_export/demo_domain.json>
####<07/09/2016 3:11:22 PM> <INFO> <AppToCloudExport> <run> <JCSLCM-02028> <Uploading archive file to cloud storage from /tmp/demo_domain_export/demo_domain.zip>
####<07/09/2016 3:11:29 PM> <INFO> <AppToCloudExport> <run> <JCSLCM-02009> <Successfully exported model and artifacts to https://paas123.storage.oraclecloud.com. Overrides file written to Storage-paas12c/a2csc/demo_domain.json>

Activity Log for EXPORT

Informational Messages:

1. JCSLCM-02030: Uploaded override file to Oracle Cloud Storage container Storage-paas123/a2csc
2. JCSLCM-02030: Uploaded archive file to Oracle Cloud Storage container Storage-paas123/a2csc

Features Not Yet Implemented Messages:

1. JCSLCM-00579: Export for Security configuration is not currently implemented and must be manually configured on the target domain.

An HTML version of this report can be found at /tmp/demo_domain_export/reports/demo_domain-export-activityreport.html

Successfully exported model and artifacts to https://paas109.storage.oraclecloud.com. Overrides file written to Storage-paas109/fubar/demo_domain.json

a2c-export completed successfully (exit code = 0)

Using the Oracle Storage Cloud command line tool

Another relatively easy approach to performing the upload task is to use the Oracle Storage Cloud command line utility. This enables you to upload files directly from your local environment without needing to understand and use the REST API. Download the command line interface utility from http://www.oracle.com/technetwork/topics/cloud/downloads/index.html#cli and follow the instructions on how to extract it. The upload client utility is packaged as an executable JAR file, requiring JRE 7+ to execute.

The mandatory parameters the utility requires are shown below:

$ java -jar uploadcli.jar -help
Version 2.0.0
----Required Parameters----
-url <url>                  Oracle Storage Cloud Service REST endpoint.
                            You can get this URL from Oracle Cloud My Services.
-user <user>                User name for the Oracle Storage Cloud Service account.
-container <name>           Oracle Storage Cloud Service container for the uploaded file(s).
<FILENAME>                  File to upload to Oracle Storage Cloud Service.
                            Specify '.' to upload all files in current directory.
                            Specify directory name to upload all files in the directory.
                            For multi-file uploads, separate file names with ','.
                            This MUST be the last parameter specified.

To perform the upload of the generated files, simply run the utility and provide the relevant parameter values. Below is an example of uploading the demo_domain generated files to an Oracle Storage Cloud container:

$ java -jar /tmp/uploadcli.jar -url https://paas123.storage.oraclecloud.com/v1/Storage-paas123 \
  -user steve.button@oracle.com \
  -container a2csc \
  demo_domain_export/demo_domain.json,demo_domain_export/demo_domain.zip

Enter your password: **********
INFO:Authenticating to service ...
INFO:Uploading File : /Users/sbutton/Desktop/AppToCloudDemo/Exports/demo_domain_export/demo_domain.json ...
INFO:File [ demo_domain.json ] uploaded successfully! - Data Transfer Rate: 2 KB/s
INFO:Uploading File : /Users/sbutton/Desktop/AppToCloudDemo/Exports/demo_domain_export/demo_domain.zip ...
INFO:File [ demo_domain.zip ] uploaded successfully! - Data Transfer Rate: 665 KB/s
INFO:Files Uploaded: 2
INFO:Files Skipped : 0
INFO:Files Failed  : 0

Next Steps

Once the generated files have been uploaded to Oracle Storage Cloud, the JCS provisioning process can be used to provision a new JCS instance that will be representative of the original on-premise domain.
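If neither of the above options fits your environment (for example, no JRE 7+ on the machine holding the export files), the files can also be pushed with the Oracle Storage Cloud Service REST API. The following is a rough sketch only: it assumes the classic, Swift-style authentication endpoint of Oracle Storage Cloud Service, and the identity domain, user, container, and token values are placeholders.

# Request an auth token (classic Swift-style authentication)
curl -sS -i -X GET \
  -H "X-Storage-User: Storage-paas123:fred.bloggs@demo.com" \
  -H "X-Storage-Pass: MyPassword" \
  https://paas123.storage.oraclecloud.com/auth/v1.0
# Copy the X-Auth-Token value from the response headers, then upload each file
curl -sS -X PUT \
  -H "X-Auth-Token: AUTH_tk..." \
  -T demo_domain_export/demo_domain.zip \
  https://paas123.storage.oraclecloud.com/v1/Storage-paas123/a2csc/demo_domain.zip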


Using AppToCloud to Migrate an On-Premise Domain to the Oracle Cloud

Part One - On-Premise Migration

Moving your WebLogic Server domains to the Oracle Cloud just got a whole shebang easier (#!)

With the introduction of the AppToCloud tooling in Oracle Java Cloud Service 16.3.5, you can now simply and easily migrate a configured on-premise domain to an equivalent Java Cloud Service instance in the Oracle Cloud, complete with the same set of configured settings, resources and deployed applications.

A key component of the AppToCloud landscape is the on-premise tooling, which is responsible for inspecting a domain to check its suitability for moving to the Oracle Cloud and then creating an export file containing a model of the domain topology (clusters with managed servers), local settings such as CLASSPATH entries and VM arguments, the set of configured WebLogic Server services such as data sources, and deployment units such as Java EE applications and shared libraries.

In this first of several blogs, I will provide an overview of how the AppToCloud on-premise tooling is used to create an export of a domain that is ready to be uploaded to the Oracle Cloud and then provisioned as an Oracle Java Cloud Service instance.

Download and Install Tooling

The AppToCloud on-premise tooling is used to inspect and export a domain. The tooling needs to be downloaded from Oracle and installed on the server where the source domain is located.

Download the AppToCloud a2c-zip-installer.zip file from the Oracle Cloud Downloads page: http://www.oracle.com/technetwork/topics/cloud/downloads/index.html

Copy the a2c-zip-installer.zip file onto the machine hosting the on-premise installation and domain, and unzip it into a relevant directory.

$ unzip a2c-zip-installer.zip
Archive:  a2c-zip-installer.zip
  inflating: oracle_jcs_app2cloud/jcs_a2c/modules/features/model-api.jar
  inflating: oracle_jcs_app2cloud/bin/a2c-healthcheck.sh
  inflating: oracle_jcs_app2cloud/jcs_a2c/modules/wlst.jar
  inflating: oracle_jcs_app2cloud/jcs_a2c/modules/healthcheck.jar
  inflating: oracle_jcs_app2cloud/jcs_a2c/modules/commons-lang-2.6.jar
  ...
  inflating: oracle_jcs_app2cloud/oracle_common/modules/com.fasterxml.jackson.core.jackson-databind_2.7.1.jar
  inflating: oracle_jcs_app2cloud/bin/a2c-export.cmd
  inflating: oracle_jcs_app2cloud/bin/a2c-export.sh
  inflating: oracle_jcs_app2cloud/jcs_a2c/modules/jcsprecheck-api.jar
  inflating: oracle_jcs_app2cloud/jcs_a2c/modules/jcsprecheck-impl.jar
  inflating: oracle_jcs_app2cloud/oracle_common/modules/fmwplatform/common/envspec.jar

Verify the installation is successful by executing one of the utilities, such as the a2c-export utility, and inspecting the help text.

$ ./oracle_jcs_app2cloud/bin/a2c-export.sh -help
JDK version is 1.8.0_60-b27
A2C_HOME is /tmp/oracle_jcs_app2cloud
/Library/Java/JavaVirtualMachines/jdk1.8.0_60.jdk/Contents/Home/bin/java -Xmx512m -DUseSunHttpHandler=true -cp /tmp/oracle_jcs_app2cloud/jcs_a2c/modules/features/jcsa2c_lib.jar -Djava.util.logging.config.class=oracle.jcs.lifecycle.util.JCSLifecycleLoggingConfig oracle.jcs.lifecycle.discovery.AppToCloudExport -help
The a2c-export program will write its log to /private/tmp/oracle_jcs_app2cloud/logs/jcsa2c-export.log

Usage: a2c-export.sh [-help] -oh <oracle-home> -domainDir <domain-dir>
          -archiveFile <archive-file> [-clusterToExport <cluster-name>]
          [-clusterNonClusteredServers <cluster-name>] [-force]
          ...
Step 1: Run a healthcheck on the source domain

The first step in performing an AppToCloud migration is to perform a healthcheck of the on-premise domain using the a2c-healthcheck utility. The purpose of the healthcheck is to connect to the specified on-premise domain, inspect its contents, generate a report for any issues it discovers that may prevent the migration from being successful and, finally, store the results in a directory for the export utility to use.

The healthcheck is run as an online operation. This requires that the AdminServer of the specified domain be running and that the appropriate connection details be supplied as parameters to the healthcheck utility. For security considerations, use of the password as a parameter should be avoided, as the healthcheck utility will securely prompt for the password when needed.

$ ./oracle_jcs_app2cloud/bin/a2c-healthcheck.sh \
  -oh /Users/sbutton/Desktop/AppToCloudDemo/wls_1213 \
  -adminUrl t3://localhost:7001 \
  -adminUser weblogic \
  -outputDir /tmp/demo_domain_export

JDK version is 1.8.0_60-b27
A2C_HOME is /Users/sbutton/Desktop/AppToCloudDemo/oracle_jcs_app2cloud
/Library/Java/JavaVirtualMachines/jdk1.8.0_60.jdk/Contents/Home/bin/java -Xmx512m -cp /Users/sbutton/Desktop/AppToCloudDemo/oracle_jcs_app2cloud/jcs_a2c/modules/features/jcsa2c_lib.jar -Djava.util.logging.config.class=oracle.jcs.lifecycle.util.JCSLifecycleLoggingConfig oracle.jcs.lifecycle.healthcheck.AppToCloudHealthCheck -oh /Users/sbutton/Desktop/AppToCloudDemo/wls_1213 -adminUrl t3://localhost:7001 -outputDir /tmp/demo_domain_export -adminUser weblogic
The a2c-healthcheck program will write its log to /Users/sbutton/Desktop/AppToCloudDemo/oracle_jcs_app2cloud/logs/jcsa2c-healthcheck.log
Enter password: ***********
Checking Domain Health
Connecting to domain
Connected to the domain demo_domain
Checking Java Configuration
...
checking server runtime : conference_server_one
...
checking server runtime : AdminServer
...
checking server runtime : conference_server_two
Done Checking Java Configuration
Checking Servers Health
Done checking Servers Health
Checking Applications Health
Checking ConferencePlanner
Done Checking Applications Health
Checking Datasource Health
Done Checking Datasource Health
Done Checking Domain Health

Activity Log for HEALTHCHECK

Informational Messages:

1. JCSLCM-04037: Healthcheck Completed

An HTML version of this report can be found at /tmp/demo_domain_export/reports/demo_domain-healthcheck-activityreport.html

Output archive saved as /tmp/demo_domain_export/demo_domain.zip. You can use this archive for the a2c-export tool.

Any findings from the healthcheck utility are reported as messages, including any items that need attention or that aren't supported with the current version of AppToCloud. A static report is also generated that can be viewed after the execution of the utility, showing the details of the on-premise domain and any messages generated from the healthcheck.

Note: it is mandatory to perform a healthcheck on the on-premise domain before an export operation can be performed. The export operation requires the output from the healthcheck to perform its tasks.
Step 2: Export the source domain

Once a successful healthcheck has been performed on the on-premise domain, it is ready to be exported into a form that can be uploaded to the Oracle Cloud and used in the provisioning process for new Oracle Java Cloud Service instances. Besides the path to the on-premise domain and the location of the WebLogic Server installation, the export utility uses the output from the healthcheck operation to drive the export operation, storing the final output in the same file.

$ ./oracle_jcs_app2cloud/bin/a2c-export.sh \
  -oh /Users/sbutton/Desktop/AppToCloudDemo/wls_1213 \
  -domainDir /Users/sbutton/Desktop/AppToCloudDemo/wls_1213/user_projects/domains/demo_domain \
  -archiveFile /tmp/demo_domain_export/demo_domain.zip

JDK version is 1.8.0_60-b27
A2C_HOME is /Users/sbutton/Desktop/AppToCloudDemo/oracle_jcs_app2cloud
/Library/Java/JavaVirtualMachines/jdk1.8.0_60.jdk/Contents/Home/bin/java -Xmx512m -DUseSunHttpHandler=true -cp /Users/sbutton/Desktop/AppToCloudDemo/oracle_jcs_app2cloud/jcs_a2c/modules/features/jcsa2c_lib.jar -Djava.util.logging.config.class=oracle.jcs.lifecycle.util.JCSLifecycleLoggingConfig oracle.jcs.lifecycle.discovery.AppToCloudExport -oh /Users/sbutton/Desktop/AppToCloudDemo/wls_1213 -domainDir /Users/sbutton/Desktop/AppToCloudDemo/wls_1213/user_projects/domains/demo_domain -archiveFile /tmp/demo_domain_export/demo_domain.zip
The a2c-export program will write its log to /Users/sbutton/Desktop/AppToCloudDemo/oracle_jcs_app2cloud/logs/jcsa2c-export.log
####<31/08/2016 12:33:12 PM> <INFO> <AppToCloudExport> <getModel> <JCSLCM-02005> <Creating new model for domain /Users/sbutton/Desktop/AppToCloudDemo/wls_1213/user_projects/domains/demo_domain>
####<31/08/2016 12:33:12 PM> <INFO> <EnvironmentModelBuilder> <populateOrRefreshFromEnvironment> <FMWPLATFRM-08552> <Try to discover a WebLogic Domain in offline mode>
####<31/08/2016 12:33:21 PM> <INFO> <EnvironmentModelBuilder> <populateOrRefreshFromEnvironment> <FMWPLATFRM-08550> <End of the Environment discovery>
####<31/08/2016 12:33:21 PM> <WARNING> <ModelNotYetImplementedFeaturesScrubber> <transform> <JCSLCM-00579> <Export for Security configuration is not currently implemented and must be manually configured on the target domain.>
####<31/08/2016 12:33:21 PM> <INFO> <AppToCloudExport> <archiveApplications> <JCSLCM-02003> <Adding application to the archive: ConferencePlanner from /Users/sbutton/Desktop/AppToCloudDemo/ConferencePlanner.war>
####<31/08/2016 12:33:22 PM> <INFO> <AppToCloudExport> <run> <JCSLCM-02009> <Successfully exported model and artifacts to /tmp/demo_domain_export/demo_domain.zip. Overrides file written to /tmp/demo_domain_export/demo_domain.json>

Activity Log for EXPORT

Features Not Yet Implemented Messages:

1. JCSLCM-00579: Export for Security configuration is not currently implemented and must be manually configured on the target domain.

An HTML version of this report can be found at /tmp/demo_domain_export/reports/demo_domain-export-activityreport.html

Successfully exported model and artifacts to /tmp/demo_domain_export/demo_domain.zip. Overrides file written to /tmp/demo_domain_export/demo_domain.json

Again, messages from the execution of the export operation are reported to the console, including any items that need further attention or that aren't supported with the current version of AppToCloud. A static report is also generated that can be viewed after the execution of the export of the on-premise domain.
The export utility also generates an overrides file which externalizes all of the major settings that were extracted from the on-premise domain.  This file can be  modified locally to change the values of any of the provided settings and supplied with the domain export to the Oracle Java Cloud Service provisioning process.  Any modified settings in the overrides file will be used in place of the original values stored in the domain export when the new instance is provisioned.     {     "model" : {       "databases" : [ {         "id" : "demo_domain-database",         "jdbcConnectInfos" : [ {           "id" : "demo_domain-database-jdbc-0",           "url" : "jdbc:oracle:thin:@localhost:1521:xe",           "driverName" : "oracle.jdbc.xa.client.OracleXADataSource",           "xa" : true         } ]       } ],       "domains" : [ {         "id" : "demo_domain-domain",         "name" : "demo_domain",         "domainProfile" : {           "name" : "demo_domain",           "servers" : [ {             "id" : "AdminServer",             "isAdminServer" : "true"           }, {             "id" : "conference_server_one",             "isAdminServer" : "false"           }, {             "id" : "conference_server_two",             "isAdminServer" : "false"           } ],           "clusters" : [ {             "id" : "conference_cluster",             "serverRefs" : [ "conference_server_one", "conference_server_two" ]           } ]         },         "serverBindings" : [ {           "id" : "demo_domain-domain/AdminServer",           "serverRef" : "AdminServer",           "name" : "AdminServer"         }, {           "id" : "demo_domain-domain/conference_server_one",           "serverRef" : "conference_server_one",           "name" : "conference_server_one"         }, {           "id" : "demo_domain-domain/conference_server_two",           "serverRef" : "conference_server_two",           "name" : "conference_server_two"         } ],         "clusterBindings" : [ {           "clusterRef" : "conference_cluster",           "name" : "conference_cluster"         } ],         "dataSourceBindings" : [ {           "id" : "Conference Planner DataSource",           "dataSourceName" : "Conference Planner DataSource",           "dataSourceType" : "Generic",           "genericDataSourceBinding" : {             "jdbcConnectInfoRef" : "demo_domain-database-jdbc-0",             "credentialRef" : "jdbc/conference"           }         } ]       } ]     },     "extraInfo" : {       "domainVersion" : "12.1.3.0.0",       "a2cClientVersion" : "0.7.6",       "a2cClientCompatibilityVersion" : "1.0",       "a2cArchiveLocation" : {         "url" : "file:/tmp/demo_domain_export/demo_domain.zip"       },       "jvmInfos" : [ {         "serverId" : "AdminServer",         "maxHeapSize" : "512m"       }, {         "serverId" : "conference_server_one",         "maxHeapSize" : "512m"       }, {         "serverId" : "conference_server_two",         "maxHeapSize" : "512m"       } ],       "activityLog" : {         "healthCheck" : {           "infoMessages" : [ {             "component" : { },             "message" : "Healthcheck Completed"           } ]         },         "export" : {           "notYetSupportedMessages" : [ {             "component" : { },             "message" : "Export for Security configuration is not currently implemented and must be manually configured on the target domain."           
} ]         }       }     }   }

Uploading to the Oracle Cloud

When the JCS provisioning process for an AppToCloud migration commences, it will load the on-premise domain export and overrides file from an Oracle Storage Cloud Service container. The a2c-export tool can automatically upload the generated archive and overrides file to the Oracle Storage Cloud Service as part of the export by specifying additional parameters that identify the cloud storage container to use.

./oracle_jcs_app2cloud/bin/a2c-export.sh \
  -oh /Users/sbutton/Desktop/AppToCloudDemo/wls_1213 \
  -domainDir /Users/sbutton/Desktop/AppToCloudDemo/wls_1213/user_projects/domains/demo_domain \
  -archiveFile /tmp/demo_domain_export/demo_domain.zip \
  -cloudStorageContainer "Storage-StorageEval01admin" \
  -cloudStorageUser "StorageEval01admin.Storageadmin"

If you choose not to use the a2c-export tool to upload the archive and overrides files to the Oracle Storage Cloud Service, then you will need to perform the upload using its REST API.

Next Steps

At this point the work with the on-premise domain is complete. The next step, after uploading the archive and overrides files to the Oracle Storage Cloud Service, is to go to the Java Cloud Service console to provision a new instance using the AppToCloud option. A Web UI will walk through the steps required, gathering the information needed to create the service. Once the information is provided, the Oracle Java Cloud Service provisioning process will commence.


Introducing AppToCloud

> Typical Workflow for Migrating Applications to Oracle Java Cloud Service

Oracle's AppToCloud infrastructure enables you to quickly migrate existing Java applications and their supporting Oracle WebLogic Server resources to Oracle Java Cloud Service. The process consists of several tasks that fall into two main categories, On-Premises and Cloud.

On-Premises

The on-premises tasks involve generating an archive of your existing Oracle WebLogic Server environment and applications and importing it into Oracle Cloud.

1. Verify the prerequisites: ensure that your existing Oracle WebLogic Server domain meets the requirements of the AppToCloud tools.

2. Install the tools: download and install the AppToCloud command line tools on the on-premises machine hosting your domain's Administration Server.

3. Perform a health check: use the AppToCloud command line tools to validate your on-premises Oracle WebLogic Server domain and applications. This process ensures that your domain and its applications are in a healthy state. These tools also identify any WebLogic Server features in your domain that the AppToCloud framework cannot automatically migrate to Oracle Java Cloud Service. Note: This step is mandatory. It cannot be skipped.

4. Export the domain to Oracle Cloud: use the AppToCloud command line tools to capture your on-premises WebLogic Server domain and applications as a collection of files. These files are uploaded by the tool to a storage container that you have previously created in Oracle Storage Cloud Service. The domain export files can also be manually uploaded to an Oracle Storage Cloud Service container using its REST API.

5. Migrate the databases to Oracle Cloud: use standard Oracle database tools to move existing relational schemas to one or more database deployments in Oracle Database Cloud - Database as a Service.

6. Create an Oracle Java Cloud Service instance: create a service instance and select the AppToCloud option. As part of the creation process, you provide the location of the AppToCloud artifacts on cloud storage.

7. Import your applications into the service instance: after the Oracle Java Cloud Service instance is running, import the AppToCloud artifacts. Oracle Java Cloud Service updates the service instance with the same resources and applications as your exported source environment. Note: The import operation can only be performed on a new and unmodified service instance. Do not perform any scaling operations, modify the domain configuration, or otherwise change the service instance prior to this step.

8. Recreate resources if necessary: some Oracle WebLogic Server features are not currently supported by the AppToCloud tools. These features must be configured manually after provisioning your Oracle Java Cloud Service instance. Use the same Oracle tools to perform these modifications that you originally used to configure the source environment:
   - WebLogic Server Administration Console
   - Fusion Middleware Control
   - WebLogic Scripting Tool (WLST)


WebLogic Server 12.2.1.1.0 - Domain to Partition Conversion Tool (DPCT) Updates

The Domain to Partition Conversion Tool (DPCT) provides assistance with the process of migrating an existing WebLogic Server 10.3.6, 12.1.2, 12.1.3 or 12.2.1 domain to a partition in a WebLogic Server 12.2.1 domain.

The DPCT process consists of two independent but related operations. The first involves inspecting an existing domain and exporting it into an archive that captures the relevant configuration and binary files. The second is to use one of the several import-partition options available with WebLogic Server 12.2.1 to import the contents of the exported domain to create a new partition. The new partition will contain the configuration resources and application deployments from the source domain.

With the release of WebLogic Server 12.2.1.1.0, several updates and changes have been made to DPCT to further improve its functionality. The updated documentation covering the new features, bug fixes and known limitations is here: https://docs.oracle.com/middleware/12211/wls/WLSMT/config_dpct.htm#WLSMT1695

Key Updates

a) Distribution of DPCT tooling with the WebLogic Server 12.2.1.1.0 installation: initially the DPCT tooling was distributed as a separate zip file only available for download from OTN. With the 12.2.1.1.0 release, the DPCT tooling is provided as part of the base product installation as $ORACLE_HOME/wlserver/common/dpct/D-PCT-12.2.1.1.0.zip. This file can be copied from the 12.2.1.1.0 installation to the servers where the source domain is present and extracted for use. The DPCT tooling is also still available for download from OTN: http://www.oracle.com/technetwork/middleware/weblogic/downloads/wls-for-dev-1703574.html

b) No patch required: previous use of DPCT required a patch to be applied to the target 12.2.1 installation in order to import an archive generated by the DPCT tooling. This requirement has been resolved.

c) Improved platform support: several small issues relating to the use of the DPCT tooling on Windows have been resolved.

d) Improved reporting: a new report file is generated for each domain that is exported, listing the details of the source domain as well as each of the configuration resources and deployments that were captured in the exported archive. Any resources that could not be exported are also noted.

e) JSON overrides file formatting: the generated JSON file, which serves as an overrides mechanism allowing target environment customizations to be specified on import, is now formatted correctly to make it clearer and easier to change.

f) Additional resources in the JSON overrides file: in order to better support customization on the target domain, additional resources such as JDBC System Resources, SAF Agents, Mail Sessions and JDBC Stores are now expressed as configurable objects in the generated JSON file.

g) Inclusion of new export-domain scripts: the scripts used to run the DPCT tooling have been reworked and included as new (additional) scripts. The new scripts are named export-domain.[cmd|sh], provide clearer help text, and use named parameters for providing input values to the script. The previous scripts are provided for backwards compatibility and continue to work, but it is recommended that the new scripts be used where possible.
Usage detail for the export-domain script (an example invocation is sketched at the end of this article):

Usage: export-domain.sh -oh {ORACLE_HOME} -domainDir {WL_DOMAIN_HOME}
       [-keyFile {KEYFILE}] [-toolJarFile {TOOL_JAR}] [-appNames {APP_NAMES}]
       [-includeAppBits {INCLUDE_APP_BITS}] [-wlh {WL_HOME}]
where:
       {ORACLE_HOME} : the MW_HOME where WebLogic is installed
       {WL_DOMAIN_HOME} : the source WebLogic domain path
       {KEYFILE} : an optional user-provided file containing a clear-text passphrase used to encrypt exported attributes written to the archive; default: None
       {TOOL_JAR} : file path to the com.oracle.weblogic.management.tools.migration.jar file. Optional if the jar is in the same directory as export-domain.sh
       {APP_NAMES} : an optional list of application names to export
       {WL_HOME} : an optional parameter giving the path of the WebLogic Server installation for version 10.3.6. Used only when the WebLogic Server 10.3.6 release is installed under a directory other than {ORACLE_HOME}/wlserver_10.3

Enhanced Cluster Topology and JMS Support

In addition to the items listed above, some restructuring of the export and import operations has enabled DPCT to better support a number of key WebLogic Server areas. When inspecting the source domain and generating the export archive, DPCT now enables the targeting of resources and deployments to appropriate Servers and Clusters in the target domain. For every Server and Cluster in the source domain, there will be a corresponding resource-group object created in the generated JSON file, with each resource group targeted to a dedicated Virtual Target, which in turn can be targeted to a Server or Cluster in the target domain. All application deployments and resources targeted to a particular WebLogic Server instance or cluster in the source domain correspond to a resource group in the target domain. This change also supports the situation where the target domain has differently named Cluster and Server resources than the source domain, by allowing the target to be specified in the JSON overrides file so that it can be mapped appropriately to the new environment.

A number of the previous limitations around the exporting of JMS configurations for both single-server and cluster topologies have been addressed, enabling common JMS use cases to be supported with DPCT migrations. The documentation contains the list of existing known limitations.
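As promised above, here is a sketch of invoking the new export-domain script. The Oracle home, domain path, passphrase file, and application name are placeholders for illustration, not values taken from a real migration:

# Export a source domain, encrypting exported attributes with a passphrase file
# (the comma-separated application list is an assumption about the -appNames format)
./export-domain.sh -oh /u01/oracle/middleware \
  -domainDir /u01/oracle/middleware/user_projects/domains/source_domain \
  -keyFile /tmp/passphrase.txt \
  -appNames myapp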


Connection Initialization Callback on WLS Datasource

WebLogic Server 12.2.1.1 is now available. You can see the blog article announcing it at Oracle WebLogic Server 12.2.1.1 is Now Available. One of the WLS datasource features that appeared quite a while ago, but is not mentioned much, is the ability to define a callback that is called during connection initialization. The original intent of this callback was to provide a mechanism for use with the Application Continuity (AC) feature. It allows the application to ensure that the same initialization of the connection can be done when it is reserved, and also later if the connection is replayed. In the latter case, the original connection has some type of "recoverable" error and is closed, a new connection is reserved under the covers, and all of the operations that were done on the original connection are replayed on the new connection. The callback allows the connection to be re-initialized with whatever state is needed by the application.

The concept of having a callback that lets the application initialize all connections, without scattering this processing all over the application software wherever getConnection() is called, is very useful even without replay being involved. In fact, since the callback can be configured in the datasource descriptor, which I recommend, there is no change to the application except to write the callback itself.

Here's the history of support for this feature, assuming that the connection initialization callback is configured:

WLS 10.3.6 - It is only called on an Active GridLink datasource when running with the replay driver (replay was only supported with AGL).
WLS 12.1.1, 12.1.2, and 12.1.3 - It is called if used with the replay driver and any datasource type (replay support was added to GENERIC datasources).
WLS 12.2.1 - It is called with any Oracle driver and any datasource type.
WLS 12.2.1.1 - It is called with any driver and any datasource type. Why limit the goodness to just the Oracle driver?

The callback can be configured in the application by registering it on the datasource in the Java code. You need to ensure that you only do this once per datasource. I think it's much easier to register it in the datasource configuration.

Here's a sample callback.

package demo;

import oracle.ucp.jdbc.ConnectionInitializationCallback;

public class MyConnectionInitializationCallback implements ConnectionInitializationCallback {
  public MyConnectionInitializationCallback() {
  }
  public void initialize(java.sql.Connection connection) throws java.sql.SQLException {
    // Re-set the state for the connection, if necessary
  }
}

This is a simple Jython script, using as many defaults as possible, that shows registering the callback.
import sys, socket
hostname = socket.gethostname()
connect("weblogic", "welcome1", "t3://" + hostname + ":7001")
edit()
dsname = 'myds'
jndiName = 'myds'
server = 'myserver'
cd('Servers/' + server)
target = cmo
cd('../..')
startEdit()
jdbcSR = create(dsname, 'JDBCSystemResource')
jdbcResource = jdbcSR.getJDBCResource()
jdbcResource.setName(dsname)
dsParams = jdbcResource.getJDBCDataSourceParams()
dsParams.addJNDIName(jndiName)
driverParams = jdbcResource.getJDBCDriverParams()
driverParams.setUrl('jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=dbhost)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=otrade)))')
driverParams.setDriverName('oracle.jdbc.OracleDriver')
driverParams.setPassword('tiger')
driverProperties = driverParams.getProperties()
userprop = driverProperties.createProperty('user')
userprop.setValue('scott')
oracleParams = jdbcResource.getJDBCOracleParams()
oracleParams.setConnectionInitializationCallback('demo.MyConnectionInitializationCallback')  # register the callback
jdbcSR.addTarget(target)
save()
activate(block='true')

Here are a few observations. First, to register the callback using the configuration, the class must be in your classpath. It will need to be in the server classpath anyway to run, but it needs to get there earlier for configuration. Second, because of the history of this feature, it's contained in the Oracle parameters instead of the Connection parameters; there isn't much we can do about that. In the WLS 12.2.1.1 administration console, the entry can be seen and configured in the Advanced parameters of the Connection Pool tab, as shown in the following figure (in addition to the Oracle tab). Finally, note that the interface is a Universal Connection Pool (UCP) interface, so this callback can be shared with your UCP application (all driver types are supported starting in Database 12.1.0.2).

This feature is documented in the Application Continuity section of the Administration Guide. See http://docs.oracle.com/middleware/12211/wls/JDBCA/ds_oracledriver.htm#CCHFJDHF.

You might be disappointed that I didn't actually do anything in the callback. I'll use this callback again in my next blog to show how it's used in another new WLS 12.2.1.1 feature.
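Since the callback class has to be on the server classpath before you reference it in the configuration, a small build step is needed. This is a sketch only: the ucp.jar path shown is the database-client location used earlier in this blog (WLS also ships its own copy), and the output jar name and the use of PRE_CLASSPATH are assumptions about a typical setup.

# Compile the callback against UCP (ConnectionInitializationCallback lives in ucp.jar)
javac -cp "${ORACLE_HOME}/ucp/lib/ucp.jar" demo/MyConnectionInitializationCallback.java
jar cf /opt/app/callback.jar demo/MyConnectionInitializationCallback.class
# Make it visible to the server before configuring the datasource
export PRE_CLASSPATH="/opt/app/callback.jar:$PRE_CLASSPATH"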


WebLogic Server Continuous Availability in 12.2.1.1

We have made enhancements to the Continuous Availability offering in WebLogic 12.2.1.1 in the areas of Zero Downtime Patching, Cross-Site Transaction Recovery, Coherence Federated Caching and Coherence Persistence. We have also enhanced the documentation to provide design considerations for the multi-data center Maximum Availability Architectures (MAA) that are supported for WebLogic Server Continuous Availability.

Zero Downtime Patching Enhancements

Enhancements in Zero Downtime Patching support updating applications running in a multitenant partition without affecting other partitions that run in the same cluster. Coherence applications can now be updated while maintaining high availability of the Coherence data during the rollout process. We have also removed the dependency on Node Manager to upgrade the WebLogic Administration Server.

- Multitenancy support: application updates can use partition shutdown instead of server shutdowns; an application in a partition on a server can be updated without affecting other partitions; an application referenced by a ResourceGroupTemplate can be updated.
- Coherence support: the user can supply a minimum safety mode for the rollout to a Coherence cluster.
- Removed Administration Server dependency on Node Manager: the Administration Server no longer needs to be started by Node Manager.

Cross-Site Transaction Recovery

We introduced a "site leasing" mechanism to do automatic recovery when there is a site failure or mid-tier failure. With site leasing we provide a more robust mechanism to fail over and fail back transaction recovery without imposing dependencies on the TLog that affect the health of the servers hosting the Transaction Manager. Every server in a site updates its lease. When the lease expires for all servers running in a cluster in Site 1, servers running in a cluster in a remote site assume ownership of the TLogs and recover the transactions, while still continuing their own transaction work. To learn more, please read Active-Active XA Transaction Recovery.

Coherence Federated Caching and Coherence Persistence Administration Enhancements

We have enhanced the WebLogic Server Administration Console to make it easier to configure Coherence Federated Caching and Coherence Persistence.

- Coherence Federated Caching: added the ability to set up Federation with basic active/active and active/passive configurations using the Administration Console, eliminating the need to use configuration files.
- Coherence Persistence: added a Persistence tab in the Administration Console that provides the ability to configure Persistence-related settings that apply to all services.

Documentation

In WebLogic Server 12.2.1.1 we have enhanced the document Continuous Availability for Oracle WebLogic Server to include a new chapter, "Design Considerations for Continuous Availability"; see http://docs.oracle.com/middleware/12211/wls/WLCAG/weblogic_ca_best.htm#WLCAG145. This new chapter provides design considerations and best practices for the components of your multi-data center environments. In addition to the general best practices recommended for all Continuous Availability MAA architectures, we provide specific advice for each of the supported Continuous Availability topologies, and describe how the features can be used in these topologies to provide maximum high availability and disaster recovery.


Oracle WebLogic Server 12.2.1.1 is Now Available

Last October, we delivered Oracle WebLogic Server 12.2.1 as part of the overall Oracle Fusion Middleware 12.2.1 Release.   As noted previously on this blog, WebLogic Server 12.2.1 delivers compelling new feature capabilities in the areas of Multitenancy, Continuous Availability, and Developer Productivity and Portability to Cloud.   Today, we are releasing WebLogic Server 12.2.1.1, which is the first patch set release for WebLogic Server and Fusion Middleware 12.2.1.   New WebLogic Server 12.2.1.1 installers are now posted on the Oracle Technology Network and Oracle Software Delivery Cloud, and new documentation has been made available.  WebLogic Server 12.2.1.1 contains all the new features in WebLogic Server 12.2.1, and also includes an integrated, cumulative set of fixes and a small number of targeted, non-disruptive enhancements.    For customers who have just begun evaluating WebLogic Server 12cR2, or are planning evaluation and adoption, we recommend that you adopt WebLogic Server 12.2.1.1 so that you can benefit from the maintenance and enhancements that have been included.   For customers who are already running in production on WebLogic Server 12.2.1, you can continue to do so, though we will encourage adoption of WebLogic Server 12.2.1 patch sets. The enhancements are primarily in the following areas: Multitenancy - Improvements to Resource Consumption Management, partition security management, REST management, and Fusion Middleware Control, all targeted at multitenancy manageability and usability. Continuous Availability - New documented best practices for multi data center deployments, and product improvements to Zero Downtime Patching capabilities. Developer Productivity and Portability to the Cloud - The Domain to Partition Conversion Tool (D-PCT), which enables you to convert an existing domain to a WebLogic Server 12.2.1 partition, has been integrated into 12.2.1.1 with improved functionality.   So it's now easier to migrate domains and applications to WebLogic Server partitions, including partitions running in the Oracle Java Cloud Service.  We will provide additional updates on the capabilities described above, but everything is ready for you to get started using WebLogic Server 12.2.1.1 today.   Try it out and give us your feedback!


Using SQLXML Data Type with Application Continuity

When I first wrote an article about changing Oracle concrete classes to interfaces to work with Application Continuity (AC) (https://blogs.oracle.com/WebLogicServer/entry/using_oracle_jdbc_type_interfaces), I left out one type: oracle.sql.OPAQUE is replaced with oracle.jdbc.OracleOpaque. There isn't a lot that you can do with this opaque type. While the original class had a lot of conversion methods, the new Oracle type interfaces have only methods that are considered significant or not available with standard JDBC APIs. The new interface only has a method to get the value as an Object and two meta-information methods to get the metadata and the type name. Unlike the other Oracle type interfaces (oracle.jdbc.OracleStruct extends java.sql.Struct and oracle.jdbc.OracleArray extends java.sql.Array), oracle.jdbc.OracleOpaque does not extend a JDBC interface.

There is one related, very common use case that needs to be changed to work with AC. Early uses of SQLXML made use of the following XDB API:

SQLXML sqlXml = oracle.xdb.XMLType.createXML(
    ((oracle.jdbc.OracleResultSet)resultSet).getOPAQUE("issue"));

oracle.xdb.XMLType extends oracle.sql.OPAQUE, and its use will disable AC replay. This must be replaced with the standard JDBC API:

SQLXML sqlXml = resultSet.getSQLXML("issue");

If you try to do a "new oracle.xdb.XMLType(connection, string)" when running with the replay datasource, you will get a ClassCastException. Since XMLType doesn't work with the replay datasource and the oracle.xdb package uses XMLType extensively, this package is no longer usable for AC replay. The APIs for SQLXML are documented at https://docs.oracle.com/javase/7/docs/api/java/sql/SQLXML.html. The javadoc shows APIs to work with DOM, SAX, StAX, XSLT, and XPath. (A short standalone sketch of the replay-safe pattern follows at the end of this article.)

Take a look at the sample program at //cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/f4a5b21d-66fa-4885-92bf-c4e81c06d916/File/e57d46dd27d26fbd6aeeb884445dd5b3/xmlsample.txt

The sample uses StAX to store the information and DOM to get it. By default, it uses the replay datasource and does not use XDB.

You can run with replay debugging by doing something like the following. Create a file named /tmp/config.txt that has the following text:

java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter
handlers = java.util.logging.FileHandler
java.util.logging.FileHandler.pattern = /tmp/replay.log
oracle.jdbc.internal.replay.level = FINEST

Change your WLS CLASSPATH (or one with the Oracle client jar files) to put ojdbc7_g.jar at the front (to replace ojdbc7.jar) and add the current directory. Compile the program (after renaming .txt to .java) and run it using

java -Djava.util.logging.config.file=/tmp/config.txt XmlSample

The output replay log is in /tmp/replay.log. With the defaults in the sample program, you won't see replay disabled in the log. If you change the program to set useXdb to true, you will see that replay is disabled. The log will have "DISABLE REPLAY in preForMethodWithConcreteClass(getOPAQUE)" and "Entering disableReplayInternal".

This sample can be used to test other sequences of operations to see if they are safe for replay. Alternatively, you can use orachk to do a static analysis of the class. See https://blogs.oracle.com/WebLogicServer/entry/using_orachk_to_clean_up for more information. If you run orachk on this program, you will get this failure:

FAILED - [XmlSample][[MethodCall] desc=(Ljava/lang/String;)Loracle/sql/OPAQUE; method name=getOPAQUE, lineno=105]
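For reference, here is a minimal, self-contained sketch of the replay-safe pattern using only the standard java.sql.SQLXML API. The table and column names are made up for illustration, and setString/DOM are used for brevity where the linked sample uses StAX:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLXML;
import javax.xml.transform.dom.DOMSource;
import org.w3c.dom.Document;

public class SqlXmlSketch {
  // Store an XML value without touching oracle.xdb
  public static void write(Connection conn) throws Exception {
    SQLXML xml = conn.createSQLXML();
    xml.setString("<issue><id>1</id><summary>example</summary></issue>");
    try (PreparedStatement ps = conn.prepareStatement("insert into issues (issue) values (?)")) {
      ps.setSQLXML(1, xml);
      ps.executeUpdate();
    }
    xml.free();
  }

  // Read it back using getSQLXML instead of getOPAQUE, which keeps replay enabled
  public static void read(Connection conn) throws Exception {
    try (PreparedStatement ps = conn.prepareStatement("select issue from issues");
         ResultSet rs = ps.executeQuery()) {
      while (rs.next()) {
        SQLXML xml = rs.getSQLXML("issue");
        DOMSource source = xml.getSource(DOMSource.class);
        Document doc = (Document) source.getNode();
        System.out.println(doc.getDocumentElement().getTagName());
        xml.free();
      }
    }
  }
}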



Testing WLS and ONS Configuration

Introduction

Oracle Notification Service (ONS) is installed and configured as part of the Oracle Clusterware installation. All nodes participating in the cluster are automatically registered with ONS during the Oracle Clusterware installation. The configuration file is located on each node in $ORACLE_HOME/opmn/conf/ons.config. See the Oracle documentation for further information. This article focuses on the client side.

Oracle RAC Fast Application Notification (FAN) events are available starting in database 11.2. This is the minimum database release required for WLS Active GridLink. FAN events are notifications sent by a cluster running Oracle RAC to inform the subscribers about configuration changes within the cluster. The supported FAN events are service up, service down, node down, and load balancing advisories (LBA).

fanWatcher Program

You can optionally test your ONS configuration independent of running WLS. This tests the connection from the ONS client to the ONS server, but not the configuration of your RAC services. See https://blogs.oracle.com/WebLogicServer/entry/fanwatcher_sample_program for details on how to get, compile, and run the fanWatcher program. I'm assuming that you have WLS 10.3.6 or later installed and you have your CLASSPATH set appropriately. You would run the test program using something like

java fanWatcher "nodes=rac1:6200,rac2:6200" database/event/service

If you are using the database 12.1.0.2 client jar files, you can handle more complex configurations with multiple clusters, for example Data Guard, with something like

java fanWatcher "nodes.1=site1.rac1:6200,site1.rac2:6200
nodes.2=site2.rac1:6200,site2.rac2:6200" database/event/service

Note that a newline is used to separate multiple node lists. You can also test with a wallet file and password if the ONS server is configured to use SSL communications. Once this program is running, you should minimally see occasional LBA notifications. If you start or stop a service, you should see an associated event.

Auto ONS

It's possible to run without specifying the ONS information using a feature called auto-ONS. The auto-ONS feature cannot be used if you are running with:
- An 11g driver or 11g database. Auto-ONS depends on protocol flowing between the driver and the database server, and this support was added in 12c.
- A WLS release earlier than 12.1.3. Auto-ONS is supported starting in WLS 12.1.3.
- An Oracle wallet with SSL communications. Configuring the wallet also requires configuring the ONS information.
- A complicated ONS topology. In general, auto-ONS can figure out what you need, but there are cases where you need to specify it precisely. In WLS 12.2.1, an extension of the ONS configuration allows for specifying the exact topology using a property node list. See http://docs.oracle.com/middleware/1221/wls/JDBCA/gridlink_datasources.htm#BABGJBIC for more information.

If you have some configurations that use an 11g driver or database and some that run with a 12c driver/database, you may want to just specify the ONS information all of the time instead of using the auto-ONS simplification. The fanWatcher link above indicates how to test fanWatcher using auto-ONS.

WLS ONS Configuration and Testing

The next step is to ensure that you have an end-to-end configuration running: from the database service for which events will be generated, to the AGL datasource that processes the events for the corresponding service. On the server side, the database service must be configured with run-time connection load balancing (RCLB) enabled. RCLB is enabled for a service if the service GOAL (not CLB_GOAL) is set to either SERVICE_TIME or THROUGHPUT. See the Oracle documentation for further information on using srvctl to set this when creating the service.

On the WLS side, the key pieces are the URL and the ONS configuration. The URL is configured using the long format with the service name specified. The URL can use an Oracle Single Client Access Name (SCAN) address, for example,

jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=scanname)(PORT=scanport))(CONNECT_DATA=(SERVICE_NAME=myservice)))

or multiple non-SCAN addresses with LOAD_BALANCE=on, for example,

jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(LOAD_BALANCE=ON)(ADDRESS=(PROTOCOL=TCP)(HOST=host1)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=host2)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=myservice)))

Defining the URL is a complex topic; see the Oracle documentation for more information.

As described above, the ONS configuration can be implicit using auto-ONS or explicit. The trade-offs and restrictions are also described above. The format of the explicit ONS information is described at http://docs.oracle.com/middleware/1221/wls/JDBCA/gridlink_datasources.htm#BABGJBIC. If you create the datasource using the administration console with an explicit ONS configuration, there is a button you can click to test the ONS configuration. This test does a simple handshake with the ONS server.

Of course, the first real test of your ONS configuration with WLS is deploying the datasource, either when starting the server or when targeting the datasource on a running server. In the administration console, you can look at the AGL runtime monitoring page for ONS, especially if using auto-ONS, to see the ONS configuration. You can look at the page for instances and check the affinity flag and instance weight attributes that are updated on LBA events. If you stop a service using something like

srvctl stop service -db beadev -i beadev2 -s otrade

that should also show up on this page, with the weight and capacity going to 0. If you look at the server log (for example, servers/myserver/logs/myserver.log) you should see a message tracking the outage like the following:

….<Info> <JDBC> … <Datasource JDBC Data Source-0 for service otrade received a service down event for instance [beadev2].>

If you want to see more information, like the LBA events, you can enable JDBCRAC debugging using -Dweblogic.debug.DebugJDBCRAC=true. For example,

...<JDBCRAC> ... lbaEventOccurred() event=service=otrade, database=beadev, event=VERSION=1.0 database=beadev service=otrade { {instance=beadev1 percent=50 flag=GOOD aff=FALSE}{instance=beadev2 percent=50 flag=UNKNOWN aff=FALSE} }

There will be a lot of debug output with this setting, so it is not recommended for production.
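Beyond watching ONS events, a simple way to confirm that connections from the AGL datasource are actually being spread across the RAC instances is to ask the database which instance served each connection. The following standalone Java sketch is not part of the steps above and makes a few assumptions: the datasource is bound in JNDI under the hypothetical name jdbc/otrade, the WLS client classes are on the classpath, and the server is reachable at t3://localhost:7001.

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.sql.DataSource;

// Hedged sketch: prints the RAC instance name behind connections obtained from the
// AGL datasource. The JNDI name and t3 URL are assumptions, not values from this
// article. Depending on pool settings, a tight get/close loop may keep reusing the
// same pooled connection; concurrent requests show the distribution more clearly.
public class InstanceCheck {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        env.put(Context.PROVIDER_URL, "t3://localhost:7001");
        Context ctx = new InitialContext(env);
        DataSource ds = (DataSource) ctx.lookup("jdbc/otrade");

        for (int i = 0; i < 10; i++) {
            try (Connection conn = ds.getConnection();
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(
                     "SELECT sys_context('USERENV','INSTANCE_NAME') FROM dual")) {
                rs.next();
                System.out.println("Connection " + i + " served by instance " + rs.getString(1));
            }
        }
    }
}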



Migrating from Generic Data Source to Active GridLink

Earlier, I wrote an article about how to migrate from a Multi Data Source (MDS) for RAC connectivity to Active GridLink (AGL). This is needed to move from the older datasource technology to the newer technology, both supporting Oracle RAC. The information is now in the public documentation set at http://docs.oracle.com/middleware/1221/wls/JDBCA/gridlink_datasources.htm#JDBCA690.

There are also many customers that are growing up from a standalone database to an Oracle RAC cluster. In this case, it's a migration from a GENERIC datasource to an AGL datasource. This migration is pretty simple.

No changes should be required to your applications. A standard application looks up the datasource in JNDI and uses it to get connections. The JNDI name won't change.

The only changes necessary should be to your configuration, and the necessary information is generally provided by your database administrator. The information needed is the new URL and, optionally, the configuration of Oracle Notification Service (ONS) on the RAC cluster. The latter is only needed if you are running with:
- An 11g driver or 11g database. Auto-ONS depends on protocol flowing between the driver and the database server, and this support was added in 12c.
- A WLS release earlier than 12.1.3. Auto-ONS is supported starting in WLS 12.1.3.
- An Oracle wallet with SSL communications. Configuring the wallet also requires configuring the ONS information.
- A complicated ONS topology. In general, auto-ONS can figure out what you need, but there are cases where you need to specify it precisely. In WLS 12.2.1, an extension of the ONS configuration allows for specifying the exact topology using a property node list. See http://docs.oracle.com/middleware/1221/wls/JDBCA/gridlink_datasources.htm#BABGJBIC for more information.

The URL and ONS attributes are configurable but not dynamic. That means that the datasource will need to be shut down and restarted after the change. The simplest way to do this is to untarget the datasource, make the changes, and then re-target the datasource.

The recommended approach to migrate from a GENERIC to an AGL datasource is to use WLST. The URL must be changed in the JDBCDriverParams object. The new JDBCOracleParams object (it generally doesn't exist for a GENERIC datasource) needs to have FAN enabled set to true and, optionally, the ONS information set. The following is a sample WLST script with the new values hard-coded. You could parameterize it and make it more flexible in handling multiple datasources. If you are using an Oracle wallet for ONS, that needs to be added to the JDBCOracleParams object as well.
# java weblogic.WLST file.py
import sys, socket, os
hostname = socket.gethostname()
datasource="JDBC Data Source-0"
connect("weblogic","welcome1","t3://"+hostname+":7001")
edit()
startEdit()
cd("/JDBCSystemResources/" + datasource )
targets=get("Targets")
set("Targets",jarray.array([], ObjectName))
save()
activate()
startEdit()
cd("/JDBCSystemResources/" + datasource + "/JDBCResource/" + datasource + "/JDBCDriverParams/" + datasource )
set("Url","jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(" + "ADDRESS=(PROTOCOL=TCP)(HOST=dbhost)(PORT=1521)))" + "(CONNECT_DATA=(SERVICE_NAME=otrade)))")
cd("/JDBCSystemResources/" + datasource + "/JDBCResource/" + datasource + "/JDBCOracleParams/" + datasource )
set("FanEnabled","true")
set("OnsNodeList","dbhost:6200")
# The following is for WLS 12.1.2 and 12.1.3 if not setting FanEnabled true, which is not recommended
#set("ActiveGridlink","true")
# The following is for WLS 12.2.1 and later
#cd("/JDBCSystemResources/" + datasource + "/JDBCResource/" + datasource )
#set("DatasourceType", "AGL")
save()
activate()
startEdit()
cd("/JDBCSystemResources/" + datasource )
set("Targets", targets)
save()
activate()

In WLS 12.1.2 and 12.1.3, there is an explicit ActiveGridlink flag that can be used to identify an AGL datasource, even if FanEnabled is not set to true (which is not recommended) and auto-ONS is used (12.1.2 is the first release in which auto-ONS is supported). In the script above, uncomment the line to set it only if FanEnabled and OnsNodeList are not set.

Starting in WLS 12.2.1, there is an explicit datasource type at the JDBCResource level. If that is set to GENERIC, you must re-set it using set("DatasourceType", "AGL"). In the script above, uncomment the lines to set it. In this case, the ActiveGridlink flag is not necessary.

In the administrative console, the database type is read-only and there is no mechanism to change the database type. You can try to get around this by setting the URL, the FAN Enabled box, and the ONS information. However, in 12.2.1 there is no way to re-set the Datasource Type in the console, and that value overrides all others.
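As noted above, application code does not need to change when the datasource becomes AGL: it keeps looking up the same JNDI name and getting connections from it. A minimal, hedged sketch of that unchanged pattern inside a server-side component follows; the JNDI name jdbc/otrade and the ORDERS table are hypothetical.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.naming.InitialContext;
import javax.sql.DataSource;

// Hedged sketch of application code that is unaffected by the GENERIC-to-AGL
// migration: the JNDI name "jdbc/otrade" and the ORDERS table are illustrative.
public class OrderCount {
    public int count() throws Exception {
        DataSource ds = (DataSource) new InitialContext().lookup("jdbc/otrade");
        try (Connection conn = ds.getConnection();
             PreparedStatement ps = conn.prepareStatement("SELECT COUNT(*) FROM ORDERS");
             ResultSet rs = ps.executeQuery()) {
            rs.next();
            return rs.getInt(1);
        }
    }
}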


Announcement

New WebLogic Server Running on Docker in Multi-Host Environments

Oracle WebLogic Server 12.2.1 is now certified to run on Docker 1.9 containers. As part of this certification, you can create Oracle WebLogic Server 12.2.1 clusters that span multiple physical hosts. Containers running across multiple hosts are built as an extension of the existing Oracle WebLogic 12.2.1 install images built with Dockerfiles, domain images built with Dockerfiles, and existing Oracle Linux images. To help you with this, we have posted scripts on GitHub as examples for you to get started.

The table below describes the certification provided for WebLogic Server 12.2.1 on Docker 1.9. You can use these combinations of Oracle WebLogic Server, JDK, Linux, and Docker versions when building your Docker images.

WLS Version: 12.2.1
JDK Version: 8
Host OS and Kernel: Oracle Linux 6 with UEK 4, or Oracle Linux 7
Docker Version: 1.9 or higher

Please read the earlier blog post Oracle WebLogic 12.2.1 Running on Docker Containers for details on Oracle WebLogic Server 12.1.3 and Oracle WebLogic 12.2.1 certification on other versions of Docker. We support Oracle WebLogic Server in certified Docker containers running on other Linux host operating systems that have Kernel 4 or larger and that support Docker containers; please read our Support statement. For additional details on the most current Oracle WebLogic Server supported configurations, please refer to the Oracle Fusion Middleware Certification Pages.

The scripts that support multi-host environments on GitHub are based on the latest versions of Docker Networking, Swarm, and Docker Compose. The Docker Machines participate in the Swarm, which is networked by a Docker overlay network. The WebLogic Admin Server container as well as the WebLogic Managed Server containers run on different VMs in the Swarm and are able to communicate with each other. The Dockerfiles and scripts we have provided enable users to create clustered and non-clustered Oracle WebLogic Server domain configurations, including both development and production, running on a single host or on multiple host operating systems or VMs. Each server running in the resulting domain configurations runs in its own Docker container and is capable of communicating as required with the other servers. When these containers run in a WebLogic cluster, all HA properties of the WebLogic cluster are supported, such as in-memory session replication, HTTP load balancing, and service and server migration.

Please check out the new WebLogic on Docker Multi Host Workshop on GitHub. This workshop takes you step by step through building a WebLogic Server domain on Docker in a multi-host environment. After the WebLogic domain has been started, an Apache Plugin web tier container is started in the Swarm; the Apache Plugin load balances invocations to an application deployed to the WebLogic cluster. This project takes advantage of the following tools: Docker Machine, Docker Swarm, Docker Overlay Network, Docker Compose, Docker Registry, and Consul. Using the sample Dockerfiles and scripts, you can very easily and quickly set up your environment running on Docker. Try it out and enjoy!

On YouTube, we have a video that shows you how to create a WLS domain/cluster in a multi-host environment. For documentation on how to use these Dockerfiles and scripts, see the whitepaper on OTN. We hope you will try running the different configurations of WebLogic Server on Docker containers, and we look forward to hearing any feedback you might have.



WebLogic Server 12.2.1: Elastic Cluster Scaling

WebLogic Server 12.2.1 added support for the elastic scaling of dynamic clusters: http://docs.oracle.com/middleware/1221/wls/ELAST/overview.htm#ELAST529

Elasticity allows you to configure elastic scaling for a dynamic cluster based on either of the following:

- Manually adding or removing a running dynamic server instance from an active dynamic cluster. This is called on-demand scaling. You can perform on-demand scaling using the Fusion Middleware component of Enterprise Manager, the WebLogic Server Administration Console, or the WebLogic Scripting Tool (WLST).
- Establishing policies that set the conditions under which a dynamic cluster should be scaled up or down, and actions that define the scaling operations themselves. When the conditions defined in the scaling policy occur, the corresponding scaling action is triggered automatically.

To see this in action, a set of video demonstrations has been added to the youtube.com/OracleWebLogic channel showing the use of the various elastic scaling options available:

WebLogic Server 12.2.1 Elastic Cluster Scaling with WLST: https://www.youtube.com/watch?v=6PHYfVd9Oh4
WebLogic Server 12.2.1 Elastic Cluster Scaling with the WebLogic Console: https://www.youtube.com/watch?v=HkG0Uw14Dak
WebLogic Server 12.2.1 Automated Elastic Cluster Scaling: https://www.youtube.com/watch?v=6b7dySBC-mk


WebLogic on Docker Containers Series, Part 3: Creating a Domain Image

You already know how to quickly get started with WebLogic on Docker. You also learned in more detail how to build an installation Docker image of WebLogic and Oracle JDK. This time, you will learn how to create a WebLogic domain image for Docker containers.

We are pushing some interesting samples of Docker images to GitHub so that WebLogic customers and users can get a good idea of what is possible (although not everything in there may be officially supported at this moment, like multihost), and can experiment and learn more about Docker itself. This blog post focuses on the 1221-domain sample, but make sure to subscribe to this blog or follow me on Twitter for future posts that will look into the other samples. I will also assume that you have the docker-images repository checked out and updated on your computer (with commit 4c36ef9f99c98), and of course that you have Docker installed and properly working. Now moving on.

WebLogic Domains

WebLogic uses a domain concept for its infrastructure. This is the first thing a developer or administrator must create in order to be able to run a WebLogic Server. There are many ways to create a WebLogic Server domain: using the Configuration Wizard, using WLST, or even bootstrapping the weblogic.Server class. Since we are using Docker and we want to automate everything, we create the domain with WLST.

TL;DR; Building the Domain Image in the 1221-domain sample

First things first, make sure you have the image oracle/weblogic:12.2.1-developer already created. If not, check Part 2 of this series to learn how. Now go into the folder samples/1221-domain and run the following command:

$ pwd
~/docker-images/OracleWebLogic/samples/1221-domain
$ docker build -t 1221-domain --build-arg ADMIN_PASSWORD=welcome1 .
[...]
$ docker images
REPOSITORY              TAG                     IMAGE ID            CREATED             SIZE
1221-domain             latest                  327a95a2fbc8        2 days ago          1.195 GB
oracle/weblogic         12.2.1-developer        b793273b4c9b        2 days ago          1.194 GB
oraclelinux             latest                  4d457431af34        10 weeks ago        205.9 MB

This is what you will end up having in your environment.

Understanding the sample WLST domain creation script

Customers and users are always welcome to come up with their own scripts and automation processes to create WebLogic domains (either for Docker or not), but we shared some examples here to make things easier for them. The 1221-domain sample has a subfolder named container-scripts that holds a set of handy scripts to create and run a domain image. The most important script, though, is the create-wls-domain.py WLST script. This file is executed when docker build is called, as you can see in the Dockerfile. In this sample, you learn how to read variables with default values in the create-wls-domain.py script; these variables may be defined in the Dockerfile.

The script defined in this sample requires a set of information in order to create a domain. Mainly, you need to provide:

- Domain name: by default 'base_domain'
- Admin port: although WLS has 7001 by default when installed, this script defaults to 8001 if nothing is provided
- Admin password: no default. Must be provided during build with --build-arg ADMIN_PASSWORD=<your password>
- Cluster name: defaults to 'DockerCluster'

Note about Clustering

This sample shows how to define a cluster named after whatever is in $CLUSTER_NAME (defaults to DockerCluster) to demonstrate the scalability of WebLogic on Docker containers. You can see how the cluster is created in the WLST file.

Back to the domain creation

How do you read variables in WLST with default values? Pretty simple:

  domain_name = os.environ.get("DOMAIN_NAME", "base_domain")
  admin_port = int(os.environ.get("ADMIN_PORT", "8001"))
  admin_pass = os.environ.get("ADMIN_PASSWORD")
  cluster_name = os.environ.get("CLUSTER_NAME", "DockerCluster")

These variables can be defined as part of your Dockerfile, or even passed as arguments during build if you are using Docker 1.10 with the new ARG command, as the ADMIN_PASSWORD example shows:

  ARG ADMIN_PASSWORD
  ENV DOMAIN_NAME="base_domain" \
      ADMIN_PORT="8001" \
      ADMIN_HOST="wlsadmin" \
      NM_PORT="5556" \
      MS_PORT="7001" \
      CLUSTER_NAME="DockerCluster" \

Other variables are defined here (NM_PORT, MS_PORT, ADMIN_HOST), but I'll explain them later in a future post. Meanwhile, let's continue.

The next step as part of domain image creation is that you may want to reuse a domain template. In the sample script, we used the default template for new domains (wls.jar), but again, if you are working on your own set of domains, feel free to use any template you may already have. Next, we tell WLST to configure the AdminServer to listen on all addresses that will be available (in the container), and to listen on the port given in $ADMIN_PORT. The 'weblogic' admin user needs a password that you had to provide with --build-arg (or define directly inside the Dockerfile) in $ADMIN_PASSWORD, and we set that in the script.

For the sake of providing some examples, we also define a JMS server in the script, but we only target it to the AdminServer. If you want to target it to the cluster, you will have to tweak the script. The script is configured to set this domain in production mode too. We set some Node Manager options, since we will be using the per-domain Node Manager (see the docs for more details). Remember that each instance of this image (a container) has the same "filesystem", so it is as if you had copied the domain to different servers. If you are an experienced WebLogic administrator, you will quickly understand. If not, please comment and I'll share some links. This is important to be able to run Managed Servers inside containers based on this image. I'll get back to this in a future post on running a clustered WebLogic environment on Docker containers.

There are a couple of other things we could've done in this script, such as:

- Create and define a data source
- Create, define, and deploy applications (this is demonstrated as part of the 1221-appdeploy sample)
- Anything else you can do with WLST in offline mode (remember that the domain does not exist yet and thus is not running)

But surely you will quickly find out how to do these for your own domains. Now that you have a domain Docker image, 1221-domain, you are able to start it with:

  $ docker run -ti 1221-domain

Now have some fun tweaking your own WLST scripts for domain creation in Docker.



Now Available: Domain to Partition Conversion Tool (DPCT)

We are pleased to announce that a new utility has just been published to help with the process of converting existing WebLogic Server domains into WebLogic Server 12.2.1 partitions. The Domain to Partition Conversion Tool (DPCT) provides a utility that inspects a specified source domain and produces an archive containing the resources, deployed applications, and other settings. This can then be used with the importPartition operation provided in WebLogic Server 12.2.1 to create a new partition that represents the original source domain. An external overrides file is generated (in JSON format) that can be modified to adjust the targets and names used for the relevant artifacts when they are created in the partition.

DPCT supports WebLogic Server 10.3.6, 12.1.1, 12.1.2, and 12.1.3 source domains and makes the conversion to WebLogic Server 12.2.1 partitions a straightforward process. DPCT is available for download from OTN: http://www.oracle.com/technetwork/middleware/weblogic/downloads/wls-main-097127.html

** Note: there is also a corresponding patch (opatch) posted alongside the DPCT download that needs to be downloaded and applied to the target installation of WebLogic Server 12.2.1 to support the import operation **

The README contains more details and examples of using the tool: http://download.oracle.com/otn/nt/middleware/12c/1221/wls1221_D-PCT-README.txt

A video demonstration of using DPCT to convert a WebLogic Server 12.1.3 domain with a deployed application into a WebLogic Server 12.2.1 partition is also available on our YouTube channel: https://youtu.be/D1vQJrFfz9Q


Technical

ZDT Technical Topic: How are Those Sessions Kept Alive Anyway?

By now you have probably read documentation or previous blog posts about how Zero Downtime Patching provides a convenient, automated method of updating a WebLogic domain in a rolling fashion. By automating the process, Zero Downtime Patching greatly saves time and eliminates the potential for human error in the repetitive course of procedure. In addition, there are also some special features around replicated HTTP sessions that make sure end users do not lose their session at any point during the rollout process. Let's explore the technical details around maintaining session state during Zero Downtime Patching.

One of the key aspects of the WLS replicated session persistence contract is that the session may be maintained within the cluster even in the rare situation where a server crashes. However, the session persistence contract cannot guarantee that sessions will be maintained when more than a single server goes down in a short time period. This is because the session has a single copy replicated to some secondary server within the cluster. The session is only replicated when the client makes a request to update the session, so that the client's cookie can store a reference to the secondary server. Thus, if the primary server were to go down, and then the secondary server were to go down before the session could be updated by a subsequent client request, the session would be lost. The rolling nature of Zero Downtime Patching fits this pattern, and thus it must take extra care to avoid losing sessions. Administrators may have already observed that it is very easy to lose sessions by restarting one server at a time through the cluster.

Before we go into the technical details of how Zero Downtime Patching prevents the issue of losing sessions, it is important to note that the entire methodology relies on Oracle Traffic Director for load balancing, dynamic discovery, health checks, and session failover handling. In addition to this setup, three key features are used by Zero Downtime Patching directly to prevent the loss of sessions:

1. Preemptive Session Replication - Session data is preemptively propagated to another server in the cluster during graceful shutdown when necessary. To get even more detailed on this, let's examine the scenario where the ZDT rollout has shut down the server holding the HTTP session, and the next step is to shut down the server holding the replica. In that case, WebLogic can detect during shutdown that the session will be lost, as there is no backup copy within the cluster, so the ZDT rollout can ensure that WebLogic Server replicates that session to another server within the cluster. The illustration below shows the problematic scenario where the server, s1, holding the primary copy of the session is shut down, followed by the shutdown of the server, s2, holding the secondary or replica copy. The ZDT orchestration signals that s2 should preemptively replicate any single session copies before shutting down. Thus there is always a copy available within the cluster.

2. Session State Query Protocol - Due to the way that WebLogic Server relies on the association of an HTTP session with a primary server and a secondary server, it is not sufficient to simply have the session somewhere in the cluster. There is also a need to be able to find the session when the client request lands on an arbitrary server within the cluster. The ZDT rollout enables the ability for WebLogic Server instances to query other servers in the cluster for specific sessions if they don't have their own copy. The diagram above shows that an incoming request to a server without the session can trigger a query, and once the session is found within the cluster it can be fetched so that the request can be served on that server, "s4".

3. Orphaned Session Cleanup - Once we combine the ability to preemptively replicate session instances and the ability to fetch sessions from within the cluster, we must also take a more active approach to cleaning up instances that are fetched. Historically, WebLogic Server hasn't had to worry much about orphaned sessions. Front-end load balancers and web servers have been required to honor the session's server affinity. And in the rare case that a request would land on a server that did not contain the primary or secondary, the session would be fetched from the primary or secondary server, and the orphaned copy would be left to be cleaned up upon timeout or at other regular intervals. It was assumed that because the pointer to the session changed, the actual stored reference would never be used again. However, the ZDT rollout repeatedly presents the scenario where a session must be found within the cluster and fetched from the server that holds it. Not only can the number of session instances proliferate - all with various versions of the same session - but the cluster is now queried for the copy, and we must not find any stale copies, only the current replica of the session. The illustration above shows the cleanup action after s4 has fetched the session data to serve the incoming request. It launches the cleanup request to s3 to ensure no stale data is left within the cluster.

Summary: Now, during ZDT patching, we can shut down server1 and expect that any lone session copies will be propagated to server2 without the client's knowledge. When the client does send another request, WLS will be able to handle that request and query the cluster to find the session data. The data will be fetched and used on the server handling the request. The orphaned copy will be cleaned up, and the server handling the request will go through the process of choosing its preferred secondary server to store the replica.

For more information about Zero Downtime Patching, view the documentation (http://docs.oracle.com/middleware/1221/wls/WLZDT/configuring_patching.htm#WLZDT166).

References: https://docs.oracle.com/cd/E24329_01/web.1211/e24425/failover.htm#CLUST205
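Because replication is driven by session updates, application code should touch the session on any request whose changes need to survive a failover. The following is a minimal, hedged servlet sketch, not taken from the ZDT documentation; the class, URL, and attribute names are illustrative, and it assumes session replication is already enabled for the web application in weblogic.xml.

import java.io.IOException;
import java.io.Serializable;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

// Illustrative only: attribute values must be serializable so WebLogic can copy
// them to the secondary, and calling setAttribute marks the session as changed so
// the update is replicated at the end of the request.
@WebServlet("/cart")
public class CartServlet extends HttpServlet {

    public static class Cart implements Serializable {
        private static final long serialVersionUID = 1L;
        private int items;
        public void addItem() { items++; }
        public int getItems() { return items; }
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        HttpSession session = req.getSession(true);
        Cart cart = (Cart) session.getAttribute("cart");
        if (cart == null) {
            cart = new Cart();
        }
        cart.addItem();
        session.setAttribute("cart", cart);   // re-set so the change is replicated
        resp.getWriter().println("Items in cart: " + cart.getItems());
    }
}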


Technical

ZDT Rollouts and Singletons

WebLogic Server offers messaging, transaction, and other system services to facilitate building enterprise-grade applications. Typically, services can be either clustered or singleton. Clustered services are deployed identically to each server in a cluster to provide increased scalability and reliability. The session state of one clustered server is replicated on another server in the cluster. In contrast, singleton services run on only one server in a cluster at any given point of time so as to offer a specific quality of service (QOS), but most importantly to preserve data consistency. Singleton services can be JMS-related, JTA-related, or user-defined. In highly available (HA) environments, it is important for all services to be up and running even during patch upgrades.

The new WebLogic Zero Downtime Patching (a.k.a. ZDT patching) feature introduces a fully automated rolling upgrade solution to perform upgrades such that deployed applications continue to function and are available for end users even during the upgrade process. ZDT patching supports rolling out Oracle Home and Java Home, and also updating applications. Check out these blogs or view the documentation for more information on ZDT.

During ZDT rollouts, servers are restarted in a rolling manner. Taking down a server would bring down the singleton service(s), thus causing service disruptions; services would not be available until the server starts back up. The actual downtime would vary depending on server startup time and the number or type of applications deployed. Hence, to ensure that singleton services do not introduce a single point of failure for dependent applications in the cluster, the ZDT rollout process automatically performs migrations.

Some highlights of how the ZDT rollout handles singletons:
- It can be applied to all types of rollout (rolloutOracleHome, rolloutJavaHome, rollingRestart, rolloutUpdate, etc.).
- It uses JSON file based migration options for fine-grained control of service migrations during a rollout. These can be specified in WLST or the console.
- It supports service migrations (JMS or JTA) as well as whole server migration (WSM).
- It provides automatic fail back if needed.

Terms, Acronyms and Abbreviations

Singletons: Services that are hosted on only one server in a cluster.
Migratable Target (MT): A special target that provides a way to group services that should move together. It contains a list of candidate servers with only one active server at a given time.
Source Server: Server instance where services are migrated "from".
Destination Server: Server instance where services are migrated "to".
Automatic Service Migration (ASM): Process of moving an affected subsystem service from one server instance to another running server instance.
Whole Server Migration (WSM): Process of moving an entire server instance from one physical machine to another.
Fail back: Relocating services back to the original hosting or "home" server.

Assumptions

During a ZDT rollout, servers are shut down gracefully and then started back up. To start with, administrators should be well aware of the implications of restarting managed servers. An arbitrary application may or may not be tolerant of a restart, regardless of whether service migration is set up or not:
- It may have non-persistent state.
- It may or may not be tolerant of runtime client exceptions.
- It may be impacted by the duration of the restart.

When a server is shut down gracefully, client connections are closed, clients consequently get exceptions, and the JMS server is removed from candidate lists for load balancing and JMS message routing decisions. Most of the time, such client exceptions are transient - a retry will be redirected to a different JMS server, or even to the original JMS server after it was migrated. But some exceptions will not be transient, and they will instead continue being thrown on each client retry until a particular JMS server instance comes back up. Though the server does some level of quiescing during shutdown, it doesn't prevent all errors in the JMS client or elsewhere.

With respect to JTA, when a server is shutting down gracefully, an application wouldn't generate any new transaction requests for that particular server. For the EJB/RMI path, the cluster-aware stubs would detect server connection failures and redirect the request to a secondary server. It is assumed that applications are designed to handle exceptions during a transaction.

If server migration (WSM) is configured in the environment, one should be aware that it usually takes a longer time (when compared to service migrations) to make services available, since the entire server instance needs to boot on new hardware.

Note: In general, whole server migration is preferred for basic use due to its relative simplicity, but automatic service migration becomes attractive when faster fail-over times and advanced control over the service migration are desirable.

JMS

The WebLogic JMS subsystem is robust and high-performance and is often used in conjunction with other APIs to build an enterprise application. Smooth functioning of the applications largely depends on how the application is designed (being resilient to failures, using certain patterns or features) and also on how the JMS subsystem is tuned in the admin server.

In WebLogic JMS, a message is only available if its host JMS server for the destination is running. If a message is in a central persistent store, the only JMS server that can access the message is the server that originally stored the message. HA is normally accomplished using one or all of the following:

- Distributed destinations: The queue and topic members of a distributed destination are usually distributed across multiple servers within a cluster, with each member belonging to a separate JMS server. Applications that use distributed destinations are more highly available than applications that use simple destinations, because WebLogic JMS provides load balancing and failover for member destinations of a distributed destination within a cluster.
- Store-and-Forward: JMS modules utilize the SAF service to enable local JMS message producers to reliably send messages to remote queues or topics. If the destination is not available at the moment the messages are sent, either because of network problems or system failures, then the messages are saved on a local server instance and are forwarded to the remote destination once it becomes available.
- HA servers/services: JMS servers can be automatically restarted and/or migrated using either Whole Server Migration or Automatic Service Migration.

JTA

A production environment designed for high availability would most likely ensure that the JTA service (as well as other services) wouldn't act as a single point of failure. The WebLogic transaction manager is designed to recover from system crashes with minimal user intervention. The transaction manager makes every effort to resolve transaction branches that are prepared by resource managers with a commit or roll back, even after multiple crashes or crashes during recovery. It also attempts to recover transactions on system startup by parsing all transaction log records for incomplete transactions and completing them. However, in preparation for maintenance-type operations like ZDT rollouts, JTA services may be configured for migration. JTA migration is needed since in-flight transactions can hold locks on the underlying resources. If the transaction manager is not available to recover these transactions, resources may hold on to these locks as long as pending transactions are not resolved with a commit/rollback (for long periods of time), causing errors on new transactions and making it difficult for applications to function properly.

More on Service Migrations

Service-level migration in WebLogic Server is the process of moving pinned services from one server instance to a different server instance that is available within the cluster. Service migration is controlled by a logical migratable target, which serves as a grouping of services that is hosted on only one physical server in a cluster. You can select a migratable target in place of a server or cluster when targeting certain pinned services. The migration framework provides tools and infrastructure for configuring and migrating targets, and, in the case of automatic service migration, it leverages WebLogic Server's health monitoring subsystem to monitor the health of services hosted by a migratable target.

The following summarizes the various migration policies:

Manual Only (default): Automatic service migration is disabled for this target.
Failure Recovery: Pinned services deployed to this target will initially start only on the preferred server and will migrate only if the cluster master determines that the preferred server has failed.
Exactly Once: Pinned services deployed to this target will initially start on a candidate server if the preferred one is unavailable and will migrate if the host server fails or is gracefully shut down.

ZDT Migration Strategy and Options

For ZDT rollouts, "exactly-once" type services are not of concern, since the migration subsystem automatically handles these services. It is the failure-recovery type services that are of main concern: these services will not migrate if a server is gracefully shut down. Since the duration of the restart may vary, these services need to be migrated so that end users are not affected. Similarly, if the user has configured services to be migrated manually, such services are automatically migrated on the administrator's behalf during a rollout. ZDT rollouts can handle both JMS and JTA service migration.

Caveats:
1. The transaction manager is not assigned to a migratable target like other pinned services; instead, JTA ASM is a per-server setting. This is because the transaction manager has no direct dependencies on other pinned resources when contrasted with services such as JMS.
2. For user-defined singletons, the ZDT rollout doesn't need to take any specific action, since they are automatically configured as "exactly-once".

The administrator can specify the exact migration action on a per-server basis via the migration properties file passed as an option to any of the rollout commands. The migration options specified in the migration properties file are validated against what is configured in the system, and the required migrations are initiated accordingly to mitigate downtime. As an optimization, the workflow generates the order in which rollouts happen across the servers so as to prevent unnecessary migrations between patched and unpatched servers. ZDT rollouts also support whole server migration (WSM) if server(s) are configured for it.

Here is the list of all the migration options:

jms: All JMS-related services running on the current hosting server are migrated to the destination server.
jta: The JTA service is migrated from the current hosting server to the destination server.
all: Both JMS and JTA services on the current hosting server are migrated to the destination server.
server: The whole server instance will be migrated to the destination machine.
none: No migrations will happen for singleton services running on the current server.

You will observe that these migration options are very similar to the WLST migrate command options.

Sample Migration Sequence

The following picture demonstrates a typical rollout sequence involving service migrations. Here, the JMS and JTA singleton services are represented by two types of migratable targets configured for each server. Persistent stores and the TLOG should be accessible from all servers in the cluster. The administrator has control over specifying how the migrations should happen across the servers in the cluster. The next section describes the control knobs for fine-grained control of migrations during a rollout.

ZDT Migration Properties

How the migrations take place for any of the rollouts is specified in a migration properties file that is passed as an option to the rollout command. The migration properties file is nothing but a JSON file consisting of four main properties:

source: Denotes the source server (name), which is the current hosting server for the singleton(s).
destination: Denotes the destination server (name) where the singleton service will be migrated to. This can also be a machine name in the case of server migration.
migrationType: Acceptable types are "jms", "jta", "all", "server", and "none", as described in the previous section.
failback: Indicates whether automatic failback of the service to the original hosting server should happen or not.

Here is an example migration properties file:

{"migrations":[
  # Migrate all JMS migratable targets on server1 to server2. Perform a fail back
  # if the operation fails.
  {
    "source":"server1",
    "destination":"server2",
    "migrationType":"jms",
    "failback":"true"
  },
  # Migrate only JTA services from server1 to server3. Note that JTA migration
  # does not support the failback option, as it is not needed.
  {
    "source":"server1",
    "destination":"server3",
    "migrationType":"jta"
  },
  # Disable all migrations from server2
  {
    "source":"server2",
    "migrationType":"none"
  },
  # Migrate all services (for example, JTA and JMS) from server3 to server1 with
  # no failback
  {
    "source":"server3",
    "destination":"server1",
    "migrationType":"all"
  },
  # Use Whole Server Migration to migrate server4 to the node named machine5 with
  # no failback
  {
    "source":"server4",
    "destination":"machine5",
    "migrationType":"server"
  }
]}

If migrationType is "none", then services running on this server will not be migrated; it also means no failback is needed. If there are singleton services detected and the administrator hasn't passed in a migration properties file, the rollout command will fail. If no migrations are needed, the administrator should explicitly state that via the migration properties (i.e., migrationType="none") for each of the servers. If migrationType is "server", the destination should point to a node manager machine name, and WSM will be triggered for that server instance. The default value of failback is false (no failback if the option is not specified). For a specific server, either ASM or WSM can be applied, but not both. Since the JTA subsystem supports automatic failback of the JTA service, failback is not a valid option for the JTA service. Each of the above-mentioned validation checks will happen as part of the prerequisites check before any rollout.

ZDT Rollout Examples

The following examples illustrate the usage of the migrationProperties option.

A sample migrationProperties.json file:

{"migrations":[
  {"source":"m1","destination":"m2","migrationType":"jms","failback":"true"}
]}

Passing migration options to rolloutOracleHome:

rolloutOracleHome('myDomain', '/pathto/patchedOracleHome.jar', '/pathto/unpatchedOracleHomeBackup/', options='migrationProperties=/pathto/migrationProperties.json')

Passing migration options to rolloutApplications:

rolloutApplications('myDomain', applicationProperties='/pathto/applicationProperties.json', options='migrationProperties=/pathto/migrationProperties.json')

Passing migration options to rolloutJavaHome:

rolloutJavaHome('myDomain', javaHome='/pathto/JavaHome1.8.0_60', options='migrationProperties=/pathto/migrationProperties.json')

Passing migration options to rolloutUpdate:

rolloutUpdate('myDomain', '/pathto/patchedOracleHome.jar', '/pathto/unpatchedOracleHomeBackup/', false, options='migrationProperties=/pathto/migrationProperties.json')

Passing migration options to rollingRestart:

rollingRestart('myDomain', options='migrationProperties=/pathto/migrationProperties.json')

References

WLS ZDT
WLS JMS Best Practices
White Paper on WLS ASM (a good reference, though quite old)


WebLogic on Docker Containers Series, Part 2

In my previous post, the first part of this series, I showed you how to quickly get started with WebLogic on Docker. You learned how to create a base Docker image with WebLogic and Oracle JDK installed, and then how to create a second image that contains a configured WebLogic domain. Today's post will break down and explain what happens behind the scenes of that process.

Note: for the sake of history, and to keep this blog post useful in the future, I will refer to commit 7741161 of the docker-images GitHub project, and version 12.2.1 of WebLogic.

Walking through the build process of a WebLogic base image

A base image of WebLogic means an image that contains only the software installed with minimum configuration, to be further extended and customized. It may be based on a Red Hat base Docker image, but we recommend that you use the Oracle Linux base image. Samples for how to build a base image are presented in the dockerfiles folder. Files for WebLogic versions 12.1.3 and 12.2.1 are maintained there, as well as for two kinds of distributions: Developer and Generic. Other versions and distributions may be added in the future.

Differences between the Developer and Generic distributions

There aren't many differences between them, except these (extracted from the README.txt file inside the Quick Installer for Developers):

WHAT IS NOT INCLUDED IN THE QUICK INSTALLER
- Native JNI libraries for unsupported platforms.
- Samples, non-English console help (can be added by using the WLS supplemental Quick Install)
- Oracle Configuration Manager (OCM) is not included in the Quick Installer
- SCA is not included in the Quick Installer

Also, the Quick Installer for Developers is compressed using pack200, an optimized compression tool for Java classes and JAR files, to reduce the download size of the installer. Besides these differences, the two distributions work perfectly fine for Java EE development and deployment.

Building the Developer distribution base image

Although we provide a handy shell script to help you in this process, what really matters lives inside the 12.2.1 folder and the Dockerfile.developer file. That recipe does a COPY of two packages: the JDK RPM and the WebLogic Quick Installer. These files must be present. We've put the .download files there as placeholders to remind you of the need to download them. This same approach applies to the Generic distribution.

The installation of the JDK uses the rpm tool, which enables us to run Java inside the base image, a very obvious requirement. After the JDK is installed, we proceed with the installation of WebLogic by simply calling "java -jar", and later we clean up yum. An important observation is the use of /dev/urandom in the Dockerfile. WebLogic requires some level of entropy for random bits that are generated during install, as well as during domain creation. It is up to customers to decide whether they want to use /dev/random or /dev/urandom. Please configure this as desired.

You can build this image in two ways. Using the buildDockerImage.sh script, indicating that you want the Developer distribution [-d] and version 12.2.1 [-v 12.2.1]:

$ pwd
~/docker-images/OracleWebLogic/dockerfiles
$ sh buildDockerImage.sh -d -v 12.2.1

Or manually calling docker build:

$ cd 12.2.1
$ docker build -t oracle/weblogic:12.2.1-dev -f Dockerfile.developer .

Either of these calls results in the following:

REPOSITORY          TAG            IMAGE ID         CREATED          VIRTUAL SIZE
oracle/weblogic     12.2.1-dev     99a470dd2110     15 secs ago      1.748 GB
oraclelinux         7              bea04efc3319     5 weeks ago      206 MB
oraclelinux         latest         bea04efc3319     5 weeks ago      206 MB

As you may know by now, this image contains only WebLogic and the JDK installed, and thus is not meant to be executed, only to be extended.

Building the Generic distribution base image

Most of what you've learned above applies to the Generic distribution. The difference is that you must download, obviously, the Generic installer. The installation process is a little bit different, since it uses the silent install mode, with the environment definition coming from install.file and oraInst.loc. To build this image, you can either use the buildDockerImage.sh script, indicating that you want the Generic distribution [-g] and version 12.2.1 [-v 12.2.1]:

$ pwd
~/docker-images/OracleWebLogic/dockerfiles
$ sh buildDockerImage.sh -g -v 12.2.1

or manually call docker build:

$ cd 12.2.1
$ docker build -t oracle/weblogic:12.2.1 -f Dockerfile.generic .

Now you have two images you can extend from, either the Developer or the Generic base image:

REPOSITORY          TAG            IMAGE ID         CREATED          VIRTUAL SIZE
oracle/weblogic     12.2.1         ea03630ee95d     18 secs ago      3.289 GB
oracle/weblogic     12.2.1-dev     99a470dd2110     2 mins ago       1.748 GB
oraclelinux         7              bea04efc3319     5 weeks ago      206 MB
oraclelinux         latest         bea04efc3319     5 weeks ago      206 MB

Note how the Generic image is larger than the Developer image. That's because the Developer distribution contains less stuff inside, as described earlier. It will be up to Dev and Ops teams to decide which one to use, and how to build them.

In the next post, I will walk you through the process of building the 1221-domain sample image. If you have any questions, feel free to comment or tweet.


WebLogic on Docker Containers Series, Part 1

WebLogic 12.2.1 is certified to run Java EE 7 applications, supports Java SE 8 (since 12.1.3), and can be deployed on top of Docker containers. It also supports Multitenancy through the use of Partitions in the domain, enabling you to add another level of density to your environment. Undeniably, WebLogic is so much of a great option for Java EE based deployments that both developers and operations will benefit from. Even Adam Bien, Java EE Rockstar, has agreed with that. But you are here to play with WebLogic and Docker, so first, check these links about the certification and support: [Whitepaper] Oracle WebLogic Server on Docker Containers [Blog] WebLogic 12.2.1 Running on Docker Containers Dockerfiles, Scripts, and Samples on GitHub Docker Support Statement on MOS Doc.ID 2017645.1 Oracle Fusion Middleware Certification Pages Understanding WebLogic on Docker We recommend our customers and users to build their own image containing WebLogic and Oracle JDK installed without any domain configured. Perhaps a second image containing a basic domain. This is to guarantee easier reuse between DevOps teams. Let me describe an example: Ops would provide a base WebLogic image to Dev team, either with or without a pre-configured domain with a set of predefined shell scripts, and Devs would perform domain configuration and application deployment. Then Ops get a new image back and just run containers out of that image. It is a good approach, but certainly customers are free to think out of the box here and figure out what works best for them. TL;DR; Alright, alright... Do the following: 1 - Download docker-images' master.zip file repository directly and drop somewhere. $ unzip master.zip && mv docker-images-master docker-images 2 - Download WebLogic 12.2.1 for Developers and Oracle JDK 8 specific versions as indicated in Checksum.developer. Put them inside dockerfiles/12.2.1 folder. You will see placeholders there (*.download files). 3 - Build the installation image $ cd docker-images/OracleWebLogic/dockerfiles $ sh buildDockerImage.sh -d -v 12.2.1 4. Build the WebLogic Domain image $ cd ../samples/1221-domain $ docker build -t 1221-domain . 5. Run WebLogic from a Docker container $ docker run -d -p 8001:8001 1221-domain 6. Access Admin Console from your browser: http://localhost:8001/console Note that these steps are for your first image build only. Customers are encouraged to run a Docker Registry at their internal network, and store these images there just as they probably already do with Oracle software installers at some intranet FTP server. Important! Do not share binaries (either packed as a Docker image or not). * follow this series if you want to learn more of WebLogic on Docker. But please do read the entire post... :-) Creating your first WebLogic Docker Image The very first step to get started, is to checkout the docker-images project on GitHub: $ git checkout --depth=1 https://github.com/oracle/docker-images.git If you don't have or don't want to install the Git client, you can download the ZIP file containing the repository and extract it. Use your browser, or some CLI tool. Another thing to know before building your image, is that WebLogic comes in two flavors: one is the Developer distribution, smaller, and the other is the Generic distribution, for use in any environment. For the developer distribution, you have to download two files indicated inside Checksum.developer. 
If you want to build the Generic distribution instead of the Developer, see file Checksum.generic for further instructions, but tl;dr; you need two files again (or one if you have downloaded JDK already). The same instructions apply. Oracle WebLogic 12.2.1 Quick Installer for Developers (211MB) Oracle JDK 8u65 Linux x64 RPM (or latest as indicated in the Checksum file) Next step is to go to the terminal again and use the handy shell script buildDockerImage.sh, which will do some checks (like checksum) and select the proper Dockerfile (either .developer or .generic) for the specific version you want, although I do recommend you start with 12.2.1 from now on. $ cd docker-images/OracleWebLogic/dockerfiles $ sh buildDockerImage.sh -d -v 12.2.1   You may notice that it takes some time to copy files during the installation process. That's because WebLogic for Developers is compressed with pack200 to be a small download. But after you build this image, you can easily create any domain image on top of it, and you can also share your customized image using docker save/load. Next step is to create a WebLogic Domain. Creating the WebLogic Domain Image So far you have an image that is based on Oracle Linux 7 and has WebLogic 12.2.1 for Developers, and Oracle JDK installed. To run WebLogic, you must have a domain. Luckily WebLogic is mature enough to be very handy for DevOps operations, and has support for a scripting tool called WLST (you guessed: WebLogic Scripting Tool), based on Jython (Python for Java) that allows you to script any task that you'd perform through a wizard or the web interface, from installation to configuration to management to monitoring. I've shared some samples on the GitHub project and I'll cover a couple of them in this series of WebLogic on Docker, but for now, let's just create the basic, empty WebLogic Domain. Go to the samples folder and access folder 1221-domain. When there, just simply perform: $ cd docker-images/OracleWebLogic/samples/1221-domain $ docker build -t 1221-domain . This process is very fast, at least for this sample. Time may vary if your WLST for creating a domain performs more tasks. See the sample create-wls-domain.py to have some ideas. Starting WebLogic on Docker You now have the image 1221-domain ready to be used. All you need to do is to call: $ docker run -ti -p 8001:8001 1221-domain And now you can access the Admin Console on http://localhost:8001/console. Frequently Asked Questions, Part 1 - Can I write my own Dockerfiles and WLST scripts to install and create WebLogic? A: absolutely! That is the entire idea of sharing these scripts. These are excellent pointers on what can be done, and how. But customers and users are free to come up with their own files. And if you have some interesting approach to share, please send to me at bruno dot borges at oracle dot com. - Why is WebLogic Admin on port 8001 instead of default 7001? A: Well, it is a sample. It is to show what configurations you can do. The environment variable ADMIN_PORT, as well other configurations in the samples/1221-domain/Dockerfile are picked up by the create-wls-domain.py script while creating the domain. The WLST script will even use some defaults if these variables are not defined. Again, it's a sample. - What if I need to patch the WebLogic install? A: you do that by defining a new Dockerfile and apply the patch as part of the build process, to create a new base image version. Then you recreate your domain image that extends the new patched base image. 
You may also want to simply extend your existing domain image, apply the patch, and use that one; or you can modify your image by applying the patch in some existing container and then committing the container to a new image. There are different ways to do this, but patching a live container in place is certainly not one of them: it is a good idea to keep containers as disposable as possible, and you should always have an image from which you can create new, patched containers.
- What if I want a WebLogic cluster with Node Manager and Managed Servers? A: That works too. I'll cover it in this series.
- Can I build a Docker image with a deployed artifact? A: Yes. More on that in upcoming blog posts of this series.
- Can I have a Load Balancer in front of a Swarm of Docker containers? A: Yes. That will also be covered as part of this series.
I hope you are excited to learn more about WebLogic on Docker. Please follow this blog and my Twitter account for upcoming posts.


Java Rock Star Adam Bien Impressed by WebLogic 12.2.1

It is not an exaggeration to say Adam Bien is pretty close to a "household name" in the Java world. Adam is a long time Java enthusiast, author of quite a few popular books, Java Community Process (JCP) expert, Oracle ACE Director, official Oracle Java Champion and JavaOne conference Rock Star award winner. Adam most recently won the JCP member of the year award. His blog is amongst the most popular for Java developers.  Adam recently took WebLogic 12.2.1 for a spin and was impressed. Being a developer (not unlike myself) he focused on the full Java EE 7 support in WebLogic 12.2.1. He reported his findings to Java developers on his blog. He commented on fast startup, low memory footprint, fast deployments, excellent NetBeans integration and solid Java EE 7 compliance. You can read Adam's full write-up here. None of this of course is incidental. WebLogic is a mature product with an extremely large deployment base. With those strengths often comes the challenge of usability. Nonetheless many folks that haven't kept up-to-date with WebLogic evolution don't realize that usability and performance have long been a continued core focus. That is why folks like Adam are often pleasantly surprised when they take an objective fresh look at WebLogic. You can of course give WebLogic 12.2.1 a try yourself here. There is no need to pay anything just to try it out as you can use a free OTN developer license (this is what Adam used as per the instructions on his post). You can also use an official Docker image here. Solid Java EE support is of course the tip of the iceberg as to what WebLogic offers. As you are aware WebLogic offers a depth and breadth of proven features geared towards mission-critical, 24x7 operational environments that few other servers come close to. One of the best ways for anyone to observe this is taking a quick glance at the latest WebLogic documentation.


Technical

Even Applications can be Updated with ZDT Patching

Zero Downtime Patching enables a convenient method of updating production applications on WebLogic Server without incurring any application downtime or loss of session data for your end-users. This new feature may be especially useful for users who want to update multiple applications at the same time, or for those who cannot take advantage of the Production Redeployment feature due to various limitations or restrictions. Now there is a convenient alternative to complex application patching methods. This rollout is based on the process and mechanism for automating rollouts across a domain while allowing applications to continue to service requests. In addition to the reliable automation, the Zero Downtime Patching feature also combines the Oracle Traffic Director (OTD) load balancer and WebLogic Server to provide some advanced techniques for preserving active sessions, and even for handling incompatible session state during the patching process. To roll out an application update, follow these three simple steps.
1. Produce a copy of the updated application(s), then test and verify it. Note that the administrator is responsible for making sure that the updated application sources are distributed to the appropriate nodes. For stage mode, the updated application source needs to be available on the file system for the Admin Server to distribute the application source. For nostage and external stage modes, the updated application source needs to be available on the file system of each node.
2. Create a JSON-formatted file with the details of any applications that need to be updated during the rollout.
{"applications":[
  { "applicationName":"ScrabbleStage",
    "patchedLocation":"/scratch/wlsconfg/Domains/Domain1221/applications/ScrabbleStagev2.war",
    "backupLocation":"/scratch/wlsconfg/Domains/Domain1221/applications/ScrabbleStagev1.war" },
  { "applicationName":"ScrabbleNoStage",
    "patchedLocation":"/scratch/wlsconfg/Domains/Domain1221/applications/ScrabbleNoStagev2.war",
    "backupLocation":"/scratch/wlsconfg/Domains/Domain1221/applications/ScrabbleNoStagev1.war" },
  { "applicationName":"ScrabbleExternalStage",
    "patchedLocation":"/scratch/wlsconfg/Domains/Domain1221/applications/ScrabbleExternalStagev2.war",
    "backupLocation":"/scratch/wlsconfg/Domains/Domain1221/applications/ScrabbleExternalStagev1.war" }
]}
3. Simply run the application rollout using a WLST command like this one: rolloutApplications("Cluster1", "/pathTo/applicationRolloutProperties")
The Admin Server starts the rollout, which coordinates the rolling restart of each node in the cluster named "Cluster1". While the servers are shut down, the original application source is moved to the specified backup location, and the new application source is copied into place. Each server in turn is then started in admin mode. While a server is in admin mode, the application redeploy command is called for that specific server, causing it to reload the new source. Then the server is resumed to its original running state and serves the updated application. For more information about updating applications with Zero Downtime Patching, see the documentation.


Technical

WLS 12.2.1 launch - Servlet 3.1 new features

Introduction
The WLS 12.2.1 release supports the new features of the Servlet 3.1 specification. Servlet 3.1 is a major revision of the Servlet specification. It mainly introduces non-blocking IO and HTTP protocol upgrade into the Servlet container, for use in modern web application development. Non-blocking IO helps meet the ever increasing demand for Web Container scalability by increasing the number of connections that the Web Container can handle simultaneously: it allows developers to read data as it becomes available, or write data when it is possible to do so. This version also introduces several minor changes for security and functional enhancements.
1 Upgrade Processing
1.1 Description
In HTTP/1.1, the Upgrade general-header allows the client to specify the additional communication protocols that it supports and would like to use. If the server finds it appropriate to switch protocols, then the new protocols will be used in subsequent communication. The Servlet container provides an HTTP upgrade mechanism; however, the Servlet container itself does not have knowledge of the upgraded protocol. The protocol processing is encapsulated in the HttpUpgradeHandler, and data reading or writing between the Servlet container and the HttpUpgradeHandler is in byte streams. When an upgrade request is received, the Servlet can invoke the HttpServletRequest.upgrade method, which starts the upgrade process. This method instantiates the given HttpUpgradeHandler class. The returned HttpUpgradeHandler instance may be further customized. The application prepares and sends an appropriate response to the client. After exiting the service method of the Servlet, the Servlet container completes the processing of all filters and marks the connection to be handled by the HttpUpgradeHandler. It then calls the HttpUpgradeHandler's init method, passing a WebConnection to allow the protocol handler access to the data streams. The Servlet filters only process the initial HTTP request and response; they are not involved in subsequent communications. In other words, they are not invoked once the request has been upgraded. The HttpUpgradeHandler may use non-blocking IO to consume and produce messages. The developer is responsible for thread-safe access to the ServletInputStream and ServletOutputStream while processing the HTTP upgrade. When the upgrade processing is done, HttpUpgradeHandler.destroy is invoked.
1.2 Example
In this example, the client sends a request to the server. The server accepts the request, sends back the response, then invokes the HttpUpgradeHandler.init() method and continues the communication using a dummy protocol. The client prints the request and response headers during the handshake process.
Client
The client initiates the HTTP upgrade request.
@WebServlet(name = "ClientTest", urlPatterns = {"/"})
public class ClientTest extends HttpServlet {
  protected void processRequest(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
    String reqStr = "POST " + contextRoot + "/ServerTest HTTP/1.1" + CRLF;
    ...
    reqStr += "Upgrade: Dummy Protocol" + CRLF;
    // Create socket connection to ServerTest
    s = new Socket(host, port);
    input = s.getInputStream();
    output = s.getOutputStream();
    // Send request header with data
    output.write(reqStr.getBytes());
    output.flush();
  }
}
The header Upgrade: Dummy Protocol is an HTTP/1.1 header field set to Dummy Protocol in this example.
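Before moving to the server side, it may help to see how the client could consume the handshake response. The following is a minimal sketch, not part of the original sample, that reads the status line and headers of the server's 101 (Switching Protocols) response from the upgrade socket; the stream it is given is assumed to be s.getInputStream() from ClientTest above.
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;

// Hypothetical helper for the client side of the handshake: it reads the
// HTTP/1.1 status line and headers and stops at the blank line, after which
// the same connection carries the upgraded "Dummy Protocol" traffic.
public final class UpgradeHandshakeReader {
    public static void readHandshake(InputStream in) throws IOException {
        // NOTE: a BufferedReader may read ahead past the headers; a production
        // client would read byte by byte so it does not consume protocol data.
        BufferedReader reader = new BufferedReader(new InputStreamReader(in, "ISO-8859-1"));
        String statusLine = reader.readLine();          // e.g. "HTTP/1.1 101 Switching Protocols"
        System.out.println("Status: " + statusLine);
        String header;
        while ((header = reader.readLine()) != null && !header.isEmpty()) {
            System.out.println("Header: " + header);    // e.g. "Upgrade: Dummy Protocol"
        }
    }
}
In ClientTest this could be called right after output.flush(), for example UpgradeHandshakeReader.readHandshake(input).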
The server decides whether to accept the protocol upgrade request. Server ServerTest.java checks the Upgrade field in the request header. When it accepts the upgrade requests, the server instantiates ProtocolUpgradeHandler, which is the implementation of HttpUpgradeHandler. If the server does not support the Upgrade protocol specified by the client, it sends a response with a 404 status. @WebServlet(name="ServerTest", urlPatterns={"/ServerTest"})public class ServerTest extends HttpServlet {protected void processRequest(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {// Checking request header if ("Dummy Protocol".equals(request.getHeader("Upgrade"))){ ... ProtocolUpgradeHandler handler = request.upgrade(ProtocolUpgradeHandler.class); } else { response.setStatus(400); ... } } ...} ProtocolUpgradeHandler is the implementation of HttpUpgradeHandler, which processes the upgrade request and switches the communication protocol. The server checks the value of the Upgrade header to determine if it supports that protocol. Once the server accepts the request, it must use the Upgrade header field within a 101 (Switching Protocols) response to indicate which protocol(s) are being switched. Implementation of HttpUpgradeHandler public class ProtocolUpgradeHandler implements HttpUpgradeHandler { @Overridepublic void init(WebConnection wc) {this.wc = wc;try { ServletOutputStream output = wc.getOutputStream(); ServletInputStream input = wc.getInputStream(); Calendar calendar = Calendar.getInstance(); DateFormat dateFormat = new SimpleDateFormat("MM/dd/yyyy HH:mm:ss");// Reading the data into byte array input.read(echoData);// Setting new protocol header String resStr = "Dummy Protocol/1.0 " + CRLF; resStr += "Server: Glassfish/ServerTest" + CRLF; resStr += "Content-Type: text/html" + CRLF; resStr += "Connection: Upgrade" + CRLF; resStr += "Date: " + dateFormat.format(calendar.getTime()) +CRLF; resStr += CRLF;// Appending data with new protocol resStr += new String(echoData) + CRLF;// Sending back to client ... output.write(resStr.getBytes()); output.flush(); } catch (IOException ex) { Logger.getLogger(ProtocolUpgradeHandler.class.getName()).log(Level.SEVERE, null, ex); } ... } @Overridepublic void destroy() { ...try { wc.close(); } catch (Exception ex) { Logger.getLogger(ProtocolUpgradeHandler.class.getName()).log(Level.SEVERE, "Failed to close connection", ex); } ... }} The init() method sets up the new protocol headers. The new protocol is used for subsequent communications. This example uses a dummy protocol. The destroy() method is invoked when the upgrade process is done. This example shows the handshake process of the protocol upgrade. After the handshake process, the subsequent communications use the new protocol. This mechanism only applies to upgrading application-layer protocols upon the existing transport-layer connection. This feature is most useful for Java EE Platform providers. 2 Non-blocking IO 2.1 Description Non-blocking request processing in the Web Container helps improve the ever increasing demand for improved Web Container scalability, increase the number of connections that can simultaneously be handled by the Web Container. Non blocking IO in the ServletContainer allows developers to read data as it becomes available or write data when possible to do so. Non-blocking IO only works with async request processing in Servlets and Filters, and upgrade processing. 
Otherwise, an IllegalStateException must be thrown when ServletInputStream's setReadListener is invoked. 2.2 Non-Blocking Read Example Servlet In ServerServlet, the server receives the request, starts the asynchronous processing of the request, and registers a ReadListener @WebServlet(name = "ServerServlet", urlPatterns = {"/server"}, asyncSupported = true)public class ServerServlet extends HttpServlet { .....protected void service(HttpServletRequest request, HttpServletResponse response)throws ServletException, IOException { response.setContentType("text/html;charset=UTF-8");// async read final AsyncContext context = request.startAsync();final ServletInputStream input = request.getInputStream();final ServletOutputStream output = response.getOutputStream(); input.setReadListener(new ReadListenerImpl(input, output, context)); } Note: Non-blocking I/O only works with asynchronous request processing in servlets and filters or upgrade handler. See Servlet Spec 3.2 for more details. Read Listener Implementation public class ReadListenerImpl implements ReadListener {private ServletInputStream input;private ServletOutputStream output;private AsyncContext context;private StringBuilder sb = new StringBuilder();public ReadListenerImpl(ServletInputStream input, ServletOutputStream output, AsyncContext context) {this.input = input;this.output = output;this.context = context; } /** * do when data is available to be read. */ @Overridepublic void onDataAvailable() throws IOException {while (input.isReady()) { sb.append((char) input.read()); } } /** * do when all the data has been read. */ @Overridepublic void onAllDataRead() throws IOException {try { output.println("ServerServlet has received '" + sb.toString() + "'."); output.flush(); } catch (Exception e) { e.printStackTrace(); } finally { context.complete(); } } /** * do when error occurs. */ @Overridepublic void onError(Throwable t) { context.complete(); t.printStackTrace(); } The onDataAvailable() method is invoked when data is available to be read from the input request stream. The container subsequently invokes the read() method if and only if isReady() returns true. The onAllDataRead() method is invoked when all the data from the request has been read. The onError(Throwable t) method is invoked if there is any error or exceptions occurs while processing the request. The isReady() method returns true if the underlying data stream is not blocked. At this point, the container invokes the onDataAvailable() method.Users can customize the constructor to handle different parameters. Usually, the parameters are ServletInputStream, ServletOutputStream, or AsyncContext. This sample uses all of them to implement the ReadListener interface. 2.3 Non-Blocking Write Example Servlet In ServerServlet.java, after receiving a request, the servlet starts the asynchronous request processing and registers a WriteListener. 
protected void processRequest(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
  response.setContentType("text/html;charset=UTF-8");
  // async write
  final AsyncContext context = request.startAsync();
  final ServletOutputStream output = response.getOutputStream();
  output.setWriteListener(new WriteListenerImpl(output, context));
}
Write Listener Implementation
public class WriteListenerImpl implements WriteListener {
  private ServletOutputStream output;
  private AsyncContext context;
  public WriteListenerImpl(ServletOutputStream output, AsyncContext context) {
    this.context = context;
    this.output = output;
  }
  /**
   * do when the data is available to be written
   */
  @Override
  public void onWritePossible() throws IOException {
    if (output.isReady()) {
      output.println("<p>Server is sending back 5 hello...</p>");
      output.flush();
    }
    for (int i = 1; i <= 5 && output.isReady(); i++) {
      output.println("<p>Hello " + i + ".</p>");
      output.println("<p>Sleep 3 seconds simulating data blocking.<p>");
      output.flush();
      // sleep on purpose
      try {
        Thread.sleep(3000);
      } catch (InterruptedException e) {
        // ignore
      }
    }
    output.println("<p>Sending completes.</p>");
    output.flush();
    context.complete();
  }
  /**
   * do when error occurs.
   */
  @Override
  public void onError(Throwable t) {
    context.complete();
    t.printStackTrace();
  }
}
The onWritePossible() method is invoked when data can be written to the response stream. The listener should write only while isReady() returns true; isReady() returns true when data can be written to the underlying stream without blocking. The onError(Throwable t) method is invoked if any error or exception occurs while writing to the response.
4 Session ID Change
4.1 Description
The Servlet 3.1 specification adds a new listener interface and a new request method for avoiding session fixation, and the WebLogic Servlet container implements this session ID change processing for security reasons.
4.2 Session ID Change Example
In this example application, the SessionIDChangeListener class overrides the sessionIdChanged method, which receives a notification whenever the session ID of a session has been changed. The SessionIDChangeServlet changes the value of the session ID by invoking javax.servlet.http.HttpServletRequest.changeSessionId().
Servlet
@WebServlet(name = "SessionIDChangeServlet", urlPatterns = {"/SessionIDChangeServlet"})
public class SessionIDChangeServlet extends HttpServlet {
  protected void processRequest(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
    response.setContentType("text/html;charset=UTF-8");
    PrintWriter out = response.getWriter();
    HttpSession session = request.getSession(true);
    try {
      StringBuilder sb = new StringBuilder();
      sb.append("<h3>Servlet SessionIDChangeTest at " + request.getContextPath() + "</h3><br/>");
      sb.append("<p>The current session id is: &nbsp;&nbsp;" + session.getId() + "</p>");
      /* Call changeSessionId() method. */
      request.changeSessionId();
      sb.append("<p>The current session id has been changed, now it is: &nbsp;&nbsp;" + session.getId() + "</p>");
      request.setAttribute("message", sb.toString());
      request.getRequestDispatcher("response.jsp").forward(request, response);
    } finally {
      out.close();
    }
  }
  ....
}
The servlet gets a session object from the request; a session ID is generated at that time. After request.changeSessionId() is called, a new session ID is generated to replace the old one on the session object.
HttpSessionIdListener Implementation
@WebListener
public class SessionIDChangeListener implements HttpSessionIdListener {
  @Override
  public void sessionIdChanged(HttpSessionEvent event, String oldSessionId) {
    System.out.println("[Servlet session-id-change example] Session ID " + oldSessionId + " has been changed");
  }
}
The implementation's sessionIdChanged method is triggered when request.changeSessionId() is called.
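As a usage note, the main reason to change the session ID is to defeat session fixation immediately after authentication. The sketch below is illustrative only and is not part of the sample: a hypothetical login servlet authenticates with HttpServletRequest.login() and then calls changeSessionId(), which in turn triggers the SessionIDChangeListener above. The URL pattern and form parameter names are assumptions.
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = {"/login"})
public class LoginServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // Any session created before authentication could have been planted by an
        // attacker, so authenticate first against the configured security realm...
        request.login(request.getParameter("username"), request.getParameter("password"));
        // ...then switch to a fresh session ID. changeSessionId() requires an
        // existing session, so make sure one is there before calling it.
        request.getSession(true);
        String newId = request.changeSessionId();
        response.getWriter().println("Logged in; new session id: " + newId);
    }
}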


Messaging

Introducing WLS JMS Multi-tenancy

Introduction Multi-tenancy (MT) is the main theme of the WebLogic Server 12.2.1 release. It enhances the Oracle Platform for Software-as-a-Service (SaaS) and Platform-as-a-Service (PaaS) use cases. The main benefits of WebLogic multi-tenancy are increased density, tenant isolation, and simplified cloud configuration and management. This article introduces multi-tenancy support for WebLogic JMS, the messaging component in WebLogic Server.    Key MT Concepts Some of you may have already learned from other blogs (for example Tim’s blog about Domain Partitions for Multi Tenancy) about some of the key concepts in WebLogic MT. But for the benefit of a broader audience, here is a quick review of those concepts before we get into JMS specifics. WebLogic Multi-tenancy introduces the concepts of domain partition (also known as partition), resource group (RG), and resource group template (RGT).   A Partition is conceptually a slice of a WebLogic domain, where resources and applications for different tenants can be configured and deployed in isolation on the same WebLogic server or in the same cluster. This improves overall density. Partitions define the isolation boundaries for JNDI, security, runtime MBeans, application persistent data, work managers and logging.  Furthermore, Partitions running on the same server instance have their own lifecycle, for example, a partition can be shut down at any time without impacting other partitions. A Resource Group is simply a collection of functionally related resources and applications. A RG can be targeted and managed independently of other resource groups in the same partition. Resource groups can be defined not only inside a partition, but also at the domain level. As with partitions, RGs in the same partition (or at the domain level) that are running on the same server instance have their own lifecycle. A Resource Group Template provides a templating mechanism to reduce the administrative overhead of configuring WebLogic resources and applications for SaaS use cases where the same resources and applications need to run in multiple partitions. It offers a configure-once-and-use-everywhere capability, where a common set of configuration artifacts can be specified in a RGT, and can then be referenced from RGs in different partitions. A RGT is not targetable, and resources in a RGT will not deploy unless the RGT is referenced by a deployed RG. Note that the resources and applications configured or deployed in a partition (directly inside RGs or via RGs referencing a RGT) are scoped to that partition. Understanding JMS Resources in MT In a similar way to other WebLogic configuration artifacts, JMS resources such as JMS servers, SAF agents, path services, persistent stores, messaging bridges, JMS system modules, app-deployment JMS modules, Java EE 7 resource definition modules, and JMS applications can all now be configured and deployed in a RG, either directly or via RGTs, as well as in the ‘classic’ way, which is always directly at the domain level. Note that it is perfectly acceptable to combine both partition and ‘classic’ configuration together in the same domain.    Resources and applications in different partitions are isolated from one another. For example, you can configure a JMS destination with the same JNDI name in multiple partitions running in the same cluster, and these destinations will be managed via independent runtime MBean instances, and can be independently secured via partition-specific security realms. 
In addition to non-persistent state, the persistent data (for example, persistent messages and durable subscriptions) in such JMS destinations is also isolated from one another.
Configuring JMS Resources in MT
The following configuration snippets show how JMS resources configured in a multi-tenant environment differ from traditional non-MT JMS configuration.   As you can see, partition-scoped JMS resources are embedded in a resource group in a partition (alternatively, they can be embedded in a Resource Group Template, which is in turn referenced by a Resource Group). In addition, resources in a resource group are never individually targeted. Instead, the whole resource group is targeted via a virtual target, which is itself targeted in the normal way.  If a RG is targeted to a virtual target that is in turn targeted to a WL cluster, all JMS resources and applications in the RG will also be targeted to that cluster. As we will see later, a virtual target not only provides the targeting information of a RG, it also defines the access point of a partition. For more information about resource group targeting and virtual targets, check out Joe's blog about Partition Targeting and Virtual Targets. You might have noticed that I did not discuss configuring individual JMS resources for each server in a WL cluster, nor did I mention configuring “migratable targets” to add high availability. I have good news for you! Neither is needed or even supported in MT. They have been replaced with greatly enhanced WebLogic JMS cluster-targeting and HA support; my colleague Kathiravan blogs about it in 12.2.1 WebLogic JMS: Dynamic Scalability and High Availability Made Simple. Although system level JMS resources (such as JMS servers, SAF agents, persistent stores, messaging bridges, path services, and JMS modules) are scoped differently in a MT configuration, their respective attributes are specified in exactly the same way as in a non-MT configuration. Various validation and targeting rules are enforced to ensure that WebLogic MT JMS configuration is isolated, self contained, and easy to manage. One basic and high-level rule in configuring JMS in MT is that a JMS configuration artifact may only reference other configuration artifacts that are in the same scope. For example, a resource group scoped JMS server can only reference a persistent store that is also defined in the same resource group. These rules are enforced by configuration validation checks and by errors and warnings that are logged at runtime.
Accessing JMS Resources in MT
A JMS application designed for multi-tenancy accesses JMS resources in the same way as ‘classic’ JMS applications, by looking up JMS resources in a JNDI name space. The difference is that in a MT environment, a WebLogic JNDI InitialContext is associated with a particular scope (i.e. the domain or a partition) when it is created. A MT application can have multiple JNDI contexts that refer to the same WebLogic cluster but are scoped to different partitions. An initial context, once created, sticks to its scope until it is closed. This means that all JNDI operations using a partition-scoped JNDI context instance are performed using the partition-specific area of the JNDI space. The scope of a JNDI context is determined by the “provider URL” supplied when the initial context is created.
Once an application successfully establishes a partition-scoped JNDI initial context, it can use this context to look up JMS connection factories and destinations in the same way as in a non-MT environment; except that now the application can only access partition-scoped JMS resources. Let us look at some specific use cases and see how an application can establish an initial context to a particular partition in each of the use cases. Use Case 1 - Local Intra-partition Access When a Java EE application needs to access a JMS destination in its local partition in the same cluster (or on the same non-clustered managed server), the application can just create an initial context without supplying a provider URL. Example 1: Null Provider URL Context ctx = new InitialContext(); Object cf = ctx.lookup("jms/mycf1"); Object dest = ctx.lookup("jms/myqueue1");   This initial context will be scoped to the partition in which the application is deployed. Use Case 2 - Local Inter-partition Access If a Java EE application needs to access a JMS destination (or other resource) in a different partition than the partition to which it is deployed, and the partition is in the same cluster (or on the same managed server) then it can use either a partition-scoped JNDI name or a provider URL with the "local:" protocol. Using Partition Scoped JNDI Names A JNDI name can be decorated with a namespace prefix to indicate its scope. Example 2.1: given the partition configuration in the above examples, the following code can be used to access a JMS destination that is configured in "partition1". Context ctx = new InitialContext(); Object cf = ctx.lookup("partition:partition1/jms/mycf1"); Object dest = ctx.lookup("partition:partition1/jms/myqueue1");   Similarly a Java EE application in a partition can access a domain level JNDI resource in the same cluster using a partition scoped initial context with the "domain:" namespace prefix, for example "domain:jms/mycf2". Using a provider URL with the "local:" Protocol Alternatively, one can specify a "local:" provider URL when creating an initial context to a specific partition. Example 2.2: given the partition configuration in the above examples, the following code can be used to access a JMS destination that is configured in "partition1". Hashtable<String, String> env = new Hashtable<>(); env.put(Context.PROVIDER_URL, "local://?partitionName=partition1"); env.put(Context.SECURITY_PRINCIPAL, "weblogic"); env.put(Context.SECURITY_CREDENTIALS, "welcome1"); Context ctx = new InitialContext(env); Object cf = ctx.lookup("jms/mycf1"); Object dest = ctx.lookup("jms/myqueue1");   The initial context will be associated with "partition1" for its lifetime. Similarly, a Java EE application in a partition can access a domain level JNDI resource in the same cluster using “local://?partitionName=DOMAIN” as the provider URL.  Use Case 3 - General Partition Access A third way for a Java EE application or client to access a JMS destination in a partition is to use a "partition URL". A partition URL is intended to be used when the JMS destination is in a remote cluster (or on a remote non-clustered managed server).  A typical "partition URL" is t3://hostname:port, or t3://host:port/URI-prefix. Partition URLs may only be used by Java EE applications or clients using WLS 12.2.1 or later (older versions should use dedicated partition ports: see below). 
Example 3: given the partition configuration in the examples above, the following code can be used to access a JMS destination that is configured in "partition1". Note that "/partition1" in the provider URL below is the uri-prefix configured in the VirtualTarget for partition1. Hashtable<String, String> env = new Hashtable<>(); env.put(Context.PROVIDER_URL, "t3://abcdef00:7001/partition1"); env.put(Context.SECURITY_PRINCIPAL, "weblogic"); env.put(Context.SECURITY_CREDENTIALS, "welcome1"); Context ctx = new InitialContext(env); Object cf = ctx.lookup("jms/mycf1"); Object dest = ctx.lookup("jms/myqueue1");   Although it is not a best practice, a “partition URL” can also be used to access another partition in the same JVM/cluster. Use Case 4 – Dedicated Partition Ports One last option is to setup dedicated ports for each partition, and configuring these is described in Joe's blog about Partition Targeting and Virtual Targets. Configuring dedicated partition ports enables applications that use ‘classic’ URLs to access a partition, and is mainly intended to enable clients and applications that are running on releases older than 12.2.1 to access partitions in a 12.2.1 or later domain. Such older clients and applications do not support the use of host name and URI-prefix to access a partition. An attempt to use them from an older client will simply fail or may silently access the domain level JNDI name space. What’s next? I hope this article helps you to understand the basics of JMS MT! It is time to start exploring this new and exciting capability. You can find more information for messaging in MT in the Configuring Messaging chapter of Oracle® Fusion Middleware Using WebLogic Server Multitenant.
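Pulling the lookup examples together, here is a minimal sketch (not from the original article) that sends a message to the partition-scoped queue with the JMS 2.0 simplified API. The provider URL, credentials, and JNDI names are simply the ones assumed in the examples above; a standalone client also needs WebLogic's initial context factory and the WebLogic client jars on its classpath.
import java.util.Hashtable;
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.jms.JMSContext;
import javax.naming.Context;
import javax.naming.InitialContext;

public class PartitionJmsSender {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        env.put(Context.PROVIDER_URL, "t3://abcdef00:7001/partition1"); // host:port plus the virtual target's uri-prefix
        env.put(Context.SECURITY_PRINCIPAL, "weblogic");
        env.put(Context.SECURITY_CREDENTIALS, "welcome1");
        Context ctx = new InitialContext(env);
        try {
            ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/mycf1");
            Destination queue = (Destination) ctx.lookup("jms/myqueue1");
            // Everything done through this context, including the JMS connection it
            // produces, is scoped to partition1.
            try (JMSContext jms = cf.createContext()) {
                jms.createProducer().send(queue, "hello from partition1");
            }
        } finally {
            ctx.close();
        }
    }
}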


Technical

New EJB 3.2 feature - Modernized JCA-based Message-Driven Bean

WebLogic Server 12.2.1 is a fully compatible implementation of Java EE 7 specification. One of the big improvements in EJB container in this release of WebLogic Server is that, a message-driven bean is able to implement a listener interface with no methods. When such a no-methods listener interface is used, all non-static public methods of the bean class (and of the bean class's super classes except java.lang.Object) are exposed as message listener methods. Let's develop a sample step by step. The sample application assumes that an e-commercial website sends the buy/sell events to JMS Queues - buyQueue and sellQueue - respectively when a product is sold or bought. The connector listens on the queues, and execute message-driven bean's non-static public methods to persist the records in events to persistent store. 1. Define a no-methods message listener interface In our sample, the message listener interface NoMethodsListenerIntf has no methods in it. List 1 - No-methods message listener interface public NoMethodsListenerIntf { } 2. Now define the bean class In message-driven bean class, there are two non-static public methods - productBought and productSold, so they are both exposed as message listener methods. When connector gets a product-sold event from sellQueue, it will then invoke message-driven bean's productSold method, and likewise for product-bought event. We annotate productSold method and productBought method with @EventMonitor, indicating that they are the target methods that connector should execute. These two methods will persist the records into database or other persistent store. You can define more non-static public methods, but which ones should be executed by connector are up to connector itself. List 2 - Message-Driven Bean @MessageDriven(activationConfig = {   @ActivationConfigProperty(propertyName = "resourceAdapterJndiName", propertyValue = "eis/TradeEventConnector") }) public class TradeEventProcessingMDB implements NoMethodsListenerIntf {   @EventMonitor(type = "Retailer")   public void productSold(long retailerUUID, long productId) {     System.out.println("Retailer [" + retailerUUID + "], product [" + productId + "] has been sold!");     // persist to database   }   @EventMonitor(type = "Customer")   public void productBought(long customerId, long productId) {     System.out.println("Customer [" + customerId + "] has bought product [" + productId + "]!");     // persist to database   } } The EventMonitor annotation is defined as below: List 3 - EventMonitor annotation @Target({ ElementType.METHOD }) @Retention(RetentionPolicy.RUNTIME) public @interface EventMonitor {   public String type(); } When this message-driven bean is deployed onto WebLogic Server, EJB container detects that it's an EJB 3.2 compatible message-driven bean. If you forgot to specify a value for resourceAdapterJndiName, WebLogic Server will try to locate a suitable connector resource, for example, a connector that is declaring support of the same no-methods message listener interface (in the current application or server-wide connector that is global-accessible). If a suitable connector is found and associated with message-driven bean, the connector can retrieve the bean class definition and then analyze. 3. Developing a connector that is used to associate with message-driven bean In connector application, we retrieve the bean class definition via getEndpointClass() method of MessageEndpointFactory, and then inspect every method if it's annotated with @EventMonitor. 
After that, we create a javax.jms.MessageListener with the target method of the bean class to listen on the event queues. List 4 - trade event connector @Connector(     description = "This is a sample resource adapter",     eisType = "Trade Event Connector",     vendorName = "Oracle WLS",     version = "1.0") public class TradeEventConnector implements ResourceAdapter, Serializable {   // jms related resources   ......   private static final String CALLBACK_METHOD_TYPE_RETAILER = "Retailer";   private static final String CALLBACK_METHOD_TYPE_CUSTOMER = "Customer";   @Override   public void endpointActivation(MessageEndpointFactory mef, ActivationSpec activationSpec)       throws ResourceException {     try {       Class<?> beanClass = mef.getEndpointClass(); // retrieve bean class definition       ......       jmsContextForSellingEvent = ...; // create jms context       jmsContextForBuyingEvent = ...;       jmsConsumerForSellingEvent = jmsContextForSellingEvent.createConsumer(sellingEventQueue);       jmsConsumerForBuyingEvent = jmsContextForBuyingEvent.createConsumer(buyingEventQueue);       jmsConsumerForSellingEvent.setMessageListener(createTradeEventListener(mef, beanClass, CALLBACK_METHOD_TYPE_RETAILER));       jmsConsumerForBuyingEvent.setMessageListener(createTradeEventListener(mef, beanClass, CALLBACK_METHOD_TYPE_CUSTOMER));       jmsContextForSellingEvent.start();       jmsContextForBuyingEvent.start();     } catch (Exception e) {       throw new ResourceException(e);     }   }   private MessageListener createTradeEventListener(MessageEndpointFactory mef, Class<?> beanClass, String callbackType) {     for (Method m : beanClass.getMethods()) {       if (m.isAnnotationPresent(EventMonitor.class)) {         EventMonitor eventMonitorAnno = m.getAnnotation(EventMonitor.class);         if (callbackType.equals(eventMonitorAnno.type())) {           return new JmsMessageEventListener(mef, m);         }       }     }     return null;   }   @Override   public void endpointDeactivation(MessageEndpointFactory mef, ActivationSpec spec) {     // deactivate connector   }   ...... } The associated activation spec for the connector is defined as below: List 5 - the activation spec @Activation(     messageListeners = {NoMethodsListenerIntf.class}   ) public class TradeEventSpec implements ActivationSpec, Serializable {   ...... } 4. Developing a message listener to listen on the event queue. When message listener's onMessage() is invoked, we create a message endpoint via MessageEndpointFactory, and invoke the target method on this message endpoint. 
List 6 - jms message listener public class JmsMessageEventListener implements MessageListener {   private MessageEndpointFactory endpointFactory;   private Method targetMethod;   public JmsMessageEventListener(MessageEndpointFactory mef, Method executeTargetMethod) {     this.endpointFactory = mef;     this.targetMethod = executeTargetMethod;   }   @Override   public void onMessage(Message message) {     MessageEndpoint endpoint = null;     String msgText = null;     try {       if (message instanceof TextMessage) {         msgText = ((TextMessage) message).getText();       } else {         msgText = message.toString();       }       long uid = Long.parseLong(msgText.substring(0, msgText.indexOf(",")));       long pid = Long.parseLong(msgText.substring(msgText.indexOf(",") + 1));       endpoint = endpointFactory.createEndpoint(null);       endpoint.beforeDelivery(targetMethod);       targetMethod.invoke(endpoint, new Object[]{uid, pid});       endpoint.afterDelivery();     } catch (Exception e) {       // log exception       System.err.println("Error when processing message: " + e.getMessage());     } finally {       if (endpoint != null) {         endpoint.release();       }     }   } } 5. Verify the application We assume that the syntax of the event is composed of two digits separated with ",", for example, 328365,87265. The former digit is customer or retailer id, and the latter digit is product id. Now sending such events to the event queues, you'll find that they are persisted by message-driven bean.
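To drive the verification step, something has to put the comma-separated events onto the event queues. Below is a minimal, hypothetical sender, not part of the sample code; the JNDI names jms/TradeCF and jms/sellQueue are placeholders for whatever connection factory and queue your domain actually defines for the connector.
import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.Queue;
import javax.naming.InitialContext;

// Publishes one "product sold" event in the "<retailerId>,<productId>" format
// that JmsMessageEventListener.onMessage() parses.
public class TradeEventSender {
    public static void main(String[] args) throws Exception {
        // Assumes WebLogic JNDI environment properties (provider URL, credentials)
        // are supplied via jndi.properties or system properties.
        InitialContext ctx = new InitialContext();
        try {
            ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/TradeCF");
            Queue sellQueue = (Queue) ctx.lookup("jms/sellQueue");
            try (JMSContext jms = cf.createContext()) {
                jms.createProducer().send(sellQueue, "328365,87265"); // retailer 328365, product 87265
            }
        } finally {
            ctx.close();
        }
    }
}
After the connector dispatches the message, TradeEventProcessingMDB.productSold() is invoked with these two values and persists the record as described above.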



WLS JNDI Multitenancy

One of the most important features introduced in WebLogic Server 12.2.1 is multi-tenancy. Before WLS 12.2.1, one WLS domain was used by a single tenant. Since WLS 12.2.1, a WLS domain can be divided into multiple partitions so that tenants can use different partitions of one WLS domain; multiple tenants can then share one WLS domain without influencing each other. Isolation of resources between partitions is therefore key, and since JNDI is a common way to access these resources, the main goal for JNDI in WLS 12.2.1 is to isolate JNDI resources.
Before WLS 12.2.1, there was only one global JNDI tree per WLS domain. It is difficult to support multiple partitions with a single global JNDI tree, because each partition requires a unique, isolated namespace. For example, multiple partitions may use the same JNDI name to bind or look up their own JNDI resources, which would result in a NameAlreadyBoundException. To isolate JNDI resources in different partitions, every partition has its own global JNDI tree since WLS 12.2.1, so a tenant can operate on JNDI resources in one partition without name conflicts with another partition. The application-scoped JNDI tree is only visible inside the application, so it is naturally isolated; there is no change for the application-scoped JNDI tree in WLS 12.2.1. Let us see how to access JNDI resources in a partition.
Access JNDI resources in a partition
To access JNDI resources in a partition, we need to add partition information to the provider URL property when creating the InitialContext.
Runtime environment:
    Managed servers:    ms1, ms2
    Cluster:            managed server ms1, managed server ms2
    Virtual targets:    VT1 targeted to managed server ms1, VT2 targeted to the cluster
    Partitions:         Partition1 has available target VT1, Partition2 has available target VT2
We add partition1 information to the properties when creating the InitialContext to access JNDI resources in partition1.
    Hashtable<String, String> env = new Hashtable<>();
    env.put(Context.PROVIDER_URL, "t3://ms1:7001/partition1");
    env.put(Context.SECURITY_PRINCIPAL, "weblogic");
    env.put(Context.SECURITY_CREDENTIALS, "welcome1");
    Context ctx = new InitialContext(env);
    Object c = ctx.lookup("jdbc/ds1");
Partition2 runs in the cluster, so we can use the cluster address format in the properties when creating the InitialContext.
    Hashtable<String, String> env = new Hashtable<>();
    env.put(Context.PROVIDER_URL, "t3://ms1:7001,ms2:7003/partition2");
    env.put(Context.SECURITY_PRINCIPAL, "weblogic");
    env.put(Context.SECURITY_CREDENTIALS, "welcome1");
    Context ctx = new InitialContext(env);
    Object c = ctx.lookup("jdbc/ds1");
In WebLogic, we can create a Foreign JNDI provider to link to JNDI resources in another server. In WLS 12.2.1, we can also use a Foreign JNDI provider to link to JNDI resources in a specified partition by adding partition information to the configuration. This partition information, including the provider URL, user, and password, is used to create the JNDI context. The following is an example of a Foreign JNDI provider configuration in partition1. This provider links to partition2.
<foreign-jndi-provider-override>
  <name>jndi_provider_rgt</name>
  <initial-context-factory>weblogic.jndi.WLInitialContextFactory</initial-context-factory>
  <provider-url>t3://ms1:7001,ms2:7003/partition2</provider-url>
  <password-encrypted>{AES}6pyJXtrS5m/r4pwFT2EXQRsxUOu2n3YEcKJEvZzxZ7M=</password-encrypted>
  <user>weblogic</user>
  <foreign-jndi-link>
    <name>link_rgt_2</name>
    <local-jndi-name>partition_Name</local-jndi-name>
    <remote-jndi-name>weblogic.partitionName</remote-jndi-name>
  </foreign-jndi-link>
</foreign-jndi-provider-override>
Stickiness of the JNDI Context
When a JNDI context is created, it is associated with a specific partition, and all subsequent JNDI operations are performed against that partition's JNDI tree, not the current partition's tree. The associated partition remains the same even if the context is used by a different thread than the one that created it. If the provider URL property is set in the environment when the JNDI context is created, the partition specified in the provider URL is associated; otherwise, the JNDI context is associated with the current partition.
Life cycle of the partition JNDI service
Before WLS 12.2.1, the JNDI service life cycle was the same as that of the WebLogic server. In WLS 12.2.1, every partition owns its own global JNDI tree, so the JNDI service life cycle matches the partition's. As soon as a partition starts up, the JNDI service of that partition is available, and when the partition shuts down, the JNDI service of that partition is destroyed.
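The sketch below illustrates the stickiness described above; it is for illustration only and reuses the URL, credentials, and JNDI name from the earlier examples. The context is created against partition2 and keeps resolving names in partition2's JNDI tree even when it is handed to another thread.
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;

public class PartitionStickinessDemo {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        env.put(Context.PROVIDER_URL, "t3://ms1:7001,ms2:7003/partition2");
        env.put(Context.SECURITY_PRINCIPAL, "weblogic");
        env.put(Context.SECURITY_CREDENTIALS, "welcome1");
        final Context partition2Ctx = new InitialContext(env);

        // The partition association is fixed when the context is created, so this
        // lookup still resolves against partition2's JNDI tree on another thread.
        Thread worker = new Thread(() -> {
            try {
                Object ds = partition2Ctx.lookup("jdbc/ds1");
                System.out.println("Looked up in partition2: " + ds);
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        worker.start();
        worker.join();

        partition2Ctx.close(); // the association lasts until the context is closed
    }
}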



Monitoring FAN Events

fanWatcher is a sample program to print the Oracle Notification Service (ONS) Fast Application Notification (FAN) event information. These events provide information regarding load balancing, and service and instance up and down events. This information is automatically processed by WebLogic Server Active GridLink and UCP on the mid-tier. For more information about FAN events, see this link.  The program described here is an enhancement of the earlier program described in that white paper  This program can be modified to work as desired to monitor events and help diagnose problems with configuration. The code is available this link. To run this Java application, you need to be set up to run a JDK and you need ons.jar and ojdbcN.jar in the CLASSPATH. The CLASSPATH is set differently depending on whether you are running on the database server or on the mid-tier with WebLogic Server or UCP. Make sure to use the correct path separator for CLASSPATH on your platform (';' for Windows, ':' otherwise). The general format for the command line is java fanWatcher config_type [eventtype … ] Event Type Subscription The event type sets up the subscriber to only return limited events. You can run without specifying the event type to see what types of events are returned. When you specify an event name on the command line, the program sets up the subscriber to have a simple match on the event. If the specified pattern occurs anywhere in a notification's header, then the comparison statement evaluates true. The most basic pattern match is an empty string (not null), which matches all notifications. The pattern is enclosed in double quotes (required) and prefixed with “%” to be case insensitive. Event processing is more complete than shown in this sample. The subscription string is generally composed of one or more comparison statements, each logically related to another with the boolean operators '|' for an OR relationship or '&' for an AND relationship. Parentheses are used to group these comparison statements, and the '!' operator placed before an opening parenthesis negates the evaluated value within. Each individual comparison statement must be enclosed within double quotes ('"'), and can take one of two basic forms: "pattern" or "name=value". A "pattern" is a simple string match of the notification header: if the specified "pattern" occurs anywhere in a notification's header, then the comparison statement evaluates true. The most basic pattern match is an empty string (not NULL) which matches all notifications. The "name=value" format compares the ONS notification header or property name with the name against the specified value, and if the values match, then the comparison statement evaluates true. If the specified header or property name does not exist in the notification the comparison statement evaluates false. A comparison statement will be interpreted as a case insensitive when a percent character ('%') is placed before the opening quote. Note that for "name=value" comparison statements, only the value is treated as case insensitive with this option: the name lookup will always be case sensitive. A comparison statement will be interpreted as a regular expression when a dollar sign character ('$') is placed before the opening quote. Standard POSIX regular expressions are supported. To specify a regular expression that is also case insensitive, place the dollar sign and percent sign together and in that order ("$%") before the opening quote. 
A special case subscription string composed of only the exclamation point character ('!') signifies that the subscription will not match any notifications. You might want to modify the event to select on a specific service by using something like %"eventType=database/event/servicemetrics/<serviceName> " Running with Database Server 10.2 or later This approach runs on the database server and connects directly to the local ONS daemon available in the Grid Infrastructure cluster. The FANwatcher utility must be run as a user that has privilege to access the $CRS_HOME/opmn/conf/ons.config, which is used by the ons daemon to start and accessed by this program. The configuration type on the command line is set to “crs”. # CRS_HOME should be set for your Grid infrastructure echo $CRS_HOME CRS_HOME=/mypath/scratch/12.1.0/grid/ CLASSPATH="$CRS_HOME/jdbc/lib/ojdbc6.jar:$CRS_HOME/opmn/lib/ons.jar:." export CLASSPATH javac fanWatcher.java java -Doracle.ons.oraclehome=$CRS_HOME fanWatcher crs Running with WLS 10.3.6 or later using an explicit node list There are two ways to run in a client environment – with an explicit node list and using auto-ONS. It’s necessary to have ojdbcN.jar and ons.jar that are available when configured for WLS. If you are set up to run with UCP directly, these should also be in your CLASSPATH. In the first approach, it will work with Oracle driver and database 11 and later (SCAN support came in later versions of Oracle including the 11.2.0.3 jar files that shipped with WLS 10.3.6). # Set the WLS environment using wlserver*/server/bin/setWLSEnv CLASSPATH="$CLASSPATH:." # add local directory for sample program export CLASSPATH javac fanWatcher.java java fanWatcher "nodes=rac1:6200,rac2:6200" database/event/service The node list is a string of one or more values of the form name=value separated by a newline character (\n). There are two supported formats for the node list. The first format is available for all versions of ONS. The following names may be specified. nodes – This is required. The format is one or more host:port pairs separated by a comma. walletfile – Oracle wallet file used for SSL communication with the ONS server. walletpassword – Password to open the Oracle wallet file. The second format is available starting in database 12.2.0.2. It supports more complicated topologies with multiple clusters and node lists. It has the following names. nodes.id—this value is a list of nodes representing a unique topology of remote ONS servers. id specifies a unique identifier for the node list. Duplicate entries are ignored. The list of nodes configured in any list must not include any nodes configured in any other list for the same client or duplicate notifications will be sent and delivered. The list format is a comma separated list of ONS daemon listen addresses and ports pairs separated by colon. maxconnections.id— this value specifies the maximum number of concurrent connections maintained with the ONS servers. id specifies the node list to which this parameter applies. The default is 3. active.id If true, the list is active and connections are automatically established to the configured number of ONS servers. If false, the list is inactive and is only be used as a fail over list in the event that no connections for an active list can be established. An inactive list can only serve as a fail over for one active list at a time, and once a single connection is re-established on the active list, the fail-over list reverts to being inactive. 
Note that only notifications published by the client after a list has failed over are sent to the fail over list. id specifies the node list to which this parameter applies. The default is true. remotetimeout —The timeout period, in milliseconds, for a connection to each remote server. If the remote server has not responded within this timeout period, the connection is closed. The default is 30 seconds. The walletfile and walletpassword may also be specified (note that there is one walletfile for all ONS servers). The nodes attribute cannot be combined with name.id attributes. Running with WLS using auto-ONS Auto-ONS is available starting in Database 12.1.0.1. Before that, no information is available. Auto-ONS only works with RAC configurations; it does not work with an Oracle Restart environment.  Since the first version of WLS that ships with Database 12.1 is WLS 12.1.3, this approach will only work with upgraded database jar files on versions of WLS earlier than 12.1.3. Auto-ONS works by getting a connection to the database to query the ONS information from the server. For this program to work, a user, password, and URL are required. For the sample program, the values are assumed to be in the environment (to avoid putting them on the command line). If you want, you can change the program to prompt for them or hard-code the values into the java code. # Set the WLS environment using wlserver*/server/bin/setWLSEnv # Set the credentials in the environment. If you don't like doing this, # hard-code them into the java program password=mypassword url='jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=\ (ADDRESS=(PROTOCOL=TCP)(HOST=rac1)(PORT=1521))\ (ADDRESS=(PROTOCOL=TCP)(HOST=rac2)(PORT=1521)))\ (CONNECT_DATA=(SERVICE_NAME=otrade)))' user=myuser export password url user CLASSPATH="$CLASSPATH:." export CLASSPATH javac fanWatcher.java java fanWatcher autoons fanWatcher Output The output looks like the following. You can modify the program to change the output as desired. In this short output capture, there is a metric event and an event caused by stopping the service on one of the instances. ** Event Header ** Notification Type: database/event/servicemetrics/otrade Delivery Time: Fri Dec 04 20:08:10 EST 2015 Creation Time: Fri Dec 04 20:08:10 EST 2015 Generating Node: rac1 Event payload: VERSION=1.0 database=dev service=otrade { {instance=inst2 percent=50 flag=U NKNOWN aff=FALSE}{instance=inst1 percent=50 flag=UNKNOWN aff=FALSE} } timestam p=2015-12-04 17:08:03 ** Event Header ** Notification Type: database/event/service Delivery Time: Fri Dec 04 20:08:20 EST 2015 Creation Time: Fri Dec 04 20:08:20 EST 2015 Generating Node: rac1 Event payload: VERSION=1.0 event_type=SERVICEMEMBER service=otrade instance=inst2 database=dev db_domain= host=rac2 status=down reason=USER timestamp=2015-12-04 17:  
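As a recap of the subscription grammar described earlier, the strings below show a few subscription patterns you might pass on the fanWatcher command line; they are illustrative only, and otrade is just the service name from the sample output.
public final class SubscriptionExamples {
    // Empty (but not null) pattern: matches every notification.
    static final String EVERYTHING = "\"\"";

    // Case-insensitive simple pattern ('%' before the opening quote): matches any
    // notification whose header contains the metrics event type for service otrade.
    static final String METRICS_ONLY = "%\"eventType=database/event/servicemetrics/otrade\"";

    // Two name=value comparisons joined with '&': service up/down events for otrade.
    static final String OTRADE_SERVICE_EVENTS =
            "\"eventType=database/event/service\" & \"service=otrade\"";

    // '!' before a parenthesized group negates it: everything except metric events.
    static final String NO_METRICS = "!(\"database/event/servicemetrics\")";
}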


Technical

Multi-Tenancy Samples

In order to make it easier to understand all aspects of Multi-tenancy in WebLogic Server 12.2.1.*, MedRec can run in a Multi-tenancy environment and be used as a demonstration vehicle.
What's MedRec
Avitek Medical Records (or MedRec) is a WebLogic Server sample application suite that demonstrates all aspects of the Java Platform, Enterprise Edition (Java EE). MedRec is designed as an educational tool for all levels of Java EE developers. It showcases the use of each Java EE component, and illustrates best-practice design patterns for component interaction and client development. MedRec also illustrates best practices for developing and deploying applications with WebLogic Server. Please choose 'Complete with Examples' at the 'Installation Type' step when you install WebLogic Server; otherwise the WebLogic Server samples will be skipped. The code, binaries, and documentation of MedRec are located in the '$MW_HOME/wlserver/samples/server/medrec' directory. There are two non-OOTB Multi-tenancy samples; you need to run the provided Ant commands to stage the WebLogic domains.
Single Server Multi-tenancy MedRec
Overview
This is a SaaS sample that focuses on the Multi-tenancy features themselves: no cluster, no extra managed servers, and all virtual targets target the admin server only.
Multi-tenancy Demonstration
There are two tenants named bayland and valley, and valley has two partitions (one tenant can have multiple partitions). This sample demonstrates the various Multi-tenancy features below. If you have any questions about a certain feature, you can refer to the relevant blogs or documentation.
Resource Group Template: All resources, including applications, JMS, file store, mail session, and JDBC system resource, are deployed onto a resource group template.
Resource Overriding: Databases are supposed to be isolated among partitions. In the resource group template, the JDBC system resource is a mere template with a name, driver, and JNDI lookup name. The real URL, username, and password of the datasource are set via resource overriding at the partition scope.
Virtual Target: Each partition, or more precisely each partition resource group deriving from the aforementioned MedRec resource group template, has its own virtual target. The two virtual targets of valley share the same host name with different URI prefixes. So we can see three virtual targets, one per partition. The web container knows which application is being accessed from the host name plus the URI prefix. For example, in this sample, medrec.ear is deployed to all partitions. How, then, do you access the web module of medrec.ear on bayland? The URL would be 'http://hostname:port/bayland/medrec': '/bayland' is the URI prefix and 'medrec' is the root context name of the web app.
Security Realm: Each tenant is supposed to have its own security realm with isolated users. MedRec has a use case of Servlet access control and authentication that demonstrates this scenario.
Resource Consumption Management: Bayland is treated as a VIP customer in this sample, so it gets a bigger quota of CPU, memory heap, thread work, and so on. A trigger slows down or shuts down the partition if usage reaches the specified value.
Partition Work Manager: Partition Work Managers define a set of policies that limit the usage of threads by Work Managers in partitions only. They do not apply to the domain.
Deployment Plan: A deployment plan file can be used at partition scope.
The sample uses this mechanism to change the appearance of the web pages of the valley tenant, including photos and background colour. In other words, the application can look different in different partitions even though they derive from one resource group template.

Installation: Before running the setup script, a little preparation is needed: set the sample environment, edit the /etc/hosts file, and customize the admin server properties (host, port, and so on). After that, a single Ant command stages all the content of the SaaS sample.

Setting the environment:
cd $MW_HOME/wlserver/samples/server
. ./setExamplesEnv.sh

Network address mapping. Open the /etc/hosts file and add the following lines:
127.0.0.1 www.baylandurgentcare.com
127.0.0.1 www.valleyhealth.com

Customizing admin server properties. Update the 5 properties in $MW_HOME/wlserver/samples/server/medrec/install/mt-single-server/weblogic.properties. Please use weblogic as the username of the admin server.
admin.server.name=adminServer
admin.server.host=localhost
admin.server.port=7005
admin.server.username=weblogic
admin.server.password=XXXXXX

Running the setup script:
cd $MW_HOME/wlserver/samples/server/medrec
ant mt.single.server.sample

Webapp URLs: You can access MedRec via the following URLs, according to the server port you set (here admin.server.port=7005).
URL  Partition
http://www.baylandurgentcare.com:7005/bayland/medrec  bayland
http://localhost:7005/bayland/medrec  bayland
http://www.valleyhealth.com:7005/valley1/medrec  valley1
http://localhost:7005/valley1/medrec  valley1
http://www.valleyhealth.com:7005/valley2/medrec  valley2
http://localhost:7005/valley2/medrec  valley2

Coherence Cluster Multi-tenancy MedRec

Overview: This is the second SaaS sample. Beyond the simple one, it involves Coherence caches, dynamic clusters, and Oracle PDBs; to some extent it is a realistic use of MT in practice. The diagram above also shows 2 tenants, but with one partition per tenant: bayland is the blue one, valley the green one. There are 2 resource group templates, named app RGT and cache RGT, instead of one. The app RGT is similar to the resource group template of the first MT sample and includes all MedRec resources. To enable the Coherence cache, a GAR archive is packaged into an application of medrec.ear, and the identical GAR is also deployed into the second (cache) resource group template. Each partition has 2 resource groups, app and cache, deriving from the app and cache resource group templates respectively. Each resource group targets a different virtual target, so there are 4 virtual targets in total. The 2 app virtual targets target a storage-disabled dynamic cluster (app cluster) with 2 managed servers; the applications and other resources run on this app cluster. In contrast, the 2 cache virtual targets target another dynamic cluster named cache cluster, also with 2 managed servers but storage enabled; the GAR of the cache resource group runs on the cache cluster.

Coherence Scenario: MedRec has 2 key archives, medrec.ear and physician.ear. The physician archive acts as a web service (JAX-RS and JAX-WS) client application, and there is no JPA code in physician.ear; all of that lives on the server side. Leveraging a Coherence cache here avoids frequent web service invocations and JDBC calls in the business services on the web service server side.

Method Invocation Cache: This cache is a partitioned tenant cache. Most business services in the physician scenarios are annotated with a method invocation cache interceptor, which first checks whether the data is already stored in the cache.
If the data isn't cached, it is fetched through the web service and the returned data is stored in the method cache. After that, subsequent invocations with the same parameter values fetch the data directly from the cache. When is data removed from the method cache? For example, a physician can look at a patient's record summary, which is cached after the first fetch. If the physician then creates a new record for this patient, the record summary in the cache becomes inaccurate, so the stale data must be cleaned up. In this case, after the record is created successfully, the business service fires an update event to remove the old data. The method invocation cache actually comes in 3 different types; MedRec detects the WLS environment and activates the relevant one. For example, on physician login: when you first log in as a physician on bayland app server 1, app_cluster-1.log should print lines like the following:
Method Invocation Coherence Cache is available.
Checking method XXXX invocation cache... Not find the result.
Caching the result in method XXXX invocation cache.... Added result to cache Method: XXXXX Parameters: XXXX Result: XXXXX
Log out and log in again on bayland server 2; app_cluster-2.log should look like this:
Checking method XXXX invocation cache... Found result in cache Method: XXXXX Parameters: XXXX Result: XXXXX

Shared Cache: This cache is a partitioned shared cache, which means bayland and valley can share it. Consider the create-record use case again: physicians can create prescriptions for the new record, and choosing a drug uses a drug information list from the server-side database. The list is stable, invariant data, so it can be shared by both partitions; therefore the drug information list is stored in this cache. Open the create-record page in a browser on bayland server 1, and app_cluster-1.log should show:
Drug info list is not stored in shared cache. Fetch list from server end point. Store drug info list into shared cache.
Then do the same thing on valley server 2, and app_cluster-2.log should show:
Drug info list has already stored in shared cache.
This shows that it is a cache shared across partitions.

Installation: The installation and usage are similar to the first MT sample. The main difference is the database: the first sample uses Derby, while the second uses Oracle Database. You need to prepare 2 Oracle PDBs and customize the db properties file.

Setting the environment:
cd $MW_HOME/wlserver/samples/server
. ./setExamplesEnv.sh

Update the database properties in $MW_HOME/wlserver/samples/server/medrec/install/mt-coherence-cluster/configure.properties to match your PDBs. E.g.:
# Partition 1
dbURL1      = jdbc:oracle:thin:XXXXXXXX:1521/pdb1
dbUser1     = pdb1
dbPassword1 = XXXXXX
# Partition 2
dbURL2      = jdbc:oracle:thin:XXXXXXXX:1521/pdb2
dbUser2     = pdb2
dbPassword2 = XXXXXX

Network address mapping. Open the /etc/hosts file and add the following lines:
127.0.0.1 bayland.weblogicmt.com
127.0.0.1 valley.weblogicmt.com

Customizing admin server properties. Update the 5 properties in $MW_HOME/wlserver/samples/server/medrec/install/mt-coherence-cluster/weblogic.properties. Please use weblogic as the username of the admin server.
admin.server.name=adminServer
admin.server.host=localhost
admin.server.port=7003 (please do not use 2105, 7021, 7022, 7051, or 7052, which are used as the servers' listen ports)
admin.server.username=weblogic
admin.server.password=XXXXXX

Running the setup script:
cd $MW_HOME/wlserver/samples/server/medrec
ant mt.coherence.cluster.sample

Webapp URLs: After the script succeeds, access the following URLs to try out MedRec:
Partition bayland, app_cluster-1: http://bayland.weblogicmt.com:7021/medrec
Partition bayland, app_cluster-2: http://bayland.weblogicmt.com:7022/medrec
Partition valley, app_cluster-1: http://valley.weblogicmt.com:7021/medrec
Partition valley, app_cluster-2: http://valley.weblogicmt.com:7022/medrec
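As a side note, here is a simplified, hedged sketch (not the actual MedRec source) of the cache-aside pattern that the method invocation cache interceptor described above implements, using a Coherence NamedCache. The cache name, key format, and web service helper methods are assumptions for illustration only.

// A simplified sketch of the method invocation cache behavior described above.
// Cache name and key format are assumptions, not taken from the MedRec code.
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class RecordSummaryService {

    private final NamedCache cache = CacheFactory.getCache("method-invocation-cache");

    public Object getRecordSummary(String patientId) {
        String key = "getRecordSummary:" + patientId;
        Object result = cache.get(key);                // 1. check the cache first
        if (result == null) {
            result = callRecordWebService(patientId);  // 2. cache miss: call the JAX-WS/JAX-RS service
            cache.put(key, result);                    // 3. store the result for later invocations
        }
        return result;
    }

    public void addRecord(String patientId, Object record) {
        callCreateRecordWebService(patientId, record);
        // After a successful create, fire the update: evict the stale summary.
        cache.remove("getRecordSummary:" + patientId);
    }

    private Object callRecordWebService(String patientId) { /* web service client call */ return null; }

    private void callCreateRecordWebService(String patientId, Object record) { /* web service client call */ }
}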



Three Easy Steps to a Patched Domain Using ZDT Patching and OPatchAuto

Now that you’ve seen how easy it is to update WebLogic by rolling out a new patched OracleHome to your managed servers, let’s go one step further and see how we can automate the preparation and distribution parts of that operation as well. ZDT Patching is integrated with a great new tool in 12.2.1 called OPatchAuto. OPatchAuto is a single interface that allows you to apply patches to an OracleHome, distribute the patched OracleHome to all the nodes you want to update, and start the OracleHome rollout, in just three steps.

1. The first step is to create a patched OracleHome archive (.jar) by combining an OracleHome in your production environment with the desired patch or patchSetUpdate. This operation makes a copy of that OracleHome, so it does not affect the production environment. It then applies the specified patches to the copy of the OracleHome and creates the archive from it. This is the archive that the rollout will use when the time comes, but first it needs to be copied to all of the targeted nodes. The OPatchAuto command for the first step looks like this:
${ORACLE_HOME}/OPatch/auto/core/bin/opatchauto.sh apply /pathTo/PatchHome -create-image -image-location /pathTo/image.jar -oop -oh /pathTo/OracleHome
PatchHome is a directory or file containing the patch or patchSetUpdate to apply.
image-location is where to put the resulting image file.
-oop means “out-of-place” and tells OPatchAuto to copy the source OracleHome before applying the patches.

2. The second step is to copy the patched OracleHome archive created in step one to all of the targeted nodes. One cool thing about this step is that, since OPatchAuto is integrated with ZDT Patching, you can give OPatchAuto the same target you would use with ZDT Patching, and it will ask ZDT Patching to calculate the nodes automatically. Here’s an example of what this command might look like:
${ORACLE_HOME}/OPatch/auto/core/bin/opatchauto.sh apply-plan wls-push-image -image-location /pathTo/image.jar -wls-admin-host ${ADMINHOSTNAME}:7001 -wls-target Cluster1 -remote-image-location /pathTo/image.jar -wallet ${WALLET_DIR}
image-location is the jar file created in the first step.
wls-target can be a domain name, cluster name, or list of clusters.
Note that if you do not already have a wallet for ssh authorization to the remote hosts, you may need to configure one first.

3. The last step is using OPatchAuto to invoke the ZDT Patching OracleHome rollout. You could switch to WLST at this point and start it as described in the previous post, but OPatchAuto will monitor the progress of the rollout and give you some helpful feedback as well. The command to start the rollout through OPatchAuto looks like this:
${ORACLE_HOME}/OPatch/auto/core/bin/opatchauto.sh apply-plan wls-zdt-rollout -image-location /pathTo/image.jar -wls-admin-host ${ADMINHOSTNAME}:7001 -wls-target Cluster1 -backup-home /pathTo/home-backup -remote-image-location /pathTo/image.jar -wallet ${WALLET_DIR}
image-location is the jar file created in the first step.
backup-home is the location on each remote node to store the backup of the original OracleHome.
image-location and remote-image-location are both specified so that if a node is encountered that is missing the image, it can be copied automatically. This is also why the wallet is specified here again.

One more great thing to consider when looking at automating the entire process is how easy it would be to use these same commands to distribute and roll out the same patched OracleHome archive to a test environment for verification.
Once verification is passed, a minor change to the same two commands will push the exact same (verified) OracleHome archive out to a production environment. For more information about updating OracleHome with Zero Downtime Patching and OPatchAuto, view the documentation.


Technical

Multi-Tenancy EJB

Benefiting from the multi-tenancy support in WLS 12.2.1, the EJB container gains a lot of enhancements. Application and resource "multiplication" allows the EJB container to provide MT features while remaining largely partition unaware. Separate application copies also bring more isolation, such as distinct remote objects, bean pools, caches, module class loader instances, and so on. Below are a few of the new features you can leverage for EJB applications.

1. JNDI. Server naming nodes are partition aware. Applications deployed to partitions have their EJB client views exposed in the corresponding partition's JNDI namespace.

2. Security. The security implementation now allows multiple active realms, including support for a per-partition security realm. Role-based access control and credential mapping for applications deployed to a partition use the partition's configured realm.

3. Runtime Metrics and Monitoring. A new ApplicationRuntimeMBean instance, with the PartitionName attribute populated, is created for every application deployed to a partition. The runtime MBean sub-tree exposed by the EJB container is rooted at that ApplicationRuntimeMBean instance.

4. EJB Timer Service. Persistent local timers rely on the store component; partitioned custom file stores provide the required isolation of tenant data. Clustered timers use the Job Scheduler under the hood, which also provides isolation.

5. JTA Configuration at Partition Level. The JTA timeout can be configured at partition level, in addition to domain level and EJB component level. The timeout value at EJB component level takes precedence over the other two. Dynamic updates via a deployment plan are supported.

6. WebLogic Store. Persistent local timers rely on the store component. Partitioned custom file stores provide the required isolation of tenant data.

7. Throttling Thread Resource Usage. Work Managers with constraints can be defined at global runtime level, and application instances in partitions can refer to these shared Work Managers to throttle thread usage across partitions, especially for non-interactive use cases: batch, async, and message-driven bean invocations.

8. Data Sources for Java Persistence API Users. Persistence units that use data sources defined as system resources in the Resource Group Template can take advantage of the PartitionDataSourceInfoMBean based overrides. Use cases requiring advanced customization can use the new deployment plan support added for system resource re-configuration. Persistence units that use application-packaged data source modules can use the existing deployment plan support to have the copies in different partitions point to the appropriate PDBs.

A sample EJB application leveraging Multi-Tenancy

Now we'll go through a simple setup of an EJB application in an MT environment to demonstrate the usage of some of these features. The EJB application archive is named sampleEJB.jar; it includes a stateful session bean that interacts with the database through the JPA API. We want the application to be deployed to 2 separate partitions, each of which points to a database instance of its own, so they can work independently.

1. Create Virtual Targets. The first step is to create 2 virtual targets for the 2 partitions, which use the URI prefixes /partition1 and /partition2 respectively, as shown below.

2. Create Resource Group Template. Now we create a Resource Group Template named myRGT.
Resource Group Template is a new concept introduced in WLS 12.2.1: you deploy your applications and whatever resources you need to it. This is very helpful when your application setup is complicated, because you don't want to repeat the same steps multiple times on different partitions.

3. Deploy the application and data source. Now we can deploy the application and define the data source as below. Note that the application and the data source are both defined at myRGT scope.

4. Create Partitions. Now everything is ready, and it's time to create the partitions. As the following image shows, we can apply the Resource Group Template just defined when creating a partition; it will deploy everything in the template automatically.

5. Access the EJB. With the partitions created and started, you can look up and access the EJB with the following code. We're using the URL for partition1 here; you can change the URL to access the other partition.

Hashtable<String, String> props = new Hashtable<String, String>();
props.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
props.put(Context.PROVIDER_URL, "t3://server1:7001/partition1");
props.put(Context.SECURITY_PRINCIPAL, user);
props.put(Context.SECURITY_CREDENTIALS, pass);
Context ctx = new InitialContext(props);
BeanIntf bean = (BeanIntf) ctx.lookup("MyEJB");
boolean result = bean.doSomething();

6. Override the data source. If you feel something is wrong, you're right: we defined the data source myDS in myRGT, then applied myRGT to both partitions, so the 2 partitions are now sharing the same data source. Normally we don't want this to happen; we need the 2 partitions to work independently without disturbing each other. How can we do that? If you want partition2 to switch to another data source, you can do that in the Resource Overrides tab of the partition2 settings page. You can change the database URL there so that another database instance is used by partition2.

7. Change the transaction timeout. As mentioned above, for EJB applications the transaction timeout value can be changed dynamically for a particular partition. This can also be accomplished in the partition settings page. In the following example, we set the timeout to 15 seconds. This takes effect immediately without requiring a restart.

There are also other things you can do in the partition settings page, such as defining a work manager or monitoring the resource usage for a particular partition. Spend some time here and you will find more useful tools.
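For reference, here is a minimal, hedged sketch of what the stateful session bean packaged in sampleEJB.jar might look like. The remote interface name (BeanIntf) and the binding (MyEJB) match the lookup code above; the bean class name, persistence unit name, and method body are assumptions for illustration, and the exact JNDI name WebLogic binds depends on how the bean is declared in the deployment descriptors.

// BeanIntf.java - the remote business interface used in the lookup above
import javax.ejb.Remote;

@Remote
public interface BeanIntf {
    boolean doSomething();
}

// SampleBean.java - one possible implementation (names are assumptions)
import javax.ejb.Stateful;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateful(mappedName = "MyEJB")
public class SampleBean implements BeanIntf {

    // The persistence unit resolves to the data source deployed in myRGT, so each
    // partition transparently talks to its own database once the override is applied.
    @PersistenceContext(unitName = "samplePU")
    private EntityManager em;

    public boolean doSomething() {
        // Interact with the partition's database through JPA.
        return em != null;
    }
}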


Technical

Concurrency Utilities support in WebLogic Server 12.2.1, Part Five: Multi-tenancy Support

Overview

One of the key features in WLS 12.2.1 is the multi-tenancy support, where a single WebLogic Server domain can contain multiple partitions. Please read Part One through Part Four prior to this article. Applications deployed to partitions can use the 4 types of concurrent managed objects in the same way as described in Part One through Part Four. Applications deployed to partitions can also use global pre-defined concurrent managed object templates, which means that when an application is deployed to a partition, WebLogic Server creates concurrent managed objects for this application based on the configuration of the global concurrent managed object templates. As you may recall, there are server scope Max Concurrent Long Running Requests and Max Concurrent New Threads limits; note that they limit long-running requests/running threads in the whole server, including partitions.

Configuration

System administrators can define partition scope concurrent managed object templates. As mentioned in Part One (ManagedExecutorService) and Part Three (ManagedThreadFactory), WebLogic Server provides configurations (Max Concurrent Long Running Requests/Max Concurrent New Threads) to limit the number of concurrent long-running tasks/threads in a ManagedExecutorService/ManagedScheduledExecutorService/ManagedThreadFactory instance, in the global (domain-level) runtime on a server, or in the server. Among them, the instance scope and server scope limits are applicable to partitions. In addition, system administrators can also define partition scope Max Concurrent Long Running Requests and Max Concurrent New Threads. There is a default Max Concurrent Long Running Requests (50) and a default Max Concurrent New Threads (50) for each partition. A ManagedExecutorService/ManagedScheduledExecutorService/ManagedThreadFactory accepts a long-running task submission/new thread creation only when none of the 3 limits is exceeded. For instance, suppose an application is deployed to a partition on a server. When a long-running task is submitted to its default ManagedExecutorService, a RejectedExecutionException will be thrown if there are 10 in-progress long-running tasks submitted to this ManagedExecutorService, or 50 in-progress long-running tasks submitted to the ManagedExecutorServices/ManagedScheduledExecutorServices in the scope of this partition on the server, or 100 in-progress long-running tasks submitted to the ManagedExecutorServices/ManagedScheduledExecutorServices in the server.

Configure Partition Scope Concurrent Managed Object Templates

WebLogic system administrators can configure pre-defined concurrent managed object templates for a partition. When an application is deployed to the partition, WebLogic Server creates concurrent managed object instances based on the configuration of the partition scope concurrent managed object templates, and the created concurrent managed object instances are all in the scope of this application.

Example-1: Configure a Partition Scope ManagedThreadFactory template using WebLogic Administration Console

Step1: In the WebLogic Administration Console, a partition scope ManagedThreadFactory template can be created by clicking on the “New” button from the “Summary of Concurrent Managed Object Templates” page. This brings up the "Create a New Managed Thread Factory Template" page, where the name and other parameters of the new ManagedThreadFactory template can be specified. Set the Scope to the partition.
In this example, a ManagedThreadFactory template called "testMTFP1" is being created for partition1.

Step2: Once a partition scope ManagedThreadFactory template is created, any application in the partition can get its own ManagedThreadFactory instance to use.

@WebServlet("/SomeServlet")
public class SomeServlet extends HttpServlet {
    @Resource(mappedName="testMTFP1")
    ManagedThreadFactory mtf;

    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        Runnable aTask = new Runnable() {
            ...
        };
        Thread t = mtf.newThread(aTask);
        t.start();
        ...
    }
}

Configure Partition Scope Max Concurrent New Threads & Max Concurrent Long Running Requests

Max Concurrent New Threads of a partition is the limit on running threads created by all ManagedThreadFactories in that partition on a server. Max Concurrent Long Running Requests of a partition is the limit on concurrent long-running tasks submitted to all ManagedExecutorServices and ManagedScheduledExecutorServices in that partition on a server. In the WebLogic Administration Console, Max Concurrent New Threads and Max Concurrent Long Running Requests of a partition can be edited from the “Settings for <partitionName>” screen. In this example, Max Concurrent New Threads of partition1 is set to 30, and Max Concurrent Long Running Requests of partition1 is set to 80.

Related Articles:
Concurrency Utilities support in WebLogic Server 12.2.1
Concurrency Utilities support in WebLogic Server 12.2.1, Part One: ManagedExecutorService
Concurrency Utilities support in WebLogic Server 12.2.1, Part Two: ManagedScheduledExecutorService
Concurrency Utilities support in WebLogic Server 12.2.1, Part Three: ManagedThreadFactory
Concurrency Utilities support in WebLogic Server 12.2.1, Part Four: ContextService

For more details, see Configuring Concurrent Managed Objects in the product documentation.


Technical

Concurrency Utilities support in WebLogic Server 12.2.1, Part Four: ContextService

Overview

ContextService is for creating contextual proxy objects. It provides the method createContextualProxy to create a proxy object; the proxy object's methods will then run within the captured context at a later time. WebLogic Server provides a preconfigured, default ContextService for each application, and applications can easily use it in web or EJB components without any configuration. Let's begin with a simple example that uses the default ContextService.

Example-1: Execute a task with the creator's context using an ExecutorService

Step1: Write the task. In this simple example, the task implements Runnable.

public class SomeTask implements Runnable {
    public void run() {
        // do some work
    }
}

Step2: SomeServlet.java injects the default ContextService, uses the ContextService to create a new contextual object proxy for SomeTask, then submits the contextual object proxy to a Java SE ExecutorService. Each invocation of the run() method will have the context of the servlet that created the contextual object proxy.

@WebServlet("/SomeServlet")
public class SomeServlet extends HttpServlet {
    @Resource ContextService cs;
    @Resource ManagedThreadFactory mtf;
    ExecutorService exSvc;

    @Override
    public void init() {
        // Create the executor after resource injection so that mtf is available.
        exSvc = Executors.newFixedThreadPool(10, mtf);
    }

    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        SomeTask taskInstance = new SomeTask();
        Runnable rProxy = cs.createContextualProxy(taskInstance, Runnable.class);
        Future f = exSvc.submit(rProxy);
        // Process the result and reply to the user
    }
}

Runtime Behavior

Application Scoped Instance. ContextServices are application scoped. Each application has its own default ContextService instance, and the lifecycle of the ContextService instance is bound to the application. Proxy objects created by a ContextService are also application scoped, so that when the application is shut down, invocations of proxied interface methods fail with an IllegalStateException, and calls to createContextualProxy() throw an IllegalArgumentException. WebLogic Server only provides a default ContextService instance for each application, and does not provide any way to configure a ContextService.

Context Propagation. ContextService captures the application context at contextual proxy object creation, then propagates the captured application context before invocation of the contextual proxy object's methods, so that the proxy object's methods can also run with the application context. Four types of application context are propagated: JNDI, ClassLoader, Security and WorkArea. The propagated context types are the same for all four types of concurrent managed objects.

Related Articles:
Concurrency Utilities support in WebLogic Server 12.2.1
Concurrency Utilities support in WebLogic Server 12.2.1, Part One: ManagedExecutorService
Concurrency Utilities support in WebLogic Server 12.2.1, Part Two: ManagedScheduledExecutorService
Concurrency Utilities support in WebLogic Server 12.2.1, Part Three: ManagedThreadFactory
Concurrency Utilities support in WebLogic Server 12.2.1, Part Five: Multi-tenancy Support

For more details, see Configuring Concurrent Managed Objects in the product documentation.
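As a footnote to this post, contextual proxies are not limited to Runnable: per JSR 236, any interface implemented by the instance can be proxied. The hedged sketch below illustrates this; the ReportGenerator interface and class names are assumptions for illustration, not part of the WebLogic samples.

// A hedged sketch showing createContextualProxy with a custom interface.
// Interface and class names are assumptions for illustration.
import javax.annotation.Resource;
import javax.enterprise.concurrent.ContextService;

public class ReportingBean {

    public interface ReportGenerator {
        String generate(String reportId);
    }

    @Resource
    ContextService cs;

    public ReportGenerator contextualGenerator(ReportGenerator delegate) {
        // Each call to generate() on the returned proxy runs with the JNDI,
        // ClassLoader, Security and WorkArea context captured right here.
        return cs.createContextualProxy(delegate, ReportGenerator.class);
    }
}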


Technical

Concurrency Utilities support in WebLogic Server 12.2.1, Part Three: ManagedThreadFactory

Overview

ManagedThreadFactory is for creating threads managed by WebLogic Server. It extends java.util.concurrent.ThreadFactory without adding new methods, and provides the newThread method from ThreadFactory. It can be used with Java SE concurrency utility APIs wherever a ThreadFactory is needed, e.g. in java.util.concurrent.Executors. WebLogic Server provides a preconfigured, default ManagedThreadFactory for each application, and applications can easily use it in web or EJB components without any configuration. Let's begin with a simple example that uses the default ManagedThreadFactory in a servlet.

Example-1: Use Default ManagedThreadFactory to Create a Thread in a Servlet

Step1: Write a Runnable that logs data until the thread is interrupted.

public class LoggerTask implements Runnable {
    @Override
    public void run() {
        while (!Thread.interrupted()) {
            // collect data and write them to database or file system
        }
    }
}

Step2: SomeServlet.java injects the default ManagedThreadFactory and uses it to create the thread.

@WebServlet("/SomeServlet")
public class SomeServlet extends HttpServlet {
    @Resource ManagedThreadFactory mtf;

    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        Thread t = mtf.newThread(new LoggerTask());
        t.start();
        // Do something else and reply to the user
    }
}

Runtime Behavior

Application Scoped Instance. ManagedThreadFactories are application scoped. Each application has its own ManagedThreadFactory instances, and the lifecycle of the ManagedThreadFactory instances is bound to the application. Threads created by a ManagedThreadFactory are also application scoped, so that when the application is shut down, the related threads are interrupted. Each application has its own default ManagedThreadFactory instance. In addition, applications or system administrators can define customized ManagedThreadFactories. Please note that even ManagedThreadFactory templates (see a later section) defined globally in the console are application scoped at runtime.

Context Propagation. A ManagedThreadFactory captures the application context at ManagedThreadFactory creation (NOT at newThread method invocation), then propagates the captured application context before task execution, so that the task can also run with the application context. Four types of application context are propagated: JNDI, ClassLoader, Security and WorkArea. The propagated context types are the same for all four types of concurrent managed objects.

Limit of Running Threads. When the newThread method is invoked, WebLogic Server creates a new thread. Because an excessive number of running threads can have a negative effect on server performance and stability, WebLogic Server provides configurations (Max Concurrent New Threads) to limit the number of running threads in a ManagedThreadFactory instance, in the global (domain-level) runtime on a server, or in the server. By default, the limits are: 10 for a ManagedThreadFactory instance, 50 for the global (domain-level) runtime on a server, and 100 for a server. When any of the limits is exceeded, calls to the newThread() method of ManagedThreadFactory return null. Please note the difference between the global (domain-level) runtime scope Max Concurrent New Threads and the server scope Max Concurrent New Threads. One of the key features in WLS 12.2.1 is the multi-tenancy support, where a single WebLogic Server domain can contain multiple partitions.
The global (domain-level) runtime Max Concurrent New Threads is the maximum number of threads created by all of the ManagedThreadFactories on the server for the global (domain-level) runtime; this excludes threads created within the scope of partitions running on the server. The server scope Max Concurrent New Threads is the maximum number of threads created by all of the ManagedThreadFactories on the server, including threads created within the scope of partitions. For the partition scope Max Concurrent New Threads, please read Part Five - Multi-tenancy Support. A ManagedThreadFactory returns a new thread only when none of the 3 limits is exceeded. For instance, suppose an application is deployed to the global (domain-level) runtime on a server. When servlets or EJBs invoke the newThread method of the default ManagedThreadFactory, they will get null if there are 10 in-progress threads created by this ManagedThreadFactory, or 50 in-progress threads created by the ManagedThreadFactories in the scope of the global (domain-level) runtime on the server, or 100 in-progress threads created by the ManagedThreadFactories in the server. There are examples of how to specify Max Concurrent New Threads in a later section.

Configuration

As mentioned earlier, each application has its own default ManagedThreadFactory. The default ManagedThreadFactory has a default max concurrent new threads (10) and a default thread priority (Thread.NORM_PRIORITY). There is also a default max concurrent new threads (100) for the whole server. If the default configuration is not good enough, read on for the configuration options. For instance, when you need to create threads with a higher priority, you will need to configure a ManagedThreadFactory; and if there would be more than 100 concurrently running threads in the server, you will need to change the server scope Max Concurrent New Threads.

Configure ManagedThreadFactories

Name, Max Concurrent New Threads, and Priority are configured inside a ManagedThreadFactory. Name is a string that identifies the ManagedThreadFactory, Max Concurrent New Threads is the limit on running threads created by this ManagedThreadFactory, and Priority is the priority of the created threads. An application can configure a ManagedThreadFactory in a deployment descriptor (weblogic-application.xml/weblogic-ejb-jar.xml/weblogic.xml), get the ManagedThreadFactory instance using @Resource(mappedName=<Name of ManagedThreadFactory>), and then use it to create threads. Besides the annotation, the application can also bind the ManagedThreadFactory instance to JNDI by specifying <resource-env-description> and <resource-env-ref> in the deployment descriptor and then look it up using a JNDI naming context; see Configuring Concurrent Managed Objects in the product documentation for details. Also, a WebLogic system administrator can configure pre-defined ManagedThreadFactory templates. When an application is deployed, WebLogic Server creates ManagedThreadFactories based on the configuration of the ManagedThreadFactory templates, and the created ManagedThreadFactories are all in the scope of this application.
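Before the configuration examples, here is a hedged sketch of defensively handling the null return from newThread() described in the Limit of Running Threads section above. The servlet name is an assumption; LoggerTask is the Runnable from Example-1.

// A hedged sketch: when any Max Concurrent New Threads limit is exceeded,
// newThread() returns null, so the caller should degrade gracefully.
import java.io.IOException;
import javax.annotation.Resource;
import javax.enterprise.concurrent.ManagedThreadFactory;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/GuardedThreadServlet")
public class GuardedThreadServlet extends HttpServlet {

    @Resource
    ManagedThreadFactory mtf;

    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        Thread t = mtf.newThread(new LoggerTask());
        if (t == null) {
            // One of the instance, global runtime, or server limits was exceeded;
            // report the condition instead of dereferencing null.
            response.sendError(503, "Thread limit reached, try again later");
            return;
        }
        t.start();
    }
}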
Example-2: Configure a ManagedThreadFactory in weblogic.xml

Step1: defining the ManagedThreadFactory:

<!-- weblogic.xml -->
<managed-thread-factory>
    <name>customizedMTF</name>
    <priority>3</priority>
    <max-concurrent-new-threads>20</max-concurrent-new-threads>
</managed-thread-factory>

Step2: obtaining the ManagedThreadFactory instance to use

@WebServlet("/SomeServlet")
public class SomeServlet extends HttpServlet {
    @Resource(mappedName="customizedMTF")
    ManagedThreadFactory mtf;

    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        Runnable aTask = new Runnable() {
            ...
        };
        Thread t = mtf.newThread(aTask);
        t.start();
        ...
    }
}

Example-3: Configure a ManagedThreadFactory template using WebLogic Administration Console

If the requirement applies to multiple applications rather than an individual application, you can create ManagedThreadFactory templates globally that are available to all applications. For instance, when you need to create threads from all applications with a lower priority, you will need to configure a ManagedThreadFactory template. As mentioned earlier, if there is a ManagedThreadFactory template, WebLogic Server creates a ManagedThreadFactory instance for each application based on the configuration of the template.

Step1: In the WebLogic Administration Console, a ManagedThreadFactory template can be created by clicking on the “New” button from the “Summary of Concurrent Managed Object Templates” page. This brings up the "Create a New Managed Thread Factory Template" page, where the name and other parameters of the new ManagedThreadFactory template can be specified. In this example, a ManagedThreadFactory template called "testMTF" is being created with priority 3.

Step2: Once a ManagedThreadFactory template is created, any application in the WebLogic Server can get its own ManagedThreadFactory instance to use.

@WebServlet("/SomeServlet")
public class SomeServlet extends HttpServlet {
    @Resource(mappedName="testMTF")
    ManagedThreadFactory mtf;

    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        Runnable aTask = new Runnable() {
            ...
        };
        Thread t = mtf.newThread(aTask);
        t.start();
        ...
    }
}

Configure Max Concurrent New Threads in global (domain-level) runtime scope or server scope

Example-4: Configure global (domain-level) runtime Scope Max Concurrent New Threads

Max Concurrent New Threads of the global (domain-level) runtime is the limit on threads created by ManagedThreadFactories in the global (domain-level) runtime on that server; this excludes threads created within the scope of partitions running on that server. In the WebLogic Administration Console, Max Concurrent New Threads of the global (domain-level) runtime can be edited from the “Settings for <domainName>” screen. In this example, the global (domain-level) runtime Max Concurrent New Threads of mydomain is set to 100.

Example-5: Configure Server Scope Max Concurrent New Threads

Max Concurrent New Threads of a server is the limit on running threads created by all ManagedThreadFactories in that server. In the WebLogic Administration Console, Max Concurrent New Threads of a server can be edited from the “Settings for <serverName>” screen. In this example, Max Concurrent New Threads of myserver is set to 200.
Related Articles:
Concurrency Utilities support in WebLogic Server 12.2.1
Concurrency Utilities support in WebLogic Server 12.2.1, Part One: ManagedExecutorService
Concurrency Utilities support in WebLogic Server 12.2.1, Part Two: ManagedScheduledExecutorService
Concurrency Utilities support in WebLogic Server 12.2.1, Part Four: ContextService
Concurrency Utilities support in WebLogic Server 12.2.1, Part Five: Multi-tenancy Support

For more details, see Configuring Concurrent Managed Objects in the product documentation.


Technical

Concurrency Utilities support in WebLogic Server 12.2.1, Part Two: ManagedScheduledExecutorService

Overview

ManagedScheduledExecutorService extends ManagedExecutorService, and all the methods from ManagedExecutorService are supported in ManagedScheduledExecutorService, so please read Part One: ManagedExecutorService prior to this article. ManagedScheduledExecutorService also extends java.util.concurrent.ScheduledExecutorService, so it provides the methods from ScheduledExecutorService (schedule, scheduleAtFixedRate, scheduleWithFixedDelay) for scheduling tasks to run after a given delay or periodically. In addition, ManagedScheduledExecutorService provides its own methods for running tasks on a custom schedule based on a Trigger. All of those tasks run on threads provided by WebLogic Server. WebLogic Server provides a preconfigured, default ManagedScheduledExecutorService for each application, and we can easily use it in web or EJB components without any configuration. Let's begin with a simple example that uses the default ManagedScheduledExecutorService in a ServletContextListener.

Example-1: Use Default ManagedScheduledExecutorService to Submit a Periodic Task

Step1: Write a task to log data.

public class LoggerTask implements Runnable {
    @Override
    public void run() {
        // collect data and write them to database or file system
    }
}

Step2: SomeListener.java injects the default ManagedScheduledExecutorService, schedules the task periodically in contextInitialized, and cancels the task in contextDestroyed.

@WebListener
public class SomeListener implements ServletContextListener {
    Future loggerHandle = null;
    @Resource ManagedScheduledExecutorService mses;

    public void contextInitialized(ServletContextEvent scEvent) {
        // Creates and executes LoggerTask every 5 seconds, beginning 1 second from now
        loggerHandle = mses.scheduleAtFixedRate(new LoggerTask(), 1, 5, TimeUnit.SECONDS);
    }

    public void contextDestroyed(ServletContextEvent scEvent) {
        // Cancel and interrupt our logger task
        if (loggerHandle != null) {
            loggerHandle.cancel(true);
        }
    }
}

Runtime Behavior

ManagedScheduledExecutorService provides all the features described in the Runtime Behavior section of Part One: ManagedExecutorService. As mentioned earlier, ManagedScheduledExecutorService can run a task periodically or on a custom schedule, so a task can run multiple times. Please note that for a long-running task, even if the task can be executed more than once, WebLogic Server creates only one thread for it, at the time of the first run.

Configuration

Configure ManagedScheduledExecutorService

ManagedScheduledExecutorService has the same configuration options (Name, Dispatch Policy, Max Concurrent Long Running Requests, and Long Running Priority) as ManagedExecutorService, and the way to get and use a customized ManagedScheduledExecutorService is also similar to ManagedExecutorService.
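Before the configuration examples, here is a hedged sketch of the Trigger-based custom scheduling mentioned in the Overview: the task runs every 10 seconds, but only inside a maintenance window. The class name and window boundaries are assumptions for illustration, not part of the WebLogic samples.

// A hedged sketch of a JSR 236 Trigger: next run 10 seconds after the previous
// one, with runs skipped outside a 01:00-04:59 maintenance window (assumption).
import java.util.Calendar;
import java.util.Date;
import javax.enterprise.concurrent.LastExecution;
import javax.enterprise.concurrent.Trigger;

public class MaintenanceWindowTrigger implements Trigger {

    public Date getNextRunTime(LastExecution lastExecution, Date taskScheduledTime) {
        // For the first run there is no previous execution, so start from the scheduled time.
        Date base = (lastExecution == null) ? taskScheduledTime : lastExecution.getRunEnd();
        return new Date(base.getTime() + 10_000);
    }

    public boolean skipRun(LastExecution lastExecution, Date scheduledRunTime) {
        Calendar cal = Calendar.getInstance();
        cal.setTime(scheduledRunTime);
        int hour = cal.get(Calendar.HOUR_OF_DAY);
        return hour < 1 || hour > 4; // skip runs outside the assumed window
    }
}

// Usage, e.g. from a servlet or listener with the default instance injected:
//   @Resource ManagedScheduledExecutorService mses;
//   mses.schedule(new LoggerTask(), new MaintenanceWindowTrigger());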
Example-2: Configure a ManagedScheduledExecutorService in weblogic.xml

Step1: defining the ManagedScheduledExecutorService:

<!-- weblogic.xml -->
<work-manager>
    <name>customizedWM</name>
    <max-threads-constraint>
        <name>max</name>
        <count>1</count>
    </max-threads-constraint>
</work-manager>
<managed-scheduled-executor-service>
    <name>customizedMSES</name>
    <dispatch-policy>customizedWM</dispatch-policy>
    <long-running-priority>10</long-running-priority>
    <max-concurrent-long-running-requests>20</max-concurrent-long-running-requests>
</managed-scheduled-executor-service>

Step2: obtaining the ManagedScheduledExecutorService instance to use

@WebServlet("/SomeServlet")
public class SomeServlet extends HttpServlet {
    @Resource(mappedName="customizedMSES")
    ManagedScheduledExecutorService mses;

    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        Runnable aTask = new Runnable() {
            ...
        };
        mses.schedule(aTask, 5, TimeUnit.SECONDS);
        ...
    }
}

Example-3: Configure a ManagedScheduledExecutorService template using WebLogic Administration Console

Step1: In the WebLogic Administration Console, a ManagedScheduledExecutorService template can be created by clicking on the “New” button from the “Summary of Concurrent Managed Object Templates” page. This brings up the "Create a New Managed Scheduled Executor Service Template" page, where the name and other parameters of the new ManagedScheduledExecutorService template can be specified. In this example, a ManagedScheduledExecutorService template called "testMSES" is being created to map to a pre-defined work manager "testWM".

Step2: Once a ManagedScheduledExecutorService template is created, any application in the WebLogic Server can get its own ManagedScheduledExecutorService instance to use.

@WebServlet("/SomeServlet")
public class SomeServlet extends HttpServlet {
    @Resource(mappedName="testMSES")
    ManagedScheduledExecutorService mses;

    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        Runnable aTask = new Runnable() {
            ...
        };
        mses.schedule(aTask, 5, TimeUnit.SECONDS);
        ...
    }
}

Related Articles:
Concurrency Utilities support in WebLogic Server 12.2.1
Concurrency Utilities support in WebLogic Server 12.2.1, Part One: ManagedExecutorService
Concurrency Utilities support in WebLogic Server 12.2.1, Part Three: ManagedThreadFactory
Concurrency Utilities support in WebLogic Server 12.2.1, Part Four: ContextService
Concurrency Utilities support in WebLogic Server 12.2.1, Part Five: Multi-tenancy Support

For more details, see Configuring Concurrent Managed Objects in the product documentation.


Technical

Concurrency Utilities support in WebLogic Server 12.2.1, Part One: ManagedExecutorService

Overview

ManagedExecutorService is for running tasks asynchronously on threads provided by WebLogic Server. It extends java.util.concurrent.ExecutorService without adding new methods; it provides the methods from ExecutorService (execute, submit, invokeAll, invokeAny), and its lifecycle methods (awaitTermination, isTerminated, isShutdown, shutdown, shutdownNow) are disabled and throw IllegalStateException. WebLogic Server provides a preconfigured, default ManagedExecutorService for each application, and applications can easily use it in web or EJB components without any configuration. Let's begin with a simple example that uses the default ManagedExecutorService in a servlet.

Example-1: Use Default ManagedExecutorService to Submit an Asynchronous Task in a Servlet

Step1: Write an asynchronous task. Asynchronous tasks must implement either java.util.concurrent.Callable or java.lang.Runnable. A task can optionally implement javax.enterprise.concurrent.ManagedTask (see the JSR 236 specification) to provide identifying information, a ManagedTaskListener, or additional execution properties for the task.

public class SomeTask implements Callable<Integer> {
    public Integer call() {
        // Interact with a database, then return the answer.
        return 0; // placeholder result
    }
}

Step2: SomeServlet.java injects the default ManagedExecutorService and submits the task to it.

@WebServlet("/SomeServlet")
public class SomeServlet extends HttpServlet {
    @Resource ManagedExecutorService mes;

    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        // Create and submit the task instance
        Future<Integer> result = mes.submit(new SomeTask());
        // do something else
        try {
            // Wait for the result
            Integer value = result.get();
            // Process the result and reply to the user
        } catch (InterruptedException | ExecutionException e) {
            throw new ServletException("failed to get result for SomeTask", e);
        }
    }
}

Runtime Behavior

Application Scoped Instance. There are two applications (A in red and B in green) in the figure above. You can see that the two applications submit tasks to different ManagedExecutorService instances; this is because ManagedExecutorServices are application scoped. Each application has its own ManagedExecutorService instances, and the lifecycle of the ManagedExecutorService instances is bound to the application. Asynchronous tasks submitted to ManagedExecutorServices are also application scoped, so that when the application is shut down, the related asynchronous tasks/threads are cancelled/interrupted. Each application has its own default ManagedExecutorService instance. In addition, applications or system administrators can define customized ManagedExecutorServices. Please note that even ManagedExecutorService templates (see a later section) defined globally in the console are application scoped at runtime.

Context Propagation. In the figure above you can see that when application A submits a task, the task is wrapped with the context of application A, whereas when application B submits a task, the task is wrapped with the context of application B. This is because a ManagedExecutorService captures the application context at task submission, then propagates the captured application context before task execution, so that the task can also run with the application context. Four types of application context are propagated: JNDI, ClassLoader, Security and WorkArea.
The propagated context types are the same for all four types of concurrent managed objects.

Self Tuning (for short-running tasks). In the figure above you can see that ManagedExecutorServices submit short-running tasks to WorkManagers (see Workload Management in WebLogic Server 9.0 for an overview of WebLogic work managers), and create a new thread for each long-running task. As you may know, WebLogic Server diagnoses a thread as stuck if it is continually working (not idle) for a set period of time (the default is 10 minutes), so normally, if a task would last longer than that period of time, it can be treated as a long-running task. You can set the ManagedTask.LONGRUNNING_HINT property (see the JSR 236 specification) to "true" to make it run as a long-running task. Each ManagedExecutorService is associated with an application-scoped WorkManager. By default, ManagedExecutorServices are associated with the application's default WorkManager. Applications or system administrators can specify a Dispatch Policy to associate a ManagedExecutorService with a specific application-scoped WorkManager. There are examples of how to use the dispatch policy in a later section. By associating a ManagedExecutorService with a WorkManager, WebLogic Server utilizes the threads in the single self-tuning thread pool to run asynchronous tasks from applications, so that asynchronous tasks can also be dynamically prioritized together with servlet or RMI requests.

Limit of Concurrent Long-running Requests. As mentioned earlier, long-running tasks do not utilize the threads in the single thread pool; WebLogic Server creates a new thread for each such task. Because an excessive number of running threads can have a negative effect on server performance and stability, WebLogic Server provides configurations (Max Concurrent Long Running Requests) to limit the number of concurrent long-running tasks in a ManagedExecutorService/ManagedScheduledExecutorService instance, in the global (domain-level) runtime on a server, or in the server. By default, the limits are: 10 for a ManagedExecutorService/ManagedScheduledExecutorService instance, 50 for the global (domain-level) runtime on a server, and 100 for a server. When any of the limits is exceeded, the ManagedExecutorService/ManagedScheduledExecutorService rejects long-running task submissions by throwing a RejectedExecutionException. Please note the difference between the global (domain-level) runtime scope Max Concurrent Long Running Requests and the server scope Max Concurrent Long Running Requests. One of the key features in WLS 12.2.1 is the multi-tenancy support, where a single WebLogic Server domain can contain multiple partitions. The global (domain-level) runtime scope Max Concurrent Long Running Requests is the maximum number of concurrent long-running tasks submitted to all of the ManagedExecutorServices/ManagedScheduledExecutorServices on the server for the global (domain-level) runtime; this excludes concurrent long-running tasks submitted within the scope of partitions running on the server. The server scope Max Concurrent Long Running Requests is the maximum number of concurrent long-running tasks submitted to all of the ManagedExecutorServices/ManagedScheduledExecutorServices on the server, including concurrent long-running tasks submitted within the global (domain-level) runtime and partitions. For the partition scope Max Concurrent Long Running Requests, please read Part Five - Multi-tenancy Support.
A ManagedExecutorService/ManagedScheduledExecutorService accepts a long-running task submission only when none of the 3 limits is exceeded. For instance, suppose an application is deployed to the global (domain-level) runtime on a server. When a long-running task is submitted to its default ManagedExecutorService, a RejectedExecutionException will be thrown if there are 10 in-progress long-running tasks submitted to this ManagedExecutorService, or 50 in-progress long-running tasks submitted to the ManagedExecutorServices/ManagedScheduledExecutorServices in the scope of the global (domain-level) runtime on the server, or 100 in-progress long-running tasks submitted to the ManagedExecutorServices/ManagedScheduledExecutorServices in the server. There are examples of how to specify Max Concurrent Long Running Requests in a later section.

Configuration

As mentioned earlier, each application has its own default ManagedExecutorService. The default ManagedExecutorService is associated with the default WorkManager, has a default max concurrent long running requests (10), and has a default thread priority (Thread.NORM_PRIORITY). There is also a default max concurrent long running requests (100) for the whole server. If the default configuration is not good enough, read on for the configuration options. For instance, when you need to associate short-running tasks with a pre-defined WorkManager of higher priority, you will need to configure a ManagedExecutorService; and if there would be more than 100 concurrent long-running tasks in the server, you will need to change the server scope Max Concurrent Long Running Requests.

Configure ManagedExecutorServices

Name, Dispatch Policy, Max Concurrent Long Running Requests, and Long Running Priority are configured inside a ManagedExecutorService. Name is a string that identifies the ManagedExecutorService, Dispatch Policy is the name of the WorkManager to which short-running tasks are submitted, Max Concurrent Long Running Requests is the limit on concurrent long-running tasks submitted to this ManagedExecutorService, and Long Running Priority is the priority of the threads created for long-running tasks. An application can configure a ManagedExecutorService in a deployment descriptor (weblogic-application.xml/weblogic-ejb-jar.xml/weblogic.xml), get the ManagedExecutorService instance using @Resource(mappedName=<Name of ManagedExecutorService>), and then submit tasks to it. Besides the annotation, the application can also bind the ManagedExecutorService instance to JNDI by specifying <resource-env-description> and <resource-env-ref> in the deployment descriptor and then look it up using a JNDI naming context; see Configuring Concurrent Managed Objects in the product documentation for details. Also, a WebLogic system administrator can configure pre-defined ManagedExecutorService templates. When an application is deployed, WebLogic Server creates ManagedExecutorServices based on the configuration of the ManagedExecutorService templates, and the created ManagedExecutorServices are all in the scope of this application.
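As a hedged illustration of the long-running behavior described above (not code from the product documentation), the sketch below marks a task with the ManagedTask.LONGRUNNING_HINT execution property and handles the RejectedExecutionException thrown when one of the three Max Concurrent Long Running Requests limits is exceeded. The class and method names are assumptions.

// A hedged sketch: submit a task flagged as long-running and handle rejection.
import java.util.Collections;
import java.util.Map;
import java.util.concurrent.RejectedExecutionException;
import javax.annotation.Resource;
import javax.enterprise.concurrent.ManagedExecutorService;
import javax.enterprise.concurrent.ManagedExecutors;
import javax.enterprise.concurrent.ManagedTask;

public class LongRunningSubmitter {

    @Resource
    ManagedExecutorService mes;

    public void submitLongRunning(Runnable work) {
        // Wrap the task so the LONGRUNNING_HINT execution property is set to "true".
        Map<String, String> props =
                Collections.singletonMap(ManagedTask.LONGRUNNING_HINT, "true");
        Runnable longRunning = ManagedExecutors.managedTask(work, props, null);
        try {
            // Runs on a dedicated thread rather than the self-tuning pool.
            mes.submit(longRunning);
        } catch (RejectedExecutionException e) {
            // The instance, global runtime, or server scope
            // Max Concurrent Long Running Requests limit was exceeded.
        }
    }
}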
Example-2: Configure a ManagedExecutorService in weblogic.xml

Step1: defining the ManagedExecutorService:

<!-- weblogic.xml -->
<work-manager>
    <name>customizedWM</name>
    <max-threads-constraint>
        <name>max</name>
        <count>1</count>
    </max-threads-constraint>
</work-manager>
<managed-executor-service>
    <name>customizedMES</name>
    <dispatch-policy>customizedWM</dispatch-policy>
    <long-running-priority>10</long-running-priority>
    <max-concurrent-long-running-requests>20</max-concurrent-long-running-requests>
</managed-executor-service>

Step2: obtaining the ManagedExecutorService instance to use

@WebServlet("/SomeServlet")
public class SomeServlet extends HttpServlet {
    @Resource(mappedName="customizedMES")
    ManagedExecutorService mes;

    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        Runnable aTask = new Runnable() {
            ...
        };
        mes.submit(aTask);
        ...
    }
}

Example-3: Configure a ManagedExecutorService template using WebLogic Administration Console

If the requirement applies to multiple applications rather than an individual application, you can create ManagedExecutorService templates globally that are available to all applications. For instance, when you need to run short-running tasks from all applications with a lower priority, you will need to configure a ManagedExecutorService template. ManagedExecutorService templates are also useful in Batch jobs. As mentioned earlier, if there is a ManagedExecutorService template, WebLogic Server creates a ManagedExecutorService instance for each application based on the configuration of the template.

Step1: In the WebLogic Administration Console, a ManagedExecutorService template can be created by clicking on the “New” button from the “Summary of Concurrent Managed Object Templates” page. This brings up the "Create a New Managed Executor Service Template" page, where the name and other parameters of the new ManagedExecutorService template can be specified. In this example, a ManagedExecutorService template called "testMES" is being created to map to a pre-defined work manager "testWM".

Step2: Once a ManagedExecutorService template is created, any application in the WebLogic Server can get its own ManagedExecutorService instance to use.

@WebServlet("/SomeServlet")
public class SomeServlet extends HttpServlet {
    @Resource(mappedName="testMES")
    ManagedExecutorService mes;

    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
        Runnable aTask = new Runnable() {
            ...
        };
        mes.submit(aTask);
        ...
    }
}

Configure Max Concurrent Long Running Requests in global (domain-level) runtime scope or server scope

Example-4: Configure global (domain-level) runtime Scope Max Concurrent Long Running Requests

Max Concurrent Long Running Requests of the global (domain-level) runtime is the limit on concurrent long-running tasks submitted to all ManagedExecutorServices and ManagedScheduledExecutorServices in the global (domain-level) runtime on that server; this excludes long-running tasks submitted within the scope of partitions running on that server. In the WebLogic Administration Console, Max Concurrent Long Running Requests of the global (domain-level) runtime can be edited from the “Settings for <domainName>” screen. In this example, the global (domain-level) runtime Max Concurrent Long Running Requests for mydomain is set to 80.
Example-5: Configure server scope Max Concurrent Long Running Requests

Max Concurrent Long Running Requests of a server is the limit of concurrent long-running tasks submitted to all ManagedExecutorServices and ManagedScheduledExecutorServices in that server. In the WebLogic Administration Console, Max Concurrent Long Running Requests of a server can be edited from the "Settings for <serverName>" screen. In this example, Max Concurrent Long Running Requests of myserver is set to 200.

Related Articles:
Concurrency Utilities support in WebLogic Server 12.2.1
Concurrency Utilities support in WebLogic Server 12.2.1, Part Two: ManagedScheduledExecutorService
Concurrency Utilities support in WebLogic Server 12.2.1, Part Three: ManagedThreadFactory
Concurrency Utilities support in WebLogic Server 12.2.1, Part Four: ContextService
Concurrency Utilities support in WebLogic Server 12.2.1, Part Five: Multitenancy

See Configuring Concurrent Managed Objects in the product documentation for more details.


Technical

Concurrency Utilities support in WebLogic Server 12.2.1

As part of its support for Java EE 7, WebLogic Server 12.2.1 supports the Java EE Concurrency Utilities (JSR 236) specification. This specification provides a simple, standardized API (four types of managed objects) for using concurrency from Java EE application components (such as servlets and EJBs). The four types of concurrent managed objects implement these interfaces in the javax.enterprise.concurrent package: ManagedExecutorService, ManagedScheduledExecutorService, ManagedThreadFactory, and ContextService. If you are still using common Java SE concurrency APIs such as java.lang.Thread or java.util.Timer directly in your servlets or EJBs, you are strongly encouraged to use the Java EE Concurrency Utilities instead. Threads created with the Java SE concurrency APIs are not managed by WebLogic Server, so services and resources provided by WebLogic Server typically cannot be used reliably from these unmanaged threads. By using the Java EE Concurrency Utilities, asynchronous tasks run on WebLogic Server-managed threads. Since WebLogic Server has knowledge of these threads and asynchronous tasks, it can manage them by:
Providing the proper execution context, including JNDI, ClassLoader, Security, and WorkArea
Submitting short-running tasks to the single server-wide self-tuning thread pool so they are prioritized based on defined rules and run-time metrics
Limiting the number of threads for long-running tasks to prevent a negative effect on server performance and stability
Managing the lifecycle of asynchronous tasks by interrupting threads/cancelling tasks when the application shuts down

The CommonJ API (providing context-aware Work Managers and Timers) is WebLogic Server specific and is the predecessor of the Java EE Concurrency Utilities. Compared to the CommonJ API, the Java EE Concurrency Utilities are more standardized and easier to use, and provide more functionality, such as custom scheduling, ContextService, and ManagedThreadFactory. Read these articles for details:
Concurrency Utilities support in WebLogic Server 12.2.1, Part One: ManagedExecutorService
Concurrency Utilities support in WebLogic Server 12.2.1, Part Two: ManagedScheduledExecutorService
Concurrency Utilities support in WebLogic Server 12.2.1, Part Three: ManagedThreadFactory
Concurrency Utilities support in WebLogic Server 12.2.1, Part Four: ContextService
Concurrency Utilities support in WebLogic Server 12.2.1, Part Five: Multi-tenancy Support
See Configuring Concurrent Managed Objects in the product documentation for more details.
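As a minimal sketch of the simplest case, a servlet can inject the application's default ManagedExecutorService (exposed by the specification under the standard JNDI name java:comp/DefaultManagedExecutorService) and submit a short-running task to it; the servlet name and task body here are illustrative only:

import java.io.IOException;
import javax.annotation.Resource;
import javax.enterprise.concurrent.ManagedExecutorService;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/DefaultExecutorServlet")
public class DefaultExecutorServlet extends HttpServlet {

    // Injects the default ManagedExecutorService defined by JSR 236
    @Resource(lookup = "java:comp/DefaultManagedExecutorService")
    private ManagedExecutorService executor;

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // The task runs on a WebLogic-managed thread with the submitting
        // component's context (JNDI, classloader, security) propagated.
        executor.submit(() -> System.out.println("running on a managed thread"));
        resp.getWriter().println("task submitted");
    }
}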



WLS Replay Statistics

Starting in the 12.1.0.2 Oracle thin driver, the replay driver has statistics related to replay. This is useful to understand how many connections are being replayed. Replay should be completely transparent to the application, so you won't know if connection replays are occurring unless you check. The statistics are available on a per-connection basis or on a datasource basis. However, connections on a WLS datasource don't share a driver-level datasource object, so the latter isn't useful in WLS. WLS 12.2.1 provides another mechanism to get the statistics at the datasource level.

The following code sample shows how to print out the available statistics for an individual connection using the oracle.jdbc.replay.ReplayableConnection interface, which exposes the method to get an oracle.jdbc.replay.ReplayStatistics object. See https://docs.oracle.com/database/121/JAJDB/oracle/jdbc/replay/ReplayStatistics.html for a description of the statistics values.

if (conn instanceof ReplayableConnection) {
  ReplayableConnection rc = (ReplayableConnection)conn;
  ReplayStatistics rs = rc.getReplayStatistics(
    ReplayableConnection.StatisticsReportType.FOR_CURRENT_CONNECTION);
  System.out.println("Individual Statistics");
  System.out.println("TotalCalls="+rs.getTotalCalls());
  System.out.println("TotalCompletedRequests="+rs.getTotalCompletedRequests());
  System.out.println("FailedReplayCount="+rs.getFailedReplayCount());
  System.out.println("TotalRequests="+rs.getTotalRequests());
  System.out.println("TotalCallsTriggeringReplay="+rs.getTotalCallsTriggeringReplay());
  System.out.println("TotalReplayAttempts="+rs.getTotalReplayAttempts());
  System.out.println("TotalProtectedCalls="+rs.getTotalProtectedCalls());
  System.out.println("SuccessfulReplayCount="+rs.getSuccessfulReplayCount());
  System.out.println("TotalCallsAffectedByOutages="+rs.getTotalCallsAffectedByOutages());
  System.out.println("TotalCallsAffectedByOutagesDuringReplay="+
    rs.getTotalCallsAffectedByOutagesDuringReplay());
  System.out.println("ReplayDisablingCount="+rs.getReplayDisablingCount());
}

Besides the getReplayStatistics() method, there is also a clearReplayStatistics() method.

To provide a consolidated view of all of the connections associated with a WLS datasource, the information is available via a new operation on the associated runtime MBean. You need to look up the WLS MBean server, get the JDBC service, then search for the datasource name in the list of JDBC datasource runtime MBeans, and get the JDBCReplayStatisticsRuntimeMBean. This value will be null if the datasource is not using a replay driver, if the driver is earlier than 12.1.0.2, or if it's not a Generic or AGL datasource. To use the replay information, you need to first call the refreshStatistics() operation, which sets the MBean values by aggregating the values for all connections on the datasource. Then you can call the operations on the MBean to get the statistics values, as in the following sample code. Note that there is also a clearStatistics() operation to clear the statistics on all connections on the datasource. The following code shows an example of how to print the aggregated statistics from the datasource.
public void printReplayStats(String dsName) throws Exception {
  MBeanServer server = getMBeanServer();
  ObjectName[] dsRTs = getJdbcDataSourceRuntimeMBeans(server);
  for (ObjectName dsRT : dsRTs) {
    String name = (String)server.getAttribute(dsRT, "Name");
    if (name.equals(dsName)) {
      ObjectName mb = (ObjectName)server.getAttribute(dsRT, "JDBCReplayStatisticsRuntimeMBean");
      server.invoke(mb, "refreshStatistics", null, null);
      MBeanAttributeInfo[] attributes = server.getMBeanInfo(mb).getAttributes();
      System.out.println("Roll-up");
      for (int i = 0; i < attributes.length; i++) {
        if (attributes[i].getType().equals("java.lang.Long")) {
          System.out.println(attributes[i].getName() + "=" +
            (Long)server.getAttribute(mb, attributes[i].getName()));
        }
      }
    }
  }
}

MBeanServer getMBeanServer() throws Exception {
  InitialContext ctx = new InitialContext();
  MBeanServer server = (MBeanServer)ctx.lookup("java:comp/env/jmx/runtime");
  return server;
}

ObjectName[] getJdbcDataSourceRuntimeMBeans(MBeanServer server) throws Exception {
  ObjectName service = new ObjectName(
    "com.bea:Name=RuntimeService,Type=weblogic.management.mbeanservers.runtime.RuntimeServiceMBean");
  ObjectName serverRT = (ObjectName)server.getAttribute(service, "ServerRuntime");
  ObjectName jdbcRT = (ObjectName)server.getAttribute(serverRT, "JDBCServiceRuntime");
  ObjectName[] dsRTs = (ObjectName[])server.getAttribute(jdbcRT, "JDBCDataSourceRuntimeMBeans");
  return dsRTs;
}

Now run an application that gets a connection, does some work, kills the session, replays, then gets a second connection and does the same thing. Each connection successfully replays once. That means that the individual statistics show a single replay and the aggregated statistics will show two replays. Here is what the output might look like.

Individual Statistics
TotalCalls=35
TotalCompletedRequests=0
FailedReplayCount=0
TotalRequests=1
TotalCallsTriggeringReplay=1
TotalReplayAttempts=1
TotalProtectedCalls=19
SuccessfulReplayCount=1
TotalCallsAffectedByOutages=1
TotalCallsAffectedByOutagesDuringReplay=0
ReplayDisablingCount=0

Roll-up
TotalCalls=83
TotalCompletedRequests=2
FailedReplayCount=0
TotalRequests=4
TotalCallsTriggeringReplay=2
TotalReplayAttempts=2
TotalProtectedCalls=45
SuccessfulReplayCount=2
TotalCallsAffectedByOutages=2
TotalCallsAffectedByOutagesDuringReplay=0
ReplayDisablingCount=0

Looking carefully at the numbers, you can see that the individual count was taken before the connections were closed (TotalCompletedRequests=0) and the roll-up was taken after both connections were closed. You can also use WLST to get the statistics values for the datasource. The statistics are not visible in the administration console or FMWC in WLS 12.2.1.


Technical

Patching Oracle Home Across your Domain with ZDT Patching

Now it's time for the really good stuff! In this post, you will see how Zero Downtime (ZDT) Patching can be used to roll out a patched WebLogic OracleHome directory to all your managed servers (and optionally to your AdminServer) without incurring any downtime or loss of session data for your end users. This rollout, like the others, is based on a controlled rolling shutdown of nodes, using the Oracle Traffic Director (OTD) load balancer to route user requests around the offline node. The difference with this rollout is what happens when the managed servers are shut down. In this case, when the managed servers are shut down, the rollout will move the current OracleHome directory to a backup location and replace it with a patched OracleHome directory that the administrator has prepared, verified, and distributed in advance. (More on the preparation in a moment.)

When everything has been prepared, starting the rollout is simply a matter of issuing a WLST command like this one:

rolloutOracleHome("Cluster1", "/pathTo/PatchedOracleHome.jar", "/pathTo/BackupCurrentOracleHome", "FALSE")

The AdminServer will then check that the PatchedOracleHome.jar file exists everywhere that it should, and it will begin the rollout. Note that the "FALSE" flag simply indicates that this is not a domain-level rollback operation, where we would be required to update the AdminServer last instead of first.

In order to prepare the patched OracleHome directory, as mentioned above, the user can start with a copy of a production OracleHome, usually in a test (non-production) environment, and apply the desired patches in whatever way is already familiar to them. Once this is done, the administrator uses the included CIE tool copyBinary to create a distributable jar archive of the OracleHome. Once the jar archive of the patched OracleHome directory has been created, it can be distributed to all of the nodes that will be updated. Note that it needs to reside on the same path on all nodes. With that, the preparation is complete and the rollout can begin!

Be sure to check back soon to read about how the preparation phase has been automated as well by integrating ZDT Patching with another new tool called OPatchAuto. For more information about updating OracleHome with Zero Downtime Patching, view the documentation.


Technical

WebLogic 12.2.1 Multitenancy Support for Resource Adapter

One of the key features in WLS 12.2.1 is multi-tenancy support; you can learn more about the concept in Tim Quinn's blog: Domain Partitions for Multi-tenancy in WebLogic Server. For resource adapters, besides deploying to the domain, you can also deploy a resource adapter to a partition's resource group or to a resource group template. This can be done by selecting the resource group scope or resource group template scope while deploying the resource adapter in the console. The following image shows the deployment page in the console. In the example, we have a resource group Partition1-rg in Partition1 and a resource group template TestRGT: when you select the 'Global' scope, the resource adapter is deployed to the domain. If you select 'TestRGT template', the resource adapter is deployed to resource group template TestRGT, and if Partition1's resource group references TestRGT, the resource adapter is deployed to Partition1. If you select 'Partition1-rg in Partition1', the resource adapter is deployed to Partition1. You can learn more about multi-tenancy deployment in Hong Zhang's blog: Multi Tenancy Deployment.

If you deploy resource adapters to different partitions, the resources in different partitions will not interfere with each other, because:
A resource adapter's JNDI resources in one partition cannot be looked up by another partition; you can only look up resource adapter resources bound in the same partition.
Resource adapter classes packaged in the resource adapter archive are loaded by different classloaders when they are deployed to different partitions. You do not need to worry about mistakenly using resource adapter classes loaded for another partition.
If you somehow get a reference to one of the following resource adapter resource objects that belongs to another partition, you still cannot use it. You will get an exception if you call some of the methods of that object:
javax.resource.spi.work.WorkManager
javax.resource.spi.BootstrapContext
javax.resource.spi.ConnectionManager
javax.validation.Validator
javax.validation.ValidatorFactory
javax.enterprise.inject.spi.BeanManager
javax.resource.spi.ConnectionEventListener

After a resource adapter is deployed, you can access a domain resource adapter's runtime MBean through the 'ConnectorServiceRuntime' directory under ServerRuntime, using the WebLogic Scripting Tool (WLST). In the example above, we have a resource adapter named 'jca_ra' deployed at the domain level, so we can see its runtime MBean under ConnectorServiceRuntime/ConnectorService. jms-internal-notran-adp and jms-internal-xa-adp are also listed here; they are WebLogic internal resource adapters. But how can we monitor resource adapters deployed in a partition? They are under PartitionRuntimes. In the example above, we have a resource adapter named 'jca_ra' deployed in Partition1.
You can also get a resource adapter's runtime MBean through JMX (see how to access runtime MBeans using JMX):

JMXServiceURL serviceURL = new JMXServiceURL("t3", hostname, port,
    "/jndi/weblogic.management.mbeanservers.domainruntime");
Hashtable h = new Hashtable();
h.put(Context.SECURITY_PRINCIPAL, user);
h.put(Context.SECURITY_CREDENTIALS, passwd);
h.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES, "weblogic.management.remote");
h.put("jmx.remote.x.request.waiting.timeout", new Long(10000));
JMXConnector connector = JMXConnectorFactory.connect(serviceURL, h);
MBeanServerConnection connection = connector.getMBeanServerConnection();
Set<ObjectName> names = connection.queryNames(new ObjectName(
    "*:Type=ConnectorComponentRuntime,Name=jca_ra,*"), null);
for (ObjectName oname : names) {
    Object o = MBeanServerInvocationHandler.newProxyInstance(connection, oname,
        ConnectorComponentRuntimeMBean.class, false);
    System.out.println(o);
}

Running the above example code in a domain that has a resource adapter named 'jca_ra' deployed to both the domain and Partition1, you will get the following result:

[MBeanServerInvocationHandler]com.bea:Name=jca_ra,ServerRuntime=admin,Location=admin,Type=ConnectorComponentRuntime,ApplicationRuntime=jca_ra
[MBeanServerInvocationHandler]com.bea:Name=jca_ra,ServerRuntime=admin,Location=admin,Type=ConnectorComponentRuntime,ApplicationRuntime=jca_ra,PartitionRuntime=Partition1

You can see that the connection pool runtime MBean (ConnectorComponentRuntime) of the resource adapter deployed to Partition1 has a valid PartitionRuntime attribute. So you can query Partition1's resource adapter runtime MBean with the following code:

connection.queryNames(new ObjectName(
    "*:Type=ConnectorComponentRuntime,Name=jca_ra,PartitionRuntime=Partition1,*"), null);


Messaging

12.2.1 WebLogic JMS: Dynamic Scalability and High Availability Made Simple

Introduction

WebLogic's 12.2.1 release features a greatly simplified, easy to use JMS configuration and administration model. This simplified model works seamlessly in both Cluster and Multi-Tenant/Cloud environments, making JMS configuration a breeze and portable. It essentially lifts all major limitations of the initial version of the JMS 'cluster targeting' feature that was added in 12.1.2, plus adds enhancements that aren't available in the old administration model. Now, all types of JMS service artifacts can take full advantage of a Dynamic Cluster environment and automatically scale up as well as evenly distribute the load across the cluster in response to cluster size changes. In other words, there is no need to individually configure and deploy JMS artifacts on every cluster member in response to cluster growth or change. New, easily configured high availability fail-over, fail-back, and restart-in-place settings provide capabilities that were previously only partially supported via individual targeting. Finally, 12.2.1 adds the ability to configure singleton destinations in a cluster within the simplified configuration model. These capabilities apply to all WebLogic Cluster types, including 'classic' static clusters which combine a set of individually configured WebLogic servers, dynamic clusters which define a single dynamic WL server that can expand into multiple instances, and mixed clusters that combine both a dynamic server and one or more individually configured servers.

Configuration Changes

With this model, you can now easily configure and control dynamic scaling and high availability behavior for JMS in a central location, either on a custom Store for all JMS artifacts that handle persistent data, or on a Messaging Bridge. The new configuration parameters introduced by this model are collectively known as "High Availability" policies. These are exposed to users via the management consoles (WebLogic Administration Console, Fusion Middleware Control (FMWC)) as well as through WLST scripting and Java MBean APIs. When they're configured on a store, all the JMS service artifacts that reference that store simply inherit these settings from the store and behave accordingly.

Figure 1. Configuration Inheritance

The most important configuration parameters are distribution-policy and migration-policy, which control dynamic scalability and high availability, respectively, for their associated service artifacts. When a distribution-policy is set to distributed on one configured artifact, then at deploy time the system automatically creates an instance on each cluster member that joins the cluster. When set to singleton, the system creates a single instance for the entire cluster. Distributed instances are uniquely named after their host WebLogic Server (their configured name is suffixed with the name of their server), where the instance is initially created and started, for runtime monitoring and location tracking purposes. This server is called the home or preferred server for the distributed instances that are named after it. A singleton instance is not decorated with a server name; instead it's simply suffixed with "-01", and the system will choose one of the managed servers in the cluster to host the instance. A distribution-policy works in concert with a new high availability option called the migration-policy to ensure that instances survive any unexpected service failures, server crashes, or even a planned shutdown of the servers. It does this by automatically migrating them to available cluster members.
For the migration-policy, you can choose one of three options: on-failure, where the migration of instances takes place only in the event of unexpected service failures or server crashes; always, where the migration of instances takes place even during a planned administrative shutdown of a server; and off, which disables service-level migration if needed.

Figure 2. Console screenshot: HA Configuration

In addition to the migration-policy, the new model offers another high availability notion for stores called the restart-in-place capability. When enabled, the system will first try to restart failing store instances on their current server before failing over to another server in the cluster. This option can be fine-tuned to limit the number of attempts and the delay between each attempt. This capability prevents the system from doing unnecessary migration in the event of temporary glitches, such as a database outage, or unresponsive network or IO requests due to latency and overload. Bridges ignore restart-in-place settings as they already automatically restart themselves after a failure (they periodically try to reconnect).

Note that the high availability enhancement not only offers failover of the service artifacts in the event of failure, it also offers automatic failback of distributed instances when their home server gets restarted after a crash or shutdown – a high availability feature that isn't available in previous releases. This allows applications to achieve a high level of server/configuration affinity whenever possible. Unlike in previous releases, both during startup and failover, the system will also try to ensure that the instances are evenly distributed across the cluster members, thus preventing accidental overload of any one server in the cluster.

Here's a summary of the new distribution, migration, and restart-in-place settings:

distribution-policy – Controls JMS service instance counts and names. Options: [Distributed | Singleton]. Default: Distributed.
migration-policy – Controls HA behavior. Options: [Off | On-Failure | Always]. Default: Off.
restart-in-place – Enables automatic restart of failing store instance(s) on a healthy WebLogic Server. Options: [true | false]. Default: true.
seconds-between-restarts – Specifies how many seconds to wait between attempts to restart-in-place a failed service. Options: [1 … {Max Integer}]. Default: 30.
number-of-restart-attempts – Specifies how many restart attempts to make before migrating the failed services. Options: [-1, 0 … {Max Long}]. Default: 6.
initial-boot-delay-seconds – The length of time to wait before starting an artifact's instance on a server. Options: [-1, 0 … {Max Long}]. Default: 60.
failback-delay-seconds – The length of time to wait before failing back an artifact to its preferred server. Options: [-1, 0 … {Max Long}]. Default: 30.
partial-cluster-stability-seconds – The length of time to wait before the cluster should consider itself at a "steady state". Until that point, only some resources may be started in the cluster. This gives the cluster time to come up slowly and still be easily laid out. Options: [-1, 0 … {Max Long}]. Default: 240.

Runtime Monitoring

As mentioned earlier, when targeted to a cluster, the system automatically creates one (singleton) or more (distributed) instances from a single configured artifact. These instances are backed by appropriate runtime MBeans, named uniquely and made available for access and monitoring under the appropriately scoped server (or partition, in the case of a multi-tenant environment) runtime MBean tree.

Figure 3.
Console screenshot: Runtime Monitoring

The above screenshot shows how a cluster-targeted SAF Agent runtime instance is decorated with the cluster member server name to make it unique.

Validation and Legal Checks

There are legal checks and validation rules in place to prevent users from configuring invalid combinations of these new parameters. The following two tables list the supported combinations of these two new policies, by service type and by resource type respectively.

Service Artifact / Distribution Policy — supported Migration Policies (Off | Always | On-Failure):
Persistent Store — Distributed: ✓ ✓ ✓; Singleton: ✓ ✓
JMS Server — Distributed: ✓ ✓ ✓; Singleton: ✓ ✓
SAF Agent — Distributed: ✓ ✓ ✓
Path Service — Singleton: ✓
Messaging Bridge — Distributed: ✓ ✓; Singleton: ✓

In the above table, the legal combinations are listed based on the JMS service types. For example, the Path Service, a messaging service that persists and holds the routing information for messages that take advantage of a popular WebLogic ordering extension called unit-of-order or unit-of-work routing, is a singleton service that should be made highly available in a cluster regardless of whether there is a service failure or server failure. So, the only valid and legal combination of HA policies for this service configuration is: distribution-policy as singleton and migration-policy as always.

Some rules are also derived based on the resource types that are being used in an application. For example, for any JMS Servers that host uniform distributed destinations, or for SAF Agents, which always host imported destinations, a distribution-policy of singleton does not make any sense and is not allowed.

Resource Type — supported Distribution Policies:
JMS Servers (hosting Distributed Destinations) — Distributed
SAF Agent (hosting Imported Destinations) — Distributed
JMS Servers (hosting Singleton Destinations) — Singleton
Path Service — Singleton
Bridge — Singleton or Distributed

In the event of an invalid configuration that violates these legal checks, there will be an error or log message indicating the problem, and in some cases it may cause deployment or server startup failures.

Best Practices

To take full advantage of the improved capabilities, first design your JMS application by carefully identifying the scalability and availability requirements as well as the deployment environments. For example, identify whether the application will be deployed to a cluster or to a multi-tenant environment, and whether it will be using uniform distributed destinations, standalone (non-distributed) destinations, or both.

Once the above requirements are identified, always define and associate a custom persistent store with the applicable JMS service artifacts. Ensure that the new HA parameters are explicitly set per the requirements (use the above tables as guidance) and that both a JMS service artifact and its corresponding store are similarly targeted (to the same cluster, or to the same RG/T in the case of a multi-tenant environment).

Remember, the JMS high availability mechanism depends on the WebLogic Server Health and Singleton Monitoring services, which in turn rely on a mechanism called "Cluster Leasing". So you need to set up a valid cluster leasing configuration, particularly when the migration-policy is set to either on-failure or always, or when you want to create a singleton instance of a JMS service artifact. Note that WebLogic offers two leasing options, Consensus and Database, and we highly recommend using Database leasing as a best practice.
Also, it is highly recommended to configure high availability for WebLogic's transaction system, as JMS apps often directly use transactions, and JMS internals often implicitly use transactions. Note that WebLogic transaction high availability requires that all managed servers have explicit listen-address and listen-port values configured, instead of leaving the defaults, in order to yield full transaction HA support. In the case of a dynamic cluster configuration, you can configure these settings as part of the dynamic server template definition. Finally, it is also preferred to use Node Manager to start all the managed servers of a cluster over any other methods. For more information on this feature and other new improvements in the Oracle WebLogic Server 12.2.1 release, please see the What's New chapter of the public documentation.

Conclusion

Using these new enhanced capabilities of WebLogic JMS, one can greatly reduce the overall time and cost involved in configuring and managing WebLogic JMS in general, plus scalability and high availability in particular, resulting in ease of use with an increased return on investment.



Deploying Java EE 7 Applications to Partitions from Eclipse

The new WebLogic Server 12.2.1 Multi-tenant feature enables partitions to be created in a domain that are isolated from one another and able to be managed independently of one another. From a development perspective, this isolation opens up some interesting opportunities - for instance, it enables a single domain to be shared by multiple developers working on the same application, without them needing to worry about collisions of URLs or cross-accessing of resources. The lifecycle of a partition can be managed independently of others, so starting and stopping the partition to start and stop applications can be done with no impact on other users of the shared domain. A partition can be exported (unplugged) from a domain, including all of its resources and the application bits that are deployed, and imported (plugged) into a completely different domain to restore the exact same partition in the new location. This enables complete, working applications to be shared and moved between different environments in a very straightforward manner.

As an illustration of this concept of using partitions within a development environment, the YouTube video - WebLogic Server 12.2.1 - Deploying Java EE 7 Application to Partitions - takes the Java EE 7 CargoTracker application and deploys it to different targets from Eclipse.

In the first instance, CargoTracker is deployed to a known WebLogic Server target using the well known "Run as Server" approach, with which Eclipse will start the configured server and deploy the application to the base domain.

Following that, using a partition called "test" that has been created on the same domain, the same application code-base is built and deployed to the partition using maven and the weblogic-maven-plugin. The application is accessed in its partition using its Virtual Target mapping and shown to be working as expected.

To finish off the demonstration, the index page of the CargoTracker application is modified to mimic a development change and deployed to another partition called "uat" - where it is accessed and the page change is seen to be active.

At this point, all three instances of the same application are running independently on the same server and are accessible at the same time, essentially showing how a single domain can independently host multiple instances of the same application as it is being developed.


Announcement

Oracle WebLogic Server 12.2.1 Running on Docker Containers

UPDATE April 2016 - We now officially certify and support WebLogic 12.1.3 and WebLogic 12.2.1 Clusters in multi-host environments! For more information see this blog post. The Docker configuration files are also now maintained on the official Oracle GitHub Docker repository. Links in the Docker section of this article have also been updated to reflect the latest updates and changes. For more up-to-date information on Docker scripts and support, check the Oracle GitHub project docker-images.

Oracle WebLogic Server 12.2.1 is now certified to run on Docker containers. As part of this certification, we are releasing Docker files on GitHub to create Oracle WebLogic Server 12.2.1 install images and Oracle WebLogic Server 12.2.1 domain images. These images are built as an extension of existing Oracle Linux images. To help you with this, we have posted Dockerfiles and scripts on GitHub as examples for you to get started.

Docker is a platform that enables users to build, package, ship, and run distributed applications. Docker users package up their applications, and any dependent libraries or files, into a Docker image. Docker images are portable artifacts that can be distributed across Linux environments. Images that have been distributed can be used to instantiate containers where applications can run in isolation from other applications running in other containers on the same host operating system.

The table below describes the certification provided for various WebLogic Server versions. You can use these combinations of Oracle WebLogic Server, JDK, Linux, and Docker versions when building your Docker images.

Oracle WebLogic Server Version | JDK Version | Host OS | Kernel Version | Docker Version
12.2.1.0.0 | 8 | Oracle Linux 6 Update 6 or higher | UEK Release 3 (3.8.13) | 1.7 or higher
12.2.1.0.0 | 8 | Oracle Linux 7 or higher | UEK Release 3 (3.8.13) or RHCK 3 (3.10) | 1.7 or higher
12.2.1.0.0 | 8 | RedHat Enterprise Linux 7 or higher | RHCK 3 (3.10) | 1.7 or higher
12.1.3.0.0 | 7/8 | Oracle Linux 6 Update 5 or higher | UEK Release 3 (3.8.13) | 1.3.3 or higher
12.1.3.0.0 | 7/8 | Oracle Linux 7 or higher | UEK Release 3 (3.8.13) or RHCK 3 (3.10) | 1.3.3 or higher
12.1.3.0.0 | 7/8 | RedHat Enterprise Linux 7 or higher | RHCK 3 (3.10) | 1.3.3 or higher

We support Oracle WebLogic Server in certified Docker containers running on other Linux host operating systems that have Kernel 3.8.13 or later and that support Docker containers; please read our support statement. For additional details on the most current Oracle WebLogic Server supported configurations, please refer to the Oracle Fusion Middleware Certification Pages.

The Dockerfiles and scripts we have provided enable users to create clustered and non-clustered Oracle WebLogic Server domain configurations, including both development and production, running on a single host operating system or VMs. Each server running in the resulting domain configurations runs in its own Docker container, and is capable of communicating as required with other servers.

A topology which is in line with the "Docker way" for containerized applications and services consists of a container designed to run only an administration server containing all resources, shared libraries, and deployments. These Docker containers can all be on a single physical or virtual server Linux host or on multiple physical or virtual server Linux hosts. The Dockerfiles in GitHub to create an image with a WebLogic Server domain can be used to start these admin server containers.
For documentation on how to use these Dockerfiles and scripts, see the whitepaper on OTN. The Oracle WebLogic Server video and demo present our certification effort and show a demo of WebLogic Server 12.2.1 running on Docker containers. We hope you will try running the different configurations of WebLogic Server on Docker containers, and look forward to hearing any feedback you might have.



WLS UCP Datasource

WebLogic Server (WLS) 12.2.1 introduces a new datasource type that uses the Oracle Universal Connection Pool (UCP) as an alternative connection pool. The UCP datasource allows for configuration, deployment, and monitoring of the UCP connection pool as part of the WLS domain. It is certified with the Oracle Thin driver (simple, XA, and replay drivers). The product documentation is at http://docs.oracle.com/middleware/1221/wls/JDBCA/ucp_datasources.htm#JDBCA746. The goal of this article is not to reproduce that information but to summarize the feature and provide some additional information and screen shots for configuring the datasource.

A UCP data source is defined using a jdbc-data-source descriptor as a system resource. With respect to multi-tenancy, these system resources can be defined at the domain, partition, resource group template, or resource group level. The configuration for a UCP data source is pretty simple, with the standard datasource parameters. You can name it, and give it a URL, user, password, and JNDI name. Most of the detailed configuration and tuning comes in the form of UCP connection properties. The administrator can configure values for any setters supported by oracle.ucp.jdbc.PoolDataSourceImpl except LogWriter (see oracle.ucp.jdbc.PoolDataSourceImpl), by just removing the "set" from the attribute name (the names are case insensitive). For example:

ConnectionHarvestMaxCount=3

Table 8-2 in the documentation lists all of the UCP attributes that are currently supported, based on the 12.1.0.2 UCP jar that ships with WLS 12.2.1. There is some built-in validation of the (common sense) combinations of driver and connection factory:

Driver | Factory (ConnectionFactoryClassName)
oracle.ucp.jdbc.PoolDataSourceImpl (default) | oracle.jdbc.pool.OracleDataSource
oracle.ucp.jdbc.PoolXADataSourceImpl | oracle.jdbc.xa.client.OracleXADataSource
oracle.ucp.jdbc.PoolDataSourceImpl | oracle.jdbc.replay.OracleDataSourceImpl

To simplify the configuration, if the "driver-name" is not specified, it will default to oracle.ucp.jdbc.PoolDataSourceImpl, and the ConnectionFactoryClassName connection property defaults to the corresponding entry from the above table. Example 8.1 in the product documentation gives a complete example of creating a UCP data source using WLST. WLST usage is very common for application configuration these days.

Monitoring is available via the weblogic.management.runtime.JDBCUCPDataSourceRuntimeMBean. This MBean extends JDBCDataSourceRuntimeMBean so that it can be returned with the list of other JDBC MBeans from the JDBC service for tools like the administration console or your WLST script. For a UCP data source, the state and the following attributes are set: CurrCapacity, ActiveConnectionsCurrentCount, NumAvailable, ReserveRequestCount, ActiveConnectionsAverageCount, CurrCapacityHighCount, ConnectionsTotalCount, NumUnavailable, and WaitingForConnectionSuccessTotal.

The administration console and FMWC make it easy to create, update, and monitor UCP datasources. The following images are from the administration console. For the creation path, there is a drop-down that lists the data source types; UCP is one of the choices. The resulting data source descriptor has datasource-type set to "UCP". The first step is to specify the JDBC Data Source Properties that determine the identity of the data source. They include the datasource names, the scope (Global or Multi Tenant Partition, Resource Group, or Resource Group Template), and the JNDI names.
The next page handles the user name and password, URL, and additional connection properties. Additional connection properties are used to configure the UCP connection pool. There are two ways to provide the connection properties for a UCP data source in the console. On the Connection Properties page, all of the available connection properties for the UCP driver are displayed, so you only need to enter the property value. On the next page, Test Database Connection, you can enter a propertyName=value pair directly into the Properties text box. Any values entered on the previous Connection Properties page will already appear in the text box. This page can be used to test the specified values, including the connection properties. The Test Database Connection page allows you to enter free-form values for properties and test a database connection before the data source configuration is finalized. If necessary, you can provide additional configuration information using the Properties, System Properties, and Encrypted Properties attributes.

The final step is to target the data source. You can select one or more targets to which to deploy your new UCP data source. If you don't select a target, the data source will be created but not deployed, and you will need to deploy the data source at a later time before you can get a connection in the application.

For editing the data source, minimal tabs and attributes are exposed to configure, target, and monitor this data source type. The capabilities in FMWC are similar to the administration console but with a different look and feel. If you select JDBC Data Sources from the WebLogic Domain drop-down, you will see a list of existing data sources with their associated data source type, scope, and, if applicable, RG, RGT, and Partition. Selecting an existing DS name brings up a page to edit the DS. Selecting a resource group name (if it exists) brings up a page to edit the RG. Selecting a partition name of an existing data source brings up a page to edit the Partition attributes. Selecting Create displays a data source type drop-down where you can select UCP Data Source. The first page of the UCP creation requires the data source name, scope, JNDI name(s), and selecting a driver class name. Connection properties are input on the next page. Unlike the administration console, the UCP connection properties are not listed; you must add a new entry by selecting "+", type in the property name, and then enter the value. This page is also used to test the database connection. The final page in the creation sequence allows for targeting the data source and creating the new object.

Once you have your datasource configured and deployed, you access it using a JNDI lookup in your application, as with other WLS datasource types.

import javax.naming.Context;
import javax.naming.InitialContext;
import java.sql.Connection;
import oracle.ucp.jdbc.PoolDataSource;

Context ctx = new InitialContext();
PoolDataSource pds = (PoolDataSource) ctx.lookup("ucpDS");
Connection conn = pds.getConnection();

While the usage in the application looks similar to other WLS datasources, you don't have all of the features of a WLS datasource, but you get the additional features that the UCP connection pool supports. Note that there is no integration of the UCP datasource with WLS security or JTA transactions. UCP has its own JMX management. Start at this link for the UCP overview: https://docs.oracle.com/database/121/JJUCP/intro.htm#JJUCP8109.
When you see examples that execute PoolDataSourceFactory.getPoolDataSource() and then call several setters on the datasource, that setup is replaced by configuring the UCP datasource in WLST or one of the GUIs. Pick up the example at the point of getting the connection, as shown above.
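For comparison, the standalone UCP programmatic style referred to above looks roughly like the sketch below; the URL, user, password, and pool setting are placeholders, and in WLS this setter-based setup is what the datasource configuration replaces:

import java.sql.Connection;
import oracle.ucp.jdbc.PoolDataSource;
import oracle.ucp.jdbc.PoolDataSourceFactory;

public class StandaloneUcpExample {
    public static Connection getConnection() throws Exception {
        // Programmatic UCP configuration, as used outside of a WLS UCP datasource
        PoolDataSource pds = PoolDataSourceFactory.getPoolDataSource();
        pds.setConnectionFactoryClassName("oracle.jdbc.pool.OracleDataSource");
        pds.setURL("jdbc:oracle:thin:@//dbhost:1521/myservice");   // placeholder URL
        pds.setUser("scott");                                      // placeholder credentials
        pds.setPassword("tiger");
        pds.setMaxPoolSize(10);                                    // example pool tuning setter
        return pds.getConnection();
    }
}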


JMX Authorization policies

With the introduction of the Partition concept in WebLogic 12.2.1, there is an impact on how an MBean is authorized for a WebLogic user. This is due to the fact that an MBean can now be scoped to either 1) the Domain or 2) a Partition. A Partition is a slice of a WebLogic Domain which can have its own set of users defined. The users from one Partition are separate from the users from another Partition and also from the users of the Domain. Historically, when there was no concept of a Partition, a user in a WebLogic Domain could have four roles, namely 1) Administrator 2) Deployer 3) Monitor 4) Operator. The basic rules of whether an MBean can be authorized can be summarized as follows:
Any user can read any MBean except the encrypted ones.
A user with the Administrator role can write/execute any MBean.
A user with another role (Deployer, Monitor, Operator) can access an MBean as long as that MBean is annotated with the specific role. E.g.:

@roleAllowed Deployer
public class MyMBean {
}

In the case above, a user with the Deployer role can write/execute that MBean, but a user with the Monitor role cannot write/execute that MBean.

In the 12.2.1 release, with the introduction of Multi-tenancy, the authorization rules have changed significantly. An MBean can now be located either in the Domain scope or in a Partition scope. This scoping is sometimes referred to as "owned by". E.g., in 12.2.1, the DomainMBean is owned by the Domain because the MBean is located at the Domain level in the MBean tree. The MBeans in WebLogic can be visualized as a tree-like structure. The image above describes how MBeans in WebLogic are now scoped either to a Partition or to the Domain. When a Partition is created in WebLogic, a config MBean named PartitionMBean is created representing that Partition. If you look carefully at the image above, you will see that the PartitionMBean is scoped to the Domain, not to the Partition. Any MBean under that PartitionMBean is scoped to a Partition. So, whether a Partition user can access an MBean depends on where that MBean is located in the MBean tree. The location of the MBean in the MBean tree defines whether a Partition owns that MBean or whether the Domain owns that MBean. Users from the Domain are allowed to write/execute the Partition MBeans, but the users from the Partitions are not allowed to write/execute MBeans unless they are granted permission by explicit annotations.

In 12.2.1, we have introduced a new annotation named @owner which overrides the location-based ownership behavior of an MBean with explicitly specified ownership. There are three values of @owner, namely 1) Domain 2) Partition 3) Context.
@owner Domain marks an MBean as owned by the Domain regardless of its location in the MBean tree.
@owner Partition marks an MBean as owned by the Partition regardless of its location in the MBean tree.
@owner Context changes the ownership of an MBean based on the login context of the user that is trying to access the MBean: if a user tries to access an MBean from the Domain context, then the MBean behaves as @owner Domain; if a user tries to access an MBean from a Partition context, then the MBean behaves as @owner Partition.
The @owner Context is particularly useful when an MBean needs to be accessed by both the Domain users and the Partition users of a Partition. The MBean acts like a shared MBean between the Domain and the Partitions.
You must remember that when an MBean scoped to the Domain in the MBean tree is marked with @owner Context, it means that that MBean can be written/executed by all the Partition users, not just by the users from one particular Partition. There is no way to selectively allow users from a particular Partition to access a Domain-scoped MBean. Each attribute or operation of an MBean can be marked with @owner to have finer control over the MBean. Putting @owner on an MBean interface acts like putting @owner on all the attributes and operations of that MBean.

Example usage

Annotating an MBean as below will allow a user from the Domain with the Deployer role and a user from a Partition with the Deployer role access to any operations or attributes on DomainRuntimeMBean:

/**
 * @roleAllowed Deployer
 * @owner Context
 */
public interface DomainRuntimeMBean

Authorization's relation to visibility of an MBean

In the 12.2.1 release, because of the introduction of Multi-tenancy, there are some changes in terms of which MBeans can be seen by a user. Not all MBeans are visible to a user. The authorization rules apply only if an MBean is visible to the end user. Please see MBean Visibility for details about the visibility rules in WebLogic 12.2.1.

Default Security Policies in 12.2.1
The Domain user with the Administrator role has full access to all MBeans across the Domain and the Partitions.
The getter for any MBean attribute and the lookupXXX operations are authorized for any user from the Domain and the Partitions without any annotation required.
The setter of an attribute or an operation of a Domain-scoped MBean needs to be marked with @owner Context if Partition users need to access it.
A Partition-owned MBean does not require the @owner annotation to be accessible by the users of that particular Partition.
If a Domain user with another role (Deployer, Monitor, or Operator) requires access to Domain-scoped MBeans, then they must be annotated with the @roleAllowed annotation. Remember, unlike the Domain Administrator, these users can only access Domain-scoped MBeans, not the Partition-scoped MBeans.
A user from a Partition with the Administrator role can access any MBean in that Partition, but not MBeans in other Partitions. This is protected by the Visibility rule.
A Partition user can have similar roles (Administrator, Deployer, Monitor, or Operator) as in the Domain.
If a user from a Partition with another role (Deployer, Operator, or Monitor) needs access (write and execute) to a Partition-scoped MBean, then the MBean needs to be annotated with @roleAllowed.

Summary of Authorization rules for users in WebLogic 12.2.1

Table 1: WLS MBeans without any @roleAllowed annotation
Domain MBean: an MBean located in the Domain scope, an MBean marked with @owner Domain, or an MBean marked with @owner Context when the subject is in the Domain context.
Partition MBean: an MBean located in a Partition scope, an MBean marked with @owner Partition, or an MBean marked with @owner Context when the subject is in a Partition context when trying to access the MBean.

Table 2: WLS MBeans with @roleAllowed annotation.
This annotation can appear on an MBean interface, attribute, or operation.
Domain MBean: an MBean located in the Domain scope, an MBean marked with @owner Domain, or an MBean marked with @owner Context when the subject is in the Domain context.
Partition MBean: an MBean located in a Partition scope, an MBean marked with @owner Partition, or an MBean marked with @owner Context when the subject is in a Partition context when trying to access the MBean.

Table 3: Privileges of Different Domain Roles

Table 4: Privileges of Different Partition Roles



ONS Configuration in WLS

Fast Application Notification (FAN) is used to provide notification events about services and nodes as they are added and removed from Oracle Real Application Clusters (RAC) and Oracle Data Guard environments. The Oracle Notification System (ONS) is used as the transport for FAN. You can find all of the details at the following link: http://www.oracle.com/technetwork/database/options/clustering/overview/fastapplicationnotification12c-2538999.pdf

Configuring ONS has been available for Active GridLink (AGL) since WebLogic Server (WLS) 10.3.6. In recent releases, auto-ONS has been added so that it is no longer required to explicitly configure ONS, and this is generally recommended. There are some cases where it is necessary to explicitly configure ONS. One reason is to specify a wallet file and password (this cannot be done with auto-ONS). Another reason is to explicitly specify the ONS topology or modify the number of connections.

ONS configuration has been enhanced in WLS 12.2.1. The OnsNodeList value must be configured either with a single node list or a property node list (new in WLS 12.2.1), but not both. If the WLS OnsNodeList contains an equals sign (=), it is assumed to be a property node list and not a single node list.

Single node list: a comma-separated list of ONS daemon listen address and port pairs separated by a colon. Example: instance1:6200,instance2:6200

Property node list: this string is composed of multiple records, with each record consisting of a key=value pair and terminated by a newline ('\n') character. The following keys can be specified.

nodes.<id> - A list of nodes representing a unique topology of remote ONS servers. <id> specifies a unique identifier for the node list -- duplicate entries are ignored. The list of nodes configured in any list must not include any nodes configured in any other list for the same client, or duplicate notifications will be sent and delivered. The list format is a comma-separated list of ONS daemon listen address and port pairs separated by a colon.

maxconnections.<id> - Specifies the maximum number of concurrent connections maintained with the ONS servers. <id> specifies the node list to which this parameter applies. The default is 3.

active.<id> - If true, the list is active and connections will automatically be established to the configured number of ONS servers. If false, the list is inactive and will only be used as a fail-over list in the event that no connections for an active list can be established. An inactive list can only serve as a fail-over for one active list at a time, and once a single connection is re-established on the active list, the fail-over list will revert to being inactive. Note that only notifications published by the client after a list has failed over will be sent to the fail-over list. <id> specifies the node list to which this parameter applies. The default is true.

remotetimeout - The timeout period, in milliseconds, for a connection to each remote server. If the remote server has not responded within this timeout period, the connection will be closed. The default is 30 seconds.

Note that although walletfile and walletpassword are supported in the string, WLS has separate configuration elements for these values, OnsWalletFile and OnsWalletPasswordEncrypted.
This example is equivalent to the above single node list:

nodes.1=instance1:6200,instance2:6200

If the datasource is configured to connect to two clusters and receive FAN events from both, for example in the RAC with Data Guard situation, then two ONS node groups are needed. For example:

nodes.1=rac1-scan:6200
maxconnections.1=4
nodes.2=rac2-scan:6200
maxconnections.2=4

Remember that the URL needs a separate ADDRESS_LIST for each cluster and LOAD_BALANCE=ON set per ADDRESS to expand SCAN names.

When using the administration console to configure an Active GridLink datasource, it is not possible to specify a property node list during the creation flow. Instead, it is necessary to modify the ONS Node value on the ONS tab after creation. The following figure shows a property node list with two groups for two RAC clusters. The next figure shows the Monitoring page for ONS statistics; note that there are two entries, one for each host and port pair. The final figure shows the Monitoring page after testing the rac2 ONS node group.

You can also use WLST to create the ONS parameter. Multiple lines in the ONS value need to be separated by embedded newlines. This is a complete example of creating an AGL datasource:

connect('weblogic','welcome1','t3://'+'localhost'+':7001')
edit()
startEdit()
cd('/')
dsName='aglds'
cmo.createJDBCSystemResource(dsName)
cd('/JDBCSystemResources/' + dsName + '/JDBCResource/' + dsName)
cmo.setName(dsName)
cmo.setDatasourceType('AGL')
cd('/JDBCSystemResources/' + dsName + '/JDBCResource/' + dsName + '/JDBCDataSourceParams/' + dsName)
set('JNDINames',jarray.array([String('jdbc/' + dsName)], String))
cd('/JDBCSystemResources/' + dsName + '/JDBCResource/' + dsName + '/JDBCDriverParams/' + dsName)
cmo.setUrl('jdbc:oracle:thin:@(DESCRIPTION=(CONNECT_TIMEOUT=4)(RETRY_COUNT=30)(RETRY_DELAY=3)(ADDRESS_LIST=(LOAD_BALANCE=on)(ADDRESS=(PROTOCOL=TCP)(HOST=rac1)(PORT=1521)))(ADDRESS_LIST=(LOAD_BALANCE=on)(ADDRESS=(PROTOCOL=TCP)(HOST=rac2)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=otrade)))')
cmo.setDriverName('oracle.jdbc.OracleDriver')
cmo.setPassword('tiger')
cd('/JDBCSystemResources/' + dsName + '/JDBCResource/' + dsName + '/JDBCConnectionPoolParams/' + dsName)
cmo.setTestTableName('SQL ISVALID')
cd('/JDBCSystemResources/' + dsName + '/JDBCResource/' + dsName + '/JDBCDriverParams/' + dsName + '/Properties/' + dsName)
cmo.createProperty('user')
cd('/JDBCSystemResources/' + dsName + '/JDBCResource/' + dsName + '/JDBCDriverParams/' + dsName + '/Properties/' + dsName + '/Properties/user')
cmo.setValue('scott')
cd('/JDBCSystemResources/' + dsName + '/JDBCResource/' + dsName + '/JDBCDataSourceParams/' + dsName)
cmo.setGlobalTransactionsProtocol('None')
cd('/JDBCSystemResources/' + dsName + '/JDBCResource/' + dsName + '/JDBCOracleParams/' + dsName)
cmo.setFanEnabled(true)
cmo.setOnsNodeList('nodes.1=rac1:6200\nnodes.2=rac2:6200\nmaxconnections.1=4\n')
cd('/SystemResources/' + dsName)
set('Targets',jarray.array([ObjectName('com.bea:Name=' + 'myserver' + ',Type=Server')], ObjectName))
save()
activate()

If you can go with automatically configured ONS, that's desirable; but if you need to configure ONS explicitly, WLS 12.2.1 gives you a lot more power to specify exactly what you need.


Technical

Update your Java Version Easily with ZDT Patching

Another great feature of ZDT Patching is that it provides a simple way to update the Java version used to run WebLogic. Keeping up to date with Java security patches is an ongoing task of critical importance. Prior to ZDT Patching, there was no easy way to migrate all of your managed servers to a new Java version, but ZDT Patching makes this a simple two-step procedure.

The first step is to install the updated Java version on all of the nodes that you will be updating. This can be done manually or by using any of the normal software distribution tools typically used to manage enterprise software installations. This operation can be done outside a planned maintenance window as it will not affect any running servers. Note that when installing the new Java version, it must not overwrite the existing Java directory, and the location of the new directory must be the same on every node.

The second step is to simply run the Java rollout using a WLST command like this one:

rolloutJavaHome("Cluster1", "/pathTo/jdk1.8.0_66")

In this example, the Admin Server will start the rollout to coordinate the rolling restart of each node in the cluster named "Cluster1". While the managed servers and Node Manager on a given node are down, the path to the Java executable that they are started with will be updated. The rollout will then start the managed servers and Node Manager from the new Java path. Easy as that! For more information about upgrading Java with Zero Downtime Patching, view the documentation.
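For context, here is a minimal WLST sketch of the full flow, assuming an Admin Server at localhost:7001 and the sample credentials used elsewhere on this blog; treat the host, port, cluster name, and JDK path as placeholders:

connect('weblogic', 'welcome1', 't3://localhost:7001')
# The new JDK must already be installed at this same path on every node;
# the rollout only switches the path the servers and Node Manager start with.
rolloutJavaHome('Cluster1', '/pathTo/jdk1.8.0_66')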


Technical

Application MBeans Visibility in Oracle WebLogic Server 12.2.1

Oracle WebLogic Server (WLS) version 12.2.1 supports a feature called Multi-Tenancy (WLS MT). WLS MT introduces the partition, partition administrator, and partition resource concepts. Partition isolation is enforced when accessing resources (e.g., MBeans) in a domain. WLS administrators can see MBeans in the domain and the partitions, but a partition administrator, as well as other partition roles, is only allowed to see the MBeans in their partition, not in other partitions. In this article, I will explore the visibility support for application MBeans to demonstrate partition isolation in WLS MT in 12.2.1. This includes:

An overview of application MBean visibility in WLS MT
A simple use case that demonstrates what MBeans are registered on a WLS MBeanServer and what MBeans are visible to WLS administrators or partition administrators
Links to reference materials for more information

The use case in this article is based on a domain created in another article, "Create WebLogic Server Domain with Partitions using WLST in 12.2.1". In this article, I will:

Briefly show the domain topology
Demonstrate how to deploy an application to the domain and partitions
Demonstrate how to access the application MBeans via JMX clients using a global/domain URL or a partition-specific URL
Demonstrate how to enable debugging/logging

1. Overview

An application can be deployed to WLS servers per partition, so the application is multiplied for multiple partitions. WLS contains three MBeanServers: the Domain Runtime MBeanServer, the Runtime MBeanServer, and the Edit MBeanServer. Each MBeanServer can be used for all partitions. WLS needs to ensure that the MBeans registered on each MBeanServer by the application are unique for each partition.

Application MBean visibility in WLS MT can be illustrated in several parts:

Partition Isolation
Application MBeans Registration
Query Application MBeans
Access Application MBeans

1.1 Partition Isolation

A WLS administrator can see application MBeans in partitions. But a partition administrator for a partition is not able to see application MBeans from the domain or other partitions.

1.2 Application MBeans Registration

When an application is deployed to a partition, application MBeans are registered during the application deployment. WLS adds a partition-specific key (e.g., Partition=<partition name>) to the MBean ObjectNames when registering them on the WLS MBeanServer. This ensures that MBean ObjectNames are unique when registered from a multiplied application.

The figure on the right shows how application MBean ObjectNames differ when registered on the WLS MBeanServer for the domain and the partitions. It shows a WLS domain and an application:

The WLS domain is configured with two partitions: cokePartition and pepsiPartition.
An application registers one MBean, e.g., testDomain:type=testType, during the application deployment.
The application is deployed to the WLS domain, cokePartition, and pepsiPartition.
Since a WLS MBeanServer instance is shared by the domain, cokePartition, and pepsiPartition, there are three application MBeans registered on the same MBeanServer after the three application deployments:

An MBean that belongs to the domain: testDomain:type=testType
An MBean that belongs to cokePartition: testDomain:Partition=cokePartition,type=testType
An MBean that belongs to pepsiPartition: testDomain:Partition=pepsiPartition,type=testType

The MBeans that belong to the partitions contain a Partition key property in their ObjectNames.

1.3 Query Application MBeans

JMX clients, e.g., WebLogic WLST, JConsole, etc., connect to a global/domain URL or a partition-specific URL, then run a query on the WebLogic MBeanServer. The query results differ:

When connecting to a global/domain URL, the application MBeans that belong to the partitions are visible to those JMX clients.
When connecting to a partition-specific URL, WLS filters the query results. Only the application MBeans that belong to that partition are returned. MBeans belonging to the domain and other partitions are filtered out.

1.4 Access Application MBeans

When JMX clients, e.g., WebLogic WLST, JConsole, etc., connect to a URL and perform a JMX operation, e.g., getAttribute(<MBean ObjectName>, <attributeName>), the operation is actually performed on different MBeans:

When connecting to a global/domain URL, getAttribute() is called on the MBean that belongs to the domain (the MBean without the Partition key property in the MBean ObjectName).
When connecting to a partition-specific URL, getAttribute() is called on the MBean that belongs to that partition (the MBean with the Partition key property in the MBean ObjectName).

2. Use case

Now I will demonstrate how MBean visibility works in WebLogic Server MT in 12.2.1 to support partition isolation.

2.1 Domain with Partitions

In the article "Create WebLogic Server Domain with Partitions using WLST in 12.2.1", a domain with two partitions, coke and pepsi, is created. This domain is also used for the use case in this article. Here is a summary of the domain topology:

A domain is configured with one AdminServer named "admin", one partition named "coke", and one partition named "pepsi". The "coke" partition contains one resource group named "coke-rg1", targeted to a virtual target named "coke-vt". The "pepsi" partition contains one resource group named "pepsi-rg1", targeted to a virtual target named "pepsi-vt". More specifically, each domain/partition has the following configuration values:

                  Name          User Name   Password
Domain            base_domain   weblogic    welcome1
Coke Partition    coke          mtadmin1    welcome1
Pepsi Partition   pepsi         mtadmin2    welcome2

Please see the article "Create Oracle WebLogic Server Domain with Partitions using WLST in 12.2.1" for details on how to create this domain.

2.2 Application deployment

When the domain is set up and started, an application "helloTenant.ear" is deployed to the domain. It is also deployed to the "coke-rg1" resource group in the "coke" partition and to the "pepsi-rg1" resource group in the "pepsi" partition. The deployment can be done using different WLS tools, like FMW Console, WLST, etc.
Below are the WLST commands that deploy the application to the domain and the partitions:

startEdit()
deploy(appName='helloTenant', target='admin', path='${path-to-the-ear-file}/helloTenant.ear')
deploy(appName='helloTenant-coke', partition='coke', resourceGroup='coke-rg1', path='${path-to-the-ear-file}/helloTenant.ear')
deploy(appName='helloTenant-pepsi', partition='pepsi', resourceGroup='pepsi-rg1', path='${path-to-the-ear-file}/helloTenant.ear')
save()
activate()

For other WLS deployment tools, please see the "References" section.

2.3 Access Application MBeans

During the application deployment, application MBeans are registered on the WebLogic Server MBeanServer. As mentioned in section 1.2 Application MBeans Registration, multiple MBeans are registered, even though there is only one application. There are multiple ways to access the application MBeans:

WLST
JConsole
JSR 160 APIs

2.3.1 WLST

The WebLogic Scripting Tool (WLST) is a command-line scripting interface that system administrators and operators use to monitor and manage WebLogic Server instances and domains. To start WLST:

$MW_HOME/oracle_common/common/bin/wlst.sh

Once WLST is started, a user can connect to the server by providing a connection URL. Below, different connection URLs show the different values of an application MBean attribute seen by the WLS administrator and the partition administrators.

2.3.1.1 WLS administrator

The WLS administrator 'weblogic' connects to the domain using the following connect command:

connect("weblogic", "welcome1", "t3://localhost:7001")

The picture below shows that there are 3 MBeans registered on the WebLogic Server MBeanServer whose domain is "test.domain", and the value of the attribute "PartitionName" on each MBean:

test.domain:Partition=coke,type=testType,name=testName belongs to the coke partition. The value of the PartitionName attribute is "coke".
test.domain:Partition=pepsi,type=testType,name=testName belongs to the pepsi partition. The value of the PartitionName attribute is "pepsi".
test.domain:type=testType,name=testName belongs to the domain. There is no Partition key property in the ObjectName. The value of the PartitionName attribute is "DOMAIN".

An MBean belonging to a partition contains a Partition key property in its ObjectName. The Partition key property is added by WLS internally when the MBean is registered in a partition context.

2.3.1.2 Partition administrator for coke

Similarly, the partition administrator 'mtadmin1' for coke can connect to the coke partition. The connection URL uses "/coke", which is the URI prefix defined in the virtual target coke-vt. (Check the config/config.xml in the domain.)

connect("mtadmin1", "welcome1", "t3://localhost:7001/coke")

From the picture below, when connecting to the coke partition, there is only one MBean listed:

test.domain:type=testType,name=testName

Even though there is no Partition key property in the ObjectName, this MBean still belongs to the coke partition. The value of the PartitionName attribute is "coke".

2.3.1.3 Partition administrator for pepsi

Similarly, the partition administrator 'mtadmin2' for pepsi can connect to the pepsi partition.
The connection URL uses "/pepsi", which is the URI prefix defined in the virtual target pepsi-vt.

connect("mtadmin2", "welcome2", "t3://localhost:7001/pepsi")

From the picture below, when connecting to the pepsi partition, there is only one MBean listed:

test.domain:type=testType,name=testName

Even though there is no Partition key property in the ObjectName, the same as the one seen by the partition administrator for coke, this MBean still belongs to the pepsi partition. The value of the PartitionName attribute is "pepsi".

2.3.2 JConsole

The JConsole graphical user interface is a built-in tool in the JDK. It is a monitoring tool that complies with the Java Management Extensions (JMX) specification. By using JConsole you can get an overview of the MBeans registered on the MBeanServer. To start JConsole, do this:

$JAVA_HOME/bin/jconsole -J-Djava.class.path=$JAVA_HOME/lib/jconsole.jar:$JAVA_HOME/lib/tools.jar:$MW_HOME/wlserver/server/lib/wljmxclient.jar -J-Djmx.remote.protocol.provider.pkgs=weblogic.management.remote

where $MW_HOME is the location where WebLogic Server is installed. Once JConsole is started, the WLS administrator and partition administrators can use it to browse the MBeans, given the credentials and the JMX service URL.

2.3.2.1 WLS administrator

The WLS administrator "weblogic" provides a JMX service URL to connect to the WLS Runtime MBeanServer like the one below:

service:jmx:t3://localhost:7001/jndi/weblogic.management.mbeanservers.runtime

When connected as a WLS administrator, the MBean tree in JConsole shows 3 MBeans with "test.domain" in the ObjectName:

The highlighted ObjectName in the right pane in the picture below is the MBean that belongs to the coke partition. It has the Partition key property: Partition=coke.
The highlighted ObjectName below is the MBean that belongs to the pepsi partition. It has the Partition key property: Partition=pepsi.
The highlighted ObjectName below is the MBean that belongs to the domain. It does not have the Partition key property.

The result here is consistent with what we saw in WLST for the WLS administrator.

2.3.2.2 Partition administrator for coke

The partition administrator "mtadmin1" provides a different JMX service URL to JConsole:

service:jmx:t3://localhost:7001/coke/jndi/weblogic.management.mbeanservers.runtime

When connected via the partition-specific JMX service URL, the partition administrator can only see one MBean:

test.domain:type=testType,name=testName

This MBean belongs to the coke partition and the value of the PartitionName attribute is coke, as shown in the picture below. However, there is no Partition key property in the ObjectName.

2.3.2.3 Partition administrator for pepsi

The partition administrator "mtadmin2" provides a different JMX service URL to JConsole:

service:jmx:t3://localhost:7001/pepsi/jndi/weblogic.management.mbeanservers.runtime

When connected via the partition-specific JMX service URL, the partition administrator "mtadmin2" can only see one MBean:

test.domain:type=testType,name=testName

This MBean belongs to the pepsi partition and the value of the PartitionName attribute is pepsi, as shown in the picture below.

2.3.3 JSR 160 APIs

JMX clients can use JSR 160 APIs to access the MBeans registered on the MBeanServer.
For example, the code below shows how to get a JMXConnector by providing a service URL and the environment, and then get the MBean attribute:

import javax.management.*;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXServiceURL;
import javax.management.remote.JMXConnectorFactory;
import java.util.*;

public class TestJMXConnection {
  public static void main(String[] args) throws Exception {
    JMXConnector jmxCon = null;
    try {
      // Connect to the JMXConnector
      JMXServiceURL serviceUrl = new JMXServiceURL("service:jmx:t3://localhost:7001/jndi/weblogic.management.mbeanservers.runtime");
      System.out.println("Connecting to: " + serviceUrl);
      Hashtable env = new Hashtable();
      env.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES, "weblogic.management.remote");
      env.put(javax.naming.Context.SECURITY_PRINCIPAL, "weblogic");
      env.put(javax.naming.Context.SECURITY_CREDENTIALS, "welcome1");
      jmxCon = JMXConnectorFactory.newJMXConnector(serviceUrl, env);
      jmxCon.connect();

      // Access the MBean
      MBeanServerConnection con = jmxCon.getMBeanServerConnection();
      ObjectName oname = new ObjectName("test.domain:type=testType,name=testName,*");
      Set<ObjectName> queryResults = con.queryNames(oname, null);
      for (ObjectName theName : queryResults) {
        System.out.print("queryNames(): " + theName);
        String partitionName = (String) con.getAttribute(theName, "PartitionName");
        System.out.println(", Attribute PartitionName: " + partitionName);
      }
    } finally {
      if (jmxCon != null) jmxCon.close();
    }
  }
}

To compile and run this code, provide the wljmxclient.jar on the classpath, like:

$JAVA_HOME/bin/java -classpath $MW_HOME/wlserver/server/lib/wljmxclient.jar:. TestJMXConnection

You will get results like the following:

Connecting to: service:jmx:t3://localhost:7001/jndi/weblogic.management.mbeanservers.runtime
queryNames(): test.domain:Partition=pepsi,type=testType,name=testName, Attribute PartitionName: pepsi
queryNames(): test.domain:Partition=coke,type=testType,name=testName, Attribute PartitionName: coke
queryNames(): test.domain:type=testType,name=testName, Attribute PartitionName: DOMAIN

When you change the code to use the partition administrator "mtadmin1",

JMXServiceURL serviceUrl = new JMXServiceURL("service:jmx:t3://localhost:7001/coke/jndi/weblogic.management.mbeanservers.runtime");
env.put(javax.naming.Context.SECURITY_PRINCIPAL, "mtadmin1");
env.put(javax.naming.Context.SECURITY_CREDENTIALS, "welcome1");

running the code will return only one MBean:

Connecting to: service:jmx:t3://localhost:7001/coke/jndi/weblogic.management.mbeanservers.runtime
queryNames(): test.domain:type=testType,name=testName, Attribute PartitionName: coke

Similar results would be seen for the partition administrator for pepsi.
If a pepsi-specific JMX service URL is provided, only the MBean that belongs to the pepsi partition is returned.

2.4 Enable logging/debugging flags

If it appears that an MBean is not behaving correctly in WebLogic Server 12.2.1, for example:

A partition administrator can see MBeans from the global domain or other partitions when querying the MBeans, or
You get JMX exceptions, e.g., javax.management.InstanceNotFoundException, when accessing an MBean,

try the following to triage the errors:

If it's a connection problem in JConsole, add -debug on the JConsole command line when starting JConsole.

If a partition administrator can see MBeans from the global domain or other partitions when querying the MBeans: when connecting from JMX clients, e.g., WLST, JConsole, or JSR 160 APIs, make sure the host name in the service URL matches the host name defined in the virtual target in the config/config.xml in the domain, and make sure the URI prefix in the service URL matches the URI prefix defined in the virtual target in the config/config.xml in the domain.

If you get JMX exceptions, e.g., javax.management.InstanceNotFoundException, when accessing an MBean: when the MBean belongs to a partition, make sure the partition is started. The application deployment only happens when the partition is started.

Enable the debug flags during server startup, like this:

-Dweblogic.StdoutDebugEnabled=true -Dweblogic.log.LogSeverity=Debug -Dweblogic.log.LoggerSeverity=Debug -Dweblogic.debug.DebugPartitionJMX=true -Dweblogic.debug.DebugCIC=false

Search the server logs for the specific MBean ObjectName you are interested in. Make sure the MBean you are debugging is registered in the correct partition context, and that the MBean operation is called in the correct partition context. Here are sample debug messages for the MBean "test.domain:type=testType,name=testName" related to MBean registration, a queryNames() invocation, and a getAttribute() invocation:

<Oct 21, 2015 11:36:43 PM PDT> <Debug> <PartitionJMX> <BEA-000000> <Calling register MBean test.domain:type=testType,name=testName in partition DOMAIN>
<Oct 21, 2015 11:36:44 PM PDT> <Debug> <PartitionJMX> <BEA-000000> <Calling register MBean test.domain:Partition=coke,type=testType,name=testName in partition coke>
<Oct 21, 2015 11:36:45 PM PDT> <Debug> <PartitionJMX> <BEA-000000> <Calling register MBean test.domain:Partition=pepsi,type=testType,name=testName in partition pepsi>
<Oct 21, 2015 11:36:56 PM PDT> <Debug> <PartitionJMX> <BEA-000000> <queryNames on MBean test.domain:Partition=coke,type=testType,name=testName,* in partition coke>
<Oct 21, 2015 11:36:56 PM PDT> <Debug> <MBeanCIC> <BEA-000000> <getAttribute: MBean: test.domain:Partition=coke,type=testType,name=testName, CIC: (pId = 2d044835-3ca9-4928-915f-6bd1d158f490, pName = coke, appId = helloTenant$coke, appName = helloTenant, appVersion = null, mId = null, compName = null)>

To check why the partition context is not right, turn on this debug flag, in addition to the debug flags mentioned above, when starting WLS servers: -Dweblogic.debug.DebugCIC=true. Once this flag is used, a lot of messages are logged to the server log.
Search for the messages logged by the DebugCIC logger, like ExecuteThread: '<thread id #>' for queue: 'weblogic.kernel.Default (self-tuning)'): Pushed, and the messages logged by the DebugPartitionJMX logger:

<Oct 21, 2015, 23:59:34 PDT> INVCTXT (24-[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'): Pushed [(pId = 0, pName = DOMAIN, appId = null, appName = null, appVersion = null, mId = null, compName = null)] on top of [(pId = 0, pName = DOMAIN, appId = null, appName = null, appVersion = null, mId = null, compName = null)]. New size is [2]. Pushed by [weblogic.application.ComponentInvocationContextManagerImpl.pushComponentInvocationContext(ComponentInvocationContextManagerImpl.java:173)
...
<Oct 21, 2015 11:59:34 PM PDT> <Debug> <PartitionJMX> <BEA-000000> <Calling register MBean test.domain:type=testType,name=testName in partition DOMAIN>
...
<Oct 21, 2015, 23:59:37 PDT> INVCTXT (29-[STANDBY] ExecuteThread: '2' for queue: 'weblogic.kernel.Default (self-tuning)'): Pushed [(pId = 2d044835-3ca9-4928-915f-6bd1d158f490, pName = coke, appId = helloTenant$coke, appName = helloTenant, appVersion = null, mId = null, compName = null)] on top of [(pId = 2d044835-3ca9-4928-915f-6bd1d158f490, pName = coke, appId = null, appName = null, appVersion = null, mId = null, compName = null)]. New size is [3]. Pushed by [weblogic.application.ComponentInvocationContextManagerImpl.pushComponentInvocationContext(ComponentInvocationContextManagerImpl.java:173)
...
<Oct 21, 2015 11:59:37 PM PDT> <Debug> <PartitionJMX> <BEA-000000> <Calling register MBean test.domain:Partition=coke,type=testType,name=testName in partition coke>

3. Conclusion

WebLogic Server 12.2.1 provides a new feature: Multi-Tenancy (MT). With this feature, partition isolation is enforced. Applications can be deployed to the domain and to partitions. Users in one partition cannot see the resources in other partitions, including MBeans registered by applications. In this article, a use case briefly demonstrates how application MBeans are affected by partition isolation with regard to MBean visibility. For more detailed information, see the "References" section.

4. References

WebLogic Server domain
Config Wizard
WLST command reference
JConsole
Managing WebLogic Server with JConsole
JSR 160: Java Management Extensions Remote JMX API
WebLogic Server Security
WebLogic Server Deployment


Technical

Create WebLogic Server Domain with Partitions using WLST in 12.2.1

Oracle WebLogic Server 12.2.1 added support for multitenancy (WLS MT). In WLS MT, WLS can be configured with a domain, as well as one or more partitions. A partition contains new elements introduced in WLS MT, like resource groups, resource group templates, virtual targets, etc. Setting up a domain with partitions requires additional steps compared to a traditional WLS domain. For more detailed information about these new WLS MT related concepts, please see the Oracle documentation listed in the "References" section.

Oracle recommends using Fusion Middleware Control (FMWC) to create WebLogic domains via the Restricted JRF template. Oracle also supports creating WebLogic Server domains using WLST. In this article, I will demonstrate how to create a WLS domain with 2 partitions using WLST. This includes:

Displaying the domain topology
Creating a domain with 2 partitions using WLST
Displaying a domain config file sample

These tasks are described in the subsequent sections.

1. Domain Topology

In this article, I will create a domain that is configured with one AdminServer named "admin", one partition named "coke", and one partition named "pepsi". The "coke" partition contains one resource group named "coke-rg1", targeted to a virtual target named "coke-vt". The "pepsi" partition contains one resource group named "pepsi-rg1", targeted to a virtual target named "pepsi-vt". An application "helloTenant.ear" is deployed to the domain, to "coke-rg1" in the "coke" partition, and to "pepsi-rg1" in the "pepsi" partition. The following picture shows what the domain topology looks like.

Note that this domain topology does not contain other MT related concepts, like a resource group template. They are not covered in this article. To see more information about other MT related concepts, please check the "References" section for details.

2. Create a domain with partitions

To create a domain with the topology shown in the picture above, several steps are required:

Create a traditional WLS domain
Start the domain
Create a partition in the domain
Create a security realm for the partition
Create a user for the partition
Add the user to the groups in the security realm
Create a virtual target
Create a partition
Create a resource group
Set a virtual target as a default target
Set up the security IDD for the partition
Restart the server
Start the partition

Each step is illustrated in detail below.

2.1 Create a traditional WLS domain

A traditional WLS domain can be created by using the Config Wizard. Start the Config Wizard via a command script:

sh $MW_Home/oracle_common/common/bin/config.sh

Create a domain using all the defaults, and specify the following:

Domain name = base_domain
User name = weblogic
User password = welcome1

2.2 Start the domain

cd $MW_Home/user_projects/domains/base_domain
sh startWebLogic.sh

2.3 Create a partition: coke in the domain

The steps below require WLST to be started. Use the following command to start WLST:

sh $MW_Home/oracle_common/common/bin/wlst.sh

Note that all of the WLST commands shown below are run after connecting to the Admin Server "admin" with the admin user "weblogic" credentials, e.g.,

connect("weblogic", "welcome1", "t3://localhost:7001")

Now WLST is ready to run the commands to set up the coke partition. The partition for coke has the following values:

Partition name = coke
Partition user name = mtadmin1
Partition password = welcome1

To do that, a security realm and a user are created for the partition as shown below. We explain it step by step.
2.3.1 Create a security realm for the partition

The security realm is created using the standard WLS APIs.

edit()
startEdit()
realmName = 'coke_realm'
security = cmo.getSecurityConfiguration()
print 'realm name is ' + realmName
realm = security.createRealm(realmName)
# ATN
atnp = realm.createAuthenticationProvider(
  'ATNPartition','weblogic.security.providers.authentication.DefaultAuthenticator')
atna = realm.createAuthenticationProvider(
  'ATNAdmin','weblogic.security.providers.authentication.DefaultAuthenticator')
# IA
ia = realm.createAuthenticationProvider(
  'IA','weblogic.security.providers.authentication.DefaultIdentityAsserter')
ia.setActiveTypes(['AuthenticatedUser'])
# ATZ/Role
realm.createRoleMapper(
  'Role','weblogic.security.providers.xacml.authorization.XACMLRoleMapper')
realm.createAuthorizer(
  'ATZ','weblogic.security.providers.xacml.authorization.XACMLAuthorizer')
# Adjudicator
realm.createAdjudicator(
  'ADJ','weblogic.security.providers.authorization.DefaultAdjudicator')
# Auditor
realm.createAuditor(
  'AUD','weblogic.security.providers.audit.DefaultAuditor')
# Cred Mapper
realm.createCredentialMapper(
  'CM','weblogic.security.providers.credentials.DefaultCredentialMapper')
# Cert Path
realm.setCertPathBuilder(realm.createCertPathProvider(
  'CP','weblogic.security.providers.pk.WebLogicCertPathProvider'))
# Password Validator
pv = realm.createPasswordValidator('PV',
  'com.bea.security.providers.authentication.passwordvalidator.SystemPasswordValidator')
pv.setMinPasswordLength(8)
pv.setMinNumericOrSpecialCharacters(1)
save()
activate()

2.3.2 Add a user and group to the security realm for the partition

Create a user and add the user to the Administrators security group in the realm. In this use case, the user name and the password for the coke partition are mtadmin1 and welcome1. There is no need to start an edit session when looking up the Authentication Provider to create a user/group.

realmName = 'coke_realm'
userName = 'mtadmin1'
groupName = 'Administrators'
print 'add user: realmName ' + realmName
if realmName == 'DEFAULT_REALM':
   realm = cmo.getSecurityConfiguration().getDefaultRealm()
else:
   realm = cmo.getSecurityConfiguration().lookupRealm(realmName)
print "Creating user " + userName + " in realm: " + realm.getName()
atn = realm.lookupAuthenticationProvider('ATNPartition')
if atn.userExists(userName):
   print "User already exists."
else:
   atn.createUser(userName, '${password}', realmName + ' Realm User')
print "Done creating user."
print "Creating group " + groupName + " in realm: " + realm.getName()
if atn.groupExists(groupName):
   print "Group already exists."
else:
   atn.createGroup(groupName, realmName + ' Realm Group')
if atn.isMember(groupName,userName,true) == 0:
   atn.addMemberToGroup(groupName, userName)
else:
   print "User is already member of the group."

2.3.3 Create a virtual target for the partition

This virtual target is targeted to the admin server. The URI prefix is /coke. This is the URL prefix used for making JMX connections to the WebLogic Server MBeanServer.

edit()
startEdit()
vt = cmo.createVirtualTarget("coke-vt")
vt.setHostNames(array(["localhost"],java.lang.String))
vt.setUriPrefix("/coke")
adminServer = cmo.lookupServer("admin")
vt.addTarget(adminServer)
save()
activate()

2.3.4 Create the partition: coke

The partition name is coke and it is targeted to the coke-vt virtual target.
edit()
startEdit()
vt = cmo.lookupVirtualTarget("coke-vt")
p = cmo.createPartition('coke')
p.addAvailableTarget(vt)
p.addDefaultTarget(vt)
rg = p.createResourceGroup('coke-rg1')
rg.addTarget(vt)
realm = cmo.getSecurityConfiguration().lookupRealm("coke_realm")
p.setRealm(realm)
save()
activate()

2.3.5 Set up the IDD for the partition

Set up the primary identity domain (IDD) for the partition.

edit()
startEdit()
sec = cmo.getSecurityConfiguration()
sec.setAdministrativeIdentityDomain("AdminIDD")
realmName = 'coke_realm'
realm = cmo.getSecurityConfiguration().lookupRealm(realmName)
# ATN
defAtnP = realm.lookupAuthenticationProvider('ATNPartition')
defAtnP.setIdentityDomain('cokeIDD')
defAtnA = realm.lookupAuthenticationProvider('ATNAdmin')
defAtnA.setIdentityDomain("AdminIDD")
# Partition
pcoke = cmo.lookupPartition('coke')
pcoke.setPrimaryIdentityDomain('cokeIDD')
# Default realm
realm = sec.getDefaultRealm()
defAtn = realm.lookupAuthenticationProvider('DefaultAuthenticator')
defAtn.setIdentityDomain("AdminIDD")
save()
activate()

2.3.6 Restart the server

Restart WebLogic Server because of the security setting changes.

2.3.7 Start the partition

This is required for a partition to receive requests.

edit()
startEdit()
partitionBean = cmo.lookupPartition('coke')
# start the partition (required)
startPartitionWait(partitionBean)
save()
activate()

2.4 Create another partition: pepsi in the domain

Repeat the same steps as in 2.3 to create another partition, pepsi, but with different values:

Partition name = pepsi
User name = mtadmin2
Password = welcome2
Security realm = pepsi_realm
IDD name = pepsiIDD
Virtual target name = pepsi-vt
Resource group name = pepsi-rg1

2.5 Deploy a user application

Now the domain is ready to use. Let's deploy an application ear file. The application, e.g., helloTenant.ear, is deployed to the WebLogic Server domain, the coke partition, and the pepsi partition.

edit()
startEdit()
deploy(appName='helloTenant', target='admin', path='${path-to-the-ear-file}/helloTenant.ear')
deploy(appName='helloTenant-coke', partition='coke', resourceGroup='coke-rg1', path='${path-to-the-ear-file}/helloTenant.ear')
deploy(appName='helloTenant-pepsi', partition='pepsi', resourceGroup='pepsi-rg1', path='${path-to-the-ear-file}/helloTenant.ear')
save()
activate()

2.6 Domain config file sample

When all of the steps are finished, the domain config file in $DOMAIN_HOME/config/config.xml will contain all of the information needed for the domain and the partitions.
Here is a sample snippet related to the coke partition in the config.xml:

<server>
   <name>admin</name>
   <listen-address>localhost</listen-address>
</server>
<configuration-version>12.2.1.0.0</configuration-version>
<app-deployment>
   <name>helloTenant</name>
   <target>admin</target>
   <module-type>ear</module-type>
   <source-path>${path-to-the-ear-file}/helloTenant.ear</source-path>
   <security-dd-model>DDOnly</security-dd-model>
   <staging-mode xsi:nil="true"></staging-mode>
   <plan-staging-mode xsi:nil="true"></plan-staging-mode>
   <cache-in-app-directory>false</cache-in-app-directory>
</app-deployment>
<virtual-target>
   <name>coke-vt</name>
   <target>admin</target>
   <host-name>localhost</host-name>
   <uri-prefix>/coke</uri-prefix>
   <web-server>
     <web-server-log>
       <number-of-files-limited>false</number-of-files-limited>
     </web-server-log>
   </web-server>
</virtual-target>
<admin-server-name>admin</admin-server-name>
<partition>
   <name>coke</name>
   <resource-group>
     <name>coke-rg1</name>
     <app-deployment>
       <name>helloTenant-coke</name>
       <module-type>ear</module-type>
       <source-path>${path-to-the-ear-file}/helloTenant.ear</source-path>
       <security-dd-model>DDOnly</security-dd-model>
       <staging-mode xsi:nil="true"></staging-mode>
       <plan-staging-mode xsi:nil="true"></plan-staging-mode>
       <cache-in-app-directory>false</cache-in-app-directory>
     </app-deployment>
     <target>coke-vt</target>
     <use-default-target>false</use-default-target>
   </resource-group>
   <default-target>coke-vt</default-target>
   <available-target>coke-vt</available-target>
   <realm>coke_realm</realm>
   <partition-id>2d044835-3ca9-4928-915f-6bd1d158f490</partition-id>
   <primary-identity-domain>cokeIDD</primary-identity-domain>
</partition>

For the pepsi partition, a similar <virtual-target> element and <partition> element are added in the config.xml. From this point on, the domain with 2 partitions is created and ready to serve requests. Users can access their applications deployed onto this domain. Check the blog Application MBean Visibility in Oracle WebLogic Server 12.2.1 regarding how to access the application MBeans registered on the WebLogic Server MBeanServers in MT in 12.2.1.

3. Debug Flags

In case of errors during domain creation, there are debug flags which can be used to triage the errors:

If the error is related to security realm setup, restart the WLS server with these debug flags: -Dweblogic.debug.DebugSecurityAtn=true -Dweblogic.debug.DebugSecurity=true -Dweblogic.debug.DebugSecurityRealm=true
If the error is related to a bean config error in a domain, restart the WLS server with these debug flags: -Dweblogic.debug.DebugJMXCore=true -Dweblogic.debug.DebugJMXDomain=true
If the error is related to an edit session issue, restart the WLS server with these debug flags: -Dweblogic.debug.DebugConfigurationEdit=true -Dweblogic.debug.DebugDeploymentService=true -Dweblogic.debug.DebugDeploymentServiceInternal=true -Dweblogic.debug.DebugDeploymentServiceTransportHttp=true

4. Conclusion

An Oracle WebLogic Server domain in 12.2.1 can contain partitions. Creating a domain with partitions needs additional steps compared to creating a traditional WLS domain. This article shows the domain creation using WLST. There are other ways to create domains with partitions, e.g., FMW Control. For more information on how to create a domain with partitions, please check the "References" section.
5. References

WebLogic Server domain
Domain partitions for multitenancy
Enterprise Manager Fusion Middleware Control (FMWC)
Config Wizard
Creating WebLogic domains using WLST offline
Restricted JRF template
WebLogic Server Security
WebLogic Server Deployment
WebLogic Server Debug Flags
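As a quick sanity check once the partition is started, you can connect through the partition's virtual-target URI prefix as the partition administrator, following the same URL convention shown in the companion MBean visibility article; the host and port are placeholders for your environment:

# A successful connect through the /coke URI prefix confirms the virtual target,
# security realm, and partition user are wired together correctly.
connect('mtadmin1', 'welcome1', 't3://localhost:7001/coke')
disconnect()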



Local Transaction Leak Profiling for WLS 12.2.1 Datasource

This is the third article in this series on profiling enhancements in WLS 12.2.1 (but maybe not the least, since this appears to happen quite often). This is a common application error that is difficult to diagnose: an application leaves a local transaction open on a connection and the connection is returned to the connection pool. This error can manifest as XAException/XAER_PROTO errors, or as unintentional local transaction commits or rollbacks of database updates. The current workaround of internally committing or rolling back the local transaction when a connection is released adds significant overhead, only masks errors that would otherwise be surfaced to the application, and still leaves the possibility of data inconsistency.

The Oracle JDBC thin driver supports a proprietary method to obtain the local transaction state of a connection. A new profiling option has been added that generates a log entry when a local transaction is detected on a connection when it is released to the connection pool. The log record includes the call stack and details about the thread releasing the connection.

To enable local transaction leak profiling, the datasource connection pool ProfileType attribute bitmask must include the value 0x000200. This is a WLST script to set the values.

# java weblogic.WLST prof.py
import sys, socket, os
hostname = socket.gethostname()
datasource='ds'
svr='myserver'
connect("weblogic","welcome1","t3://"+hostname+":7001")
# Edit the configuration to enable local transaction leak profiling
edit()
startEdit()
cd('/JDBCSystemResources/' + datasource + '/JDBCResource/' + datasource + '/JDBCConnectionPoolParams/' + datasource )
cmo.setProfileType(0x000200) # turn on local transaction leak profiling
save()
activate()
exit()

Note that you can "or" multiple profile options together when setting the profile type. In the administration console, on the Diagnostics tab, this may be enabled using the Profile Connection Local Transaction Leak checkbox. The local transaction leak profile record contains two stack traces, one of the reserving thread and one of the thread at the time the connection was closed. An example log record is shown below.
####<mydatasource> <WEBLOGIC.JDBC.CONN.LOCALTX_LEAK> <Thu Apr 09 15:30:11 EDT 2015>
<java.lang.Exception
 at weblogic.jdbc.common.internal.ConnectionEnv.setup(ConnectionEnv.java:398)
 at weblogic.common.resourcepool.ResourcePoolImpl.reserveResource(ResourcePoolImpl.java:365)
 at weblogic.common.resourcepool.ResourcePoolImpl.reserveResource(ResourcePoolImpl.java:331)
 at weblogic.jdbc.common.internal.ConnectionPool.reserve(ConnectionPool.java:568)
 at weblogic.jdbc.common.internal.ConnectionPool.reserve(ConnectionPool.java:498)
 at weblogic.jdbc.common.internal.ConnectionPoolManager.reserve(ConnectionPoolManager.java:135)
 at weblogic.jdbc.common.internal.RmiDataSource.getPoolConnection(RmiDataSource.java:522)
 at weblogic.jdbc.common.internal.RmiDataSource.getConnectionInternal(RmiDataSource.java:615)
 at weblogic.jdbc.common.internal.RmiDataSource.getConnection(RmiDataSource.java:566)
 at weblogic.jdbc.common.internal.RmiDataSource.getConnection(RmiDataSource.java:559)
 ...>
<java.lang.Exception
 at weblogic.jdbc.common.internal.ConnectionPool.release(ConnectionPool.java:1064)
 at weblogic.jdbc.common.internal.ConnectionPoolManager.release(ConnectionPoolManager.java:189)
 at weblogic.jdbc.wrapper.PoolConnection.doClose(PoolConnection.java:249)
 at weblogic.jdbc.wrapper.PoolConnection.close(PoolConnection.java:157)
 ...>
<[partition-id: 0] [partition-name: DOMAIN] >

Once you look at the record, you can see where in the application the close is done, and you should complete the transaction appropriately before doing the close.
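As noted above, the profile options are bit flags that can be ORed together. Here is a minimal WLST sketch, reusing the connection pool path and placeholder credentials from the prof.py scripts in these posts, that turns on connection leak, local transaction leak, and closed usage profiling at once; the combined value is my own illustration, not output from the product:

# Assumes an existing edit session, with 'datasource' set as in the prof.py scripts above.
cd('/JDBCSystemResources/' + datasource + '/JDBCResource/' + datasource + '/JDBCConnectionPoolParams/' + datasource )
# 0x000004 = connection leak, 0x000200 = local transaction leak, 0x000400 = closed usage
cmo.setProfileType(0x000004 | 0x000200 | 0x000400)
save()
activate()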



Closed JDBC Object Profiling for WLS 12.2.1 Datasource

Accessing a closed JDBC object is a common application error that can be difficult to debug. To help diagnose such conditions, there is a new profiling option to generate a diagnostic log message when a JDBC object (Connection, Statement, or ResultSet) is accessed after the close() method has been invoked. The log message includes the stack trace of the thread that invoked the close() method.

To enable closed JDBC object profiling, the datasource ProfileType attribute bitmask must have the value 0x000400 set. This is a WLST script to set the value.

# java weblogic.WLST prof.py
import sys, socket, os
hostname = socket.gethostname()
datasource='ds'
svr='myserver'
connect("weblogic","welcome1","t3://"+hostname+":7001")
# Edit the configuration to enable closed JDBC object profiling
edit()
startEdit()
cd('/JDBCSystemResources/' + datasource + '/JDBCResource/' + datasource + '/JDBCConnectionPoolParams/' + datasource )
cmo.setProfileType(0x000400) # turn on profiling
save()
activate()
exit()

In the administration console, on the Diagnostics tab, this may be enabled using the Profile Closed Usage checkbox. The closed usage log record contains two stack traces, one of the thread that initially closed the object and another of the thread that attempted to access the closed object. An example record is shown below.

####<mydatasource> <WEBLOGIC.JDBC.CLOSED_USAGE> <Thu Apr 09 15:19:04 EDT 2015>
<java.lang.Throwable: Thread[[ACTIVE] ExecuteThread: '2' for queue: 'weblogic.kernel.Default (self-tuning)',5,Pooled Threads]
 at weblogic.jdbc.common.internal.ProfileClosedUsage.saveWhereClosed(ProfileClosedUsage.java:31)
 at weblogic.jdbc.wrapper.PoolConnection.doClose(PoolConnection.java:242)
 at weblogic.jdbc.wrapper.PoolConnection.close(PoolConnection.java:157)
 ...>
<java.lang.Throwable: Thread[[ACTIVE] ExecuteThread: '2' for queue: 'weblogic.kernel.Default (self-tuning)',5,Pooled Threads]
 at weblogic.jdbc.common.internal.ProfileClosedUsage.addClosedUsageProfilingRecord(ProfileClosedUsage.java:38)
 at weblogic.jdbc.wrapper.PoolConnection.checkConnection(PoolConnection.java:83)
 at weblogic.jdbc.wrapper.Connection.preInvocationHandler(Connection.java:106)
 at weblogic.jdbc.wrapper.Connection.createStatement(Connection.java:581)
 ...>
<[partition-id: 0] [partition-name: DOMAIN] >

When this profiling option is enabled, exceptions indicating that an object is already closed will also include a nested SQLException indicating where the close was done, as shown in the example below.

java.sql.SQLException: Connection has already been closed.
 at weblogic.jdbc.wrapper.PoolConnection.checkConnection(PoolConnection.java:82)
 at weblogic.jdbc.wrapper.Connection.preInvocationHandler(Connection.java:107)
 at weblogic.jdbc.wrapper.Connection.createStatement(Connection.java:582)
 at Application.doit(Application.java:156)
 ...
Caused by: java.sql.SQLException: Where closed: Thread[[ACTIVE] ExecuteThread:...
 at weblogic.jdbc.common.internal.ProfileClosedUsage.saveWhereClosed(ProfileClosedUsage.java:32)
 at weblogic.jdbc.wrapper.PoolConnection.doClose(PoolConnection.java:239)
 at weblogic.jdbc.wrapper.PoolConnection.close(PoolConnection.java:154)
 at Application.doit(Application.java:154)
 ...

This is very helpful when you get an error indicating that a connection has already been closed and you can't figure out where it was done. Note that there is overhead in getting the stack trace, so you wouldn't normally run with this enabled all the time in production (and we don't default to it always being enabled), but it's worth the overhead when you need to resolve a problem.



Connection Leak Profiling for WLS 12.2.1 Datasource

This is the first of a series of three articles that describe enhancements to datasource profiling in WLS 12.2.1. These enhancements were requested by customers and Oracle support. I think they will be very useful in tracking down problems in the application.

The pre-12.2.1 connection leak diagnostic profiling option requires that the connection pool "Inactive Connection Timeout Seconds" attribute be set to a positive value in order to determine how long before an idle reserved connection is considered leaked. Once identified as being leaked, a connection is reclaimed and information about the reserving thread is written out to the diagnostics log. For applications that hold connections for long periods of time, false positives can result in application errors that complicate debugging. To address this concern and improve usability, two enhancements to connection leak profiling are available:

1. Connection leak profile records are produced for all reserved connections when the connection pool reaches max capacity and a reserve request results in a PoolLimitSQLException error.
2. An optional Connection Leak Timeout Seconds attribute has been added to the datasource descriptor for use in determining when a connection is considered "leaked". When an idle connection exceeds the timeout value, a leak profile log message is written and the connection is left intact.

The existing connection leak profiling value (0x000004) must be set on the datasource connection pool ProfileType attribute bitmask to enable connection leak detection. Setting the ProfileConnectionLeakTimeoutSeconds attribute may be used in place of InactiveConnectionTimeoutSeconds for identifying potential connection leaks. This is a WLST script to set the values.

# java weblogic.WLST prof.py
import sys, socket, os
hostname = socket.gethostname()
datasource='ds'
svr='myserver'
connect("weblogic","welcome1","t3://"+hostname+":7001")
# Edit the configuration to set the leak timeout and enable profiling
edit()
startEdit()
cd('/JDBCSystemResources/' + datasource + '/JDBCResource/' + datasource + '/JDBCConnectionPoolParams/' + datasource )
cmo.setProfileConnectionLeakTimeoutSeconds(120) # set the connection leak timeout
cmo.setProfileType(0x000004) # turn on profiling
save()
activate()
exit()

This is what the console page looks like after it is set. Note the profile type and timeout value are set on the Diagnostics tab for the datasource.

The existing leak detection diagnostic profiling log record format is used for leaks triggered either by the ProfileConnectionLeakTimeoutSeconds attribute or when pool capacity is exceeded. In either case a log record is generated only once for each reserved connection. If a connection is subsequently released to the pool, re-reserved, and leaked again, a new record will be generated. An example resource leak diagnostic log record is shown below. The output can be reviewed in the console or by looking at the datasource profile output text file.
####<mydatasource> <WEBLOGIC.JDBC.CONN.LEAK> <Thu Apr 09 14:00:22 EDT 2015>
<java.lang.Exception
 at weblogic.jdbc.common.internal.ConnectionEnv.setup(ConnectionEnv.java:398)
 at weblogic.common.resourcepool.ResourcePoolImpl.reserveResource(ResourcePoolImpl.java:365)
 at weblogic.common.resourcepool.ResourcePoolImpl.reserveResource(ResourcePoolImpl.java:331)
 at weblogic.jdbc.common.internal.ConnectionPool.reserve(ConnectionPool.java:568)
 at weblogic.jdbc.common.internal.ConnectionPool.reserve(ConnectionPool.java:498)
 at weblogic.jdbc.common.internal.ConnectionPoolManager.reserve(ConnectionPoolManager.java:135)
 at weblogic.jdbc.common.internal.RmiDataSource.getPoolConnection(RmiDataSource.java:522)
 at weblogic.jdbc.common.internal.RmiDataSource.getConnectionInternal(RmiDataSource.java:615)
 at weblogic.jdbc.common.internal.RmiDataSource.getConnection(RmiDataSource.java:566)
 at weblogic.jdbc.common.internal.RmiDataSource.getConnection(RmiDataSource.java:559)
 ...>
<autoCommit=true,enabled=true,isXA=false,isJTS=false,vendorID=100,connUsed=false,doInit=false,'null',destroyed=false,poolname=mydatasource,appname=null,moduleName=null,connectTime=960,dirtyIsolationLevel=false,initialIsolationLevel=2,infected=false,lastSuccessfulConnectionUse=1428602415037,secondsToTrustAnIdlePoolConnection=10,currentUser=...,currentThread=Thread[[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)',5,Pooled Threads],lastUser=null,currentError=null,currentErrorTimestamp=null,JDBC4Runtime=true,supportStatementPoolable=true,needRestoreClientInfo=false,defaultClientInfo={},supportIsValid=true>
<[partition-id: 0] [partition-name: DOMAIN] >

For applications that may have connection leaks but also have some valid long-running operations, you will now be able to scan through a list of connections that may be problems without interfering with normal application execution.



Using Eclipse with WebLogic Server 12.2.1

With the installation of WebLogic Server 12.2.1 now including the Eclipse Network Installer, which enables developers to download and install Eclipse including the specific features of interest, getting up and running with Eclipse and WebLogic Server has never been easier. The Eclipse Network Installer presents developers with a guided interface to enable the custom installation of an Eclipse environment through the selection of an Eclipse version to be installed and which of the available capabilities are required, such as Java EE 7, Maven, Coherence, WebLogic, WLST, Cloud, and Database tools, amongst others. It will then download the selected components and install them directly on the developer's machine.

Eclipse and the Oracle Enterprise Pack for Eclipse plugins continue to provide extensive support for WebLogic Server, enabling it to be used throughout the software lifecycle: from develop and test cycles with its Java EE dialogs, assistants, and deployment plugins, through to automation of configuration and provisioning of environments with the authoring, debugging, and running of scripts using the WLST Script Editor and MBean palette.

The YouTube video WebLogic Server 12.2.1 - Developing with Eclipse provides a short demonstration of how to install Eclipse and the OEPE components using the new Network Installer that is bundled within the WebLogic Server installations. It then shows the configuration of a new WebLogic Server 12.2.1 server target within Eclipse, and finishes with importing a Maven project that contains a Java EE 7 example application that utilizes the new Batch API, which is deployed to the server and called from a browser to run.



Getting Started with the WebLogic Server 12.2.1 Developer Distribution

The new WebLogic Server 12.2.1 release continues down the path of providing an installation that is smaller to download and able to be installed with a single operation, providing a quicker way for developers to get started with the product. New with the WebLogic Server 12.2.1 release is the use of the quick installer technology, which packages the product into an executable jar file that silently installs the product into a target directory. Through the use of the quick installer, the installed product can now be patched using the standard Oracle patching utility, OPatch, enabling developers to download and apply any patches as needed, and also enabling a high degree of consistency with downstream testing and production environments.

Despite its smaller distribution size, the developer distribution delivers a full-featured WebLogic Server, including the rich administration console, the comprehensive scripting environment with WLST, the Configuration Wizard and Domain Builders, the Maven plugins and artifacts, and of course all the new WebLogic Server features such as Java EE 7 support, MultiTenancy, Elastic Dynamic Clusters, and more. For a quick look at using the new developer distribution, creating a domain, and accessing the administration console, check out the YouTube video: Getting Started with the Developer Distribution.


Messaging

JMS 2.0 support in WebLogic Server 12.2.1

As part of its support for Java EE 7, WebLogic Server 12.2.1 supports version 2.0 of the JMS (Java Message Service) specification. JMS 2.0 is the first update to the JMS specification since version 1.1 was released in 2002. One might think that an API that has remained unchanged for so long has grown moribund and unused. However, if you judge the success of an API standard by the number of different implementations, JMS is one of the most successful APIs around.

In JMS 2.0, the emphasis has been on catching up with the ease-of-use improvements that have been made to other enterprise Java technologies. While technologies such as Enterprise JavaBeans or Java Persistence are now much simpler to use than they were a decade ago, JMS had remained unchanged with a successful, but rather verbose, API. The single biggest change in JMS 2.0 is the introduction of a new simplified API for sending and receiving messages that reduces the amount of code a developer must write. For applications that run in WebLogic Server itself, the new API also supports resource injection. This allows WebLogic to take care of the creation and management of JMS objects, simplifying the application even further. Other changes in JMS 2.0 include asynchronous send, shared topic subscriptions, and delivery delay. These were existing WebLogic features which are now available using an improved, standard API.

To find out more about JMS 2.0, see this 15 minute audio-visual slide presentation. Read these two OTN articles: What's New in JMS 2.0, Part One: Ease of Use and What's New in JMS 2.0, Part Two—New Messaging Features. See also Understanding the Simplified API Programming Model in the product documentation. In a hurry? See Ten ways in which JMS 2.0 means writing less code.


Technical

ZDT Patching; A Simple Case – Rolling Restart

To get started understanding ZDT Patching, let's take a look at it in its simplest form, the rolling restart. In many ways, this simple use case is the foundation for all of the other types of rollouts: Java version, Oracle patches, and application updates. Executing the rolling restart requires the coordinated and controlled shutdown of all of the managed servers in a domain or cluster while ensuring that service to the end user is not interrupted, and none of their session data is lost.

The administrator can start a rolling restart by issuing the WLST command below:

rollingRestart("Cluster1")

In this case, the rolling restart will affect all managed servers in the cluster named "Cluster1". This is called the target. The target can be a single cluster, a list of clusters, or the name of the domain. When the command is entered, the WebLogic Admin Server will analyze the topology of the target and dynamically create a workflow (also called a rollout), consisting of every step that needs to be taken in order to gracefully shut down and restart each managed server in the cluster, while ensuring that all sessions on that managed server are available to the other managed servers. The workflow will also ensure that all of the running apps on a managed server are fully ready to accept requests from end users before moving on to the next node. The rolling restart is complete once every managed server in the cluster has been restarted.

A diagram illustrating this process on a very simple topology is shown below. In the diagram you can see that a node is taken offline (shown in red) and end-user requests that would have gone to that node are re-routed to active nodes. Once the servers on the offline node have been restarted and their applications are again ready to receive requests, that node is added back to the pool of active nodes and the rolling restart moves on to the next node. Illustration of a Rolling Restart Across a Cluster.

The rolling restart functionality was introduced based on customer feedback. Some customers have a policy of preemptively restarting their managed servers in order to refresh the memory usage of applications running on top of them. With this feature we are greatly simplifying that tedious and time-consuming process, and doing so in a way that doesn't affect end users. For more information about Rolling Restarts with Zero Downtime Patching, view the documentation.
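For illustration, here is a minimal WLST sketch of a rolling restart targeting two clusters at once, run while connected to the Admin Server; the host, credentials, and cluster names are placeholders, and the comma-separated target form and progress call reflect my reading of the ZDT WLST commands, so verify them against the WLST command reference:

connect('weblogic', 'welcome1', 't3://adminhost:7001')
# The target can be a single cluster, a comma-separated list of clusters,
# or the domain name.
progress = rollingRestart('Cluster1,Cluster2')
# Check how far the rollout has progressed.
print progress.getProgressString()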

To get started understanding ZDT Patching, let’s take a look at it in its simplest form, the rolling restart.  In many ways, this simple use case is the foundation for all of the other types ofrollouts...

Announcement

Elasticity for Dynamic Clusters

Introducing Elasticity for Dynamic Clusters

WebLogic Server 12.1.2 introduced the concept of dynamic clusters, which are clusters where the Managed Server configurations are based on a single, shared template. It greatly simplified the configuration of clustered Managed Servers, and it allows for dynamically assigning servers to machine resources and greater utilization of resources with minimal configuration.

In WebLogic Server 12.2.1, we build on the dynamic clusters concept to introduce elasticity to dynamic clusters, allowing them to be scaled up or down based on conditions identified by the user. Scaling a cluster can be performed on demand (interactively by the administrator), at a specific date or time, or based on performance as seen through various server metrics. In this blog entry, we take a high-level look at the different aspects of elastic dynamic clusters in WebLogic 12.2.1.0, the next piece in the puzzle for on-premise elasticity with WebLogic Server! In subsequent blog entries, we will provide more detailed examinations of the different ways of achieving elasticity with dynamic clusters.

The WebLogic Server Elasticity Framework

The diagram below shows the different parts of the elasticity framework for WebLogic Server. The Elastic Services Framework is a set of services residing within the Administration Server for a WebLogic domain, and consists of:

- A new set of elastic properties on the DynamicServersMBean for dynamic clusters to establish the elastic boundaries and characteristics of the cluster
- New capabilities in the WebLogic Diagnostics Framework (WLDF) to allow for the creation of automated elastic policies
- A new "interceptors" framework to allow administrators to interact with scaling events for provisioning and database capacity checks
- A set of internal services that perform the scaling
- (Optional) integration with Oracle Traffic Director (OTD) 12c to notify it of changes in cluster membership and allow it to adapt the workload accordingly

Note that while tighter integration with OTD is possible in 12.2.1, if the OTD server pool is enabled for dynamic discovery, OTD will adapt as necessary to the set of available servers in the cluster.

Configuring Elasticity for Dynamic Clusters

To get started, when you're configuring a new dynamic cluster, or modifying an existing one, you'll want to leverage some new properties surfaced through the DynamicServersMBean for the cluster to set elastic boundaries and control the elastic behavior of the cluster. The new properties to be configured include:

- The starting dynamic cluster size
- The minimum and maximum elastic sizes of the cluster
- The "cool-off" period required between scaling events

There are several other properties regarding how to manage the shutdown of Managed Servers in the cluster, but the settings above control the boundaries of the cluster (by how many instances it can scale up or down) and how frequently scaling events can occur. The Elastic Services Framework will allow the dynamic cluster to scale up to the specified maximum number of instances, or down to the minimum you allow. The cool-off period is a safety mechanism designed to prevent scaling events from occurring too frequently; it should allow enough time for a scaling event to complete and for its effects to be felt on the dynamic cluster's performance characteristics. Needless to say, the values for these settings should be chosen carefully and aligned with your cluster capacity planning!
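As a rough illustration, the sketch below sets these boundaries on an existing dynamic cluster through online WLST. The cluster name, connection details, and values are placeholders, and the attribute names and configuration path are my reading of the DynamicServersMBean; verify them against the 12.2.1 MBean reference before use.

    # Connect to the Admin Server (placeholder URL and credentials).
    connect('weblogic', 'welcome1', 't3://adminhost:7001')

    edit()
    startEdit()

    # Navigate to the dynamic servers configuration of the cluster "Cluster1"
    # (path assumed; adjust to your domain's configuration tree).
    cd('/Clusters/Cluster1/DynamicServers/Cluster1')

    cmo.setDynamicClusterSize(4)                     # starting cluster size
    cmo.setMinDynamicClusterSize(2)                  # floor for scaling down
    cmo.setMaxDynamicClusterSize(8)                  # ceiling for scaling up
    cmo.setDynamicClusterCooloffPeriodSeconds(900)   # 15-minute cool-off between scaling events

    save()
    activate()
    disconnect()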
Scaling Dynamic Clusters

Scaling of a dynamic cluster can be achieved through the following means:

- On demand, through the WebLogic Server Administration Console and WLST
- Using an automated calendar-based schedule utilizing WLDF policies and actions
- Through automated WLDF policies based on performance metrics

On-Demand Scaling

WebLogic administrators have the ability to scale a dynamic cluster up or down on demand when needed:

- Through the WLST scaleUp() and scaleDown() commands (nicely detailed in Byron Nevins' blog entry here); see the sketch at the end of this entry
- Using the WebLogic Administration Console, on the dynamic cluster's "Control/Scaling" tab (see image below)

In the Console case, the administrator simply indicates the total number of desired running servers in the cluster, and the Console will interact with the Elastic Services Framework to scale the cluster up or down accordingly, within the boundaries of the dynamic cluster.

Automated Scaling

In addition to scaling a dynamic cluster on demand, WebLogic administrators can configure automated policies using the Policies & Actions feature (known in previous releases as the Watch & Notifications framework) in WLDF. Typically, automated scaling will consist of creating pairs of WLDF policies, one for scaling up a cluster and one for scaling it down. Each scaling policy consists of:

- (Optionally) A policy expression (previously known as a "Watch Rule")
- A schedule
- A scaling action

To create an automated scaling policy, an administrator must:

- Configure a domain-level diagnostic system module and target it to the Administration Server
- Configure a scale-up or scale-down action for a dynamic cluster within that WLDF module
- Configure a policy and assign the scaling action

For more information, consult the documentation for Configuring Policies and Actions.

Calendar-Based Elastic Policies

In 12.2.1, WLDF introduces the ability for cron-style scheduling of policy evaluations. Policies that monitor MBeans according to a specific schedule are called "scheduled" policies. A calendar-based policy is a policy that unconditionally executes according to its schedule and executes any associated actions. When combined with a scaling action, you can create a policy that scales a dynamic cluster up or down at specific scheduled times.

Each scheduled policy type has its own schedule (as opposed to earlier releases, which were tied to a single evaluation frequency), which is configured in calendar time and allows you to create schedule patterns such as (but not limited to):

- Recurring interval-based patterns (e.g., every 5th minute of the hour, or every 30th second of every minute)
- Days of the week or days of the month (e.g., "every Mon/Wed/Fri at 8 AM", or "every 15th and 30th of every month")
- Specific days and times within a year (e.g., "December 26th at 8 AM EST")

So, for example, an online retailer could configure a pair of policies around the Christmas holidays:

- A "Black Friday" policy to scale up the necessary cluster(s) to meet increased shopping demand for the Christmas shopping season
- Another policy to scale down the cluster(s) on December 25th, when the Christmas shopping season is over

Performance-Based Elastic Policies

In addition to calendar-based scheduling, in 12.2.1 WLDF provides the ability to create scaling policies based on performance conditions within a server ("server-scoped") or cluster ("cluster-scoped"). You can create a policy based on various run-time metrics supported by WebLogic Server.
WLDF also provides a set of pre-packaged, parameterized, out-of-the-box functions called "Smart Rules" to assist in creating performance-based policies. Cluster-scoped Smart Rules allow you to look at trends in a performance metric across a cluster over a specified window of time and (when combined with scaling actions) scale the cluster up or down accordingly.
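To round out the on-demand scaling discussion above, here is a minimal WLST sketch of the scaleUp() and scaleDown() commands. The connection details and instance counts are placeholders, and the commands' optional arguments are omitted; consult the WLST command reference for the exact signatures supported in your release.

    # Connect to the Admin Server (placeholder URL and credentials).
    connect('weblogic', 'welcome1', 't3://adminhost:7001')

    # Add two managed server instances to the dynamic cluster "Cluster1",
    # subject to the cluster's configured maximum size.
    scaleUp('Cluster1', 2)

    # Later, remove one instance again, subject to the configured minimum size.
    scaleDown('Cluster1', 1)

    disconnect()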