
Proactive insights, news and tips from Oracle WebLogic Server Support. Learn Oracle from Oracle.

Recent Posts

Announcement

WebLogic Server is now certified to run on OpenShift!

We are pleased to announce the certification of WebLogic Server on Red Hat OpenShift, which is based on Kubernetes. The WebLogic domain runs on OpenShift managed by the WebLogic Kubernetes Operator. The operator uses a common set of Kubernetes APIs to provide an improved user experience when automating operations such as provisioning, lifecycle management, application versioning, product patching, scaling, and security.

We verified the following functionality:

- Installation of the WebLogic Kubernetes Operator.
- Displaying operator logs in Kibana.
- Running a WebLogic domain where the domain configuration is in a Docker image or on a persistent volume.
- Accessing the WebLogic Administration Console and WLST.
- Deploying an application to a WebLogic cluster.
- Routing and exposing the application outside of OpenShift.
- Scaling of a WebLogic cluster.
- Load balancing requests to the application.
- Exposing WebLogic metrics to Prometheus using the WebLogic Monitoring Exporter.
- Controlling lifecycle management of the WebLogic domain/cluster/server.
- Initiating a rolling restart of the WebLogic domain.
- Changing domain configuration using configuration overrides.

The following matrix shows the versions of the products used in our certification:

Product                        Version
WebLogic Server                12.2.1.3+
WebLogic Kubernetes Operator   2.0.1+
OpenShift                      3.11.43+
Kubernetes                     1.11.0+
Docker                         1.13.1ce+

On January 19th we published the blog “WebLogic on OpenShift”, which describes the steps to run a WebLogic domain/cluster managed by the operator on OpenShift. The starting point is the OpenShift Container Platform server set up on OCI in this earlier post. To run WebLogic on OpenShift, get the operator 2.0.1 Docker image from Docker Hub, clone the GitHub project, and follow the sample in the blog.

Our goal is to support WebLogic Server on all Kubernetes platforms, on premises and in both private and public clouds, providing the maximum support to migrate WebLogic workloads to cloud neutral infrastructures. We continue to expand our certifications on different Kubernetes platforms; stay tuned – thanks!

Safe Harbor Statement
The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle.



Oracle 18.3 Database Support with WebLogic Server

The Oracle 18.3 database is available and works with WebLogic Server (WLS).

Using Older Drivers with the 18.3 Database Server

The simplest integration of WebLogic Server with an Oracle 18.3 database is to use the Oracle driver jar files included in your WebLogic Server installation. There are no known problems or upgrade issues when using the 11.2.x or 12.x drivers shipped with WLS to interoperate with an Oracle 18.3 database. See the Oracle JDBC FAQ for more information on driver support and features of the Oracle 18.3 database.

Using the Oracle 18.3 Drivers with the 18.3 Database Server

To use many of the new 18.3 database features, it is necessary to use the 18.3 database driver jar files. Note that the Oracle 18.3 database driver jar files are compiled for JDK 8. The earliest release of WLS that supports JDK 8 is WLS 12.1.3, so the Oracle 18.3 database driver jar files cannot work with earlier versions of WLS. In earlier versions of WLS you can use the drivers that come with the WLS installation to connect to the 18.3 DB, as explained above. At this time, this article does not apply to Fusion Middleware (FMW) deployments of WLS.

Required Oracle 18.3 Driver Files

At this time, no release of WLS ships with the 18.3 Oracle database jar files. This section lists the files required to use an Oracle 18.3 driver with these releases of WebLogic Server. Note: These jar files must be added at the head of the CLASSPATH used for running WebLogic Server; they must come before all of the 11.2.x or 12.x Oracle database client jar files.

Select one of the following ojdbc files (note that these have "8" in the name instead of "7" from the earlier release). The _g jar files are used for debugging and are required if you want to enable driver-level logging. If you are using FMW, you must use the "dms" version of the jar file; WLS uses the non-"dms" version of the jar by default.

- ojdbc8-full/ojdbc8.jar
- ojdbc8-diag/ojdbc8_g.jar
- ojdbc8-diag/ojdbc8dms.jar
- ojdbc8-diag/ojdbc8dms_g.jar

The following table lists additional required driver files:

File                        Description
ojdbc8-full/simplefan.jar   Fast Application Notification
ojdbc8-full/ucp.jar         Universal Connection Pool
ojdbc8-full/ons.jar         Oracle Network Server client
ojdbc8-full/orai18n.jar     Internationalization support
ojdbc8-full/oraclepki.jar   Oracle Wallet support
ojdbc8-full/osdt_cert.jar   Oracle Wallet support
ojdbc8-full/osdt_core.jar   Oracle Wallet support
ojdbc8-full/xdb6.jar        SQLXML support

Download Oracle 18.3 Database Files

You can download the required jar files from https://www.oracle.com/technetwork/database/application-development/jdbc/downloads/jdbc-ucp-183-5013470.html. The ojdbc8-full jar files are contained in ojdbc8-full.tar.gz and the ojdbc8-diag files are contained in ojdbc8-diag.tar.gz. It is recommended to unpackage both of these files under a single directory, maintaining the directory structure (e.g., if the directory is /share, you would end up with /share/ojdbc8-full and /share/ojdbc8-diag directories).

Note: In earlier documents, instructions included installation of aqjms.jar to run with AQJMS, and xmlparserv2_sans_jaxp_services.jar, orai18n-collation.jar, and orai18n-mapping.jar for XML processing. These jar files are not available in the Oracle Database 18c (18.3) JDBC Driver & UCP Downloads. If you need one of these jar files, you will need to install the Oracle Database client, the Administrator package client installation, or a full database installation to get the jar files and add them to the CLASSPATH.
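As a quick sanity check after downloading, you can unpack both archives under one directory and confirm which driver version a given jar reports. The following is a minimal sketch, assuming the two tar.gz files were downloaded manually (the download page requires accepting the license) into the current directory and extracted under /share; most Oracle JDBC jars print their version when executed directly, and if yours does not, the manifest inside the jar carries the same information:

# Unpack both archives under /share, preserving the directory layout described above
mkdir -p /share
tar -xzf ojdbc8-full.tar.gz -C /share
tar -xzf ojdbc8-diag.tar.gz -C /share

# Report the driver version of the jar you intend to put on the CLASSPATH
java -jar /share/ojdbc8-full/ojdbc8.jar

# Fallback: read the version information from the jar manifest
unzip -p /share/ojdbc8-full/ojdbc8.jar META-INF/MANIFEST.MF | grep -i version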
Update the WebLogic Server CLASSPATH or PRE_CLASSPATH

To use an Oracle 18.3 JDBC driver, you must update the CLASSPATH in your WebLogic Server environment. Prepend the required files specified in Required Oracle 18.3 Driver Files listed above to the CLASSPATH (before the 12.x driver jar files). If you are using startWebLogic.sh, you also need to set the PRE_CLASSPATH. The following code sample outlines a simple shell script that updates the CLASSPATH of your WebLogic environment. Make sure ORACLE183 is set appropriately to the directory where the files were unpackaged (e.g., /share in the example above).

#!/bin/sh
# Source this file to add the new 18.3 jar files at the beginning of the CLASSPATH
case "`uname`" in
*CYGWIN*) SEP=";" ;;
Windows_NT) SEP=";" ;;
*) SEP=":" ;;
esac
dir=${ORACLE183:?}
# We need one of the following
#ojdbc8-full/ojdbc8.jar
#ojdbc8-diag/ojdbc8_g.jar
#ojdbc8-diag/ojdbc8dms.jar
#ojdbc8-diag/ojdbc8dms_g.jar
if [ "$1" = "" ]
then
  ojdbc=ojdbc8.jar
else
  ojdbc="$1"
fi
case "$ojdbc" in
ojdbc8.jar) ojdbc=ojdbc8-full/$ojdbc ;;
ojdbc8_g.jar|ojdbc8dms.jar|ojdbc8dms_g.jar) ojdbc=ojdbc8-diag/$ojdbc ;;
*) echo "Invalid argument - must be ojdbc8.jar|ojdbc8_g.jar|ojdbc8dms.jar|ojdbc8dms_g.jar"
   exit 1 ;;
esac
CLASSPATH="${dir}/${ojdbc}${SEP}$CLASSPATH"
CLASSPATH="${dir}/ojdbc8-full/simplefan.jar${SEP}$CLASSPATH"
CLASSPATH="${dir}/ojdbc8-full/ucp.jar${SEP}$CLASSPATH"
CLASSPATH="${dir}/ojdbc8-full/ons.jar${SEP}$CLASSPATH"
CLASSPATH="${dir}/ojdbc8-full/orai18n.jar${SEP}$CLASSPATH"
CLASSPATH="${dir}/ojdbc8-full/oraclepki.jar${SEP}$CLASSPATH"
CLASSPATH="${dir}/ojdbc8-full/osdt_cert.jar${SEP}$CLASSPATH"
CLASSPATH="${dir}/ojdbc8-full/osdt_core.jar${SEP}$CLASSPATH"
CLASSPATH="${dir}/ojdbc8-full/xdb6.jar${SEP}$CLASSPATH"

For example, save this script in your environment with the name setdb183_jars.sh. Then run the script with ojdbc8.jar:

. ./setdb183_jars.sh ojdbc8.jar
export PRE_CLASSPATH="$CLASSPATH"
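Before starting the server, it can be worth confirming that the 18.3 jars really do sit ahead of the driver jars shipped with WLS. A small sketch, assuming the script above has already been sourced and PRE_CLASSPATH exported as shown:

# List the leading entries; the /share/ojdbc8-* jars should appear before any
# ojdbc jar from the WebLogic installation ($SEP was set by the sourced script)
echo "$PRE_CLASSPATH" | tr "$SEP" '\n' | head -15

# Then start the server from the domain directory so startWebLogic.sh picks up PRE_CLASSPATH
./startWebLogic.sh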



ATP Database use with WebLogic Server

This blog describes the use of Oracle's Autonomous Transaction Processing (ATP) service with a WebLogic Server (WLS) datasource. There is documentation available from various sources that partially covers this topic; this post pulls it together in one place for WLS and covers solutions to difficulties seen by our customers. An overview of ATP is given at the Oracle Cloud ATP page. See the ATP User Guide for more complete documentation on ATP.

The blog Configuring a WebLogic Data Source to use ATP has screen shots of creating a new ATP database from the OCI console, during which you specify the passwords for the database ADMIN user and the client credentials wallet. This blog assumes you have already completed that process and downloaded the wallet zip file. For consistency, this blog assumes the same directory structure, with the wallet files being stored in /shared/atp. You shouldn’t need to modify any of these files for use with WLS (one exception is described below). The use of these files is described further below in relation to datasource configuration. The only information that you need is the alias name from the tnsnames.ora file. For WLS, you should use the alias name of the form dbnnnnnnnnnnnn_tp; this service is configured correctly for WLS transaction processing.

The blog link above has screen shots of using the WLS administration console to create the WLS datasource for the ATP database. When using the console, JKS passwords can be encrypted using the process described at this encrypted properties blog. This blog has functional scripts for creating the datasource using either online WLST or REST. Before running the scripts, we need to check a few prerequisites.

The documentation for ATP requires the use of the Oracle 12.x drivers or later. The earliest version of WLS that supports JDK 8 is WLS 12.1.3. Look at the update number for the JDK by running `java -version` and checking to see if it is 1.8.0_169 or later. If you haven't been keeping up with quarterly JDK 8 CPUs (shame on you), you have the option of either catching up to at least update 169 (this is highly recommended) or downloading the JCE Unlimited Strength Jurisdiction Policy Files 8; see the associated README file for installation notes. Without this, you will get a 'fatal alert: handshake_failure' when trying to connect to the database. If running on JDK 7, you need to download and install the JCE Unlimited Strength Jurisdiction Policy Files 7.

WLS 12.2.1.3.0 shipped with the 12.2.0.2 Oracle driver. There are no special requirements for using this driver and the attached scripts should work with no changes. WLS versions 12.1.3 through 12.2.1.2.0 shipped with the 12.1.0.2 Oracle driver. For versions of WLS earlier than 12.1.3, which only run on JDK 7 and shipped with the 11.2.0.3 driver, you would first need to upgrade to the 12.1.0.2 driver using information at this link (driver upgrades are only supported for WLS, not JRF or FA). The 12.1.0.2 driver needs a patch to ojdbc7.jar to support TLSv1.2. See this link to download the jar file or apply a patch for bug 23176395; refer to MOS note 2122800.1 for more details. When using the 12.1.x driver, the use of connection properties for SSL configuration, as shown in the attached scripts, is not supported. You must instead use command-line system properties as documented at this link.
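For the 12.1.x driver case, the SSL and tns settings therefore end up on the server's command line rather than in the datasource descriptor. Here is a minimal sketch of what that might look like, assuming the JKS option and the /shared/atp wallet directory used throughout this post; the paths and the placeholder password are illustrative only, and the authoritative property list is in the documentation linked above:

# Append the SSL/tns settings to JAVA_OPTIONS before running startWebLogic.sh
# (for example from setUserOverrides.sh in the domain); values below are placeholders
export JAVA_OPTIONS="$JAVA_OPTIONS \
 -Doracle.net.tns_admin=/shared/atp \
 -Doracle.net.ssl_server_dn_match=true \
 -Doracle.net.ssl_version=1.2 \
 -Djavax.net.ssl.trustStore=/shared/atp/truststore.jks \
 -Djavax.net.ssl.trustStorePassword=MyWalletPassword \
 -Djavax.net.ssl.keyStore=/shared/atp/keystore.jks \
 -Djavax.net.ssl.keyStorePassword=MyWalletPassword \
 -Doracle.jdbc.fanEnabled=false"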
For WLS 12.1.3 and later versions that support JDK 8, you can, and may need to, update to a later version of the jar files if you want some newer features, such as HTTP proxy configuration. If the client is behind a firewall and your network configuration requires an HTTP proxy to connect to the internet, you have two options. You can convince your network administrator to open outbound connections to hosts in the oraclecloud.com domain using port 1522 without going through an HTTP proxy. The other option, if running on JDK 8, is to upgrade to the Oracle 18.3 JDBC Thin Client, which enables connections through HTTP proxies. See the blog Oracle 18.3 Database Support with WebLogic Server for instructions on how to get the jar files and update your CLASSPATH/PRE_CLASSPATH. In addition, you will need to update the dbnnnnnnnnnnnn_tp service entry in the tnsnames.ora file to change "address=" to "address= (https_proxy=proxyhostname)(https_proxy_port=80)". Failure to do this will cause connections to the database to hang or not find the host.

Now that you have the necessary credential files, JDK, and driver jar files, you are ready to create the datasource.

The online WLST script is attached at this link. To run the script, assuming that the server is started with the correct JDK and driver jar files, just run:

java weblogic.WLST online.py

The REST script is attached at this link. To run the script, assuming that the server is started, just run the following from the domain home directory:

sh ./rest.sh

Both scripts create the same datasource descriptor file and deploy the datasource to a server. Let's look at the descriptor to see how the datasource is configured. In each of these scripts, the variables that you need to set are at the top so you can update the script quickly and not touch the logic. WLST uses Python variables and the REST script uses shell variables.

The alias name (serviceName variable), of the form dbnnnnnnnnnnnn_tp, is taken from the tnsnames.ora file. The URL is generated using an @alias format, "jdbc:oracle:thin:@dbnnnnnnnnnnnn_tp". For this to work, we also need to provide the directory where the tnsnames.ora file is located (tns_admin variable) using the oracle.net.tns_admin driver property. Note that the URL information in tnsnames.ora uses the long format so that the protocol can be specified as TCPS.

The datasource name (variable dsname) is also used to generate the JNDI name by prefixing it with "jndi." in the example. You can change it to match your application requirements. The recommended test table name is "SQL ISVALID" for optimal testing and performance. You can set other connection pool parameters based on the standards for your organization.

The current version of ATP (ATP-S) provides access to a single Pluggable Database (PDB) in a Container Database (CDB). Most of the operations on the PDB are similar to a normal Oracle database. You have a user called ADMIN that does not have the SYSDBA role but does have some administrative permissions to do things like creating schema objects and granting permissions. The number of sessions is configured at 100 * ATP cores. You cannot create a tablespace; the default available tablespace is named DATA and the temporary tablespace is named TEMP. The block size is fixed at 8k, meaning you cannot create indexes over approximately 6k in size.
Some additional information about restrictions is provided at Autonomous Transaction Processing for Experienced Oracle Database Users . ATP is configured to not have GRID or RAC installed.  That means that FAN is not supported and only WLS GENERIC datasources can be created (Multi Data Source and Active GridLink cannot be used).  The driver may try to get FAN events from the ONS server and fail because it isn't configured.  To avoid this, we need to set the driver property oracle.jdbc.fanEnabled=false.  This property is no longer needed if using the 18.3 driver. To create connections, we need to provide a user and password (variable names user and password) for the datasource.  The example uses the Admin user configured when the database was created.  More likely, you will create additional users for application use.  You can use your favorite tool like SQLPlus to create the schema objects or you can use WLS utils.Schema to create the objects. The associated SQL DDL statements are described at this link in the user guide.  The password will be encrypted in the datasource descriptor.  The remainder of the configuration is focused on setting up two-way SSL between the client and the database.  There are two options for configuring this and the credentials are available for both in the wallet zip file. For either option, we set the two driver properties oracle.net.ssl_server_dn_match=true oracle.net.ssl_version=1.2  (this should be required only for the 12.x driver) The first option is to use the Oracle auto-open SSO wallet cwallet.ora.  This use of the wallet is to provide the information for two-way SSL connection to the database.  It should not be confused with using the wallet to contain the database user/password credentials so they can be removed from the datasource descriptor (described at the wallet blog).  When using this option, the only driver property that needs to be set is oracle.net.wallet_location (variable wallet_location) to the directory where the wallet is located. The second option is to use Java KeyStore (JKS) files truststore.jks and keystore.jks.  For this option, we need to set the driver properties for javax.net.ssl.keyStoreType, javax.net.ssl.trustStoreType, javax.net.ssl.trustStore, javax.net.ssl.trustStorePassword, javax.net.ssl.keyStore, and javax.net.ssl.keyStorePassword.  We also want to make sure that the password values are stored as encrypted strings. That wraps up the discussion of datasource configuration. If you want to enable Application Continuity on the database service, you need to run a database procedure as documented at this link in the user guide.    



Easily Create an OCI Container Engine for Kubernetes cluster with Terraform Installer to run WebLogic Server

In previous blogs, we have described how to run WebLogic Server on Kubernetes with the operator using the Oracle Cloud Infrastructure Container Engine for Kubernetes (OKE). To create new Kubernetes clusters quickly, we suggest that you use the Terraform-based Kubernetes installation for Oracle Cloud Infrastructure (OCI). It consists of Terraform modules and an example base configuration that is used to provision and configure the resources needed to run a highly available and configurable Kubernetes cluster on OCI.

In this blog, we provide sample Terraform scripts and describe the steps to create a basic OKE cluster. Note that this cluster can be used for testing and development purposes only; the provided sample Terraform scripts should not be used to create production clusters without further review. Using Terraform installation scripts makes provisioning the cloud infrastructure and any required local resources for the Kubernetes cluster fast and easy to perform. They enable you to run WebLogic Server on OKE and leverage WebLogic Server applications in a managed Kubernetes environment in no time.

The samples will create:

- A new Virtual Cloud Network (VCN) for the cluster
- Two load balancer subnets with security lists
- Three worker subnets with security lists
- A Kubernetes cluster with one node pool
- A kubeconfig file to allow access using kubectl

Nodes and network settings will be configured to allow SSH access, and the cluster networking policies will allow NodePort services to be exposed. All OCI Container Engine masters are highly available (HA) and employ load balancers.

Prerequisites

To use these Terraform scripts, you will need to:

- Have an existing tenancy with sufficient compute and networking resources available for the desired cluster.
- Have an Identity and Access Management policy in place within that tenancy to allow the OKE service to manage tenancy resources.
- Have a user defined within that tenancy.
- Have an API key defined for use with the OCI API, as documented here.
- Have an SSH key pair for configuring SSH access to the nodes in the cluster.

Configuration Files

The following configuration files are part of this Terraform plan:

File              Description
provider.tf       Configures the OCI provider for a single region.
vcn.tf            Configures the VCN for the cluster, including two load balancer subnets and three worker subnets.
cluster.tf        Configures the cluster and worker node pool.
kube_config.tf    Downloads the kubeconfig for the new cluster.
template.tfvars   Example variable definition file for creating a cluster.

Creating a Cluster

Without getting into the configuration details, getting a simple cluster running quickly entails the following:

- Create a new tfvars file based on the values from the provided oci.props file.
- Apply the configuration using Terraform.

In the sample, we have provided a script that performs all the steps. In addition, the script downloads and installs all the required binaries (Terraform and the Terraform OCI Provider) based on the operating system (macOS or Linux).

Create a Variables File

This step involves creating a variables file that provides values for the tenancy that will contain the VCN and cluster you're creating. In the sample, the script oke.create.sh uses values from the property file oci.props. Copy oci.props.template to the oci.props file and enter values for all of the following properties:

- user.ocid – Log in to the OCI console, click the user icon in the upper-right corner, and select User Settings. Copy the OCID information and enter it as the value of the user.ocid property.
- tfvars.filename – Name of the tfvars file that the script will generate for the Terraform configuration (no file extension).
- okeclustername – Name of the generated OCI Container Engine for Kubernetes cluster.
- tenancy.ocid – In the OCI console, click the user icon in the upper-right corner and select Tenancy. Copy the tenancy OCID information and enter it as the value of the tenancy.ocid property.
- region – Name of the home region for the tenancy.
- compartment.ocid – In the OCI console, in the upper-left corner, select ‘Menu’, then ‘Identity’, and then ‘Compartments’. On the ‘Compartments’ page, select the desired compartment, copy the OCID information, and enter it as the value of the compartment.ocid property.
- compartment.name – Enter the name of the targeted compartment.
- ociapi.pubkey.fingerprint – During your OCI access setup, you generated an OCI API public and private key pair. Obtain the fingerprint from the API Keys section of the User Settings page in the OCI console, and add an escape backslash ‘\’ before each colon. In our example: c8\:c2\:da\:a2\:e8\:96\:7e\:bf\:ac\:ee\:ce\:bc\:a8\:7f\:07\:c5
- ocipk.path – Full path to the OCI API private key.
- vcn.cidr.prefix – Check your compartment and use a unique number for the prefix.
- vcn.cidr – Full CIDR for the VCN; must be unique within the compartment.
- nodepool.shape – In the OCI console, select ‘Menu’, then ‘Governance’, and then ‘Service Limits’. On the ‘Service Limits’ page, go to ‘Compute’ and select an available node pool shape.
- k8s.version – A Kubernetes version supported by OCI Container Engine for Kubernetes. To check the supported values, select ‘Menu’, then ‘Developer Services’, and then ‘Container Clusters’, select the Create Cluster button, and check the version.
- nodepool.imagename – Select a supported node pool image.
- nodepool.ssh.pubkey – Copy and paste the content of your generated SSH public key. This is the key you would use to SSH into one of the nodes.
- terraform.installdir – Location to install the Terraform binaries. The provided samples/terraform/oke.create.sh script will download all the needed artifacts.

The following table lists all the properties, descriptions, and examples of their values.
Variable Description Example user.ocid   OCID for the tenancy user – can be obtained from the user settings in the OCI console ocid1.user.oc1..aaaaaaaas5vt7s6jdho6mh2dqvyqcychofaiv5lhztkx7u5jlr5wwuhhm  tfvars.filename File name for generated tfvar file myokeclustertf region The name of region for tenancy us-phoenix-1 okeclustername The name for OCI Container Engine for Kubernetes cluster myokecluster tenancy.ocid OCID for the target tenancy ocid1.tenancy.oc1..aaaaaaaahmcbb5mp2h6toh4vj7ax526xtmihrneoumyat557rvlolsx63i compartment.ocid OCID for the target compartment ocid1.compartment.oc1..aaaaaaaaxzwkinzejhkncuvfy67pmb6wb46ifrixtuikkrgnnrp4wswsu compartment.name  Name for the target compartment QualityAssurance ociapi.pubkey.fingerprint Fingerprint of the OCI user's public key   c8\:c2\:da\:a2\:e8\:96\:7e\:bf\:ac\:ee\:ce\:bc\:a8\:7f\:07\:c5 ocipk.path API Private Key -- local path to the private key for the API key pair /scratch/mkogan/.oci/oci_api_key.pem vcn.cidr.prefix Prefix for VCN CIDR, used when creating subnets -- you should examine the target compartment find a CIDR that is available 10.1 vcn.cidr Full CIDR for the VCN, must be unique within the compartment (first 2 octets should match the vcn_cidr_prefix ) 10.1.0.0/16 nodepool.shape A valid OCI VM Shape for the cluster nodes VM.Standard2.1 k8s.version OCI Container Engine for Kubernetes supported Kubernetes version string v1.11.5   nodepool.imagename OCI Container Engine for Kubernetes supported Node Pool Image Oracle-Linux-7.4 nodepool.ssh.pubkey SSH public key (key contents as a string) to use to SSH into one of the nodes. ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9FSfGdjjL+EZre2p5yLTAgtLsnp49AUVX1yY9V8guaXHol6UkvJWnyFHhL7s0qvWj2M2BYo6WAROVc0/054UFtmbd9zb2oZtGVk82VbT6aS74cMlqlY91H/rt9/t51Om9Sp5AvbJEzN0mkI4ndeG/5p12AUyg9m5XOdkgI2n4J8KFnDAI33YSGjxXb7UrkWSGl6XZBGUdeaExo3t2Ow8Kpl9T0Tq19qI+IncOecsCFj1tbM5voD8IWE2l0SW7V6oIqFJDMecq4IZusXdO+bPc+TKak7g82RUZd8PARpvYB5/7EOfVadxsXGRirGAKPjlXDuhwJYVRj1+IjZ+5Suxz mkog@slc13kef terraform.installdir Location to install Terraform binaries /scratch/mkogan/myterraform Save the oci.props file in the samples/scripts/terraform directory. See the provided template as an example. Execute the oke.create.sh script in the [weblogic-kubernetes-operatorDir]/kubernetes/samples/scripts/terraform: sh oke.create.sh This command will: Generate the Terraform tfvar configuration file. Download Terraform, Terraform OCI Provider binaries. Execute Terraform ‘init’, ‘apply’ commands to create OCI Container Engine for Kubernetes cluster. Generate ${okeclustername}_kubeconfig file, in our example myokecluster_kubeconfig. Wait about 5-10 mins for the OCI Container Engine for Kubernetes Cluster creation to complete. Execute this command to switch to the created OCI Container Engine for Kubernetes cluster configuration: export KUBECONFIG=[fullpath]/myokecluster_kubeconfig  Check the nodes IPs and status by executing this command: kubectl get nodes bash-4.2$ kubectl get nodes NAME             STATUS    ROLES    AGE       VERSION 129.146.56.254   Ready     node     25d       v1.10.11 129.146.64.74    Ready     node     25d       v1.10.11 129.146.8.145    Ready     node     25d       v1.10.11 You can also check the status of the cluster in the OCI console. In the console, select ‘Menu’, then Developer Services’, then ’Container Clusters (OKE). Your newly created OCI Container Engine for Kubernetes cluster (OKE) is ready to use!   
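Once the cluster is up, the natural next step (described in detail in the operator's Quick Start guide) is to install the WebLogic Kubernetes Operator into it. The following is a minimal sketch only, assuming the weblogic-kubernetes-operator repository has been cloned, Helm/Tiller is set up against the new kubeconfig, and using a placeholder namespace; adjust the chart values to your environment:

# Point kubectl and helm at the newly created OKE cluster
export KUBECONFIG=[fullpath]/myokecluster_kubeconfig

# Install the operator Helm chart from the cloned weblogic-kubernetes-operator repository
# (release name and namespace here are placeholders)
helm install kubernetes/charts/weblogic-operator \
  --name weblogic-operator \
  --namespace weblogic-operator-ns \
  --wait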
Summary In this blog, we demonstrated all the required steps to set up an OCI Container Engine for Kubernetes cluster quickly by using the provided samples of Terraform scripts. Now you can create and run WebLogic Server on Kubernetes in an OCI Container Engine for Kubernetes. See our Quick Start Guide  to quickly get the operator up and running or refer to the User Guide for detailed information on how to run the operator, how to create one or more WebLogic domains in Kubernetes, how to scale up or down a WebLogic cluster manually or automatically using the WebLogic Diagnostics Framework (WLDF) or Prometheus, how the operator manages load balancing for web applications deployed in WebLogic clusters, and how to provide integration for managing operator logs through Elasticsearch, Logstash, and Kibana.    



Updated WebLogic Kubernetes Support with Operator 2.0

We are excited to announce the release of version 2.0 of the WebLogic Server Kubernetes Operator. The operator uses a common set of Kubernetes APIs to provide an improved user experience when automating operations such as provisioning, lifecycle management, application versioning, product patching, scaling, and security. This version of the operator evolves WebLogic to run more natively in cloud neutral infrastructures. It adds support for WebLogic domain configurations that are included in the Docker images, making these images portable across environments and improving support for CI/CD deployments. The operator is developed as an open source project fully supported by Oracle. The project can be found in our GitHub repository, and the images are available to be pulled from Docker Hub.

In this version of the WebLogic Server Kubernetes Operator, we have added the following functionality and support:

- Kubernetes versions 1.10.11+, 1.11.5+, and 1.12.3+.
- A Helm chart to install the operator.
- Creating a WebLogic domain in a Docker image. We have developed samples in our Docker GitHub project for creating these images with the WebLogic Deploy Tooling (WDT) or with the WebLogic Scripting Tool (WLST). Samples for deploying these images with the operator can be found in the GitHub project.
- Creating WebLogic domains in a Kubernetes persistent volume or persistent volume claim (PV/PVC). This persistent volume can reside in an NFS file system or other Kubernetes volume types. See our samples to create a PV or PVC and to deploy the WebLogic domain in the persistent volume.
- Configuration overrides. When the WebLogic domain, application binaries, and application configuration are inside of a Docker image, this configuration is immutable. We offer configuration overrides for certain aspects of the WebLogic domain configuration to maintain the portability of these images between different environments.
- The Apache HTTP Server, Traefik, and Voyager (HAProxy-backed) Ingress controllers running within the Kubernetes cluster for load balancing HTTP requests across WebLogic Server Managed Servers running in clustered configurations. Unlike previous versions of the operator, operator 2.0 no longer deploys load balancers. We provide Helm charts to deploy these load balancers (see the samples listed below):
  - Sample Traefik Helm chart for setting up a Traefik load balancer for WebLogic clusters.
  - Sample Voyager Helm chart for setting up a Voyager load balancer for WebLogic clusters.
  - Sample Ingress Helm chart for setting up a Kubernetes Ingress for each WebLogic cluster using a Traefik or Voyager load balancer.
  - Sample Apache HTTP Server Helm chart and Apache samples using the default or custom configurations for setting up a load balancer for WebLogic clusters using the Apache HTTP Server with WebLogic Server Plugins.
- User-initiated lifecycle operations for WebLogic domains, clusters, and servers, including rolling restart. See the details in Starting, stopping, and restarting servers.
- Managing WebLogic configured and dynamic clusters.
- Scaling WebLogic domains by starting and stopping Managed Servers on demand, or by integrating with a REST API to initiate scaling based on the WebLogic Diagnostic Framework (WLDF), Prometheus, Grafana, or other rules (a sketch of calling this REST endpoint with curl appears at the end of this post). If you want to learn more about scaling with Prometheus, read the blog Using Prometheus to Automatically Scale WebLogic Clusters on Kubernetes. Also, see the blog WebLogic Dynamic Clusters on Kubernetes, which walks you through a sample of scaling with WLDF.
- Exposing T3 channels outside the Kubernetes domain, if desired.
- Exposing HTTP paths on a WebLogic domain outside the Kubernetes domain with load balancing.
- Updating the load balancer when a Managed Server is added or removed from a cluster during scale-up or scale-down actions.
- Publishing operator and WebLogic Server logs into Elasticsearch and interacting with them in Kibana. See our documentation, Configuring Kibana and Elasticsearch.

Our future plans include formal certification of WebLogic Server on OpenShift. If you are interested in deploying a WebLogic domain and operator 2.0 in OpenShift, read the blog Running WebLogic on OpenShift. We are building and open-sourcing a new tool, the WebLogic Logging Exporter, to export WebLogic Server logs directly to the Elastic Stack. Also, we are publishing blogs that describe how to take advantage of the new functionality in operator 2.0. Please stay tuned for more information.

The fastest way to experience the operator is to follow the Quick Start guide. We hope this announcement is helpful to those of you seeking to deploy WebLogic Server on Kubernetes, and we look forward to your feedback.
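To make the REST-driven scaling mentioned in the feature list above more concrete, a monitoring rule (WLDF, Prometheus Alertmanager, or a script) can call the operator's external REST endpoint and request a new Managed Server count for a cluster. The sketch below reflects our understanding of the operator 2.0 REST API; treat the host, port, URL path, and payload as assumptions and confirm them against the operator documentation for your version:

# Hypothetical scale request: ask the operator to run 3 Managed Servers in cluster-1 of domain1.
# The host, port (31001 is assumed), token source, and path below are placeholders/assumptions.
OPERATOR_HOST=<address-of-operator-external-REST-endpoint>
TOKEN=<service-account-token-authorized-for-the-operator>

curl -k -X POST \
  -H "Authorization: Bearer $TOKEN" \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  -H "X-Requested-By: MyClient" \
  -d '{ "managedServerCount": 3 }' \
  "https://$OPERATOR_HOST:31001/operator/latest/domains/domain1/clusters/cluster-1/scale"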



Running WebLogic on OpenShift

In this post I am going to walk through setting up and using WebLogic on OpenShift, using the Oracle WebLogic Server Kubernetes Operator. My starting point is the OpenShift Container Platform server that I set up on OCI in this earlier post. I am going to use the operator to manage my domains in OpenShift. The operator pattern is common in Kubernetes for managing complex software products that have special lifecycle requirements, different to the base assumptions made by Kubernetes. For example, when there is state in a pod that needs to be saved or migrated before terminating a pod. The WebLogic Kubernetes operator includes such built-in knowledge of WebLogic, so it greatly simplifies the management of WebLogic in a Kubernetes environment. Plus it is completely open source and supported by Oracle. Overview Here is an overview of the process I am going to walk through: Create a new project (namespace) where I will be deploying WebLogic, Prepare the project for the WebLogic Kubernetes Operator, Install the operator, View the operator logs in Kibana, Prepare Docker images to run my domain, Create the WebLogic domain, Verify access to the WebLogic administration console and WLST, Deploy a test application into the cluster, Set up a route to expose the application publicly, Test scaling and load balancing, and Install the WebLogic Exporter to get metrics into Prometheus. Before we get started, you should clone the WebLogic operator project from GitHub. It contains many of the samples and helpers we will need. git clone https://github.com/oracle/weblogic-kubernetes-operator Create a new project (namespace) In the OpenShift web user interface, create a new project. If you already have other projects, go to the Application Console, and then click on the project pulldown at the top and click on “View All Projects” and then the “Create Project” button. If you don’t have existing projects, OpenShift will take you right to the create project page when you log in. I called my project “weblogic” as you can see in the image below: Creating a new project Then navigate into your project view. Right now it will be empty, as shown below: The new “weblogic” project Prepare the project for the WebLogic Kubernetes Operator The easiest way to get the operator Docker image is to just pull it from the Docker Hub. You can review details of the image in the Docker Hub. The WebLogic Kubernetes Operator in the Docker Hub You can use the following command to pull the image. You may need to docker login first if you have not previously done so: docker pull oracle/weblogic-kubernetes-operator:2.0 Instead of pulling the image and manually copying it onto our OpenShift nodes, we could also just add an Image Pull Secret to our project (namespace) so that OpenShift will be able to pull the image for us. We can do this with the following commands (at this stage we are using a user with the cluster-admin role): oc project weblogic oc create secret docker-registry docker-store-secret \ --docker-server=store.docker.com \ --docker-username=DOCKER_USER \ --docker-password=DOCKER_PASSWORD \ --docker-email=DOCKER_EMAIL In this command, replace DOCKER_USER with your Docker store userid, DOCKER_PASSWORD with your password, and DOCKER_EMAIL with the email address associated with your Docker Hub account. We also need to tell OpenShift to link this secret to our service account. 
Assuming we want to use the default service account in our weblogic project (namespace), we can run this command:

oc secrets link default docker-store-secret --for=pull

(Optional) Build the image yourself

It is also possible to build the image yourself, rather than pulling it from Docker Hub. If you want to do that, first go to Docker Hub and accept the license for the Server JRE image, ensure you have the listed prerequisites installed, and then run these commands:

mvn clean install
docker build -t weblogic-kubernetes-operator:2.0 --build-arg VERSION=2.0 .

Install the Elastic stack

The operator can optionally send its logs to Elasticsearch and Kibana. This provides a nice way to view, search, and filter the logs, so let's install this too. A sample YAML file is provided in the project to install them:

kubernetes/samples/scripts/elasticsearch-and-kibana/elasticsearch_and_kibana.yaml

Edit this file to set the namespace to "weblogic" for each deployment and service (i.e. on lines 30, 55, 74 and 98 - just search for namespace) and then install them using this command:

oc apply -f kubernetes/samples/scripts/elasticsearch-and-kibana/elasticsearch_and_kibana.yaml

After a few moments, you should see the pods running in our namespace:

oc get pods,services
NAME                                 READY   STATUS    RESTARTS   AGE
pod/elasticsearch-75b6f589cb-c9hbw   1/1     Running   0          10s
pod/kibana-746cc75444-nt8pr          1/1     Running   0          10s

NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
service/elasticsearch   ClusterIP   172.30.143.158   <none>        9200/TCP,9300/TCP   10s
service/kibana          NodePort    172.30.18.210    <none>        5601:32394/TCP      10s

So based on the service shown above and our project (namespace) named weblogic, the URL for Elasticsearch will be elasticsearch.weblogic.svc.cluster.local:9200. We will need this URL later.

Install the operator

Now we are ready to install the operator. In the 2.0 release, we use Helm to install the operator. So first we need to download Helm and set up Tiller on our OpenShift cluster (if you have not already installed it). Helm provides installation instructions on their site. I just downloaded the latest release, unzipped it, and made helm executable.

Before we install Tiller, let's create a cluster role binding to make sure the default service account in the kube-system namespace (which Tiller will run under) has the cluster-admin role, which it will need to install and manage the operator.

cat << EOF | oc apply -f -
apiVersion: authorization.openshift.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller-cluster-admin
roleRef:
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system
userNames:
- system:serviceaccount:kube-system:default
EOF

Now we can execute helm init to install Tiller on the OpenShift cluster. Check it was successful with this command:

oc get deploy -n kube-system
NAME            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
tiller-deploy   1         1         1            1           18s

When you install the operator you can either pass the configuration parameters into Helm on the command line, or if you prefer, you can store them in a YAML file and pass that file in. I like to store them in a file. There is a sample provided, so we can just make a copy and update it with our details.

cp kubernetes/charts/weblogic-operator/values.yaml my-operator-values.yaml

Here are the updates we need to make: Set the domainNamespaces parameter to include just weblogic, i.e. the project (namespace) that we created to install WebLogic in.
domainNamespaces: - “weblogic” Set the image parameter to match the name of the image you pulled from Docker Hub or built yourself. If you just create the image pull secret then use the value I have shown here: # image specifies the docker image containing the operator code. image: “oracle/weblogic-kubernetes-operator:2.0” Set the imagePullSecrets list to include the secret we created earlier. If you did not create the secret you can leave this commented out. imagePullSecrets: - name: “docker-store-secret” Set the elkIntegrationEnabled parameter to true. # elkIntegrationEnabled specifies whether or not Elastic integration is enabled. elkIntegrationEnabled: true Set the elasticSearchHost to the address of the Elasticsearch server that we set up earlier. # elasticSearchHost specifies the hostname of where elasticsearch is running. # This parameter is ignored if 'elkIntegrationEnabled' is false. elasticSearchHost: “elasticsearch.weblogic.svc.cluster.local” Now we can use helm to install the operator with this command. Notice that I pass in the name of my parameters YAML file in the --values option: helm install kubernetes/charts/weblogic-operator \ --name weblogic-operator \ --namespace weblogic \ --values my-operator-values.yaml \ --wait This command will wait until the operator starts up successfully. If it has to pull the image, that will obviously take a little while, but if this command does not finish in a minute or so, then it is probably stuck. You can send it to the background and start looking around to see what went wrong. Most often it will be a problem pulling the image. If you see the pod has status ImagePullBackOff then OpenShift was not able to pull the image. You can verify the pod was created with this command: oc get pods NAME READY STATUS RESTARTS AGE elasticsearch-75b6f589cb-c9hbw 1/1 Running 0 2h kibana-746cc75444-nt8pr 1/1 Running 0 2h weblogic-operator-54d99679f-dkg65 1/1 Running 0 48s View the operator logs in Kibana Now we have the operator running, let's take a look at the logs in Kibana. We installed Kibana earlier. Let's expose Kibana outside our cluster: oc expose service kibana route.route.openshift.io/kibana exposed You can check this worked with these commands: oc get routes NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD kibana kibana-weblogic.sub11201828382.certificationvc.oraclevcn.com kibana 5601 None oc get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE elasticsearch ClusterIP 172.30.143.158 <none> 9200/TCP,9300/TCP 2h internal-weblogic-operator-svc ClusterIP 172.30.252.148 <none> 8082/TCP 7m kibana NodePort 172.30.18.210 <none> 5601:32394/TCP 2h Now you should be able to access Kibana using the OpenShift front-end address and the node port for the Kibana service. In my case the node port is 32394 and my OpenShift server is accessible to me as openshift so I would use the address https://openshift:32394. You will see a page like this one: The initial Kibana page Click on the “Create” button, then click on the "Discover" option in the menu on the left hand side. Now hover over the entries for level and log in the field list, and click on the "Add" button that appears next to each one. Now you should have a nice log screen like this: The operator logs in Kibana Great! We have the operator installed. Now we are ready to move on to create some WebLogic domains. Prepare Docker images to run the domain Now we have some choices to make. 
There are two main ways to run WebLogic in Docker - we can use a standard Docker image which contains the WebLogic binaries but keep the domain configuration, applications, etc., outside the image, for example in a persistent volume; or we can create Docker images with both the WebLogic binaries and the domain burnt into them. There are advantages and disadvantages to both approaches, so it really depends on how we want to treat our domain. The first approach is good if you just want to run WebLogic in Kubernetes but you still want to use the admin console and WLST and so on to manage it. The second approach is better if you want to drive everything from a CI/CD pipeline where you do not mutate the running environment, but instead you update the source and then build new images and roll the environment to uptake them. A number of these kinds of considerations are listed here.

For the sake of this post, let's use the "domain in image" option (the second approach). So we will need a base WebLogic image with the necessary patches installed, and then we will create our domain on top of that. Let's create a domain with a web application deployed in it, so that we have something to use to test our load balancing configuration and scaling later on. The easiest way to get the base image is to grab it from Oracle using this command:

docker pull store/oracle/weblogic:12.2.1.3

The standard WebLogic Server 12.2.1.3.0 image from Docker Hub has the necessary patches already installed. It is worth knowing how to install patches, in case you need some additional one-off patches. If you are not interested in that, skip forward to here.

(Optional) Manually creating a patched WebLogic image

Here is an example Dockerfile that we can use to install the necessary patches. You can modify this to add any additional one-off patches that you need. Follow the pattern already there to copy the patch into the container, apply it, and then remove the temporary files after you are done.

# ---------------------------------------------
# Install patches to run WebLogic on Kubernetes
# ---------------------------------------------

# Start with an unpatched WebLogic 12.2.1.3.0 Docker image
FROM your/weblogic-image:12.2.1.3
MAINTAINER Mark Nelson <mark.x.nelson@oracle.com>

# We need patch 29135930 to run WebLogic on Kubernetes
# We will also install the latest PSU which is 28298734
# That prereqs a newer version of OPatch, which is provided by 28186730
ENV PATCH_PKG0="p28186730_139400_Generic.zip"
ENV PATCH_PKG2="p28298734_122130_Generic.zip"
ENV PATCH_PKG3="p29135930_12213181016_Generic.zip"

# Copy the patches into the container
COPY $PATCH_PKG0 /u01/
COPY $PATCH_PKG2 /u01/
COPY $PATCH_PKG3 /u01/

# Install the psmisc package which is a prereq for 28186730
USER root
RUN yum -y install psmisc

# Install the three patches we need - do it all in one command to
# minimize the number of layers and the size of the resulting image.
# Also run opatch cleanup and remove temporary files.
USER oracle
RUN cd /u01 && \
    $JAVA_HOME/bin/jar xf /u01/$PATCH_PKG0 && \
    $JAVA_HOME/bin/java -jar /u01/6880880/opatch_generic.jar \
      -silent oracle_home=/u01/oracle -ignoreSysPrereqs && \
    echo "opatch updated" && \
    sleep 5 && \
    cd /u01 && \
    $JAVA_HOME/bin/jar xf /u01/$PATCH_PKG2 && \
    cd /u01/28298734 && \
    $ORACLE_HOME/OPatch/opatch apply -silent && \
    cd /u01 && \
    $JAVA_HOME/bin/jar xf /u01/$PATCH_PKG3 && \
    cd /u01/29135930 && \
    $ORACLE_HOME/OPatch/opatch apply -silent && \
    $ORACLE_HOME/OPatch/opatch util cleanup -silent && \
    rm /u01/$PATCH_PKG0 && \
    rm /u01/$PATCH_PKG2 && \
    rm /u01/$PATCH_PKG3 && \
    rm -rf /u01/6880880 && \
    rm -rf /u01/28298734 && \
    rm -rf /u01/29135930

WORKDIR ${ORACLE_HOME}

CMD ["/u01/oracle/createAndStartEmptyDomain.sh"]

This Dockerfile assumes the patch archives are available in the same directory. You would need to download the patches from My Oracle Support and then you can build the image with this command:

docker build -t my-weblogic-base-image:12.2.1.3.0 .

Creating the image with the domain in it

I am going to use the WebLogic Deploy Tooling to define my domain. If you are not familiar with this tool, you might want to check it out! It lets you define your domain declaratively instead of writing custom WLST scripts. For just one domain, maybe not such a big deal, but if you need to create a lot of domains it is pretty useful. It also lets you parameterize them, and it can introspect existing domains to create a model and associated artifacts. You can also use it to "move" domains from place to place, say from an on-premises install to Kubernetes, and you can change the version of WebLogic on the way without needing to worry about differences in WLST from version to version - it takes care of all that for you. Of course, we don't need all those features for what we need to do here, but it is good to know they are there for when you might need them! I created a GitHub repository with my domain model here.
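If you built the base image yourself, a quick way to double-check that the patches made it into the image is to list them with OPatch from a throwaway container. A small sketch, assuming the image tag used above and the standard ORACLE_HOME of /u01/oracle inside the official WebLogic images:

# List the patches baked into the freshly built base image
docker run --rm my-weblogic-base-image:12.2.1.3.0 \
  sh -c '$ORACLE_HOME/OPatch/opatch lspatches'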
You can just clone this repository and then run the commands below to download the WebLogic Deploy Tooling and then build the domain in a new Docker image that we will tag my-domain1-image:1.0: git clone https://github.com/markxnelson/simple-sample-domain cd simple-sample-domain curl -Lo weblogic-deploy.zip https://github.com/oracle/weblogic-deploy-tooling/releases/download/weblogic-deploy-tooling-0.14/weblogic-deploy.zip # make sure JAVA_HOME is set correctly, and `mvn` is on your PATH ./build-archive.sh ./quickBuild.sh I won't go into all the nitty gritty details of how this works, that's a subject for another post (if you are interested, take a look at the documentation in the GitHub project), but take a look at the simple-toplogy.yaml file to get a feel for what is happening: domainInfo: AdminUserName: '@@FILE:/u01/oracle/properties/adminuser.properties@@' AdminPassword: '@@FILE:/u01/oracle/properties/adminpass.properties@@' topology: Name: '@@PROP:DOMAIN_NAME@@' AdminServerName: '@@PROP:ADMIN_NAME@@' ProductionModeEnabled: '@@PROP:PRODUCTION_MODE_ENABLED@@' Log: FileName: '@@PROP:DOMAIN_NAME@@.log' Cluster: '@@PROP:CLUSTER_NAME@@': DynamicServers: ServerTemplate: '@@PROP:CLUSTER_NAME@@-template' CalculatedListenPorts: false ServerNamePrefix: '@@PROP:MANAGED_SERVER_NAME_BASE@@' DynamicClusterSize: '@@PROP:CONFIGURED_MANAGED_SERVER_COUNT@@' MaxDynamicClusterSize: '@@PROP:CONFIGURED_MANAGED_SERVER_COUNT@@' Server: '@@PROP:ADMIN_NAME@@': ListenPort: '@@PROP:ADMIN_PORT@@' NetworkAccessPoint: T3Channel: ListenPort: '@@PROP:T3_CHANNEL_PORT@@' PublicAddress: '@@PROP:T3_PUBLIC_ADDRESS@@' PublicPort: '@@PROP:T3_CHANNEL_PORT@@' ServerTemplate: '@@PROP:CLUSTER_NAME@@-template': ListenPort: '@@PROP:MANAGED_SERVER_PORT@@' Cluster: '@@PROP:CLUSTER_NAME@@' appDeployments: Application: # Quote needed because of hyphen in string 'test-webapp': SourcePath: 'wlsdeploy/applications/test-webapp.war' Target: '@@PROP:CLUSTER_NAME@@' ModuleType: war StagingMode: nostage PlanStagingMode: nostage As you can see it is all parameterized. Most of those properties are defined in properties/docker-build/domain.properties: # These variables are used for substitution in the WDT model file. # Any port that will be exposed through Docker is put in this file. # The sample Dockerfile will get the ports from this file and not the WDT model. DOMAIN_NAME=domain1 ADMIN_PORT=7001 ADMIN_NAME=admin-server ADMIN_HOST=domain1-admin-server MANAGED_SERVER_PORT=8001 MANAGED_SERVER_NAME_BASE=managed-server- CONFIGURED_MANAGED_SERVER_COUNT=2 CLUSTER_NAME=cluster-1 DEBUG_PORT=8453 DEBUG_FLAG=false PRODUCTION_MODE_ENABLED=true JAVA_OPTIONS=-Dweblogic.StdoutDebugEnabled=false T3_CHANNEL_PORT=30012 T3_PUBLIC_ADDRESS=openshift CLUSTER_ADMIN=cluster-1,admin-server On lines 10-17 we are defining a cluster named cluster-1 with two dynamic servers in it. On 18-25 we are defining the admin server. And on 30-38 we are defining an application that we want deployed. This is a simple web application that prints out the IP address of the managed server it is running on. 
Here is the main page of that application:

<%@ page import="java.net.UnknownHostException" %>
<%@ page import="java.net.InetAddress" %>
<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
<%@page contentType="text/html" pageEncoding="UTF-8"%>
<!DOCTYPE html>
<html>
<head>
  <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
  <c:url value="/res/styles.css" var="stylesURL"/>
  <link rel="stylesheet" href="${stylesURL}" type="text/css">
  <title>Test WebApp</title>
</head>
<body>
<%
  String hostname, serverAddress;
  hostname = "error";
  serverAddress = "error";
  try {
    InetAddress inetAddress;
    inetAddress = InetAddress.getLocalHost();
    hostname = inetAddress.getHostName();
    serverAddress = inetAddress.toString();
  } catch (UnknownHostException e) {
    e.printStackTrace();
  }
%>
<li>InetAddress: <%=serverAddress %>
<li>InetAddress.hostname: <%=hostname %>
</body>
</html>

The source code for the web application is in the test-webapp directory. We will use this application later to verify that scaling and load balancing are working as we expect. So now we have a Docker image with our custom domain in it, and the WebLogic Server binaries and the patches we need. So we are ready to deploy it!

Create the WebLogic domain

First, we need to create a Kubernetes secret with the WebLogic credentials in it. This is used by the operator to start the domain. You can use the sample provided here to create the secret:

./create-weblogic-credentials.sh \
  -u weblogic \
  -p welcome1 \
  -d domain1 \
  -n weblogic

Next, we need to create the domain custom resource. To do this, we prepare a Kubernetes YAML file as follows. I have removed the comments to make this more readable; you can find a sample here which has extensive comments to explain how to create these files:

apiVersion: "weblogic.oracle/v2"
kind: Domain
metadata:
  name: domain1
  namespace: weblogic
  labels:
    weblogic.resourceVersion: domain-v2
    weblogic.domainUID: domain1
spec:
  domainHome: /u01/oracle/user_projects/domains/domain1
  domainHomeInImage: true
  image: "my-domain1-image:1.0"
  imagePullPolicy: "IfNotPresent"
  webLogicCredentialsSecret:
    name: domain1-weblogic-credentials
  includeServerOutInPodLog: true
  serverStartPolicy: "IF_NEEDED"
  serverPod:
    annotations:
      openshift.io/scc: anyuid
    env:
    - name: JAVA_OPTIONS
      value: "-Dweblogic.StdoutDebugEnabled=false"
    - name: USER_MEM_ARGS
      value: "-Xms64m -Xmx256m "
  adminServer:
    serverStartState: "RUNNING"
    adminService:
      channels:
      - channelName: default
        nodePort: 30701
      - channelName: T3Channel
  clusters:
  - clusterName: cluster-1
    serverStartState: "RUNNING"
    replicas: 2

Now we can use this file to create the domain custom resource, using the following command:

oc apply -f domain.yaml

You can verify it was created, and view the resource that was created, with these commands:

oc get domains
oc describe domain domain1

The operator will notice this new domain custom resource and it will react accordingly. In this case, since we have asked for the admin server and the servers in the cluster to come to the "RUNNING" state (via the serverStartState fields above), the operator will start up the admin server first, and then both managed servers. You can watch this happen using this command:

oc get pods -w

This will print out the current pods, and then update every time there is a change in status. You can hit Ctrl-C to exit from the command when you have seen enough. The operator also creates services for the admin server, each managed server and the cluster.
You can see the services with this command:

oc get services

You will notice a service called domain1-admin-server-external which is used to expose the admin server's default channel outside of the cluster, to allow us to access the admin console and to use WLST. We need to tell OpenShift to make this service available externally by creating a route with this command:

oc expose service domain1-admin-server-external --port=default

This will expose that service on the NodePort it declared.

Verify access to the WebLogic administration console and WLST

Now you can start a browser and point it to any one of your worker nodes and use the NodePort from the service (30701 in the example above) to access the admin console. For me, since I have an entry in my /etc/hosts for my OpenShift server, this address is http://openshift:30701/console. You can log in to the admin console and use it as normal. You might like to navigate into "Deployments" to verify that our web application is there: Viewing the test application in the WebLogic admin console. You might also like to go to the "Servers" page to validate that you can see all of the managed servers: Viewing the managed servers in the WebLogic admin console.

We can also use WLST against the domain, if desired. To do this, just start up WLST as normal on your client machine and then use the OpenShift server address and the NodePort to form the t3 URL. Using the example above, my URL is t3://openshift:30701:

~/wls/oracle_common/common/bin/wlst.sh

Initializing WebLogic Scripting Tool (WLST) ...

Jython scans all the jar files it can find at first startup. Depending on the system, this process may take a few minutes to complete, and WLST may not return a prompt right away.

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

wls:/offline> connect('weblogic','welcome1','t3://openshift:30701')
Connecting to t3://openshift:30701 with userid weblogic ...
Successfully connected to Admin Server "admin-server" that belongs to domain "domain1".

Warning: An insecure protocol was used to connect to the server.
To ensure on-the-wire security, the SSL port or Admin port should be used instead.

wls:/domain1/serverConfig/> ls('Servers')
dr--   admin-server
dr--   managed-server-1
dr--   managed-server-2
wls:/domain1/serverConfig/>

You can use WLST as normal, either interactively, or you can run scripts. Keep in mind though, that since you have your domain burnt into the image, when you restart the pods, any changes you made with WLST would be lost. If you want to make permanent changes, you would need to include the WLST scripts in the image building process and then re-run it to build a new version of the image. Of course, if you have chosen to put your domain in persistent storage instead of burning it into the image, this caveat would not apply.

Set up a route to expose the application publicly

Now, let's expose our web application outside the OpenShift cluster. To do this, we are going to want to set up a load balancer to distribute requests across all of the managed servers, and then expose the load balancer endpoint.
We can use the provided sample to install the Traefik load balancer using the following command:

helm install stable/traefik \
  --name traefik-operator \
  --namespace traefik \
  --values kubernetes/samples/charts/traefik/values.yaml \
  --set "kubernetes.namespaces={traefik,weblogic}" \
  --wait

Make sure you include the weblogic namespace so the Traefik ingress controller knows to load balance ingresses in our namespace. Next, we need to create the ingress object. We can also do this with the provided sample using this command:

helm install kubernetes/samples/charts/ingress-per-domain \
  --name domain1-ingress \
  --namespace weblogic \
  --set wlsDomain.domainUID=domain1 \
  --set traefik.hostname=domain1.org

Note that you would set the hostname to your real DNS hostname when you do this for real. In this example, I am just using a made-up hostname.

Test scaling and load balancing

Now we can hit the web application to verify that load balancing is working. You can hit it from a browser, but in that case session affinity will kick in, so you will likely see a response from the same managed server over and over again. If you use curl, though, you should see the requests round-robin across the servers. You can run curl in a loop using this command:

while true
do
  sleep 2
  curl -v -H 'host: domain1.org' http://openshift:30305/testwebapp/
done

The web application just prints out the name and IP address of the managed server, so you should see the output alternate between all of the managed servers in sequence.

Now, let's scale the cluster down and see what happens. To initiate scaling, we can just edit the domain custom resource with this command:

oc edit domain domain1

This will open the domain custom resource in an editor. Find the entry for cluster-1 and, underneath that, the replicas entry:

clusters:
- clusterName: cluster-1
  clusterService:
    annotations: {}
    labels: {}
  replicas: 4

You can change the replicas to another value, for example 2, and then save and exit (a non-interactive alternative is sketched at the end of this post). The operator will notice this change and react by gracefully shutting down two of the managed servers. You can watch this happen with the command:

oc get pods -w

You will also notice, in the other window where you have curl running, that those two managed servers no longer get requests. You will also notice that there are no failed requests - the servers are removed from the domain1-cluster-cluster-1 service early, so they stop receiving traffic before they shut down and clients do not see a connection refused or timeout. The ingress and the load balancer adjust automatically. Once the scaling is finished, you might want to scale back up to 4 and watch the operation in reverse.

Conclusion

At this point we have our custom WebLogic domain, with our own configuration and applications deployed, running on OpenShift under the control of the operator. We have seen how to access the admin console, how to use WLST, how to set up load balancing and expose applications outside the OpenShift cluster, and how to control scaling. Here are a few screenshots from the OpenShift console showing what we have done: the overview page, drilling down to the admin server pod, and the monitoring page.
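As promised in the scaling section above, here is a rough sketch of changing the replica count without opening an editor. This is not taken from the operator documentation: it assumes cluster-1 is the first (index 0) entry under spec.clusters in the domain resource shown earlier, and that your oc client accepts a JSON patch; adjust the index and namespace for your environment.

# Hypothetical non-interactive scaling; drop -n weblogic if your current project is already weblogic
oc patch domain domain1 -n weblogic --type=json \
  -p '[{"op": "replace", "path": "/spec/clusters/0/replicas", "value": 2}]'

# Then watch the operator react, as before
oc get pods -n weblogic -w

The effect should be the same as editing the resource interactively; the operator only cares about the resulting value of replicas, not how it was changed.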



Voyager/HAProxy as Load Balancer to Weblogic Domains in Kubernetes

Overview

Load balancing is a widely used technology for building scalable and resilient applications. The main function of a load balancer is to monitor servers and distribute network traffic among multiple servers, for example, web applications or databases. For containerized applications running on Kubernetes, load balancing is also a necessity. In the WebLogic Server on Kubernetes Operator version 1.0 we added support for Voyager/HAProxy. We enhanced the script create-weblogic-domain.sh to provide out-of-the-box support for Voyager/HAProxy. The script supports load balancing to servers of a single WebLogic domain/cluster. This blog describes how to configure Voyager/HAProxy to expand load balancing support to applications deployed to multiple WebLogic domains in Kubernetes.

Basics of Voyager/HAProxy

If you are new to HAProxy and Voyager, it's worth spending some time learning the basics of both. HAProxy is free, open source software that provides a high availability load balancer and proxy server for TCP and HTTP-based applications. It's well known for being fast and efficient (in terms of processor speed and memory usage). See the Starter Guide of HAProxy. Voyager is an HAProxy-backed Ingress controller (refer to the Kubernetes documents about Ingress). Once installed in a Kubernetes cluster, the Voyager operator watches for Kubernetes Ingress resources and Voyager's own Ingress CRD and automatically creates, updates, and deletes HAProxy instances accordingly. See the Voyager overview to understand how the Voyager operator works.

Running WebLogic Domains in Kubernetes

Check out the project wls-operator-quickstart from GitHub to your local environment. This project helps you set up the WebLogic Operator and domains with minimal manual steps. Please complete the steps in the 'Pre-Requirements' section of the README to set up your local environment. With the help of the wls-operator-quickstart project, we want to set up two WebLogic domains running on Kubernetes using the WebLogic Operator, each in its own namespace:

The domain named 'domain1' runs in the namespace 'default'; it has one cluster, 'cluster-1', which contains two Managed Servers, 'domain1-managed-server1' and 'domain1-managed-server2'.

The domain named 'domain2' runs in the namespace 'test1'; it has one cluster, 'cluster-1', which contains two Managed Servers, 'domain2-managed-server1' and 'domain2-managed-server2'.

A web application, 'testwebapp.war', is deployed separately to the cluster in both domain1 and domain2. This web application has a default page which displays which Managed Server is processing the HTTP request. Use the following steps to prepare the WebLogic domains, which are the back ends to HAProxy:

# change directory to the root folder of wls-operator-quickstart
$ cd xxx/wls-operator-quickstart

# Build and deploy the WebLogic operator
$ ./operator.sh create

# Create domain1. Change the value of `loadBalancer` to `NONE` in domain1-inputs.yaml before running.
$ ./domain.sh create

# Create domain2. Change the value of `loadBalancer` to `NONE` in domain2-inputs.yaml before running.
$ ./domain.sh create -d domain2 -n test1 # Install Voyager $ kubectl create namespace voyager $ curl -fsSL https://raw.githubusercontent.com/appscode/voyager/6.0.0/hack/deploy/voyager.sh \ | bash -s -- --provider=baremetal --namespace=voyager Check the status of the WebLogic domains, as follows: # Check status of domain1 $ kubectl get all NAME READY STATUS RESTARTS AGE pod/domain1-admin-server 1/1 Running 0 5h pod/domain1-managed-server1 1/1 Running 0 5h pod/domain1-managed-server2 1/1 Running 0 5h NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/domain1-admin-server NodePort 10.105.135.58 <none> 7001:30705/TCP 5h service/domain1-admin-server-extchannel-t3channel NodePort 10.111.9.15 <none> 30015:30015/TCP 5h service/domain1-cluster-cluster-1 ClusterIP 10.108.34.66 <none> 8001/TCP 5h service/domain1-managed-server1 ClusterIP 10.107.185.196 <none> 8001/TCP 5h service/domain1-managed-server2 ClusterIP 10.96.86.209 <none> 8001/TCP 5h service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5h # Verify web app in domain1 via running curl on admin server pod to access the cluster service $ kubectl -n default exec -it domain1-admin-server -- curl http://domain1-cluster-cluster-1:8001/testwebapp/ # Check status of domain2 $ kubectl -n test1 get all NAME READY STATUS RESTARTS AGE pod/domain2-admin-server 1/1 Running 0 5h pod/domain2-managed-server1 1/1 Running 0 5h pod/domain2-managed-server2 1/1 Running 0 5h NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/domain2-admin-server NodePort 10.97.77.35 <none> 7001:30701/TCP 5h service/domain2-admin-server-extchannel-t3channel NodePort 10.98.239.28 <none> 30012:30012/TCP 5h service/domain2-cluster-cluster-1 ClusterIP 10.102.228.204 <none> 8001/TCP 5h service/domain2-managed-server1 ClusterIP 10.96.59.190 <none> 8001/TCP 5h service/domain2-managed-server2 ClusterIP 10.101.102.102 <none> 8001/TCP 5h # Verify the web app in domain2 via running curl in admin server pod to access the cluster service $ kubectl -n test1 exec -it domain2-admin-server -- curl http://domain2-cluster-cluster-1:8001/testwebapp/ After both WebLogic domains are running on Kubernetes, I will demonstrate two approaches that use different HAProxy features to set up Voyager as a single entry point to the two WebLogic domains. Using Host Name-Based Routing Create the Ingress resource file 'voyager-host-routing.yaml' which contains an Ingress resource using host name-based routing. apiVersion: voyager.appscode.com/v1beta1 kind: Ingress metadata: name: hostname-routing namespace: default annotations: ingress.appscode.com/type: 'NodePort' ingress.appscode.com/stats: 'true' ingress.appscode.com/affinity: 'cookie' spec: rules: - host: domain1.org http: nodePort: '30305' paths: - backend: serviceName: domain1-cluster-cluster-1 servicePort: '8001' - host: domain2.org http: nodePort: '30305' paths: - backend: serviceName: domain2-cluster-cluster-1.test1 servicePort: '8001' Then deploy the YAML file using`kubectl create -f voyager-host-routing.yaml`. Testing Load Balancing with Host Name-Based Routing To make host name-based routing work, you need to set up virtual hosting which usually involves DNS changes. For demonstration purposes, we will use curl commands to simulate load balancing with host name-based routing. 
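If you would rather test from a browser instead, one quick and purely illustrative alternative to real DNS changes is to map the two host names to your Kubernetes node's address in /etc/hosts; the IP address below is a placeholder for your own node:

# Hypothetical /etc/hosts entries for local testing only (replace the IP with your node address)
192.168.99.100   domain1.org
192.168.99.100   domain2.org

With entries like these in place, http://domain1.org:30305/testwebapp/ and http://domain2.org:30305/testwebapp/ resolve to the Voyager NodePort, which is the same effect the curl -H 'host: ...' commands below achieve without touching DNS.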
# Verify load balancing on domain1 $ curl --silent -H 'host: domain1.org' http://${HOSTNAME}:30305/testwebapp/ | grep InetAddress.hostname <li>InetAddress.hostname: domain1-managed-server1 $ curl --silent -H 'host: domain1.org' http://${HOSTNAME}:30305/testwebapp/ | grep InetAddress.hostname <li>InetAddress.hostname: domain1-managed-server2 # Verify load balancing on domain2 $ curl --silent -H 'host: domain2.org' http://${HOSTNAME}:30305/testwebapp/ | grep InetAddress.hostname <li>InetAddress.hostname: domain2-managed-server1 $ curl --silent -H 'host: domain2.org' http://${HOSTNAME}:30305/testwebapp/ | grep InetAddress.hostname <li>InetAddress.hostname: domain2-managed-server2 The result is: If host name 'domain1.org' is specified, the request will be processed by Managed Servers in domain1. If host name 'domain2.org' is specified, the request will be processed by Managed Servers in domain2. Using Path-Based Routing and URL Rewriting In this section we use path-based routing with URL rewriting to achieve the same behavior as host name-based routing. Create the Ingress resource file 'voyager-path-routing.yaml'. apiVersion: voyager.appscode.com/v1beta1 kind: Ingress metadata: name: path-routing namespace: default annotations: ingress.appscode.com/type: 'NodePort' ingress.appscode.com/stats: 'true' ingress.appscode.com/rewrite-target: "/testwebapp" spec: rules: - host: '*' http: nodePort: '30307' paths: - path: /domain1 backend: serviceName: domain1-cluster-cluster-1 servicePort: '8001' - path: /domain2 backend: serviceName: domain2-cluster-cluster-1.test1 servicePort: '8001' Then deploy the YAML file using `kubectl create -f voyager-path-routing.yaml`. Verify Load Balancing with Path-Based Routing To verify the load balancing result, we use the curl command. Another approach is to access the URL from a web browser directly.  # Verify load balancing on domain1 $ curl --silent http://${HOSTNAME}:30307/domain1/ | grep InetAddress.hostname <li>InetAddress.hostname: domain1-managed-server1 $ curl --silent http://${HOSTNAME}:30307/domain1/ | grep InetAddress.hostname <li>InetAddress.hostname: domain1-managed-server2 # Verify load balancing on domain2 $ curl --silent http://${HOSTNAME}:30307/domain2/ | grep InetAddress.hostname <li>InetAddress.hostname: domain2-managed-server1 $ curl --silent http://${HOSTNAME}:30307/domain2/ | grep InetAddress.hostname <li>InetAddress.hostname: domain2-managed-server2 You can see that we specify different URLs to dispatch traffic to different WebLogic domains with path-based routing. With the URL rewriting feature, we eventually access the web application with the same context path in each domain. Cleanup After you finish your exercise using the instructions in this blog, you may want to clean up all the resources created in Kubernetes.  # Cleanup voyager ingress resources $ kubectl delete -f voyager-host-routing.yaml $ kubectl delete -f voyager-path-routing.yaml   # Uninstall Voyager $ curl -fsSL https://raw.githubusercontent.com/appscode/voyager/6.0.0/hack/deploy/voyager.sh \ | bash -s -- --provider=baremetal --namespace=voyager --uninstall --purge # Delete wls domains and wls operator $ cd <QUICKSTART_ROOT> $ ./domain.sh delete --clean-all $ ./domain.sh delete -d domain2 -n test1 --clean-all $ ./operator.sh delete Summary In this blog, we describe how to set up a Voyager load balancer to provide high availability load balancing and a proxy server for TCP and HTTP-based requests to applications deployed in WebLogic Server domains. 
The samples provided in this blog describe how to use Voyager as a single entry point in front of multiple WebLogic domains. We provide examples that show you how to use Voyager features such as host name-based routing, path-based routing, and URL rewriting. I hope you find this blog helpful and that you try using Voyager in your WebLogic on Kubernetes deployments.


Make WebLogic Domain Provisioning and Deployment Easy!

The Oracle WebLogic Deploy Tooling (WDT) makes the automation of WebLogic Server domain provisioning and applications deployment easy. Instead of writing WLST scripts that need to be maintained, WDT creates a declarative, metadata model that describes the domain, applications, and the resources used by the applications.  This metadata model makes it easy to provision, deploy, and perform domain lifecycle operations in a repeatable fashion, which makes it perfect for the Continuous Delivery of applications. The WebLogic Deploy Tooling provides maximum flexibility by supporting a wide range of WebLogic Server versions from 10.3.6 to 12.2.1.3. WDT supports both Windows and UNIX operating systems, and provides the following benefits: Introspects a WebLogic domain into a metadata model (JSON or YAML). Creates a new WebLogic Server domain using a metadata model and allows version control of the domain configuration. Updates the configuration of an existing WebLogic Server domain, deploys applications and resources into the domain. Allows runtime alterations to the metadata model (also referred as the model) before applying it. Allows the same model to apply to multiple environments by accepting value placeholders provided in a separate property file. Passwords can be encrypted directly in the model or property file. Supports a sparse model so that the model only needs to describe what is required for the specific operation without describing other artifacts. Provides easy validation of the model content and verification that its related artifacts are well-formed. Allows automation and continuous delivery of deployments. Facilitates Lift and Shift of the domain into other environments, like Docker images and Kubernetes.   Currently, the project provides five single-purpose tools, all exposed as shell scripts: The Create Domain Tool (createDomain) understands how to create a domain and populate the domain with all the resources and applications specified in the model. The Update Domain Tool (updateDomain) understands how to update an existing domain and populate the domain with all the resources and applications specified in the model, either in offline or online mode. The Deploy Applications Tool (deployApps) understands how to add resources and applications to an existing domain, either in offline or online mode. The Discover Domain Tool (discoverDomain) introspects an existing domain and creates a model file describing the domain and an archive file of the binaries deployed to the domain. The Encrypt Model Tool (encryptModel) encrypts the passwords in a model (or its variable file) using a user-provided passphrase. The Validate Model Tool (validateModel) provides both standalone validation of a model as well as model usage information to help users write or edit their models. The WebLogic on Docker and Kubernetes projects take advantage of WDT to provision WebLogic domains and deploy applications inside of a Docker image or in a Kubernetes persistent volume (PV).  The Discover and Create Domain Tools enable us to take a domain running in a non-Docker/Kubernetes environment and lift and shift them into these environments. Docker/Kubernetes environments require a specific WebLogic configuration (for example, network). The Validate Model Tool provides mechanisms to validate the WebLogic configuration and ensure that it can run in these environments. 
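To give a feel for how these tools are invoked, here is a rough sketch of a discover-then-create round trip. The paths and file names are placeholders, and the exact argument names should be checked against the WDT README for your release; this sketch is not taken from the sample described below.

# Capture an existing domain as a model plus an archive of its deployed binaries
$ weblogic-deploy/bin/discoverDomain.sh \
    -oracle_home /u01/oracle \
    -domain_home /u01/oracle/user_projects/domains/mydomain \
    -model_file ./mydomain.yaml \
    -archive_file ./mydomain-archive.zip

# Recreate the domain elsewhere (for example, during a Docker image build) from that model
$ weblogic-deploy/bin/createDomain.sh \
    -oracle_home /u01/oracle \
    -domain_parent /u01/oracle/user_projects/domains \
    -model_file ./mydomain.yaml \
    -archive_file ./mydomain-archive.zip

The GitHub sample described next packages a similar createDomain step into a Docker image build.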
We have created a sample in the GitHub WebLogic Docker project, https://github.com/oracle/docker-images/tree/master/OracleWebLogic/samples/12213-domain-wdt, to demonstrate how to provision a WebLogic 12.2.1.3 domain inside of a Docker image.  The WebLogic domain is configured with a WebLogic dynamic cluster, a simple application deployed, and a data source that connects to an Oracle database running inside of a container. This sample includes a basic WDT model, simple-topology.yaml, that describes the intended configuration of the domain within the Docker image. WDT models can be created and modified using a text editor, following the format and rules described in the README file for the WDT project in GitHub.  Alternatively, the model can be created using the WDT Discover Domain Tool to introspect an already existing WebLogic domain. Domain creation may require the deployment of applications and libraries. This is accomplished by creating a ZIP archive with a specific structure, then referencing those items in the model. This sample creates and deploys a simple ZIP archive, containing a small application WAR. The archive is built in the sample directory prior to creating the Docker image. How to Build and Run The image is based on a WebLogic Server 12.2.1.3 image in the docker-images repository. Follow the README in https://github.com/oracle/docker-images/tree/master/OracleWebLogic/dockerfiles/12.2.1.3 to build the WebLogic Server install image to your local repository. The WebLogic Deploy Tool installer is used to build this sample WebLogic domain image. This sample deploys a simple, one-page web application contained in a ZIP archive, archive.zip. This archive needs to be built before building the domain Docker image.     $ ./build-archive.sh Before the domain image is built, we also need the WDT model simple-topology.yaml.  If you want to customize this WebLogic domains sample, you can either use an editor to change the model simple-topology.yaml or use the WDT Discover Domain Tool to introspect an already existing WebLogic domain. The image below shows you a snippet of the sample WDT model simple-topology.yaml where the database password will be encrypted and replaced by the value in the properties file we will supply before running the WebLogic domain containers. To build this sample, run:     $ docker build \     --build-arg WDT_MODEL=simple-topology.yaml \     --build-arg WDT_ARCHIVE=archive.zip \     --force-rm=true \     -t 12213-domain-wdt . You should have a WebLogic domain image in your local repository. How to Run In this sample, each of the Managed Servers in the WebLogic domain have a data source deployed to them. We want to connect the data source to an Oracle database running in a container. Pull the Oracle database image from the Docker Store or the Oracle Container Registry into your local repository.     $ docker pull container-registry.oracle.com/database/enterprise:12.2.0.1 Create the Docker network for the WLS and database containers to run:     $ docker network create -d bridge SampleNET Run the Database Container To create a database container, use the environment file below to set the database name, domain, and feature bundle. 
The example environment file, properties/env.txt, is:     DB_SID=InfraDB     DB_PDB=InfraPDB1     DB_DOMAIN=us.oracle.com     DB_BUNDLE=basic Run the database container by running the following Docker command:     $ docker run -d --name InfraDB --network=SampleNET  \     -p 1521:1521 -p 5500:5500  \     --env-file /Users/mriccell/Docker/docker-images/OracleWebLogic/samples/12213-domain-wdt/properties/env.txt  \     -it --shm-size="8g"  \     container-registry.oracle.com/database/enterprise:12.2.0.1     Verify that the database is running and healthy. The STATUS field shows (healthy) in the output of docker ps.  The database is created with the default password 'Oradoc_db1'. To change the database password, you must use sqlplus.  To run sqlplus pull the Oracle Instant Client from the Oracle Container Registry or the Docker Store, and run a sqlplus container with the following command:     $ docker run -ti --network=SampleNET --rm \     store/oracle/database-instantclient:12.2.0.1 \     sqlplus  sys/Oradoc_db1@InfraDB:1521/InfraDB.us.oracle.com \     AS SYSDBA       SQL> alter user system identified by dbpasswd container=all; Make sure you add the new database password 'dbpasswd ' in the properties file, properties/domain.properties DB_PASSWORD. Verify that you can connect to the database:     $ docker exec -ti InfraDB  \     /u01/app/oracle/product/12.2.0/dbhome_1/bin/sqlplus \     system/dbpasswd@InfraDB:1521/InfraPDB1.us.oracle.com       SQL> select * from Dual; Run the WebLogic Domain You will need to modify the domain.properties file in properties/domain.properties with all the parameters required to run the WebLogic domain, including the database password. To start the containerized Administration Server, run:     $ docker run -d --name wlsadmin --hostname wlsadmin \     --network=SampleNET -p 7001:7001 \     -v /Users/mriccell/Docker/docker-images/OracleWebLogic/samples/12213-domain-wdt/properties:/u01/oracle/properties  \     12213-domain-wdt To start a containerized Managed Server (ms-1) to self-register with the Administration Server above, run:     $ docker run -d --name ms-1 --link wlsadmin:wlsadmin \     --network=SampleNET -p 9001:9001 \     -v /Users/mriccell/Docker/docker-images/OracleWebLogic/samples/12213-domain-wdt/properties:/u01/oracle/properties  \     -e MS_NAME=ms-1 12213-domain-wdt startManagedServer.sh To start an additional Managed Server (in this example, ms-2), run:     $ docker run -d --name ms-2 --link wlsadmin:wlsadmin  \     --network=SampleNET -p 9002:9001 \     -v /Users/mriccell/Docker/docker-images/OracleWebLogic/samples/12213-domain-wdt/properties:/u01/oracle/properties  \     -e MS_NAME=ms-2 12213-domain-wdt startManagedServer.sh The above scenario will give you a WebLogic domain with a dynamic cluster set up on a single host environment. Let’s verify that the servers are running and that the data source connects to the Oracle database running in the container. Invoke the WLS Administration Console by entering this URL in your browser, ‘http://localhost:7001/console’. Log in using the credentials you provided in the domain.properties file. The WebLogic Deploy Tooling simplifies the provisioning of WebLogic domains, deployment of applications, and the resources these applications need.  The WebLogic on Docker/Kubernetes projects take advantage of these tools to simplify the provisioning of domains inside of an image or persisted to a Kubernetes persistent volume.  
We have released the General Availability version of the WebLogic Kubernetes Operator which simplifies the management of WebLogic domains in Kubernetes. Soon we will release the WebLogic Kubernetes Operator version 2.0 which provides enhancements to the management of WebLogic domains. We continue to provide tooling to make it simple to provision, deploy, and manage WebLogic domains with the goal of providing the greatest degree of flexibility for where these domains can run.  We hope this sample is helpful to anyone wanting to use the WebLogic Deploy Tooling for provisioning and deploying WebLogic Server domains, and we look forward to your feedback.



WebLogic Kubernetes Operator Image Now Available in Docker Hub

To facilitate the management of Oracle WebLogic Server domains in Kubernetes, we have made available the WebLogic Server Kubernetes Operator images in the Docker Hub repository, https://hub.docker.com/r/oracle/weblogic-kubernetes-operator/. In this repository, we provide several WebLogic Kubernetes Operator images: Version 1.0 and latest. The general availability version of the operator. Version develop. The latest pre-released version of the operator image.   The open source code and documentation for the WebLogic Kubernetes Operator can be found in the GitHub repository, https://github.com/oracle/weblogic-kubernetes-operator. The WebLogic Server Kubernetes Operator uses the standard Oracle WebLogic Server 12.2.1.3 Docker image from the Docker store. It treats this image as immutable and all of the state is persisted in a Kubernetes persistent volume. This allows us to treat all of the pods as throwaway and replaceable, and it completely eliminates the need to manage state written into Docker containers.   Get Started The Oracle WebLogic Server Kubernetes Operator has the following requirements: Kubernetes 1.7.5+, 1.8.0+, 1.9.0+, 1.10.0 (check with kubectl version). Flannel networking v0.9.1-amd64 (check with docker images | grep flannel) Docker 17.03.1.ce (check with docker version) Oracle WebLogic Server 12.2.1.3.0 To obtain the WebLogic Kubernetes Operator image from Docker Hub, run: $ docker pull oracle/weblogic-kubernetes-operator:1.0 Customize the operator parameters file The operator is deployed with the provided installation script, kubernetes/create-weblogic-operator.sh. The input to this script is the file, kubernetes/create-operator-inputs.yaml, which needs to be updated to reflect the target environment. Parameters must be provided in the input file. For a description of each parameter, see https://github.com/oracle/weblogic-kubernetes-operator/blob/master/site/installation.md. Decide which REST configuration to use The operator provides three REST certificate options: none  Disables the REST server. self-signed-cert  Generates self-signed certificates. custom-cert  Provides a mechanism to provide certificates that were created and signed by some other means. Decide which options to enable The operator provides some optional features that can be enabled in the configuration file. Load balancing with an Ingress controller or a web server You can choose a load balancer provider for your WebLogic domains running in a Kubernetes cluster. Please refer to Load balancing with Voyager/HAProxy, Load balancing with Traefik, and Load balancing with the Apache HTTP Server for information about the current capabilities and setup instructions for each of the supported load balancers. Note these limitations: Only HTTP(S) is supported. Other protocols are not supported. A root path rule is created for each cluster. Rules based on the DNS name, or on URL paths other than ‘/’, are not supported. No non-default configuration of the load balancer is performed in this release. The default configuration gives round-robin routing and WebLogic Server will provide cookie-based session affinity. Note that Ingresses are not created for servers that are not part of a WebLogic Server cluster, including the Administration Server. Such servers are exposed externally using NodePort services. Log integration with Elastic Stack The operator can install the Elastic Stack and publish its logs to it. 
If enabled, Elasticsearch and Kibana will be installed in the default namespace, and a Logstash container will be created in the operator pod. Logstash will be configured to publish the operator’s logs to Elasticsearch, and the log data will be available for visualization and analysis in Kibana. To enable the ELK integration, set the enableELKintegration option to true. 
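For example, the relevant lines in kubernetes/create-operator-inputs.yaml might look like the following. Only parameter names that appear in these posts are used, the values are placeholders, and the full list of parameters and their exact spelling should be taken from the installation documentation linked above.

# Illustrative excerpt of create-operator-inputs.yaml (values are placeholders)
targetNamespaces: default,domain-namespace-1
weblogicOperatorImage: oracle/weblogic-kubernetes-operator:1.0
javaLoggingLevel: INFO
enableELKintegration: true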
Deploying the operator to a Kubernetes cluster

To deploy the operator, run the deployment script and give it the location of your inputs file:

./create-weblogic-operator.sh -i /path/to/create-operator-inputs.yaml

What the script does

The script will carry out the following actions: A set of Kubernetes YAML files will be created from the inputs provided. A namespace will be created for the operator. A service account will be created in that namespace. If Elastic Stack integration was enabled, a persistent volume for the Elastic Stack will be created. A set of RBAC roles and bindings will be created. The operator will be deployed. If requested, the load balancer will be deployed. If requested, the Elastic Stack will be deployed and Logstash will be configured for the operator's logs. The script validates each action before it proceeds.

This will deploy the operator in your Kubernetes cluster. Please refer to the documentation for next steps, including using the REST services, creating a WebLogic Server domain, starting a domain, and so on. Our future plans include enhancements to the WebLogic Server Kubernetes Operator so that it can manage a WebLogic domain inside a Docker image as well as on a persistent volume, enhancements to add CI/CD of WebLogic deployments on Kubernetes with Oracle Container Pipelines, and new features and enhancements over time. Please stay tuned for more information. We hope this announcement is helpful to those of you seeking to deploy WebLogic Server on Kubernetes, and we look forward to your feedback.


WebLogic Server JTA in a Kubernetes Environment

This blog post describes WebLogic Server global transactions running in a Kubernetes environment.  First, we’ll review how the WebLogic Server Transaction Manager (TM) processes distributed transactions.  Then, we’ll walk through an example transactional application that is deployed to WebLogic Server domains running in a Kubernetes cluster with the WebLogic Kubernetes Operator.    WebLogic Server Transaction Manager Introduction The WebLogic Server Transaction Manager (TM) is the transaction processing monitor implementation in WebLogic Server that supports the Java Enterprise Edition (Java EE) Java Transaction API (JTA).  A Java EE application uses JTA to manage global transactions to ensure that changes to resource managers, such as databases and messaging systems, either complete as a unit, or are undone. This section provides a brief introduction to the WebLogic Server TM, specifically around network communication and related configuration, which will be helpful when we examine transactions in a Kubernetes environment.  There are many TM features, optimizations, and configuration options that won’t be covered in this article.  Refer to the following WebLogic Server documentation for additional details: ·      For general information about the WebLogic Server TM, see the WebLogic Server JTA documentation. ·      For detailed information regarding the Java Transaction API, see the Java EE JTA Specification. How Transactions are Processed in WebLogic Server To get a basic understanding of how the WebLogic Server TM processes transactions, we’ll look at a hypothetical application.  Consider a web application consisting of a servlet that starts a transaction, inserts a record in a database table, and sends a message to a Java Messaging Service (JMS) queue destination.  After updating the JDBC and JMS resources, the servlet commits the transaction.   The following diagram shows the server and resource transaction participants. Transaction Propagation The transaction context builds up state as it propagates between servers and as resources are accessed by the application.  For this application, the transaction context at commit time would look something like the following. Server participants, identified by domain name and server name, have an associated URL that is used for internal TM communication.  These URLs are typically derived from the server’s default network channel, or default secure network channel.  The transaction context also contains information about which server participants have javax.transaction.Synchronization callbacks registered.  The JTA synchronization API is a callback mechanism where the TM invokes the Synchronization.beforeCompletion() method before commencing two-phase commit processing for a transaction.   The Synchronization.afterCompletion(int status) method is invoked after transaction processing is complete with the final status of the transaction (for example, committed, rolled back, and such).  Transaction Completion When the TM is instructed to commit the transaction, the TM takes over and coordinates the completion of the transaction.  One of the server participants is chosen as the transaction coordinator to drive the two-phase commit protocol.  The coordinator instructs the remaining subordinate servers to process registered synchronization callbacks, and to prepare, commit, or rollback resources.  The TM communication channels used to coordinate the example transaction are illustrated in the following diagram. 
The dashed-line arrows represent asynchronous RMI calls between the coordinator and subordinate servers.  Note that the Synchronization.beforeCompletion() communication can take place directly between subordinate servers.  It is also important to point out that application communication is conceptually separate from the internal TM communication, as the TM may establish network channels that were not used by the application to propagate the transaction.  The TM could use different protocols, addresses, and ports depending on how the server default network channels are configured. Configuration Recommendations There are a few TM configuration recommendations related to server network addresses, persistent storage, and server naming. Server Network Addresses As mentioned previously, server participants locate each other using URLs included in the transaction context.  It is important that the network channels used for TM URLs be configured with address names that are resolvable after node, pod, or container restarts where IP addresses may change.  Also, because the TM requires direct server-to-server communication, cluster or load-balancer addresses that resolve to multiple IP addresses should not be used. Transaction Logs The coordinating server persists state in the transaction log (TLOG) that is used for transaction recovery processing after failure.  Because a server instance may relocate to another node, the TLOG needs to reside in a network/replicated file system (for example, NFS, SAN, and such) or in a highly-available database such as Oracle RAC.  For additional information, refer to the High Availability Guide. Cross-Domain Transactions Transactions that span WebLogic Server domains are referred to as cross-domain transactions.  Cross-domain transactions introduce additional configuration requirements, especially when the domains are connected by a public network. Server Naming The TM identifies server participants using a combination of the domain name and server name.  Therefore, each domain should be named uniquely to prevent name collisions.  Server participant name collisions will cause transactions to be rolled back at runtime. Security Server participants that are connected by a public network require the use of secure protocols (for example, t3s) and authorization checks to verify that the TM communication is legitimate.  For the purpose of this demonstration, we won’t cover these topics in detail.  For the Kubernetes example application, all TM communication will take place on the private Kubernetes network and will use a non-SSL protocol. For details on configuring security for cross-domain transactions, refer to the Configuring Secure Inter-Domain and Intra-Domain Transaction Communication chapter of the Fusion Middleware Developing JTA Applications for Oracle WebLogic Server documentation. WebLogic Server on Kubernetes In an effort to improve WebLogic Server integration with Kubernetes, Oracle has released the open source WebLogic Kubernetes Operator.   The WebLogic Kubernetes Operator supports the creation and management of WebLogic Server domains, integration with various load balancers, and additional capabilities.  For details refer to the GitHub project page, https://github.com/oracle/weblogic-kubernetes-operator, and the related blogs at https://blogs.oracle.com/weblogicserver/how-to-weblogic-server-on-kubernetes. 
Example Transactional Application Walkthrough To illustrate running distributed transactions on Kubernetes, we’ll step through a simplified transactional application that is deployed to multiple WebLogic Server domains running in a single Kubernetes cluster.  The environment that I used for this example is a Mac running Docker Edge v18.05.0-ce that includes Kubernetes v1.9.6. After installing and starting Docker Edge, open the Preferences page, increase the memory available to Docker under the Advanced tab (~8 GiB) and enable Kubernetes under the Kubernetes tab.  After applying the changes, Docker and Kubernetes will be started.  If you are behind a firewall, you may also need to add the appropriate settings under the Proxies tab.  Once running, you should be able to list the Kubernetes version information. $ kubectl version Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-21T15:21:50Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"} Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-21T15:13:31Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"} To keep the example file system path names short, the working directory for input files, operator sources and binaries, persistent volumes, and such, are created under $HOME/k8sop. You can reference the directory using the environment variable $K8SOP. $ export K8SOP=$HOME/k8sop $ mkdir $K8SOP Install the WebLogic Kubernetes Operator The next step will be to build and install the weblogic-kubernetes-operator image.  Refer to the installation procedures at https://github.com/oracle/weblogic-kubernetes-operator/blob/master/site/installation.md.  Note that for this example, the weblogic-kubernetes-operator GitHub project will be cloned under the $K8SOP/src directory ($K8SOP/src/weblogic-kubernetes-operator).  Also note that when building the Docker image, use the tag “local” in place of “some-tag” that’s specified in the installation docs. $ mkdir $K8SOP/src $ cd $K8SOP/src $ git clone https://github.com/oracle/weblogic-kubernetes-operator.git $ cd weblogic-kubernetes-operator $ mvn clean install $ docker login $ docker build -t weblogic-kubernetes-operator:local --no-cache=true . After building the operator image, you should see it in the local registry. $ docker images weblogic-kubernetes-operator REPOSITORY                     TAG                 IMAGE ID            CREATED             SIZE weblogic-kubernetes-operator   local               42a5f70c7287        10 seconds ago      317MB The next step will be to deploy the operator to the Kubernetes cluster.  For this example, we will modify the create-weblogic-operator-inputs.yaml file to add an additional target namespace (weblogic) and specify the correct operator image name. Attribute Value targetNamespaces default,weblogic weblogicOperatorImage weblogic-kubernetes-operator:local javaLoggingLevel WARNING   Save the modified input file under $K8SOP/create-weblogic-operator-inputs.yaml. Then run the create-weblogic-operator.sh script, specifying the path to the modified create-weblogic-operator.yaml input file and the path of the operator output directory. 
$ cd $K8SOP $ mkdir weblogic-kubernetes-operator $ $K8SOP/src/weblogic-kubernetes-operator/kubernetes/create-weblogic-operator.sh -i $K8SOP/create-weblogic-operator-inputs.yaml -o $K8SOP/weblogic-kubernetes-operator When the script completes you will be able to see the operator pod running. $ kubectl get po -n weblogic-operator NAME                                 READY     STATUS    RESTARTS   AGE weblogic-operator-6dbf8bf9c9-prhwd   1/1       Running   0          44s WebLogic Domain Creation The procedures for creating a WebLogic Server domain are documented at https://github.com/oracle/weblogic-kubernetes-operator/blob/master/site/creating-domain.md.  Follow the instructions for pulling the WebLogic Server image from the Docker store into the local registry.  You’ll be able to pull the image after accepting the license agreement on the Docker store. $ docker login $ docker pull store/oracle/weblogic:12.2.1.3 Next, we’ll create a Kubernetes secret to hold the administrative credentials for our domain (weblogic/weblogic1). $ kubectl -n weblogic create secret generic domain1-weblogic-credentials --from-literal=username=weblogic --from-literal=password=weblogic1 The persistent volume location for the domain will be under $K8SOP/volumes/domain1. $ mkdir -m 777 -p $K8SOP/volumes/domain1 Then we’ll customize the $K8SOP/src/weblogic-kubernetes-operator/kubernetes/create-weblogic-domain-inputs.yaml example input file, modifying the following attributes: Attribute Value weblogicDomainStoragePath {full path of $HOME}/k8sop/volumes/domain1 domainName domain1 domainUID domain1 t3PublicAddress {your-local-hostname} exposeAdminT3Channel true exposeAdminNodePort true namespace weblogic   After saving the updated input file to $K8SOP/create-domain1.yaml, invoke the create-weblogic-domain.sh script as follows. $ $K8SOP/src/weblogic-kubernetes-operator/kubernetes/create-weblogic-domain.sh -i $K8SOP/create-domain1.yaml -o $K8SOP/weblogic-kubernetes-operator After the create-weblogic-domain.sh script completes, Kubernetes will start up the Administration Server and the clustered Managed Server instances.  After a while, you can see the running pods. $ kubectl get po -n weblogic NAME                                        READY     STATUS    RESTARTS   AGE domain1-admin-server                        1/1       Running   0          5m domain1-cluster-1-traefik-9985d9594-gw2jr   1/1       Running   0          5m domain1-managed-server1                     1/1       Running   0          3m domain1-managed-server2                     1/1       Running   0          3m Now we will access the running Administration Server using the WebLogic Server Administration Console to check the state of the domain using the URL http://localhost:30701/console with the credentials weblogic/weblogic1.  The following screen shot shows the Servers page. The Administration Console Servers page shows all of the servers in domain1.  Note that each server has a listen address that corresponds to a Kubernetes service name that is defined for the specific server instance.  The service name is derived from the domainUID (domain1) and the server name. These address names are resolvable within the Kubernetes namespace and, along with the listen port, are used to define each server’s default network channel.  As mentioned previously, the default network channel URLs are propagated with the transaction context and are used internally by the TM for distributed transaction coordination. 
Example Application Now that we have a WebLogic Server domain running under Kubernetes, we will look at an example application that can be used to verify distributed transaction processing.  To make the example as simple as possible, it will be limited in scope to transaction propagation between servers and synchronization callback processing.  This will allow us to verify inter-server transaction communication without the need for resource manager configuration and the added complexity of writing JDBC or JMS client code. The application consists of two main components: a servlet front end and an RMI remote object.  The servlet processes a GET request that contains a list of URLs.  It starts a global transaction and then invokes the remote object at each of the URLs.  The remote object simply registers a synchronization callback that prints a message to stdout in the beforeCompletion and afterCompletion callback methods.  Finally, the servlet commits the transaction and sends a response containing information about each of the RMI calls and the outcome of the global transaction. The following diagram illustrates running the example application on the domain1 servers in the Kubernetes cluster.  The servlet is invoked using the Administration Server’s external port.  The servlet starts the transaction, registers a local synchronization object, and invokes the register operation on the Managed Servers using their Kubernetes internal URLs:  t3://domain1-managed-server1:8001 and t3://domain1-managed-server2:8001. TxPropagate Servlet As mentioned above, the servlet starts a transaction and then invokes the RemoteSync.register() remote method on each of the server URLs specified.  Then the transaction is committed and the results are returned to the caller. 
package example;   import java.io.IOException; import java.io.PrintWriter;   import javax.naming.Context; import javax.naming.InitialContext; import javax.naming.NamingException; import javax.servlet.ServletException; import javax.servlet.annotation.WebServlet; import javax.servlet.http.HttpServlet; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; import javax.transaction.HeuristicMixedException; import javax.transaction.HeuristicRollbackException; import javax.transaction.NotSupportedException; import javax.transaction.RollbackException; import javax.transaction.SystemException;   import weblogic.transaction.Transaction; import weblogic.transaction.TransactionHelper; import weblogic.transaction.TransactionManager;   @WebServlet("/TxPropagate") public class TxPropagate extends HttpServlet {   private static final long serialVersionUID = 7100799641719523029L;   private TransactionManager tm = (TransactionManager)       TransactionHelper.getTransactionHelper().getTransactionManager();     protected void doGet(HttpServletRequest request,       HttpServletResponse response) throws ServletException, IOException {     PrintWriter out = response.getWriter();       String urlsParam = request.getParameter("urls");     if (urlsParam == null) return;     String[] urls = urlsParam.split(",");       try {       RemoteSync forward = (RemoteSync)           new InitialContext().lookup(RemoteSync.JNDINAME);       tm.begin();       Transaction tx = (Transaction) tm.getTransaction();       out.println("<pre>");       out.println(Utils.getLocalServerID() + " started " +           tx.getXid().toString());       out.println(forward.register());       for (int i = 0; i < urls.length; i++) {         out.println(Utils.getLocalServerID() + " " + tx.getXid().toString() +             " registering Synchronization on " + urls[i]);         Context ctx = Utils.getContext(urls[i]);         forward = (RemoteSync) ctx.lookup(RemoteSync.JNDINAME);         out.println(forward.register());       }       tm.commit();       out.println(Utils.getLocalServerID() + " committed " + tx);     } catch (NamingException | NotSupportedException | SystemException |         SecurityException | IllegalStateException | RollbackException |         HeuristicMixedException | HeuristicRollbackException e) {       throw new ServletException(e);     }   } Remote Object The RemoteSync remote object contains a single method, register, that registers a javax.transaction.Synchronization callback with the propagated transaction context. RemoteSync Interface The following is the example.RemoteSync remote interface definition. package example;   import java.rmi.Remote; import java.rmi.RemoteException;   public interface RemoteSync extends Remote {   public static final String JNDINAME = "propagate.RemoteSync";   String register() throws RemoteException; } RemoteSyncImpl Implementation The example.RemoteSyncImpl class implements the example.RemoteSync remote interface and contains an inner synchronization implementation class named SynchronizationImpl.  The beforeCompletion and afterCompletion methods simply write a message to stdout containing the server ID (domain name and server name) and the Xid string representation of the propagated transaction. The static main method instantiates a RemoteSyncImpl object and binds it into the server’s local JNDI context.  The main method is invoked when the application is deployed using the ApplicationLifecycleListener, as described below. 
package example;   import java.rmi.RemoteException;   import javax.naming.Context; import javax.transaction.RollbackException; import javax.transaction.Synchronization; import javax.transaction.SystemException;   import weblogic.jndi.Environment; import weblogic.transaction.Transaction; import weblogic.transaction.TransactionHelper;   public class RemoteSyncImpl implements RemoteSync {     public String register() throws RemoteException {     Transaction tx = (Transaction)         TransactionHelper.getTransactionHelper().getTransaction();     if (tx == null) return Utils.getLocalServerID() +         " no transaction, Synchronization not registered";     try {       Synchronization sync = new SynchronizationImpl(tx);       tx.registerSynchronization(sync);       return Utils.getLocalServerID() + " " + tx.getXid().toString() +           " registered " + sync;     } catch (IllegalStateException | RollbackException |         SystemException e) {       throw new RemoteException(           "error registering Synchronization callback with " +       tx.getXid().toString(), e);     }   }     class SynchronizationImpl implements Synchronization {     Transaction tx;         SynchronizationImpl(Transaction tx) {       this.tx = tx;     }         public void afterCompletion(int arg0) {       System.out.println(Utils.getLocalServerID() + " " +           tx.getXid().toString() + " afterCompletion()");     }       public void beforeCompletion() {       System.out.println(Utils.getLocalServerID() + " " +           tx.getXid().toString() + " beforeCompletion()");     }   }     // create and bind remote object in local JNDI   public static void main(String[] args) throws Exception {     RemoteSyncImpl remoteSync = new RemoteSyncImpl();     Environment env = new Environment();     env.setCreateIntermediateContexts(true);     env.setReplicateBindings(false);     Context ctx = env.getInitialContext();     ctx.rebind(JNDINAME, remoteSync);     System.out.println("bound " + remoteSync);   } } Utility Methods The Utils class contains a couple of static methods, one to get the local server ID and another to perform an initial context lookup given a URL.  The initial context lookup is invoked under the anonymous user.  These methods are used by both the servlet and the remote object. package example;   import java.util.Hashtable;   import javax.naming.Context; import javax.naming.InitialContext; import javax.naming.NamingException;   public class Utils {     public static Context getContext(String url) throws NamingException {     Hashtable env = new Hashtable();     env.put(Context.INITIAL_CONTEXT_FACTORY,         "weblogic.jndi.WLInitialContextFactory");     env.put(Context.PROVIDER_URL, url);     return new InitialContext(env);   }     public static String getLocalServerID() {     return "[" + getDomainName() + "+"         + System.getProperty("weblogic.Name") + "]";   }     private static String getDomainName() {     String domainName = System.getProperty("weblogic.Domain");     if (domainName == null) domainName = System.getenv("DOMAIN_NAME");     return domainName;   } } ApplicationLifecycleListener When the application is deployed to a WebLogic Server instance, the lifecycle listener preStart method is invoked to initialize and bind the RemoteSync remote object. 
package example;   import weblogic.application.ApplicationException; import weblogic.application.ApplicationLifecycleEvent; import weblogic.application.ApplicationLifecycleListener;   public class LifecycleListenerImpl extends ApplicationLifecycleListener {     public void preStart (ApplicationLifecycleEvent evt)       throws ApplicationException {     super.preStart(evt);     try {       RemoteSyncImpl.main(null);     } catch (Exception e) {       throw new ApplicationException(e);     }   } } Application Deployment Descriptor The application archive contains the following weblogic-application.xml deployment descriptor to register the ApplicationLifecycleListener object. <?xml version = '1.0' ?> <weblogic-application xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.bea.com/ns/weblogic/weblogic-application http://www.bea.com/ns/weblogic/weblogic-application/1.0/weblogic-application.xsd" xmlns="http://www.bea.com/ns/weblogic/weblogic-application">    <listener>     <listener-class>example.LifecycleListenerImpl</listener-class>     <listener-uri>lib/remotesync.jar</listener-uri>   </listener> </weblogic-application> Deploying the Application The example application can be deployed using a number of supported deployment mechanisms (refer to https://blogs.oracle.com/weblogicserver/best-practices-for-application-deployment-on-weblogic-server-running-on-kubernetes-v2).  For this example, we’ll deploy the application using the WebLogic Server Administration Console. Assume that the application is packaged in an application archive named txpropagate.ear.  First, we’ll copy txpropagate.ear to the applications directory under the domain1 persistent volume location ($K8SOP/volumes/domain1/applications).  Then we can deploy the application from the Administration Console’s Deployment page. Note that the path of the EAR file is /shared/applications/txpropagate.ear within the Administration Server’s container, where /shared is mapped to the persistent volume that we created at $K8SOP/volumes/domain1. Deploy the EAR as an application and then target it to the Administration Server and the cluster. On the next page, click Finish to deploy the application.  After the application is deployed, you’ll see its entry in the Deployments table. Running the Application Now that we have the application deployed to the servers in domain1, we can run a distributed transaction test.  The following CURL operation invokes the servlet using the load balancer port 30305 for the clustered Managed Servers and specifies the URL of managed-server1. 
$ curl http://localhost:30305/TxPropagate/TxPropagate?urls=t3://domain1-managed-server1:8001 <pre> [domain1+managed-server2] started BEA1-0001DE85D4EE [domain1+managed-server2] BEA1-0001DE85D4EEC47AE630 registered example.RemoteSyncImpl$SynchronizationImpl@562a85bd [domain1+managed-server2] BEA1-0001DE85D4EEC47AE630 registering Synchronization on t3://domain1-managed-server1:8001 [domain1+managed-server1] BEA1-0001DE85D4EEC47AE630 registered example.RemoteSyncImpl$SynchronizationImpl@585ff41b [domain1+managed-server2] committed Xid=BEA1-0001DE85D4EEC47AE630(844351585),Status=Committed,numRepliesOwedMe=0,numRepliesOwedOthers=0,seconds since begin=0,seconds left=120,useSecure=false,SCInfo[domain1+managed-server2]=(state=committed),SCInfo[domain1+managed-server1]=(state=committed),properties=({ackCommitSCs={managed-server1+domain1-managed-server1:8001+domain1+t3+=true}, weblogic.transaction.partitionName=DOMAIN}),OwnerTransactionManager=ServerTM[ServerCoordinatorDescriptor=(CoordinatorURL=managed-server2+domain1-managed-server2:8001+domain1+t3+ CoordinatorNonSecureURL=managed-server2+domain1-managed-server2:8001+domain1+t3+ coordinatorSecureURL=null, XAResources={WSATGatewayRM_managed-server2_domain1},NonXAResources={})],CoordinatorURL=managed-server2+domain1-managed-server2:8001+domain1+t3+) The following diagram shows the application flow. Looking at the output, we see that the servlet request was dispatched on managed-server2 where it started the transaction BEA1-0001DE85D4EE.   [domain1+managed-server2] started BEA1-0001DE85D4EE The local RemoteSync.register() method was invoked which registered the callback object SynchronizationImpl@562a85bd. [domain1+managed-server2] BEA1-0001DE85D4EEC47AE630 registered example.RemoteSyncImpl$SynchronizationImpl@562a85bd The servlet then invoked the register method on the RemoteSync object on managed-server1, which registered the synchronization object SynchronizationImpl@585ff41b. [domain1+managed-server2] BEA1-0001DE85D4EEC47AE630 registering Synchronization on t3://domain1-managed-server1:8001 [domain1+managed-server1] BEA1-0001DE85D4EEC47AE630 registered example.RemoteSyncImpl$SynchronizationImpl@585ff41b Finally, the servlet committed the transaction and returned the transaction’s string representation (typically used for TM debug logging). [domain1+managed-server2] committed Xid=BEA1-0001DE85D4EEC47AE630(844351585),Status=Committed,numRepliesOwedMe=0,numRepliesOwedOthers=0,seconds since begin=0,seconds left=120,useSecure=false,SCInfo[domain1+managed-server2]=(state=committed),SCInfo[domain1+managed-server1]=(state=committed),properties=({ackCommitSCs={managed-server1+domain1-managed-server1:8001+domain1+t3+=true}, weblogic.transaction.partitionName=DOMAIN}),OwnerTransactionManager=ServerTM[ServerCoordinatorDescriptor=(CoordinatorURL=managed-server2+domain1-managed-server2:8001+domain1+t3+ CoordinatorNonSecureURL=managed-server2+domain1-managed-server2:8001+domain1+t3+ coordinatorSecureURL=null, XAResources={WSATGatewayRM_managed-server2_domain1},NonXAResources={})],CoordinatorURL=managed-server2+domain1-managed-server2:8001+domain1+t3+) The output shows that the transaction was committed, that it has two server participants (managed-server1 and managed-server2) and that the coordinating server (managed-server2) is accessible using t3://domain1-managed-server2:8001. We can also verify that the registered synchronization callbacks were invoked by looking at the output of admin-server and managed-server1.  
The .out files for the servers can be found under the persistent volume of the domain.

$ cd $K8SOP/volumes/domain1/domain/domain1/servers
$ find . -name '*.out' -exec grep -H BEA1-0001DE85D4EE {} ';'
./managed-server1/logs/managed-server1.out:[domain1+managed-server1] BEA1-0001DE85D4EEC47AE630 beforeCompletion()
./managed-server1/logs/managed-server1.out:[domain1+managed-server1] BEA1-0001DE85D4EEC47AE630 afterCompletion()
./managed-server2/logs/managed-server2.out:[domain1+managed-server2] BEA1-0001DE85D4EEC47AE630 beforeCompletion()
./managed-server2/logs/managed-server2.out:[domain1+managed-server2] BEA1-0001DE85D4EEC47AE630 afterCompletion()

To summarize, we were able to process distributed transactions within a WebLogic Server domain running in a Kubernetes cluster without having to make any changes.  The WebLogic Kubernetes Operator domain creation process provided all of the Kubernetes networking and WebLogic Server configuration necessary to make it possible.  The following command lists the Kubernetes services defined in the weblogic namespace.

$ kubectl get svc -n weblogic
NAME                                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)           AGE
domain1-admin-server                        NodePort    10.102.156.32    <none>        7001:30701/TCP    11m
domain1-admin-server-extchannel-t3channel   NodePort    10.99.21.154     <none>        30012:30012/TCP   9m
domain1-cluster-1-traefik                   NodePort    10.100.211.213   <none>        80:30305/TCP      11m
domain1-cluster-1-traefik-dashboard         NodePort    10.108.229.66    <none>        8080:30315/TCP    11m
domain1-cluster-cluster-1                   ClusterIP   10.106.58.103    <none>        8001/TCP          9m
domain1-managed-server1                     ClusterIP   10.108.85.130    <none>        8001/TCP          9m
domain1-managed-server2                     ClusterIP   10.108.130.92    <none>        8001/TCP

We were able to access the servlet through the Traefik NodePort service using port 30305 on localhost.  From inside the Kubernetes cluster, the servlet is able to access other WebLogic Server instances using their service names and ports.  Because each server’s listen address is set to its corresponding Kubernetes service name, the addresses are resolvable from within the Kubernetes namespace even if a server’s pod is restarted and assigned a different IP address.

Cross-Domain Transactions

Now we’ll look at extending the example to run across two WebLogic Server domains.  As mentioned in the TM overview section, cross-domain transactions can require additional configuration to properly secure TM communication.  However, for our example, we will keep the configuration as simple as possible.  We’ll continue to use a non-secure protocol (t3), and the anonymous user, for both application and internal TM communication. First, we’ll need to create a new domain (domain2) in the same Kubernetes namespace as domain1 (weblogic).  Before generating domain2, we need to create a secret for the domain2 credentials (domain2-weblogic-credentials) in the weblogic namespace and a directory for the persistent volume ($K8SOP/volumes/domain2). Next, modify the create-domain1.yaml file, changing the following attribute values, and save the changes to a new file named create-domain2.yaml.
Attribute | Value
domainName | domain2
domainUID | domain2
weblogicDomainStoragePath | {full path of $HOME}/k8sop/volumes/domain2
weblogicCredentialsSecretName | domain2-weblogic-credentials
t3ChannelPort | 32012
adminNodePort | 32701
loadBalancerWebPort | 32305
loadBalancerDashboardPort | 32315

Now we’re ready to invoke the create-weblogic-domain.sh script with the create-domain2.yaml input file.

$ $K8SOP/src/weblogic-kubernetes-operator/kubernetes/create-weblogic-domain.sh -i $K8SOP/create-domain2.yaml -o $K8SOP/weblogic-kubernetes-operator

After the create script completes successfully, the servers in domain2 will start and, using the readiness probe, report that they have reached the RUNNING state.

$ kubectl get po -n weblogic
NAME                                         READY     STATUS    RESTARTS   AGE
domain1-admin-server                         1/1       Running   0          27m
domain1-cluster-1-traefik-9985d9594-gw2jr    1/1       Running   0          27m
domain1-managed-server1                      1/1       Running   0          25m
domain1-managed-server2                      1/1       Running   0          25m
domain2-admin-server                         1/1       Running   0          5m
domain2-cluster-1-traefik-5c49f54689-9fzzr   1/1       Running   0          5m
domain2-managed-server1                      1/1       Running   0          3m
domain2-managed-server2                      1/1       Running   0          3m

After deploying the application to the servers in domain2, we can invoke the application and include the URLs for the domain2 Managed Servers.

$ curl http://localhost:30305/TxPropagate/TxPropagate?urls=t3://domain2-managed-server1:8001,t3://domain2-managed-server2:8001
[domain1+managed-server1] started BEA1-0001144553CC
[domain1+managed-server1] BEA1-0001144553CC5D73B78A registered example.RemoteSyncImpl$SynchronizationImpl@2e13aa23
[domain1+managed-server1] BEA1-0001144553CC5D73B78A registering Synchronization on t3://domain2-managed-server1:8001
[domain2+managed-server1] BEA1-0001144553CC5D73B78A registered example.RemoteSyncImpl$SynchronizationImpl@68d4c2d6
[domain1+managed-server1] BEA1-0001144553CC5D73B78A registering Synchronization on t3://domain2-managed-server2:8001
[domain2+managed-server2] BEA1-0001144553CC5D73B78A registered example.RemoteSyncImpl$SynchronizationImpl@1ae87d94
[domain1+managed-server1] committed Xid=BEA1-0001144553CC5D73B78A(1749245151),Status=Committed,numRepliesOwedMe=0,numRepliesOwedOthers=0,seconds since begin=0,seconds left=120,useSecure=false,SCInfo[domain1+managed-server1]=(state=committed),SCInfo[domain2+managed-server1]=(state=committed),SCInfo[domain2+managed-server2]=(state=committed),properties=({ackCommitSCs={managed-server2+domain2-managed-server2:8001+domain2+t3+=true, managed-server1+domain2-managed-server1:8001+domain2+t3+=true}, weblogic.transaction.partitionName=DOMAIN}),OwnerTransactionManager=ServerTM[ServerCoordinatorDescriptor=(CoordinatorURL=managed-server1+domain1-managed-server1:8001+domain1+t3+ CoordinatorNonSecureURL=managed-server1+domain1-managed-server1:8001+domain1+t3+ coordinatorSecureURL=null, XAResources={WSATGatewayRM_managed-server1_domain1},NonXAResources={})],CoordinatorURL=managed-server1+domain1-managed-server1:8001+domain1+t3+)

The application flow is shown in the following diagram.

In this example, the transaction includes server participants from both domain1 and domain2, and we can verify that the synchronization callbacks were processed on all participating servers.

$ cd $K8SOP/volumes
$ find . -name '*.out' -exec grep -H BEA1-0001144553CC {} ';'
./domain1/domain/domain1/servers/managed-server1/logs/managed-server1.out:[domain1+managed-server1] BEA1-0001144553CC5D73B78A beforeCompletion()
./domain1/domain/domain1/servers/managed-server1/logs/managed-server1.out:[domain1+managed-server1] BEA1-0001144553CC5D73B78A afterCompletion()
./domain2/domain/domain2/servers/managed-server1/logs/managed-server1.out:[domain2+managed-server1] BEA1-0001144553CC5D73B78A beforeCompletion()
./domain2/domain/domain2/servers/managed-server1/logs/managed-server1.out:[domain2+managed-server1] BEA1-0001144553CC5D73B78A afterCompletion()
./domain2/domain/domain2/servers/managed-server2/logs/managed-server2.out:[domain2+managed-server2] BEA1-0001144553CC5D73B78A beforeCompletion()
./domain2/domain/domain2/servers/managed-server2/logs/managed-server2.out:[domain2+managed-server2] BEA1-0001144553CC5D73B78A afterCompletion()

Summary

In this article we reviewed, at a high level, how the WebLogic Server Transaction Manager processes global transactions and discussed some of the basic configuration requirements.  We then looked at an example application to illustrate how cross-domain transactions are processed in a Kubernetes cluster.  In future articles we’ll look at more complex transactional use cases, such as multi-node transactions, transactions that span Kubernetes clusters, and failover.
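As a final sanity check related to the service-name-based addressing these transactions rely on, name resolution and connectivity can be probed from inside one of the participant pods. This is a sketch only; it assumes the pod and service names used in the examples above and that the WebLogic image provides curl:

# Resolve and probe a domain2 Managed Server from a domain1 pod (illustrative only).
kubectl -n weblogic exec domain1-managed-server1 -- getent hosts domain2-managed-server1
kubectl -n weblogic exec domain1-managed-server1 -- \
  curl -s -o /dev/null -w "%{http_code}\n" http://domain2-managed-server1:8001/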


The WebLogic Server

Announcing WebLogic Server Certification on Oracle Cloud Infrastructure Container Engine for Kubernetes

On May 7th we announced the General Availability (GA) version of the WebLogic Server Kubernetes Operator, including certification of WebLogic Server and Operator configurations running on the Oracle Cloud Infrastructure (OCI). In that initial announcement, WebLogic Server and Operator OCI certification was provided on Kubernetes clusters created on OCI using the Terraform Kubernetes Installer. For more details, please refer to the announcement blog Announcing General Availability version of the WebLogic Server Kubernetes Operator. Today we are announcing the additional certification of WebLogic Server and Operator configurations on the Oracle Container Engine for Kubernetes running on OCI; for background, see the blog Kubernetes: A Cloud (and Data Center) Operating System?. The Oracle Container Engine for Kubernetes is a fully managed, scalable, and highly available service that you can use to deploy your containerized applications to the cloud. In the blog How to run WebLogic clusters on the Oracle Cloud Infrastructure Container Engine for Kubernetes, we describe the steps to run a WebLogic domain/cluster managed by the WebLogic Kubernetes Operator running on OCI Container Engine for Kubernetes, with WebLogic and Operator images stored in the OCI Registry. Very soon, we hope to provide an easy way to migrate existing WebLogic Server domains to Kubernetes using the WebLogic Deploy Tooling, add CI/CD of WebLogic deployments on Kubernetes with Oracle Container Pipelines, and add new features and enhancements over time. The WebLogic Server and Operator capabilities described are supported on standard Kubernetes infrastructure, with full compatibility between OCI and other private and public cloud platforms that use Kubernetes. The Operator, Prometheus Exporter, and WebLogic Deploy Tooling are all being developed in open source. We are open to your feedback – thanks! Safe Harbor Statement The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle.


The WebLogic Server

How to run WebLogic clusters on the Oracle Cloud Infrastructure Container Engine for Kubernetes

There are various options for setting up a Kubernetes environment in order to run WebLogic clusters. Oracle supports customers who want to run WebLogic clusters in production or development mode and on Kubernetes clusters on-premises or in the cloud. In this blog, we describe the steps to run a WebLogic cluster using the Oracle Cloud Infrastructure (OCI) Container Engine for Kubernetes. The Kubernetes managed service is fully integrated with the underlying Oracle Cloud Infrastructure (OCI), making it easy to provision a Kubernetes cluster and to provide the required services, such as a load balancer, volumes, and network fabric. Prerequisites: Docker images: WebLogic Server (weblogic-12.2.1.3:latest). WebLogic Kubernetes Operator (weblogic-operator:latest) Traefik Load Balancer (traefik:1.4.5) A workstation with Docker and kubectl, installed and configured. The Oracle Container Engine for Kubernetes on OCI. To setup a Kubernetes managed service on OCI, follow the documentation Overview of Container Engine for Kubernetes. OCI Container Engine for Kubernetes nodes are accessible using ssh. The Oracle Cloud Infrastructure Registry to push the WebLogic Server, Operator, and Load Balancer images. Prepare the WebLogic Kubernetes Operator environment To prepare the environment, we need to: ·Test accessibility and set up the RBAC policy for the OCI Container Engine for the Kubernetes cluster Set up the NFS server Upload the Docker images to the OCI Registry (OCIR) Modify the configuration YAML files to reflect the Docker images’ names in the OCIR Test accessibility and set up the RBAC policy for the OKE cluster To check the accessibility to the OCI Container Engine for Kubernetes nodes, enter the command: kubectl get nodes The output of the command will display the nodes, similar to the following: NAME              STATUS    ROLES     AGE       VERSION 129.146.109.106   Ready     node      5h        v1.9.4 129.146.22.123    Ready     node      5h        v1.9.4 129.146.66.11     Ready     node      5h        v1.9.4 In order to have permission to access the Kubernetes cluster, you need to authorize your OCI account as a cluster-admin on the OCI Container Engine for Kubernetes cluster.  This will require your OCID, which is available on the OCI console page, under your user settings. For example, if your user OCID is ocid1.user.oc1..aaaaaaaac26kw7qvuij7i6fadabklqfb7svyuhpitedmguspv6ht67i5l32q, the command would be: kubectl create clusterrolebinding my-cluster-admin-binding --clusterrole=cluster-admin --user=ocid1.user.oc1..aaaaaaaac26kw7qvuij7i6fadabklqfb7svyuhpitedmguspv6ht67i5l32q Set up the NFS server In the current GA version, the OCI Container Engine for Kubernetes supports network block storage that can be shared across nodes with access permission RWOnce (meaning that only one can write, others can read only). At this time, the WebLogic on Kubernetes domain created by the WebLogic Server Kubernetes Operator, requires a shared file system to store the WebLogic domain configuration, which MUST be accessible from all the pods across the nodes. As a workaround, you need to install an NFS server on one node and share the file system across all the nodes. Note: Currently, we recommend that you use NFS version 3.0 for running WebLogic Server on OCI Container Engine for Kubernetes. During certification, we found that when using NFS 4.0, the servers in the WebLogic domain went into a failed state intermittently. 
Because multiple threads use NFS (default store, diagnostics store, Node Manager, logging, and domain_home), there are issues when accessing the file store. These issues are removed by changing the NFS to version 3.0. In this demo, the Kubernetes cluster is using nodes with these IP addresses:

Node1: 129.146.109.106
Node2: 129.146.22.123
Node3: 129.146.66.11

In the above case, let’s install the NFS server on Node1 with the IP address 129.146.109.106, and use Node2 (IP: 129.146.22.123) and Node3 (IP: 129.146.66.11) as clients. Log in to each of the nodes using ssh to retrieve the private IP address, by executing the command:

ssh -i ~/.ssh/id_rsa opc@[Public IP of Node] ip addr | grep ens3

~/.ssh/id_rsa is the path to the private ssh RSA key. For example, for Node1:

ssh -i ~/.ssh/id_rsa opc@129.146.109.106 ip addr | grep ens3

Retrieve the inet value for each node. For this demo, here is the collected information:

Node | Public IP | Private IP
Node1 (NFS Server) | 129.146.109.106 | 10.0.11.3
Node2 | 129.146.22.123 | 10.0.11.1
Node3 | 129.146.66.11 | 10.0.11.2

Log in using ssh to Node1, and install and set up NFS for Node1 (NFS Server):

sudo su -
yum install -y nfs-utils
mkdir /scratch
chown -R opc:opc /scratch

Edit the /etc/exports file to add the internal IP addresses of Node2 and Node3:

vi /etc/exports
/scratch 10.0.11.1(rw)
/scratch 10.0.11.2(rw)

systemctl restart nfs
exit

Log in using ssh to Node2:

ssh -i ~/.ssh/id_rsa opc@129.146.22.123
sudo su -
yum install -y nfs-utils
mkdir /scratch

Edit the /etc/fstab file to add the internal IP address of Node1:

vi /etc/fstab
10.0.11.3:/scratch /scratch nfs nfsvers=3 0 0

mount /scratch
exit

Repeat the same steps for Node3:

ssh -i ~/.ssh/id_rsa opc@129.146.66.11
sudo su -
yum install -y nfs-utils
mkdir /scratch

Edit the /etc/fstab file to add the internal IP address of Node1:

vi /etc/fstab
10.0.11.3:/scratch /scratch nfs nfsvers=3 0 0

mount /scratch
exit

Upload the Docker images to the OCI Registry

Build the required Docker images for WebLogic Server 12.2.1.3 and the WebLogic Kubernetes Operator. Pull the Traefik Docker image from the Docker Hub repository, for example:

docker login
docker pull traefik:1.4.5

Tag the Docker images, as follows:

docker tag [Name Of Your Image For Operator] phx.ocir.io/weblogicondocker/weblogic-kubernetes-operator:latest
docker tag [Name Of Your Image For WebLogic Domain] phx.ocir.io/weblogicondocker/weblogic:12.2.1.3
docker tag traefik:1.4.5 phx.ocir.io/weblogicondocker/traefik:1.4.5

Generate an authentication token to log in to the phx.ocir.io OCIR Docker repository: log in to your OCI dashboard, click ‘User Settings’, then ‘Auth Tokens’ on the left-side menu, and save the generated password in a secured place. Log in to the OCIR Docker registry by entering this command:

docker login phx.ocir.io

When prompted for your username, enter your OCI tenancy name/oci username. For example:

docker login phx.ocir.io
Username: weblogicondocker/myusername
Password:
Login Succeeded

Create a Docker registry secret.
The secret name must consist of lower case alphanumeric characters: kubectl create secret docker-registry <secret_name> --docker-server=<region>.ocir.io --docker-username=<oci_tenancyname>/<oci_username> --docker-password=<auth_token> --docker-email=example_email For example, for the PHX registry create docker secret ocisecret: kubectl create secret docker-registry ocisecret --docker-server=phx.ocir.io --docker-username=weblogicondocker/myusername --docker-password= _b5HiYcRzscbC48e1AZa --docker-email=myusername@oracle.com Push Docker images into OCIR:   docker push phx.ocir.io/weblogicondocker/weblogic-kubernetes-operator:latest docker push phx.ocir.io/weblogicondocker/weblogic:12.2.1.3 docker push phx.ocir.io/weblogicondocker/traefik:1.4.5   Log in to the OCI console and verify the image: Log in to the OCI console. Verify that you are using the correct region, for example, us-phoenix-1. Under Containers, select Registry. The image should be visible on the Registry page. Click on image name, select ‘Actions’ to make it ‘Public’ Modify the configuration YAML files to reflect the Docker image names in the OCIR Our final steps are to customize the parameters in the input files and generate deployment YAML files for the WebLogic cluster, WebLogic Operator, and to use the Traefik load balancer to reflect the image changes and local configuration. We will use the provided open source scripts:  create-weblogic-operator.sh and create-weblogic-domain.sh. Use Git to download the WebLogic Kubernetes Operator project: git clone https://github.com/oracle/weblogic-kubernetes-operator.git Modify the YAML inputs to reflect the image names: cd $SRC/weblogic-kubernetes-operator/kubernetes Change the ‘image’ field to the corresponding Docker repository image name in the OCIR: ./internal/create-weblogic-domain-job-template.yaml:   image: phx.ocir.io/weblogicondocker/weblogic:12.2.1.3 ./internal/weblogic-domain-traefik-template.yaml:      image: phx.ocir.io/weblogicondocker/traefik:1.4.5 ./internal/domain-custom-resource-template.yaml:       image: phx.ocir.io/weblogicondocker/weblogic:12.2.1.3 ./create-weblogic-operator-inputs.yaml:         weblogicOperatorImage: phx.ocir.io/weblogicondocker/weblogic-kubernetes-operator:latest Review and customize the other parameters in the create-weblogic-operator-inputs.yaml and create-weblogic-domain-inputs.yaml files. Check all the available options and descriptions in the installation instructions for the Operator and WebLogic Domain. Here is the list of customized values in the create-weblogic-operator-inputs.yaml file for this demo: targetNamespaces: domain1 weblogicOperatorImage: phx.ocir.io/weblogicondocker/weblogic-kubernetes-operator:latest externalRestOption: SELF_SIGNED_CERT externalSans: IP:129.146.109.106 Here is the list of customized values in the create-weblogic-domain-inputs.yaml file for this demo: domainUID: domain1 t3PublicAddress: 0.0.0.0 exposeAdminNodePort: true namespace: domain1 loadBalancer: TRAEFIK exposeAdminT3Channel: true weblogicDomainStoragePath: /scratch/external-domain-home/pv001 Note: Currently, we recommend that you use Traefik and the Apache HTTP Server load balancers for running WebLogic Server on the OCI Container Engine for Kubernetes. At this time we cannot certify the Voyager HAProxy Ingress Controller due to a lack of support in OKE. The WebLogic domain will use the persistent volume mapped to the path, specified by the parameter weblogicDomainStoragePath. 
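Before creating the domain, it can be worth confirming that the NFS share is mounted and writable from the client nodes. A minimal check, using the demo node addresses from the NFS setup above, might look like this:

# Verify that /scratch is NFS-mounted and writable on Node2 and Node3 (illustrative only).
for node in 129.146.22.123 129.146.66.11; do
  ssh -i ~/.ssh/id_rsa opc@$node 'df -hT /scratch && touch /scratch/.nfs-write-test && rm /scratch/.nfs-write-test'
done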
Let’s create the persistent volume directory on the NFS server, Node1, using the command:

ssh -i ~/.ssh/id_rsa opc@129.146.109.106 "mkdir -m 777 -p /scratch/external-domain-home/pv001"

Our demo domain is configured to run in the namespace domain1. To create the namespace domain1, execute this command:

kubectl create namespace domain1

The username and password credentials for access to the Administration Server must be stored in a Kubernetes secret in the same namespace that the domain will run in. The script does not create the secret in order to avoid storing the credentials in a file. Oracle recommends that this command be executed in a secure shell and that the appropriate measures be taken to protect the security of the credentials. To create the secret, issue the following command:

kubectl -n NAMESPACE create secret generic SECRET_NAME --from-literal=username=ADMIN-USERNAME --from-literal=password=ADMIN-PASSWORD

For our demo values:

kubectl -n domain1 create secret generic domain1-weblogic-credentials --from-literal=username=weblogic --from-literal=password=welcome1

Finally, run the create script, pointing it at your inputs file and the output directory:

./create-weblogic-operator.sh -i create-weblogic-operator-inputs.yaml -o /path/to/weblogic-operator-output-directory

It will create and start all the related operator deployments. Run this command to check the operator pod status:

Execute the equivalent create script for the WebLogic domain:

./create-weblogic-domain.sh -i create-weblogic-domain-inputs.yaml -o /path/to/weblogic-domain-output-directory

To check the status of the WebLogic cluster, run this command:

bash-4.2$ kubectl get pods -n domain1

Let’s see how the load balancer works. For that, let’s access the WebLogic Server Administration Console and deploy the testwebapp.war application. In the customized inputs for the WebLogic domain, we have specified to expose the AdminNodePort. To review the port number, run this command:

Let’s use one of the node’s external IP addresses to access the Administration Console. In our demo, it is http://129.146.109.106:30701/console. Log in to the WebLogic Server Administration Console using the credentials weblogic/welcome1. Click ‘Deployments’, ‘Lock & Edit’, and upload the testwebapp.war application. Select cluster-1 as a target and click ‘Finish’, then ‘Release Configuration’. Select the ‘Control’ tab and click ‘Start serving all requests’. The status of the deployment should change to ‘active’. Let’s demonstrate load balancing HTTP requests using Traefik as the Ingress controller on the Kubernetes cluster. To check the NodePort number for the load balancer, run this command:

The Traefik load balancer is running on port 30305. Every time we access the testwebapp application link, http://129.146.22.123:30305/testwebapp/, the application displays the currently used Managed Server’s information. Another load of the same URL displays the information about Managed Server 1. Because the WebLogic cluster is exposed to the external world and accessible using the external IP addresses of the nodes, an authorized WebLogic user can use the T3 protocol to access all the available WebLogic resources by using WLST commands. With a firewall, you have to run T3 using tunneling with a proxy (use T3 over HTTP; turn on tunneling in the WLS Server and then use the "HTTP" protocol instead of "T3"). See this blog for more details. If you are outside of the corporate network, you can use T3 with no limitations.
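To illustrate the T3/WLST access described above, a minimal WLST session might look like the following. This is only a sketch: it assumes the admin T3 channel is exposed on NodePort 30012 (the default t3ChannelPort in the domain inputs file), uses the demo credentials, and requires a local WebLogic Server installation that provides wlst.sh.

cat > /tmp/list-servers.py <<'EOF'
connect('weblogic', 'welcome1', 't3://129.146.109.106:30012')
domainRuntime()
for s in cmo.getServerLifeCycleRuntimes():
    print s.getName(), s.getState()
disconnect()
exit()
EOF
$ORACLE_HOME/oracle_common/common/bin/wlst.sh /tmp/list-servers.py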
Summary In this blog, we demonstrated all the required steps to set up a WebLogic cluster using the OCI Container Engine for Kubernetes that runs on the Oracle Cloud Infrastructure and load balancing for a web application, deployed on the WebLogic cluster. Running WebLogic Server on Kubernetes in OCI Container Engine for Kubernetes enables users to leverage WebLogic Server applications in a managed Kubernetes environment, to integrate WebLogic Server applications with other cloud applications, and to evolve their usage of WebLogic Server and expand their usage of Kubernetes. We are also publishing a series of blog entries that describe in detail, how to run the operator, how to stand up one or more WebLogic domains in Kubernetes, how to scale up or down  a WebLogic cluster manually or automatically using the WebLogic Diagnostics Framework (WLDF) or Prometheus, how the Operator manages load balancing for web applications deployed in WebLogic clusters, and how to provide integration for managing operator logs through Elasticsearch, Logstash, and Kibana.


The WebLogic Server

Announcing General Availability version of the WebLogic Server Kubernetes Operator

We are very pleased to announce the release of our General Availability (GA) version of the WebLogic Server Kubernetes Operator.  The Operator, first released in February as a Technology Preview version, simplifies the creation and management of WebLogic Server 12.2.1.3 domains on Kubernetes.  The GA operator supports additional WebLogic features, and is certified and supported for use in development and production.  Certification includes support for the Operator and WebLogic Server configurations running on the Oracle Cloud Infrastructure (OCI), on Kubernetes clusters created using the Terraform Kubernetes Installer for OCI, and using the Oracle Cloud Infrastructure Registry (OCIR) for storing Operator and WebLogic Server domain images. For additional information about WebLogic on Kubernetes  certification and WebLogic Server Kubernetes Operator, see Support Doc ID 2349228.1, and reference the announcement blog, WebLogic on Kubernetes Certification. We have developed the Operator to integrate WebLogic Server and Kubernetes, allowing Kubernetes to serve as a container infrastructure hosting WebLogic Server instances. The WebLogic Server Kubernetes Operator extends Kubernetes to create, configure, and manage a WebLogic domain. Read our prior announcement blog, Announcing WebLogic Server Kubernetes Operator, and find the WebLogic Server Kubernetes Operator GitHub project at https://github.com/oracle/weblogic-kubernetes-operator.   Running WebLogic Server on Kubernetes enables users to leverage WebLogic Server applications in Kubernetes environments, to integrate WebLogic Server applications with other cloud applications, and to evolve their usage of WebLogic Server and expand their usage of Kubernetes. The WebLogic Server Kubernetes Operator allows users to: Simplify WebLogic management in Kubernetes Ensure Kubernetes resources are allocated for WebLogic domains Manage the overall environment, including load balancers, Ingress controllers, network fabric, and security, through Kubernetes APIs Simplify and automate patching and scaling operations Ensure that WebLogic best practices are followed Run WebLogic domains well and securely In this version of the WebLogic Server Kubernetes Operator and the WebLogic Server Kubernetes certification, we have added the following functionality and support: Support for Kubernetes versions 1.7.5, 1.8.0, 1.9.0, 1.10.0 In our Operator GitHub project, we provide instructions for how to build, test, and publish the Docker image for the Operator directly from Oracle Container Pipelines using the wercker.yml . Support for dynamic clusters, and auto-scaling of a WebLogic Server cluster with dynamic clusters. Please read the blog for details WebLogic Dynamic Cluster on Kubernetes. Support for the Apache HTTP Server and Voyager (HAProxy-backed) Ingress controller running within the Kubernetes cluster for load balancing HTTP requests across WebLogic Server Managed Servers running in clustered configurations. Integration with the Operator automates the configuration of these load balancers.  Find documentation for the Apache HTTP Server and Voyager Ingress Controller. Support for Persistent Volumes (PV) in NFS storage for multi-node environments. In our project, we provide a cheat sheet to configure the NFS volume on OCI, and some important notes about NFS volumes and the WebLogic Server domain in Kubernetes. The  Delete WebLogic domain resources script, which permanently removes the Kubernetes resources for a domain or domains, from a Kubernetes cluster. 
Please see “Removing a domain” in the README of the Operator project. Improved Prometheus support. See Using Prometheus to Automatically Scale WebLogic Clusters on Kubernetes. Integration tests posted on our WebLogic Server Kubernetes Operator GitHub project. Our future plans include certification of WebLogic Server on Kubernetes running on the OCI Container Engine for Kubernetes, providing an easy way to reprovision and redeploy existing WebLogic Server domains in Kubernetes using the WebLogic Deploy Tooling, adding CI/CD of WebLogic deployments on Kubernetes with Oracle Container Pipelines, and adding new features and enhancements over time. Please stay tuned for more information. We hope this announcement is helpful to those of you seeking to deploy WebLogic Server on Kubernetes, and we look forward to your feedback.


The WebLogic Server

WebLogic Dynamic Clusters on Kubernetes

Overview

A WebLogic Server cluster consists of multiple Managed Server instances running simultaneously and working together to provide increased scalability and reliability.  WebLogic Server supports two types of clustering configurations, configured and dynamic clustering.  Configured clusters are created by manually configuring each individual Managed Server instance.  In dynamic clusters, the Managed Server configurations are generated from a single, shared template.  Using a template greatly simplifies the configuration of clustered Managed Servers and allows for dynamically assigning servers to Machine resources, thereby providing a greater utilization of resources with minimal configuration.  With dynamic clusters, when additional server capacity is needed, new server instances can be added to the cluster without having to manually configure them individually. Also, unlike configured clusters, scaling up of dynamic clusters is not restricted to the set of servers defined for a cluster, but can be increased based on runtime demands.  For more information on how to create, configure, and use dynamic clusters in WebLogic Server, see Dynamic Clusters.

Support for Dynamic Clusters by Oracle WebLogic Server Kubernetes Operator

Previously, the WebLogic Server Kubernetes Operator supported configured clusters only.  That is, the operator could only manage and scale Managed Servers defined for a configured cluster.  Now, this limitation has been removed. By supporting dynamic clusters, the operator can easily scale the number of Managed Server instances based on a server template instead of requiring that you first manually configure them.

Creating a Dynamic Cluster in a WebLogic Domain in Kubernetes

The WebLogic Server team has been actively working to integrate WebLogic Server in Kubernetes (see WebLogic Server Certification on Kubernetes).  The Oracle WebLogic Server Kubernetes Operator provides a mechanism for creating and managing any number of WebLogic domains, automates domain startup, allows scaling of WebLogic clusters, manages load balancing for web applications deployed in WebLogic clusters, and provides integration with Elasticsearch, Logstash, and Kibana. The operator is currently available as an open source project at https://oracle.github.io/weblogic-kubernetes-operator.  To create a WebLogic domain, the recommended approach is to use the provided create-weblogic-domain.sh script, which automates the creation of a WebLogic domain within a Kubernetes cluster.  The create-weblogic-domain.sh script takes an input file, create-weblogic-domain-inputs.yaml, which specifies the configuration properties for the WebLogic domain. The following parameters of the input file are used when creating a dynamic cluster:

Parameter | Definition | Default
clusterName | The name of the WebLogic cluster instance to generate for the domain. | cluster-1
clusterType | The type of WebLogic cluster. Legal values are "CONFIGURED" or "DYNAMIC". | CONFIGURED
configuredManagedServerCount | The number of Managed Server instances to generate for the domain. | 2
initialManagedServerReplicas | The number of Managed Servers to start initially for the domain. | 2
managedServerNameBase | Base string used to generate Managed Server names. Used as the server name prefix in a server template for dynamic clusters. | managed-server

The following example configuration will create a dynamic cluster named ‘cluster-1’ with four defined Managed Servers (managed-server1 … managed-server4), in which the operator will initially start up two Managed Server instances, managed-server1 and managed-server2:

# Type of WebLogic Cluster
# Legal values are "CONFIGURED" or "DYNAMIC"
clusterType: DYNAMIC

# Cluster name
clusterName: cluster-1

# Number of Managed Servers to generate for the domain
configuredManagedServerCount: 4

# Number of Managed Servers to initially start for the domain
initialManagedServerReplicas: 2

# Base string used to generate Managed Server names
managedServerNameBase: managed-server

To create the WebLogic domain, you simply run the create-weblogic-domain.sh script, specifying your input file and an output directory for any generated configuration files:

#> create-weblogic-domain.sh -i create-weblogic-domain-inputs.yaml -o /path/to/weblogic-domain-output-directory

There are some limitations when creating WebLogic clusters using the create domain script: the script creates the specified number of Managed Server instances and places them all in one cluster, and the script always creates one cluster. Alternatively, you can create a WebLogic domain manually as outlined in Manually Creating a WebLogic Domain.

How WebLogic Kubernetes Operator Manages a Dynamic Cluster

A Kubernetes Operator is “an application-specific controller that extends the Kubernetes API to create, configure, and manage instances of complex applications.”  For more information on operators, see Introducing Operators: Putting Operational Knowledge into Software. The Oracle WebLogic Server Kubernetes Operator extends Kubernetes to create, configure, and manage any number of WebLogic domains running in a Kubernetes environment.  It provides a mechanism to create domains, automate domain startup, and allow scaling of both configured and dynamic WebLogic clusters. For more details about the WebLogic Server Kubernetes Operator, see the blog, Announcing the Oracle WebLogic Server Kubernetes Operator.

Because the WebLogic Kubernetes Operator manages the life cycle of Managed Servers in a Kubernetes cluster, it provides the ability to start up and scale (up or down) WebLogic dynamic clusters. The operator manages the startup of a WebLogic domain based on the settings defined in a Custom Resource Domain (CRD).  The number of WLS pods/Managed Server instances running in a Kubernetes cluster, for a dynamic cluster, is represented by the ‘replicas’ attribute value of the clusterStartup entry in the following domain custom resource YAML file:

clusterStartup:
  - desiredState: "RUNNING"
    clusterName: "cluster-1"
    replicas: 2
    env:
    - name: JAVA_OPTIONS
      value: "-Dweblogic.StdoutDebugEnabled=false"
    - name: USER_MEM_ARGS
      value: "-Xms64m -Xmx256m"

For the above example entry, during WebLogic domain startup, the operator would start two pod/Managed Server instances for the dynamic cluster ‘cluster-1’. Details of a domain custom resource YAML file can be found in Starting a WebLogic Domain.

Scaling of WebLogic Dynamic Clusters on Kubernetes

There are several ways to initiate scaling through the operator, including:

On-demand, updating the Custom Resource Domain specification directly (using kubectl).
Calling the operator's REST scale API, for example, from curl.
Using a WLDF policy rule and script action to call the operator's REST scale API.
Using a Prometheus alert action to call the Operator's REST scale API. On-Demand, Updating the Custom Resource Domain Directly Scaling a dynamic cluster can be achieved by editing the Custom Resource Domain directly by using the ‘kubectl edit’ command and modifying the ‘replicas’ attribute value:   #> kubectl edit domain domain1 -n [namespace]   This command will open an editor which will allow you to edit the defined Custom Resource Domain specification.  Once committed, the operator will be notified of the change and will immediately attempt to scale the corresponding dynamic cluster by reconciling the number of running pods/Managed Server instances with the ‘replicas’ value specification.   Calling the Operator's REST Scale API Alternatively, the WebLogic Server Kubernetes Operator exposes a REST endpoint, with the following URL format, that allows an authorized actor to request scaling of a WebLogic cluster:   http(s)://${OPERATOR_ENDPOINT}/operator/<version>/domains/<domainUID>/clusters/<clusterName>/scale   <version> denotes the version of the REST resource. <domainUID> is the unique ID that will be used to identify this particular domain. This ID must be unique across all domain in a Kubernetes cluster. <clusterName> is the name of the WebLogic cluster instance to be scaled.   For example:   http(s)://${OPERATOR_ENDPOINT}/operator/v1/domains/domain1/clusters/cluster-1/scale The /scale REST endpoint: Accepts an HTTP POST request. The request body supports the JSON "application/json" media type. The request body will be a simple name-value item named managedServerCount: {       ”managedServerCount": 3 }   The managedServerCount value designates the number of WebLogic Server instances to scale to.   Note: An example use of the REST API, using the curl command, can be found in scalingAction.sh.   Using a WLDF Policy Rule and Script Action to Call the Operator's REST Scale API A WebLogic Server dynamic cluster can be automatically scaled by increasing (or decreasing) the number of pods based on resource metrics provided by the WebLogic Diagnostic Framework (WLDF). WLDF is a suite of services and APIs that collect and surface metrics that provide visibility into server and application performance. WLDF provides a Policies and Actions component to support the automatic scaling of dynamic clusters.  There are two types of scaling supported by WLDF:   Calendar-based scaling — Scaling operations on a dynamic cluster that are executed on a particular date and time. Policy-based scaling — Scaling operations on a dynamic cluster that are executed in response to changes in demand. In this blog, we will focus on policy-based scaling which lets you write policy expressions for automatically executing configured actions when the policy expression rule is satisfied. These policies monitor one or more types of WebLogic Server metrics, such as memory, idle threads, and CPU load. When the configured threshold in a policy is met, the policy is triggered, and the corresponding scaling action is executed.   
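Before moving on to the WLDF pieces, it can help to see the shape of a raw call to the /scale endpoint described above. The following is only a sketch: the endpoint address, bearer token, and certificate handling all depend on how the operator's REST interface was configured in your environment, and the scalingAction.sh script discussed below shows the complete, supported version of this call.

# All values below are placeholders for illustration.
OPERATOR_ENDPOINT=localhost:31001
ACCESS_TOKEN="<service account bearer token>"
curl -k -X POST "https://${OPERATOR_ENDPOINT}/operator/v1/domains/domain1/clusters/cluster-1/scale" \
  -H "Authorization: Bearer ${ACCESS_TOKEN}" \
  -H "X-Requested-By: curl-example" \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  -d '{ "managedServerCount": 3 }'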
Example Policy Expression Rule

The following is an example policy expression rule that was used in Automatic Scaling of WebLogic Clusters on Kubernetes:

wls:ClusterGenericMetricRule("cluster-1","com.bea:Type=WebAppComponentRuntime,ApplicationRuntime=OpenSessionApp,*","OpenSessionsCurrentCount",">=",0.01,5,"1 seconds","10 seconds")

This ‘ClusterGenericMetricRule’ smart rule is used to observe trends in JMX metrics that are published through the Server Runtime MBean Server, and can be read as follows: for the cluster ‘cluster-1’, WLDF will monitor the OpenSessionsCurrentCount attribute of the WebAppComponentRuntime MBean for the OpenSessionApp application.  If the OpenSessionsCurrentCount is greater than or equal to 0.01 for 5% of the servers in the cluster, then the policy will be evaluated as true. Metrics will be collected at a sampling rate of 1 second, and the sample data will be averaged out over the specified 10-second retention window.

You can use any of the following tools to configure policies for diagnostic system modules:

WebLogic Server Administration Console
WLST
REST
JMX application

Below is an example configuration of a policy, named ‘myScaleUpPolicy’, shown as it would appear in the WebLogic Server Administration Console:

Example Action

An action is an operation that is executed when a policy expression rule evaluates to true. WLDF supports the following types of diagnostic actions:

Java Management Extensions (JMX)
Java Message Service (JMS)
Simple Network Management Protocol (SNMP)
Simple Mail Transfer Protocol (SMTP)
Diagnostic image capture
Elasticity framework
REST
WebLogic logging system
Script

The WebLogic Server team has an example shell script, scalingAction.sh, for use as a Script Action, which illustrates how to issue a request to the operator’s REST endpoint.  Below is an example screen shot of the Script Action configuration page from the WebLogic Server Administration Console:

Important notes about the configuration properties for the Script Action:

The Working Directory and Path to Script configuration entries specify the volume mount path (/shared) to access the WebLogic domain home.
The scalingAction.sh script requires access to the SSL certificate of the operator’s endpoint; this is provided through the environment variable ‘INTERNAL_OPERATOR_CERT’.  The operator’s SSL certificate can be found in the ‘internalOperatorCert’ entry of the operator’s ConfigMap weblogic-operator-cm. For example:

#> kubectl describe configmap weblogic-operator-cm -n weblogic-operator

Name:         weblogic-operator-cm
Namespace:    weblogic-operator
Labels:       weblogic.operatorName=weblogic-operator
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","data":{"externalOperatorCert":"","internalOperatorCert":"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0t...

Data
====
internalOperatorCert:
----
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR3akNDQXFxZ0F3SUJBZ0lFRzhYT1N6QU...
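If you only need the certificate value itself, rather than the full describe output, a jsonpath query is one way to pull it out (a minimal sketch using the ConfigMap shown above):

# Extract just the internalOperatorCert entry from the operator's ConfigMap.
INTERNAL_OPERATOR_CERT=$(kubectl -n weblogic-operator get configmap weblogic-operator-cm \
  -o jsonpath='{.data.internalOperatorCert}')
echo "$INTERNAL_OPERATOR_CERT"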
The scalingAction.sh script accepts a number of customizable parameters: •       action - scaleUp or scaleDown (Required) •       domain_uid - WebLogic domain unique identifier (Required) •       cluster_name - WebLogic cluster name (Required) •       kubernetes_master - Kubernetes master URL, default=https://kubernetes •       access_token - Service Account Bearer token for authentication and authorization for access to REST Resources •       wls_domain_namespace - Kubernetes namespace in which the WebLogic domain is defined, default=default •       operator_service_name - WebLogic Operator Service name of the REST endpoint, default=internal-weblogic-operator-service •       operator_service_account - Kubernetes Service Account name for the WebLogic Operator, default=weblogic-operator •       operator_namespace – Namespace in which the WebLogic Operator is deployed, default=weblogic-operator •       scaling_size – Incremental number of WebLogic Server instances by which to scale up or down, default=1   For more information about WLDF and diagnostic policies and actions, see Configuring Policies and Actions in Configuring and Using the Diagnostics Framework for Oracle WebLogic Server. Note: A more detailed description of automatic scaling using WLDF can be found in WebLogic on Kubernetes, Try It! and Automatic Scaling of WebLogic Clusters on Kubernetes.   There are a few key differences between the automatic scaling of WebLogic clusters described in this blog and my previous blog, Automatic Scaling of WebLogic Clusters on Kubernetes:   In the previous blog, as in the earlier release, only scaling of configured clusters was supported. In this blog: To scale the dynamic cluster, we use the WebLogic Server Kubernetes Operator instead of using a Webhook. To scale the dynamic cluster, we use a Script Action, instead of a REST action. To scale pods, scaling actions invoke requests to the operator’s REST endpoint, instead of the Kubernetes API server. Using a Prometheus Alert Action to Call the Operator's REST Scale API   In addition to using the WebLogic Diagnostic Framework, for automatic scaling of a dynamic cluster, you can use a third party monitoring application like Prometheus.  Please read the following blog for details about Using Prometheus to Automatically Scale WebLogic Clusters on Kubernetes.   What Does the Operator Do in Response to a REST Scaling Request?   When the WebLogic Server Kubernetes Operator receives a scaling request through its scale REST endpoint, it performs the following actions: Performs an authentication and authorization check to verify that the specified user is allowed to perform the specified operation on the specified resource. Validates that the specified domain, identified by the domainUID, exists. The domainUID is the unique ID that will be used to identify this particular domain. This ID must be unique across all domains in a Kubernetes cluster. Validates that the WebLogic cluster, identified by the clusterName, exists. The clusterName is the name of the WebLogic cluster instance to be scaled. Verifies that the scaling request’s ‘managedServerCount’ value does not exceed the configured maximum cluster size for the specified WebLogic cluster.  For dynamic clusters, ‘MaxDynamicClusterSize’ is a WebLogic attribute that specifies the maximum number of running Managed Server instances allowed for scale up operations.  See Configuring Dynamic Clusters for more information on attributes used to configure dynamic clusters. 
Initiates scaling by setting the ‘Replicas’ property within the corresponding domain custom resource, which can be done in either:   A clusterStartup entry, if defined for the specified WebLogic cluster. For example:   Spec:   …   Cluster Startup:     Cluster Name:   cluster-1     Desired State:  RUNNING     Env:       Name:     JAVA_OPTIONS       Value:    -Dweblogic.StdoutDebugEnabled=false       Name:     USER_MEM_ARGS       Value:    -Xms64m -Xmx256m     Replicas:   2    …   At the domain level, if a clusterStartup entry is not defined for the specified WebLogic cluster and the startupControl property is set to AUTO For example:     Spec:     Domain Name:  base_domain     Domain UID:   domain1     Export T 3 Channels:     Image:              store/oracle/weblogic:12.2.1.3     Image Pull Policy:  IfNotPresent     Replicas:           2     Server Startup:       Desired State:  RUNNING       Env:         Name:         JAVA_OPTIONS         Value:        -Dweblogic.StdoutDebugEnabled=false         Name:         USER_MEM_ARGS         Value:        -Xms64m -Xmx256m       Server Name:    admin-server     Startup Control:  AUTO   Note: You can view the full WebLogic Kubernetes domain resource with the following command: #> kubectl describe domain <domain resource name> In response to a change to the ‘Replicas’ property in the Custom Resource Domain, the operator will increase or decrease the number of pods (Managed Servers) to match the desired replica count. Wrap Up The WebLogic Server team has developed an Oracle WebLogic Server Kubernetes Operator, based on the Kubernetes Operator pattern, for integrating WebLogic Server in a Kubernetes environment.  The operator is used to manage the life cycle of a WebLogic domain and, more specifically, to scale a dynamic cluster.  Scaling a WebLogic dynamic cluster can be done, either on-demand or automatically, using either the WebLogic Diagnostic Framework or third party monitoring applications, such as Prometheus.  In summary, the advantages of using WebLogic dynamic clusters over configured clusters in a Kubernetes cluster are:   Managed Server configuration is based on a single server template. When additional server capacity is needed, new server instances can be added to the cluster without having to manually configure them individually. Unlike configured clusters, scaling up of dynamic clusters is not restricted to the set of servers defined in the cluster but can be increased based on runtime demands. I hope you’ll take the time to download and take the Oracle WebLogic Server Kubernetes Operator for a spin and experiment with the automatic scaling feature for dynamic clusters. Stay tuned for more blogs on future features that are being added to enhance the Oracle WebLogic Server Kubernetes Operator.


Using Prometheus to Automatically Scale WebLogic Clusters on Kubernetes

Elasticity (scaling up or scaling down) of a WebLogic Server cluster provides the benefits of being able to manage resources based on demand and enhances the reliability of customer applications while managing resource costs. There are different ways to trigger the automatic scaling of WebLogic Server clusters in a Kubernetes environment. The architecture of the WebLogic Server Elasticity component, as well as a detailed explanation of how to scale up a WebLogic cluster using a WebLogic Diagnostic Framework (WLDF) policy, can be found in the Automatic Scaling of WebLogic Clusters on Kubernetes blog. In this demo, we demonstrate another way to automatically scale a WebLogic cluster on Kubernetes, by using Prometheus. Since Prometheus has access to all available WebLogic metrics data, users have the flexibility to use any of them to specify the rules for scaling. Based on collected metrics data and configured alert rule conditions, Prometheus’s Alert Manager will send an alert to trigger the desired scaling action and change the number of running Managed Servers in the WebLogic Server cluster. We use the WebLogic Monitoring Exporter to scrape runtime metrics for specific WebLogic Server instances and feed them to Prometheus. We also implement a custom notification integration using the webhook receiver, a user-defined REST service that is triggered when a scaling alert event occurs. After the alert rule matches the specified conditions, the Prometheus Alert Manager sends an HTTP request to the URL specified as a webhook to request the scaling action. For more information about the webhook used in the sample demo, see adnanh/webhook/. In this blog, you will learn how to configure Prometheus, the Prometheus Alert Manager, and a webhook to perform automatic scaling of WebLogic Server instances running in Kubernetes clusters. This picture shows all the components running in the pods in the Kubernetes environment.

The WebLogic domain, running in a Kubernetes cluster, consists of:

An Administration Server (AS) instance, running in a Docker container, in its own pod (POD 1).
A WebLogic Server cluster, composed of a set of Managed Server instances, in which each instance is running in a Docker container in its own pod (POD 2 to POD 5).
The WebLogic Monitoring Exporter web application, deployed on a WebLogic Server cluster.

Additional components, each running in a Docker container in its own pod, are:

Prometheus
Prometheus Alert Manager
WebLogic Kubernetes Operator
Webhook server

Installation and Deployment of the Components in the Kubernetes Cluster

Follow the installation instructions to create the WebLogic Kubernetes Operator and domain deployments. In this blog, we will be using the following parameters to create the WebLogic Kubernetes Operator and WebLogic domain:

1. Deploy the WebLogic Kubernetes Operator (create-weblogic-operator.sh)

In create-operator-inputs.yaml:

serviceAccount: weblogic-operator
targetNamespaces: domain1
namespace: weblogic-operator
weblogicOperatorImage: container-registry.oracle.com/middleware/weblogic-kubernetes-operator:latest
weblogicOperatorImagePullPolicy: IfNotPresent
externalRestOption: SELF_SIGNED_CERT
externalRestHttpsPort: 31001
externalSans: DNS:slc13kef
externalOperatorCert:
externalOperatorKey:
remoteDebugNodePortEnabled: false
internalDebugHttpPort: 30999
externalDebugHttpPort: 30999
javaLoggingLevel: INFO

2. Create and start a domain (create-domain-job.sh)

In create-domain-job-inputs.yaml:

domainUid: domain1
managedServerCount: 4
managedServerStartCount: 2
namespace: weblogic-domain
adminPort: 7001
adminServerName: adminserver
startupControl: AUTO
managedServerNameBase: managed-server
managedServerPort: 8001
weblogicDomainStorageType: HOST_PATH
weblogicDomainStoragePath: /scratch/external-domain-home/pv001
weblogicDomainStorageReclaimPolicy: Retain
weblogicDomainStorageSize: 10Gi
productionModeEnabled: true
weblogicCredentialsSecretName: domain1-weblogic-credentials
exposeAdminT3Channel: true
adminNodePort: 30701
exposeAdminNodePort: true
loadBalancer: TRAEFIK
loadBalancerWebPort: 30305
loadBalancerDashboardPort: 30315

3. Run this command to identify the admin NodePort to access the console:

kubectl -n weblogic-domain describe service domain1-adminserver

weblogic-domain is the namespace where the WebLogic domain pod is deployed. Previous blog entries have described how to start and run WebLogic Server instances in Kubernetes with the WebLogic Monitoring Exporter deployed on Managed Servers running in the cluster. Access the WebLogic Server Administration Console at this URL, http://[hostname]:30701/console, using the WebLogic credentials, “weblogic/welcome1”. In our example, we set up an alert rule based on the number of open sessions produced by this web application, “testwebapp.war”. Deploy the testwebapp.war application and the WebLogic Monitoring Exporter “wls-exporter.war” to DockerCluster. Review the DockerCluster NodePort for external access:

kubectl -n weblogic-domain describe service domain1-dockercluster-traefik

To make sure that the WebLogic Monitoring Exporter is deployed and running, access the application with a URL like the following:

http://[hostname]:30305/wls-exporter/metrics

You will be prompted for the WebLogic user credentials that are required to access the metrics data, weblogic/welcome1. The metrics page will show the metrics configured for the WebLogic Monitoring Exporter. Make sure that the alert rule you want to set up in the Prometheus Alert Manager matches the metrics configured for the WebLogic Monitoring Exporter. Here is an example of the alert rule we used:

if sum(webapp_config_open_sessions_current_count{webapp="testwebapp"}) > 15

The metric used, ‘webapp_config_open_sessions_current_count’, should be listed on the metrics web page.

Setting Up the Webhook for Alert Manager

We used this webhook application in our example. To build the Docker image, create this directory structure:

- apps
- scripts
- webhooks

1. Copy the webhook application executable file to the ‘apps’ directory and copy the scalingAction.sh script to the ‘scripts’ directory. Create a scaleUpAction.sh file in the ‘scripts’ directory and edit it with the code listed below:

#!/bin/bash
echo scale up action >> scaleup.log
MASTER=https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT
echo Kubernetes master is $MASTER
source /var/scripts/scalingAction.sh --action=scaleUp --domain_uid=domain1 --cluster_name=DockerCluster --kubernetes_master=$MASTER --wls_domain_namespace=domain1

2. Create a Dockerfile for the webhook, Dockerfile.webhook, as suggested:

FROM store/oracle/serverjre:8
COPY apps/webhook /bin/webhook
COPY webhooks/hooks.json /etc/webhook/
COPY scripts/scaleUpAction.sh /var/scripts/
COPY scripts/scalingAction.sh /var/scripts/
CMD ["-verbose", "-hooks=/etc/webhook/hooks.json", "-hotreload"]
ENTRYPOINT ["/bin/webhook"]
3. Create a hooks.json file in the webhooks directory, for example:
[
  {
    "id": "scaleup",
    "execute-command": "/var/scripts/scaleUpAction.sh",
    "command-working-directory": "/var/scripts",
    "response-message": "scale-up call ok\n"
  }
]
4. Build the 'webhook' Docker image:
docker rmi webhook:latest
docker build -t webhook:latest -f Dockerfile.webhook .
Deploying Prometheus, Alert Manager, and Webhook
We will run the Prometheus, Alert Manager, and webhook pods under the namespace 'monitoring'. Execute the following command to create the 'monitoring' namespace:
kubectl create namespace monitoring
To deploy a Prometheus instance in Kubernetes, create a Prometheus configuration file, prometheus-kubernetes.yml. A sample file is provided here. The example Prometheus configuration file specifies:
- weblogic/welcome1 as the user credentials
- Five seconds as the interval between updates of WebLogic Server metrics
- 32000 as the external port to access the Prometheus dashboard
- The scaling rule:
ALERT scaleup
  if sum(webapp_config_open_sessions_current_count{webapp="testwebapp"}) > 15
  ANNOTATIONS {
    summary = "Scale up when current sessions is greater than 15",
    description = "Firing when total sessions active greater than 15"
  }
- The Alert Manager configured to listen on port 9093
As required, you can change these values to reflect your specific environment and configuration. You can also change the alert rule by constructing Prometheus queries that match your elasticity needs. To generate alerts, we need to deploy the Prometheus Alert Manager as a separate pod, running in a Docker container. In our provided sample Prometheus Alert Manager configuration file, we use the webhook. Update the 'INTERNAL_OPERATOR_CERT' property in the webhook-deployment.yaml file with the value of the 'internalOperatorCert' property from the generated weblogic-operator.yaml file, which was used for the WebLogic Kubernetes Operator deployment. Start the webhook, Prometheus, and the Alert Manager to monitor the Managed Server instances:
kubectl apply -f alertmanager-deployment.yaml
kubectl apply -f prometheus-deployment.yaml
kubectl apply -f webhook-deployment.yaml
Verify that all the pods are started. Check that Prometheus is monitoring all Managed Server instances by browsing to http://[hostname]:32000. Examine the 'Insert metric at cursor' pull-down menu. It should list the metric names based on the current configuration of the WebLogic Monitoring Exporter web application. You can check the Prometheus alert settings by accessing this URL, http://[hostname]:32000/alerts. It should show the configured rule, listed in the prometheus-deployment.yaml configuration file.
Auto Scaling of WebLogic Clusters in K8s
In this demo, we configured the WebLogic Server cluster to have two running Managed Server instances, with a total number of Managed Servers equal to four. You can modify the values of these parameters, configuredManagedServerCount and initialManagedServerReplicas, in the create-domain-job-inputs.yaml file, to reflect your desired number of Managed Servers running in the cluster and the maximum number of allowed replicas. Per our sample file configuration, initially only two Managed Server pods are started. Let's check all the running pods now. Per our configuration of the alert rule, the scale up will happen when the number of open sessions for the 'testwebapp' application on the cluster is more than 15.
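Before generating any load, it can be useful to confirm from the command line exactly which pods are running. The following minimal sketch simply lists the pods in the namespaces used by this sample (weblogic-domain for the domain and monitoring for Prometheus, the Alert Manager, and the webhook); adjust the namespace names if yours differ:
#!/bin/bash
# List the WebLogic domain pods (Administration Server plus the currently started Managed Servers).
kubectl get pods -n weblogic-domain

# List the monitoring pods (Prometheus, Alert Manager, webhook).
kubectl get pods -n monitoring

# Optionally watch the domain pods so that the scale-up is visible as it happens.
kubectl get pods -n weblogic-domain --watch
With only two Managed Servers started initially, you should see the Administration Server pod and two managed-server pods before the alert fires.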
Let's invoke the application URL 17 times using curl.sh:
#!/bin/bash
COUNTER=0
MAXCURL=17
while [ $COUNTER -lt $MAXCURL ]; do
  OUTPUT="$(curl http://$1:30305/testwebapp/)"
  if [ "$OUTPUT" != "404 page not found" ]; then
    echo $OUTPUT
    let COUNTER=COUNTER+1
    sleep 1
  fi
done
Issue the command:
. ./curl.sh [hostname]
When the sum of open sessions for the "testwebapp" application becomes more than 15, Prometheus will fire an alert via the Alert Manager. We can check the current alert status by accessing this URL, http://[hostname]:32000/alerts. To verify that the Alert Manager sent the HTTP POST to the webhook, check the webhook pod log. When the hook endpoint is invoked, the command specified by the "execute-command" property is executed, which in this case is the shell script /var/scripts/scaleUpAction.sh. The scaleUpAction.sh script passes the parameters to the scalingAction.sh script, which is provided by the WebLogic Kubernetes Operator. The scalingAction.sh script then issues a request to the operator's REST URL for scaling. To verify the scale-up operation, let's check the number of running Managed Server pods. It should have increased to a total of three running pods.
Summary
In this blog, we demonstrated how to use the Prometheus integration with WebLogic Server to trigger the automatic scaling of WebLogic Server clusters in a Kubernetes environment. The WebLogic Server cluster can be automatically scaled by increasing (or decreasing) the number of pods based on a very comprehensive set of WebLogic domain-specific (custom) metrics monitored and analyzed by Prometheus. Our sample demonstrates that, in addition to being a great monitoring tool, Prometheus can easily be configured to drive WebLogic Server cluster scaling decisions.


Ensuring a high level of performance with WebLogic JDBC

Written by Joseph Weinstein. In this post, you will find some common best practices aimed at ensuring high levels of performance with WebLogic JDBC.
Use WebLogic DataSources for connection pooling of JDBC connections
Making a real DBMS connection is expensive and slow, so you should use our datasources to retain and re-use connections. The ideal mode for using pooled connections is to use them as quickly and briefly as possible, getting them just when needed, and closing them (returning them to the pool) as soon as possible. This maximizes concurrency. (It is crucial that the connection is a method-level object, not shared between application threads, and that it *is* closed, no matter what exit path the application code takes, else the pool could be leaked dry, all connections taken out and abandoned. See a best-practices code example farther down in this article. The long-term hardening and integration of WebLogic Datasources with applications and other WebLogic APIs make them much preferred over UCP or third-party options.)
Use the Oracle JDBC thin driver (Type 4) rather than the OCI driver (Type 2)
The Oracle JDBC thin driver is lightweight (easy to install and administer), platform-independent (entirely written in Java), and provides slightly higher performance than the JDBC OCI (Oracle Call Interface) driver. The thin driver does not require any additional software on the client side. The Oracle JDBC FAQ stipulates that the performance benefit with the thin driver is not consistent and that the OCI driver can even deliver better performance in some scenarios. However, using OCI in WebLogic carries the danger that any bug in the native library can take down an entire WebLogic Server instance. WebLogic officially no longer supports using the driver in OCI mode.
Use PreparedStatement objects rather than plain Statements
With PreparedStatements, the compiled SQL query plans will be kept in the DBMS cache, parsed only once and re-used thereafter.
Use and configure the WebLogic Datasource's statement cache size wisely
The datasource can actually cache and allow you to transparently re-use a Prepared/CallableStatement made from a given pooled connection. The pool's statement cache size (default=10) determines how many. This may take some memory but is usually worth the performance gain. Note well, though, that the cache is purged using a least-recently-used policy, so if the applications that use a datasource typically make 30 distinct prepared statements, each new request would put a new statement in the cache and kick out one used 10 statements ago; this would thrash the cache, with no statement ever surviving long enough to be re-used. The console makes several statement cache statistics available to allow you to size the cache to service all your statements, but if memory becomes a huge issue, it may be better to set the cache size to zero. When using the Oracle JDBC driver, also consider using its statement caching as a lower-level alternative to WebLogic caching. There have been times when the driver uses significant memory per open statement, such as when statements are cached by WebLogic, but if they are cached at the driver level instead, the driver knows it can share and minimize this memory. To use driver-level statement caching instead, make sure the WebLogic statement cache size is zero, and add these properties to the list of driver properties for the datasource: implicitCachingEnabled=true and maxStatements=XXX, where XXX is ideally a number large enough to cover all your common calls.
As with the WebLogic cache size, a number that is too small might be useless or worse. Observe your memory usage after the server has run under full load for a while.
Close all JDBC resources ASAP, inline, and for safety, verify this in a finally block
This includes Lob, ResultSet, Statement, and Connection objects, in order to conserve memory and avoid certain DBMS-side resource issues. By spec, Connection.close() should close all sub-objects created from it, and the WebLogic version of close() intends to do that while putting the actual connection back into the pool, but some objects may have different implementations in different drivers that won't allow WebLogic to release everything. JDBC objects such as Lobs that are not properly closed can lead to this error: java.sql.SQLException: ORA-01000: maximum open cursors exceeded. If you don't explicitly close Statements and ResultSets right away, cursors may accumulate and exceed the maximum number allowed in your DB before the Connection is closed. Here is a code example for WebLogic JDBC best practices:
public void myTopLevelJDBCMethod() {
    Connection c = null; // A method-level object, not accessible or kept where other threads can use it.

    // ... do all pre-JDBC stuff ...

    // The try block, in which all JDBC for this method (and sub-methods) will be done.
    try {
        // Get the connection directly, fresh from a WLS datasource.
        c = myDatasource.getConnection();

        // ... do all your JDBC ... You can pass the connection to sub-methods, but they should not keep it,
        // or expect it or any of the objects gotten from it to be open/viable after the end of the method ...
        doMyJDBCSubTaskWith(c);

        c.close(); // Close the connection as soon as all JDBC is done.
        c = null;  // So the finally block knows it's been closed if it was ever obtained.

        // ... do whatever else remains that doesn't need JDBC. I have seen *huge* concurrency improvements
        // by closing the connection ASAP before doing any non-JDBC post-processing of the data, etc.
    } catch (Exception e) {
        // ... do what you want/need, if you need a catch block, but *always* have the finally block:
    } finally {
        // If we got here somehow without closing c, do it now, without fail, as the first thing
        // in the finally block so it always happens.
        if (c != null) try { c.close(); } catch (Exception ignore) {}
        // ... do whatever else you want in the finally block ...
    }
}
Set the Datasource Shrink frequency to 0 for fastest connection availability
A datasource can be configured to vary its count of real connections, closing an unneeded portion (above the minimum capacity) when there is insufficient load, and it will repopulate itself as and when needed. This imposes slowness on applications during an uptick in load, while new replacement connections are made. By setting the shrink frequency to zero, the datasource will keep all working connections indefinitely, ready. This is sometimes a tradeoff in the DBMS, if there are too many idle sessions.
Set the datasource test frequency to something infrequent or zero
The datasource can be configured to periodically test any connections that are currently unused, idle in the pool, replacing bad ones, independently of any application load. This has some benefits, such as keeping the connections looking busy enough for firewalls and DBMSes that might otherwise silently kill them for inactivity.
However, it is overhead in WLS, and it is mostly superfluous if you have test-connections-on-reserve enabled, as you should.
Consider skipping the SQL-query connection test on reserve sometimes
You should always explicitly enable 'test connections on reserve' because, even with Active GridLink information about DBMS health, individual connections may go bad unnoticed. The only way to ensure a connection you're getting is good is to have the datasource test it just before you get it. However, there may be cases where this connection test every time is too expensive, either because it adds too much time to a short user use-case, or because it burdens the DBMS too much. In these cases, if it is somewhat tolerable that an application occasionally gets a bad connection, there is a datasource option, 'seconds to trust an idle connection' (default 10 seconds), which means that if a connection in the pool has been tested successfully, or previously used by an application successfully, within that number of seconds, we will trust the connection and give it to the requester without testing it. In a heavy-load, quick-turnover environment this can safely and completely avoid the explicit overhead of testing. For maximal safety, however, set 'seconds to trust an idle connection' explicitly to zero.
Consider making the test as lightweight as possible
If the datasource's 'Test Table' parameter is set, the pool will test a connection by doing a 'select count(*)' from that table. DUAL is the traditional choice for Oracle. There are options to use the JDBC isValid() call instead, which for *some* drivers is faster; when using the Oracle driver you can set the 'test table' to SQL ISVALID. The Oracle pingDatabase() test is another option, enabled by setting the 'test table' to SQL PINGDATABASE, which checks network connectivity to the DBMS without actually invoking any user-level DBMS functionality. These are faster, but there are rare cases where the user session functionality is broken even though the network connectivity is still good. For XA connections, there is a heavier tradeoff: a test table query will be done in its own XA transaction, which is more overhead, but this is sometimes useful because it catches and works around some session state problems that would otherwise cause the next user XA transaction to fail. For maximal safety, do a quick real query, such as by setting the test table to SQL SELECT 1 FROM DUAL.
Pinned-to-Thread not recommended
Disabled by default, this option can improve performance by transparently assigning pool connections to specific WLS threads. This eliminates contention between threads while accessing a datasource. However, this parameter should be used with great care, because the connection pool maximum capacity is ignored when pinned-to-thread is enabled. Each thread (numbering possibly in the several hundreds) will need and get its own connection, and no shrinking can apply to that pool. That being said, pinned-to-thread is not recommended, for historical/trust reasons: it has not gotten the historical usage, testing, and hardening that the rest of WebLogic pooling has gotten.
Match the Maximum Thread Constraint property with the maximum capacity of database connections
This property (see Environment > Work Managers in the console) will set a maximum number of possible concurrent application threads/executions.
If your applications can run concurrently, unbounded in number except for this WebLogic limit, the maximum capacity of the datasource should match this thread count so that none of your application threads has to wait at an empty pool until some other thread returns a connection. Visit Tuning Data Source Connection Pools and Tuning Data Sources for additional parameter tuning of JDBC data sources and connection pools to improve system performance with WebLogic Server, and Performance Tuning Your JDBC Application for application-specific design and configuration.
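As a practical footnote to the recommendations above, the sketch below shows one way to apply several of them to an existing generic datasource through the WebLogic RESTful management API (the same style of REST calls used elsewhere on this blog). The host, port, credentials, and datasource name ds1 are placeholders, and the attribute names correspond to the JDBCConnectionPoolParams settings discussed above; verify the exact REST paths and attributes against your WebLogic Server release before relying on them.
#!/bin/sh
# Placeholders: adjust the host, port, credentials, and datasource name for your environment.
HOST=adminhost
PORT=7001
CRED=weblogic:welcome1
DS=ds1
URL=http://$HOST:$PORT/management/weblogic/latest/edit

# Start an edit session.
curl -s --user $CRED -H X-Requested-By:MyClient -H Content-Type:application/json \
  -d "{}" -X POST $URL/changeManager/startEdit

# Apply the pool-tuning recommendations: keep a statement cache, test connections on
# reserve with a quick real query, trust recently used connections for 10 seconds,
# never shrink the pool, and disable the periodic idle-connection test.
curl -s --user $CRED -H X-Requested-By:MyClient -H Content-Type:application/json \
  -d '{ "statementCacheSize": 10,
        "testConnectionsOnReserve": true,
        "testTableName": "SQL SELECT 1 FROM DUAL",
        "secondsToTrustAnIdlePoolConnection": 10,
        "shrinkFrequencySeconds": 0,
        "testFrequencySeconds": 0 }' \
  -X POST $URL/JDBCSystemResources/$DS/JDBCResource/JDBCConnectionPoolParams

# Activate the changes.
curl -s --user $CRED -H X-Requested-By:MyClient -H Content-Type:application/json \
  -d "{}" -X POST $URL/changeManager/activate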


Processing the Oracle WebLogic Server Kubernetes Operator Logs using Elastic Stack

  Oracle has been working with the WebLogic community to find ways to make it as easy as possible for organizations using WebLogic Server to run important workloads and to move those workloads into the cloud. One aspect of that effort is the delivery of the Oracle WebLogic Server Kubernetes Operator. In this article we will demonstrate a key feature that assists with the management of WebLogic domains in a Kubernetes environment: the ability to publish and analyze logs from the operator using products from the Elastic Stack.  What Is the Elastic Stack? The Elastic Stack (ELK) consists of several open source products, including Elasticsearch, Logstash, and Kibana. Using the Elastic Stack with your log data, you can gain insight about your application's performance in near real time. Elasticsearch is a scalable, distributed and RESTful search and analytics engine based on Lucene. It provides a flexible way to control indexing and fast search over various sets of data. Logstash is a server-side data processing pipeline that can consume data from several sources simultaneously, transform it, and route it to a destination of your choice. Kibana is a browser-based plug-in for Elasticsearch that you use to visualize and explore data that has been collected. It includes numerous capabilities for navigating, selecting, and arranging data in dashboards. A customer who uses the operator to run a WebLogic Server cluster in a Kubernetes environment will need to monitor the operator and servers. Elasticsearch and Kibana provide a great way to do it. The following steps explain how to set this up. Processing Logs Using ELK In this example, the operator and the Logstash agent are deployed in one pod, and Elasticsearch and Kibana are deployed as two independent pods in the default namespace. We will use a memory-backed volume that is shared between the operator and Logstash containers and that is used to store the logs. The operator instance places the logs into the shared volume, /logs. Logstash collects the logs from the volume and transfers the filtered logs to Elasticsearch. Finally, we will use Kibana and its browser-based UI to analyze and visualize the logs. Operator and ELK integration To enable ELK integration with the operator, first we need to set the elkIntegrationEnabled parameter in the create-operator-inputs.yaml file to true. This causes Elasticsearch, Logstash and Kibana to be installed, and Logstash to be configured to export the operator's logs to Elasticsearch. Then simply follow the installation instructions to install and start the operator. To verify that ELK integration is activated, check the output produced by the following command: $ . ./create-weblogic-operator.sh -i create-operator-inputs.yaml This command should print the following information for ELK: Deploy ELK... deployment "elasticsearch" configured service "elasticsearch" configured deployment "kibana" configured service "kibana" configured To ensure that all three deployments are up and running, perform these steps: Check that the Elasticsearch and Kibana pods are deployed and started (note that they run in the default Kubernetes namespace): $ kubectl get pods The following output is expected: Verify that the operator pod is deployed and running. 
Note that it runs in the weblogic-operator namespace:
$ kubectl -n weblogic-operator get pods
The following output is expected: Check that the operator and Logstash containers are running inside the operator's pod:
$ kubectl get pods -n weblogic-operator --output json | jq '.items[].spec.containers[].name'
The following output is expected: Verify that the Elasticsearch pod has started:
$ kubectl exec -it elasticsearch-3938946127-4cb2s /bin/bash
$ curl "http://localhost:9200"
$ curl "http://localhost:9200/_cat/indices?v"
We get the following indices if Elasticsearch was successfully started. If Logstash is not listed, then you might check the Logstash log output:
$ kubectl logs weblogic-operator-501749275-nhjs0 -c logstash -n weblogic-operator
If there are no errors in the Logstash log, then it is possible that the Elasticsearch pod started after the Logstash container. If that is the case, simply restart Logstash to fix it.
Using Kibana
Kibana provides a web application for viewing logs. Its Kubernetes service configuration includes a NodePort so that the application can be accessed outside of the Kubernetes cluster. To find its port number, run the following command:
$ kubectl describe service kibana
This should print the service NodePort information. From the description of the service in our example, the NodePort value is 30911. Kibana's web application can be accessed at the address http://[NODE_IP_ADDRESS]:30911. To verify that Kibana is installed correctly and to check its status, connect to the web page at http://[NODE_IP_ADDRESS]:30911/status. The status should be Green. The next step is to define a Kibana index pattern. To do this, click Discover in the left panel. Notice that the default index pattern is logstash-*, and that the default time filter field name is @timestamp. Click Create. The Management page displays the fields for the logstash-* index. The next step is to customize how the operator logs are presented. To configure the time interval and auto-refresh settings, click the upper-right corner of the Discover page, click the Auto-refresh tab, and select the desired interval, for example, 10 seconds. You can also set the time range to limit the log messages to those generated during a particular interval. Logstash is configured to split the operator log records into separate fields. For example:
method: dispatchDomainWatch
level: INFO
log: Watch event triggered for WebLogic Domain with UID: domain1
thread: 39
timeInMillis: 1518372147324
type: weblogic-operator
path: /logs/operator.log
@timestamp: February 11th 2018, 10:02:27.324
@version: 1
host: weblogic-operator-501749275-nhjs0
class: oracle.kubernetes.operator.Main
_id: AWGGCFGulCyEnuJh-Gq8
_type: weblogic-operator
_index: logstash-2018.02.11
_score:
You can limit the fields that are displayed. For example, select the level, method, and log fields, then click Add. Now only those fields will be shown. You can also use filters to display only those log messages whose fields match an expression. Click Add a filter at the top of the Discover page to create a filter expression. For example, choose method, is one of, and onFailure. Kibana will display all log messages from the onFailure methods. Kibana is now configured to collect the operator logs. You can use its browser-based viewer to easily view and analyze the data in those logs.
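Because the operator log records are indexed in Elasticsearch with the fields shown above, you can also query them directly from the command line without going through Kibana. A minimal sketch, using the example Elasticsearch pod name from this post and the standard Elasticsearch URI search API (substitute your own pod name):
#!/bin/bash
ESPOD=elasticsearch-3938946127-4cb2s   # example pod name from this post

# List the logstash indices that hold the operator log records.
kubectl exec $ESPOD -- curl -s "http://localhost:9200/_cat/indices?v"

# Return a few operator log records at INFO level, newest first, using a Lucene query string.
kubectl exec $ESPOD -- curl -s "http://localhost:9200/logstash-*/_search?q=type:weblogic-operator%20AND%20level:INFO&size=5&sort=@timestamp:desc&pretty"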
Summary In this blog, you learned about the Elastic Stack and the Oracle WebLogic Server Kubernetes Operator integration architecture, followed by a detailed explanation of how to set up and configure Kibana for interacting with the operator logs. You will find its capabilities, flexibility, and rich feature set to be an extremely valuable asset for monitoring WebLogic domains in a Kubernetes environment.    


The WebLogic Server

Announcing the New WebLogic Server Kubernetes Operator

We are pleased to announce the release and open sourcing of the Technology Preview version of the Oracle WebLogic Server Kubernetes Operator! We are releasing this Operator to GitHub for creating and managing a WebLogic Server 12.2.1.3 domain on Kubernetes. We are also publishing a blog that describes in detail how to run the Operator, how to stand up one or more WebLogic domains in Kubernetes, how to scale up or down  a WebLogic cluster manually or automatically using the WebLogic Diagnostics Framework (WLDF) or Prometheus, how the Operator manages load balancing for web applications deployed in WebLogic clusters, and how to provide integration for managing Operator logs through ElasticSearch, logstash and Kibana. A Kubernetes Operator is "an application specific controller that extends the Kubernetes API to create, configure, and manage instances of complex applications". We are adopting the Operator pattern and using it to provide an adapter to integrate WebLogic Server and Kubernetes, allowing Kubernetes to serve as a container infrastructure hosting WebLogic Server instances. And so the WebLogic Server Kubernetes Operator is an operator that extends Kubernetes to create, configure, and manage a WebLogic domain. The Operator uses the standard Oracle WebLogic Server 12.2.1.3 Docker image, which can be found in the Docker Store or in the Oracle Container Registry.  It treats this image as immutable, and all application and product runtime state is persisted in a Kubernetes persistent volume.  This allows us to treat all of the pods as throwaway and replaceable, and it completely eliminates the need to manage state written into Docker containers at run time (because there is none). The Oracle WebLogic Server Kubernetes Operator has the following requirements: Kubernetes 1.7.5+, 1.8.0+ (check with kubectl version) Flannel networking v0.9.1-amd64 (check with docker images | grep flannel) Docker 17.03.1.ce (check with docker version) Oracle WebLogic Server 12.2.1.3 WebLogic Server 12.2.1.3  domains on Kubernetes are certified and supported, as described in detail in My Oracle Support Doc Id 2349228.1 for details.  The WebLogic Kubernetes Operator is a Technical Preview version and is not yet supported by Oracle Support.  If users encounter problems related to the WebLogic Kubernetes Operator, they should open an issue in the GitHub project https://github.com/oracle/weblogic-kubernetes-operator.  GitHub project members will respond to the issues and resolve them in a timely fashion.  A series of video demonstrations of the operator are available here: Installing the operator shows the installation and also shows using the operator's REST API. Creating a WebLogic domain with the operator shows creation of two WebLogic domains including accessing the WebLogic Server Administration Console and looking at the various resources created in Kubernetes:  services, Ingresses, pods, load balancers, and so on. Deploying a web application, scaling a WebLogic cluster with the operator and verifying load balancing Using WLST against a domain running in Kubernetes shows how to create a data source for an Oracle database that is also running in Kubernetes. Scaling a WebLogic cluster with WLDF. Prometheus integration shows exporting WebLogic Server metrics to Prometheus and creating a Prometheus alert to trigger scaling. The overall process of installing and configuring the Operator and using it to manage WebLogic domains consists of the following steps. 
The provided scripts will perform most of these steps, but some must be performed manually:
- Registering for access to the Oracle Container Registry
- Setting up secrets to access the Oracle Container Registry
- Customizing the Operator parameters file
- Deploying the Operator to a Kubernetes cluster
- Setting up secrets for the Administration Server credentials
- Creating a persistent volume for a WebLogic domain
- Customizing the domain parameters file
- Creating a WebLogic domain
Full, up-to-date instructions are available at https://github.com/oracle/weblogic-kubernetes-operator/blob/master/site/installation.md. We hope to provide formal support for the WebLogic Server Kubernetes Operator soon, and we intend to add new features and enhancements over time. Please stay tuned for more information. We hope this announcement is helpful to those of you seeking to deploy WebLogic Server on Kubernetes, and we look forward to your feedback.
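As a footnote to the list of steps above, the two secret-related steps usually boil down to a couple of kubectl commands. The secret names, namespace, and credentials below are placeholders (the credentials secret name matches the one used in our domain examples); substitute the values from your own parameters files:
#!/bin/bash
# Secret used to pull the WebLogic Server 12.2.1.3 image from the Oracle Container Registry.
# You must first accept the license terms for the image on container-registry.oracle.com.
kubectl create secret docker-registry ocr-credentials \
  --docker-server=container-registry.oracle.com \
  --docker-username=your.user@example.com \
  --docker-password='your-password' \
  --docker-email=your.user@example.com

# Secret holding the Administration Server boot credentials that the domain parameters file references.
kubectl create secret generic domain1-weblogic-credentials \
  --from-literal=username=weblogic \
  --from-literal=password=welcome1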


T3 RMI Communication for WebLogic Server Running on Kubernetes

Overview Oracle WebLogic Server supports Java EE and includes several vendor-specific enhancements. It has two RMI implementations and, beyond the standard Java EE-based IIOP RMI, WebLogic Server has a proprietary RMI protocol called T3. This blog describes the configuration aspects of generic RMI that also apply to T3, and also some T3-specific aspects for running WebLogic RMI on Kubernetes. Background T3 RMI is a proprietary WebLogic Server high performance RMI protocol and is a major communication component for WebLogic Server internally, and also externally for services like JMS, EJB, OAM, and many others. WebLogic Server T3 RMI configuration has evolved. It starts with a single multi-protocol listen port and listen address on WebLogic Server known as the default channel. We enhanced the default channel by adding a network access point layer, which allows users to configure multiple ports, as well as different protocols for each port, known as custom channels. When WebLogic Server is running on Kubernetes, the listen port number of WebLogic Server may or may not be the same as the Kubernetes exposed port number. For WebLogic Server running on Kubernetes, a custom channel allows us to map these two port numbers. The following table lists key terms that are used in this blog and provides links to documentation that gives more details. TERMINOLOGY   Listen port The TCP/IP port that WebLogic Server physically binds to. Public port Public port The port number that the caller uses to define the T3 URL. Usually it is the same as the listen port, unless the connection goes through “port mapping Port mapping An application of network address translation (NAT) that redirects a communication request from one address and port number combination to another. See port mapping. Default channel Every WebLogic Server domain has a default channel that is generated automatically by WebLogic Server. See definition. Custom channel Used for segregating different types of network traffic. ServerTemplateMBean Also known as the default channel. Learn more. NetworkAccessPointMBean Also known as a custom channel. Learn more. WebLogic cluster communication WebLogic Server instances in a cluster communicate with one another using either of two basic network technologies: multicast and unicast. Learn more about multicast and unicast. WebLogic transaction coordinator WebLogic Server transaction manager that serves as coordinator of the transaction Kubernetes service See Kubernetes concepts https://kubernetes.io/docs/concepts/services-networking/service/, which defines Kubernetes back end and NodePort. Kubernetes pod IP address Kubernetes assumes that pods can communicate with other pods, regardless of which host they land on. We give every pod its own cluster-private-IP address so you do not need to explicitly create links between pods or mapping container ports to host ports. Learn more. ClusterMBean See ClusterBroadcastChannel. WebLogic Server Listen Address WebLogic Server supports two cluster messaging protocols: multicast and unicast. The WebLogic Server on Kubernetes certification was done using the Flannel network fabric. Currently, we only certify unicast communication. By default WebLogic Server will use the default channel for unicast communication. Users can override it by setting a custom channel on the associated WebLogic Server ClusterMBean. 
As part of the unicast configuration in a WebLogic cluster, a designated listen address and port is required for each WebLogic cluster member so that they can locate each other. By default, the default channel or custom channel has a null listen address and is assigned at run time as 0.0.0.0. In a multinode Kubernetes cluster environment, neither 0.0.0.0 nor localhost will allow other cluster members from different nodes to discover each other. Instead, users can use the Kubernetes pod IP address that the WebLogic Server instance is running on. TCP Load Balancing In general, WebLogic T3 is TCP/IP-based, so it can support TCP load balancing when services are homogeneous, such as in a Kubernetes service with multiple back ends. In WebLogic Server some subsystems are homogeneous, such as JMS and EJB. For example, a JMS front end subsystem can be configured in a WebLogic cluster in which remote JMS clients can connect to any cluster member. By contrast, a JTA subsystem cannot safely use TCP load balancing in transactions that span across multiple WebLogic domains that, in turn, extend beyond a single Kubernetes cluster. The JTA transaction coordinator must establish a direct RMI connection to the server instance that is chosen as the subcoordinator of the transaction when that transaction is either committed or rolled back. The following figure shows a WebLogic transaction coordinator using the T3 protocol to connect to a subcoordinator. The WebLogic transaction coordinator cannot connect to the chosen subcoordinator due to the TCP load balancing. Figure 1: Kubernetes Cluster with Load Balancing Service To support cluster communication between the WebLogic transaction coordinator and the transaction subcoordinator across a Kubernetes environment, the recommended configuration is to have an individual NodePort service defined for each default channel and custom channel. Figure 2: Kubernetes Cluster with One-on-One Service Depending on the application requirements and the WebLogic subsystem used, TCP load balancing might or might not be suitable. Port Mapping and Address Mapping WebLogic Server supports two styles of T3 RMI configuration. One is defined by means of the default channel (see ServerTemplateMBean), and the other is defined by means of the custom channel (see NetworkAccessPointMBean). When running WebLogic Server in Kubernetes, we need to give special attention to the port mapping. When we use NodePort to expose the WebLogic T3 RMI service outside the Kubernetes cluster, we need to map the NodePort to the WebLogic Server listen port. If the NodePort is the same as the WebLogic Server listen port, then users can use the WebLogic Server default channel. Otherwise, users must configure a custom channel that defines a "public port" that matches the NodePort nodePort value, and a “listen port” that matches the NodePort port value. The following graph shows a nonworking NodePort/default channel configuration and a working NodePort/custom channel configuration: Figure 3: T3 External Clients in K8S The following table describes the properties of the default channel versus the corresponding ones in the custom channel:   Default Channel (ServerTemplateMBean) Custom Channel (NetworkAccessPointMBean) Multiple protocol support (T3, HTTP, SNMP, LDAP, and more) Yes No RMI over HTTP tunneling Yes (disable by default) Yes (disable by default) Port mapping No Yes Address Yes Yes Examples of WebLogic T3 RMI configurations WebLogic Server supports several ways to configure T3 RMI. 
The following examples show the common ones.
Using the WebLogic Server Administration Console
The following console page shows a WebLogic Server instance called AdminServer with a listen port of 9001 on a null listen address and with no SSL port. Because this server instance is configured with the default channel, port 9001 will support T3, HTTP, IIOP, SNMP, and LDAP.
Figure 4: T3 RMI via ServerTemplateMBean on the WebLogic Server Administration Console
The following console page shows a custom channel with a listen port value of 7010, a null listen address, and a mapping to public port 30010. By default, the custom channel supports the T3 protocol.
Figure 5: T3 RMI via NetworkAccessPointMBean on the WebLogic Server Administration Console
Using WebLogic RESTful management services
The following shell script will create a custom channel with listen port ${CHANNEL_PORT} and a paired public port ${CHANNEL_PUBLIC_PORT}.
#!/bin/sh
HOST=$1
PORT=$2
USER=$3
PASSWORD=$4
CHANNEL=$5
CHANNEL_PORT=$6
CHANNEL_PUBLIC_PORT=$7
echo "Rest EndPoint URL http://${HOST}:${PORT}/management/weblogic/latest/edit"
if [ $# -eq 0 ]; then
  echo "Please specify HOST, PORT, USER, PASSWORD, CHANNEL, CHANNEL_PORT, CHANNEL_PUBLIC_PORT"
  exit 1
fi
# Start edit
curl -j --noproxy '*' --silent \
  --user $USER:$PASSWORD \
  -H X-Requested-By:MyClient \
  -H Accept:application/json \
  -H Content-Type:application/json \
  -d "{}" \
  -X POST http://$HOST:$PORT/management/weblogic/latest/edit/changeManager/startEdit
# Create the channel, then set its listen port and public port
curl -j --noproxy '*' --silent \
  --user $USER:$PASSWORD \
  -H X-Requested-By:MyClient \
  -H Accept:application/json \
  -H Content-Type:application/json \
  -d "{ name: '${CHANNEL}' }" \
  -X POST http://$HOST:$PORT/management/weblogic/latest/edit/Servers/myServer/networkAccessPoints
curl -j --noproxy '*' --silent \
  --user $USER:$PASSWORD \
  -H X-Requested-By:MyClient \
  -H Accept:application/json \
  -H Content-Type:application/json \
  -d "{ listenPort: ${CHANNEL_PORT}, publicPort: ${CHANNEL_PUBLIC_PORT} }" \
  -X POST http://$HOST:$PORT/management/weblogic/latest/edit/Servers/myServer/networkAccessPoints/${CHANNEL}
# Activate the changes
curl -j --noproxy '*' --silent \
  --user $USER:$PASSWORD \
  -H X-Requested-By:MyClient \
  -H Accept:application/json \
  -H Content-Type:application/json \
  -d "{}" \
  -X POST http://$HOST:$PORT/management/weblogic/latest/edit/changeManager/activate
Using a WLST script
The following WLST script creates a custom T3 channel named t3Channel that has a listen port listen_port and a paired public port public_port.
import sys

host = sys.argv[1]
port = sys.argv[2]
user_name = sys.argv[3]
password = sys.argv[4]
listen_port = sys.argv[5]
public_port = sys.argv[6]
print('custom host : [%s]' % host)
print('custom port : [%s]' % port)
print('custom user_name : [%s]' % user_name)
print('custom password : ********')
print('channel listen port : [%s]' % listen_port)
print('channel public listen port : [%s]' % public_port)
connect(user_name, password, 't3://' + host + ':' + port)
edit()
startEdit()
ls()
cd('/')
cd('Servers')
cd('myServer')
create('t3Channel','NetworkAccessPoint')
cd('NetworkAccessPoints/t3Channel')
set('Protocol','t3')
set('ListenPort',int(listen_port))
set('PublicPort',int(public_port))
print('Channel t3Channel added ...')
activate()
disconnect()
Summary
WebLogic Server uses RMI over the T3 protocol to communicate between WebLogic Server instances and with other Java programs and clients.
When WebLogic Server runs in a Kubernetes cluster, there are special considerations and configuration requirements that need to be taken into account to make the RMI communication work. This blog describes how to configure WebLogic Server and Kubernetes so that RMI communication from outside the Kubernetes cluster can successfully reach the WebLogic Server instances running inside the Kubernetes cluster. For many WebLogic Server features using T3 RMI, such as EJBs, JMS, JTA, and WLST, we support clients inside and outside the Kubernetes cluster. In addition, we support both a single WebLogic domain in a multinode Kubernetes cluster, and multiple WebLogic domains in a multinode Kubernetes cluster as well.
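To tie the port-mapping discussion together, here is a hedged sketch of a Kubernetes NodePort Service that could expose the custom channel from Figure 5 (listen port 7010 mapped to public port 30010) outside the cluster. The service name, namespace, and selector label are placeholders and must match the labels on your WebLogic Server pod:
#!/bin/bash
# Placeholder names and labels; align them with your own WebLogic Server pods.
cat <<'EOF' > t3-channel-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: wls-t3-channel
  namespace: weblogic-domain
spec:
  type: NodePort
  selector:
    app: myServer           # must match the label on the WebLogic Server pod
  ports:
  - name: t3channel
    port: 7010              # the custom channel listen port
    targetPort: 7010
    nodePort: 30010         # the custom channel public port
EOF
kubectl apply -f t3-channel-service.yaml

# An external T3 client can then use a URL such as t3://<node-address>:30010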


The WebLogic Server

WebLogic Server Certification on Kubernetes

We are pleased to announce the certification of Oracle WebLogic Server on Kubernetes! As part of this certification, we are releasing a sample on GitHub to create an Oracle WebLogic Server 12.2.1.3 domain image running on Kubernetes. We are also publishing a series of blogs that describe in detail the WebLogic Server configuration and feature support as well as best practices. A video of a WebLogic Server domain running in Kubernetes can be seen at the WebLogic Server on Kubernetes video. Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It supports a range of container tools, including Docker. Oracle WebLogic Server configurations running in Docker containers can be deployed and orchestrated on Kubernetes platforms. The following versions of Oracle WebLogic Server, JDK, Linux, Kubernetes, Docker, and network fabric are certified for running WebLogic Server configurations on Kubernetes:
- WebLogic Server: 12.2.1.3
- JDK: 8
- Host OS: Oracle Linux 7 UEK 4
- Kubernetes: 1.7.5 and 1.8.0
- Docker: 17.03-ce
- Network Fabric: Flannel v0.9.1-amd64
For additional information about Docker certification with Oracle WebLogic Server, see My Oracle Support Doc ID 2017945.1. For support for running WebLogic Server domains on Kubernetes platforms other than Oracle Linux, or with a network fabric other than Flannel, see My Oracle Support Doc ID 2349228.1. For the most current information on supported configurations, see the Oracle Fusion Middleware Supported System Configurations page on Oracle Technology Network. This certification enables users to create clustered and nonclustered Oracle WebLogic Server domain configurations, including both development and production modes, running on Kubernetes clusters. This certification includes support for the following:
- Running one or more WebLogic domains in a Kubernetes cluster
- Single or multiple node Kubernetes clusters
- WebLogic managed servers in clustered and nonclustered configurations
- WebLogic Server configured clusters (vs. dynamic clusters); see the documentation for details
- The unicast WebLogic Server cluster messaging protocol
- Load balancing HTTP requests using Træfik as the Ingress controller on Kubernetes clusters
- HTTP session replication
- JDBC communication with external database systems
- JMS
- JTA
- JDBC store and file store using persistent volumes
- Inter-domain communication (JMS, transactions, EJBs, and so on)
- Auto scaling of a WebLogic cluster
- Integration with Prometheus monitoring using the WebLogic Monitoring Exporter
- RMI communication from outside and inside the Kubernetes cluster
- Upgrading applications
- Patching WebLogic domains
- Service migration of singleton services
- Database leasing
In this certification of WebLogic Server on Kubernetes, the following configurations and features are not supported:
- WebLogic domains spanning Kubernetes clusters
- Whole server migration
- Use of Node Manager for WebLogic Server lifecycle management (start/stop)
- Consensus leasing
- Dynamic clusters (certification of dynamic clusters will be added at a future date)
- The multicast WebLogic Server cluster messaging protocol
- Multitenancy
- Production redeployment
- Flannel with portmap
We have released a sample to GitHub (https://github.com/oracle/docker-images/tree/master/OracleWebLogic/samples/wls-k8s-domain) that shows how to create and run a WebLogic Server domain on Kubernetes. The README.md in the sample provides all the steps.
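If you want to try it out, a minimal way to get the sample locally is simply to clone the repository given above and follow that README (the directory layout may change over time):
#!/bin/bash
# Clone the samples repository and move into the WebLogic-on-Kubernetes sample.
git clone https://github.com/oracle/docker-images.git
cd docker-images/OracleWebLogic/samples/wls-k8s-domain

# The README.md in this directory walks through building the image and
# creating the Kubernetes resources for the sample domain.
cat README.md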
This sample extends the certified Oracle WebLogic Server 12.2.1.3 developer install image by creating a sample domain and cluster that runs on Kubernetes. The WebLogic domain consists of an administrator server and several managed servers running in a WebLogic cluster. All WebLogic Server share the same domain home, which is mapped to an external volume. The persistent volumes must have the correct read/write permissions so that all WebLogic Server instances have access to the files in the domain home.  Check out the best practices in the blog WebLogic Server on Kubernetes Data Volume Usage, which explains the WebLogic Server services and files that are typically configured to leverage shared storage, and provides full end-to-end samples that show mounting shared storage for a WebLogic domain that is orchestrated by Kubernetes. After you have this domain up and running you can deploy JMS and JDBC resources.  The blog Run a WebLogic JMS Sample on Kubernetes provides a step-by-step guide to configure and run a sample WebLogic JMS application in a Kubernetes cluster.  This blog also describes how to deploy WebLogic JMS and JDBC resources, deploy an application, and then run the application. This application is based on a sample application named 'Classic API - Using Distributed Destination' that is included in the WebLogic Server sample applications. The application implements a scenario in which employees submit their names when they are at work, and a supervisor monitors employee arrival time. Employees choose whether to send their check-in messages to a distributed queue or a distributed topic. These destinations are configured on a cluster with two active managed servers. Two message-driven beans (MDBs), corresponding to these two destinations, are deployed to handle the check-in messages and store them in a database. A supervisor can then scan all of the check-in messages by querying the database. The follow up blog, Run Standalone WebLogic JMS Clients on Kubernetes, expands on the previous blog and demonstrates running standalone JMS clients communicating with each other through WebLogic JMS services, and database-based message persistence. WebLogic Server and Kubernetes each provide a rich set of features to support application deployment. As part of the process of certifying WebLogic Server on Kubernetes, we have identified a set of best practices for deploying Java EE applications on WebLogic Server instances that run in Kubernetes and Docker environments. The blog Best Practices for Application Deployment on WebLogic Server Running on Kubernetes describes these best practices. They include the general recommendations described in Deploying Applications to Oracle WebLogic Server, and also include the application deployment features provided in Kubernetes. One of the most important tasks in providing optimal performance and security of any software system is to make sure that the latest software updates are installed, tested, and rolled out promptly and efficiently with minimal disruption to system availability.  Oracle provides different types of patches for WebLogic Server, such as Patch Set Updates, and One-Off patches.  The patches you install, and the way in which you install them, depends upon your custom needs and environment. In Kubernetes, Docker, and on-premises environments, we use the same OPatch tool to patch WebLogic Server.  
However, with Kubernetes orchestrating the cluster, we can leverage the update strategy options in the StatefulSet controller to roll out the patch from an updated WebLogic Server image.  The blog Patching WebLogic Server in a Kubernetes Environment; explains how. And, of course, a very important aspect of certification is security. We have identified best practices for securing Docker and Kubernetes environments when running WebLogic Server, explained in the blog Security Best Practices for WebLogic Server Running in Docker and Kubernetes. These best practices are in addition to the general WebLogic Server recommendations documented in Securing a Production Environment for Oracle WebLogic Server 12c documentation. In the area of monitoring and diagnostics, we have developed for open source a new tool The WebLogic Monitoring Exporter. WebLogic Server generates a rich set of metrics and runtime state information that provides detailed performance and diagnostic data about the servers, clusters, applications, and other resources that are running in a WebLogic domain. This tool leverages WebLogic’s monitoring and diagnostics capabilities when running in Docker/Kubernetes environments. The blog Announcing The New Open Source WebLogic Monitoring Exporter on GitHub describes how to build the exporter from a Dockerfile and source code in the GitHub project https://github.com/oracle/weblogic-monitoring-exporter. The exporter is implemented as a web application that is deployed to the WebLogic Server managed servers in the WebLogic cluster that will be monitored. For detailed information about the design and implementation of the exporter, see Exporting Metrics from WebLogic Server. Once after the exporter has been deployed to the running managed servers in the cluster and is gathering metrics and statistics, the data is ready to be collected and displayed via Prometheus and Grafana. Follow the blog entry Using Prometheus and Grafana to Monitor WebLogic Server on Kubernetes that steps you through collecting metrics in Prometheus and displaying them in Grafana dashboards. Elasticity (scaling up or scaling down) of a WebLogic Server cluster provides increased reliability of customer applications as well as optimization of resource usage. The WebLogic Server cluster can be automatically scaled by increasing (or decreasing) the number of pods based on resource metrics provided by the WebLogic Diagnostic Framework (WLDF).  When the WebLogic cluster scales up or down, WebLogic Server capabilities like HTTP session replication and service migration of singleton services are leveraged to provide the highest possible availability. Refer to the blog entry Automatic Scaling of WebLogic Clusters on Kubernetes for an illustration of automatic scaling of a WebLogic Server cluster in a Kubernetes cloud environment. In addition to certifying WebLogic Server on Kubernetes, the WebLogic Server team is developing a WebLogic Server Kubernetes Operator that will be released in the near future.  A Kubernetes Operator is "an application specific controller that extends the Kubernetes API to create, configure, and manage instances of complex applications".  Please stay tuned for information on the release of the WebLogic Server Kubernetes Operator. The certification of WebLogic Server on Kubernetes encompasses all the various  WebLogic configurations and capabilities described in this blog. 
Our intent is to enable you to run WebLogic Server in Kubernetes, to run WebLogic Server in the Kubernetes-based Oracle Container Engine that Oracle intends to release shortly, and to enable integration of WebLogic Server applications with applications developed on our Kubernetes-based Container Native Application Development Platform. We hope this information is helpful to customers seeking to deploy WebLogic Server on Kubernetes, and we look forward to your feedback.


Run Standalone WebLogic JMS Clients on Kubernetes

Overview JMS applications are applications that use JMS services to send and receive messages. There are two main types of WebLogic JMS applications: server-side JMS applications and standalone JMS clients. Server-side applications are applications that are running on WebLogic servers or clusters and they are usually Java EE applications like MDBs, servlets and so on. Standalone JMS clients can be applications running on a foreign EE server, desktop applications, or microservices. In my last blog, Run a WebLogic JMS Sample on Kubernetes, we demonstrated WebLogic JMS communication between Java EE applications on Kubernetes and we used file-based message persistence. In this blog, we will expand the previous blog to demonstrate running standalone JMS clients communicating with each other through WebLogic JMS services, and we will use database-based message persistence. First we create a WebLogic domain based on the sample WebLogic domain on GitHub, with an Administrator Server, and a WebLogic cluster. Then we deploy a data source, a JDBC store, and JMS resources to the WebLogic domain on a Kubernetes cluster. After the WebLogic JMS services are ready and running, we create and deploy a Java microservice to the same Kubernetes cluster to send/receive messages to/from the WebLogic JMS destinations. We use REST API and run scripts against the Administration Server pod to deploy the resources which are targeted to the cluster. Creating WebLogic JMS Services on Kubernetes Preparing the WebLogic Base Domain and Data Source If you completed the steps to create the domain, set up the MySQL database, and create the data source as described in the blog Run a WebLogic JMS Sample on Kubernetes, you can go directly to the next section. Otherwise, you need to finish the steps in the following sections of the blog Run a WebLogic JMS Sample on Kubernetes: Section "Creating the WebLogic Base Domain" Section "Setting Up and Running MySQL Server in Kubernetes" Section "Creating a Data Source for the WebLogic Server Domain" Now you should have a WebLogic base domain running on a Kubernetes cluster and a data source which connects to a MySQL database, running in the same Kubernetes cluster. Deploying the JMS Resources with a JDBC Store First, prepare a JSON data file that contains definitions for one database store, one JMS server, and one JMS module. The file will be processed by a Python script to create the resources, one-by-one, using the WebLogic Server REST API. File jms2.json: {"resources": { "jdbc1": { "url": "JDBCStores", "data": { "name": "jdbcStore1", "dataSource": [ "JDBCSystemResources", "ds1" ], "targets": [{ "identity":["clusters", "myCluster"] }] } }, "jms2": { "url": "JMSServers", "data": { "messagesThresholdHigh": -1, "targets": [{ "identity":["clusters", "myCluster"] }], "persistentStore": [ "JDBCStores", "jdbcStore1" ], "name": "jmsserver2" } }, "module": { "url": "JMSSystemResources", "data": { "name": "module2", "targets":[{ "identity": [ "clusters", "myCluster" ] }] } }, "sub2": { "url": "JMSSystemResources/module2/subDeployments", "data": { "name": "sub2", "targets":[{ "identity": [ "JMSServers", "jmsserver2" ] }] } } }} Second, prepare the JMS module file, which contains a connection factory, a distributed queue, and a distributed topic. 
File module2-jms.xml: <?xml version='1.0' encoding='UTF-8'?> <weblogic-jms xmlns="http://xmlns.oracle.com/weblogic/weblogic-jms" xmlns:sec="http://xmlns.oracle.com/weblogic/security" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:wls="http://xmlns.oracle.com/weblogic/security/wls" xsi:schemaLocation="http://xmlns.oracle.com/weblogic/weblogic-jms http://xmlns.oracle.com/weblogic/weblogic-jms/1.1/weblogic-jms.xsd"> <connection-factory name="cf2"> <default-targeting-enabled>true</default-targeting-enabled> <jndi-name>cf2</jndi-name> <transaction-params> <xa-connection-factory-enabled>true</xa-connection-factory-enabled> </transaction-params> <load-balancing-params> <load-balancing-enabled>true</load-balancing-enabled> <server-affinity-enabled>false</server-affinity-enabled> </load-balancing-params> </connection-factory> <uniform-distributed-queue name="dq2"> <sub-deployment-name>sub2</sub-deployment-name> <jndi-name>dq2</jndi-name> </uniform-distributed-queue> <uniform-distributed-topic name="dt2"> <sub-deployment-name>sub2</sub-deployment-name> <jndi-name>dt2</jndi-name> <forwarding-policy>Partitioned</forwarding-policy> </uniform-distributed-topic> </weblogic-jms> Third, copy these two files to the Administration Server pod. Then, in the Administration Server pod, run the Python script to create all the JMS resources: $ kubectl exec $adminPod -- mkdir /u01/wlsdomain/config/jms/ $ kubectl cp ./module2-jms.xml $adminPod:/u01/wlsdomain/config/jms/ $ kubectl cp ./jms2.json $adminPod:/u01/oracle/ $ kubectl exec $adminPod -- python /u01/oracle/run.py createRes /u01/oracle/jms2.json Launch the WebLogic Server Administration Console by going to your browser and entering the URL http://<hostIP>:30007/console in the address bar. Make sure that all the JMS resources are running successfully. Visit the monitoring page of the destination dq2 to check whether it has two members, jmsserver2@managed-server-0@dq2 and jmsserver2@managed-server-1@dq2. Now that the WebLogic JMS services are ready, JMS messages sent to this service will be stored in the MySQL database. Running the WebLogic JMS Client The JMS client pod is a Java microservice which is based on the openjdk8 image packaged with the WebLogic client JAR file. The client-related scripts are on GitHub which include Dockerfile, JMS client Java files and yaml files. NOTE: You need to get wlthint3client.jar from the installed WebLogic directory $WL_HOME/server/lib and put it in the folder jms-client/container-scripts/lib. Step 1: Build the Docker image for JMS clients and the image will contain the compiled JMS client classes which can be run directly. $ cd jms-client $ docker build -t jms-client . Step 2: Create the JMS client pod. $ kubectl create -f jmsclient.yml Run the Java programs to send and receive messages from the WebLogic JMS destinations. Please replace $clientPod with the actual client pod name. Run the sender program to send messages to the destination dq2. $ kubectl exec -it $clientPod java samples.JMSSender By default, the sender sends 10 messages on each run and these messages are distributed to two members of dq2. Check the Administration Console to verify this. Run the receiver program to receive messages from destination dq2. $ kubectl exec -it $clientPod java samples.JMSReceiver dq2 The receiver uses WebLogic JMSDestinationAvailabilityHelper API to get notifications about the distributed queue's membership change, so the receiver can receive messages from both members of dq2. 
For detailed usage of the JMS Destination Availability Helper API, refer to the WebLogic document "Advanced Programming with Distributed Destinations Using the JMS Destination Availability Helper API". Summary In this blog, we expanded our sample Run a WebLogic JMS Sample on Kubernetes to demonstrate using external JMS clients to communicate with WebLogic JMS services running inside a Kubernetes cluster. We leveraged basic Kubernetes facilities to manage WebLogic Server life cycles and used database-based message persistence to persist data beyond the life cycle of the pods. In future blogs, we'll explore hosting a WebLogic JMS cluster in Oracle's upcoming, fully certified, operator-based WebLogic Kubernetes environment. We'll also explore using WebLogic JMS automatic service migration to migrate JMS instances from shutdown pods to running pods.


Patching WebLogic Server in a Kubernetes Environment

Of course, one of the most important tasks in providing optimal performance and security of any software system is to make sure that the latest software updates are installed, tested, and rolled out promptly and efficiently with minimal disruption to system availability. Oracle provides different types of patches for WebLogic Server, such as Patch Set Updates, Security Patch Updates, and One-Off patches. The patches you install, and the way in which you install them, depend upon your custom needs and environment. For WebLogic Server running on Kubernetes, we recently shared on GitHub the steps for creating a WebLogic Server instance with a shared domain home directory that is mapped to a Kubernetes volume. In Kubernetes, Docker, and on-premises environments, we use the same OPatch tool to patch WebLogic Server. However, with Kubernetes orchestrating the cluster, we can leverage the update strategy options in the StatefulSet controller to roll out the patch from an updated WebLogic Server image. In this blog, I explain how. Prerequisites Create the WebLogic Server environment on Kubernetes based on the instructions provided at https://github.com/oracle/docker-images/tree/master/OracleWebLogic/samples/wls-k8s-domain. The patching processes described below are based on the environment created. Make sure that the WebLogic Server One-Off patches or Patch Set Updates are accessible from the environment created in the preceding step. Patch Set Updates and One-Off Patches Patch Set Updates are cumulative patches that include security fixes and critical fixes. They are used to patch Oracle WebLogic Server only and are released on a regular basis. For additional details related to Patch Set Updates, see Fusion Middleware Patching with OPatch. One-Off patches are targeted to solve known issues or to add feature enhancements. For information about how to download patches, see My Oracle Support. Kubernetes Update Strategies for StatefulSets There are three different update strategy options available for StatefulSets that you can use for the following tasks: To configure and disable automated rolling updates for container images To configure resource requests or limits, or both To configure labels and annotations of the pods For details about the Kubernetes StatefulSets update strategies, see Update StatefulSets at the following location: https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#updating-statefulsets. These update strategies are as follows: On Delete Manually delete the pods in any sequence based on your environment configuration. When Kubernetes detects that a pod is deleted, it creates a new pod based on the specification defined for that StatefulSet. This is the default update strategy when a StatefulSet is created. Rolling Updates Perform a rolling update of all the pods in the cluster. Kubernetes deletes and re-creates all the pods defined in the StatefulSet controller one at a time, in reverse ordinal order. Rolling Update + Partition A rolling update can also be partitioned, which is determined by the partition value in the specification defined for the StatefulSet. For example, if there are four pods in the cluster and the partition value is 2, only two pods are updated. The other two pods are updated only if the partition value is set to 4 or the partition attribute is removed from the specification.
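Before choosing one of these strategies, it can be useful to see which one a StatefulSet is currently using and whether a partition value is set. This is a small sketch that assumes the Managed Server StatefulSet from the wls-k8s-domain sample is named managed-server; adjust the name for your environment.
# Show the update strategy (and any partition) currently configured on the StatefulSet
$ kubectl get statefulset managed-server -o jsonpath='{.spec.updateStrategy}'
# Display the field-level documentation for the rolling update settings described above
$ kubectl explain statefulset.spec.updateStrategy.rollingUpdate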
Methods for Updating the StatefulSet Controller In a StatefulSet, there are three attributes that need to be verified prior to roll out: image, imagePullPolicy, and updateStrategy.  Any one of these attributes can be updated by using the in-place update approach.  These in-place options are: kubectl apply Update the yaml file that is used for creating the StatefulSet, and execute kubectl apply -f <statefulset yml file> to roll out the new configuration to the Kubernetes cluster. Sample updated yaml file: kubectl edit Directly update the attribute value using kubectl edit statefulset <statefulset name> Sample edit of a StatefulSet kubectl patch Directly update the attribute using kubectl patch. An example command that updates updateStrategy to RollingUpdate: An example command that updates updateStrategy to RollingUpdate with the partition option: An example command to use JSON format to update the image attribute to wls-k8s-domain-v1: Kubernetes Dashboard Drill down to the specific StatefulSet from the menu path and update the value of image, imagePullPolicy, and updateStrategy. Steps to Apply One-Off Patches and Patch Set Updates with an External Domain Home To create a patched WebLogic image with a new One-Off patch and apply it to all the pods: Complete the steps in Example of Image with WLS Domain in Github to create a patched WebLogic image. If the Kubernetes cluster is configured on multiple nodes, and the newly created image is not available in the Docker registry, complete the steps provided in docker save and docker load to copy the image to all the nodes in the cluster. Update the controller definition using one of the 3 methods described in Methods for Updating the StatefulSet Controller.  If you want Kubernetes to automatically apply the new image on all the pods for the StatefulSet, you can set the updateStrategy value to RollingUpdate. Apply the new image on the admin-server pod. Because there is only one Administration Server in the cluster, the preferred option is to use the RollingUpdate update strategy option. After the change is committed to the StatefulSet controller, Kubernetes will delete and re-create the Administration Server pod automatically. Apply the new image to all the pods defined in the Managed Server StatefulSet: a)     For the OnDelete option, get the list of the pods in the cluster and change the updateStrategy value to OnDelete. You need to manually delete all the pods in the cluster to roll out the new image, using the following commands: b)    For the RollingUpdate option, after you change the updateStrategy value to RollingUpdate, Kubernetes will delete and re-create the pods created for the Managed Server instances in a rolling fashion, but in reverse ordinal order, as shown in Figure 1 below. c)     If the Partition attribute is added to the RollingUpdate value, the rolling update order depends on the partition value. Kubernetes will roll out the new image to the pods with the ordinal whereby the order is greater than or equal to the partition value. Figure 1: Before and After Patching the Oracle Home Image Using the Kubernetes Statefulset RollingUpdate Strategy Roll Back In case you need to rollback the patch, you use the same steps as when applying a new image; that is, by changing the image value to the one for the original image.  You should retain at least two or three versions of the images in the registry. 
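Whether you are rolling a patched image out or rolling back to a previous one, the change to the StatefulSet is the same kind of in-place update described above. The commands below are a minimal sketch of that workflow; they assume the Managed Server StatefulSet is named managed-server and the patched image is tagged wls-k8s-domain-v1, so adjust names, values, and ordinals for your environment.
# Switch the update strategy to RollingUpdate
$ kubectl patch statefulset managed-server -p '{"spec":{"updateStrategy":{"type":"RollingUpdate"}}}'
# RollingUpdate with a partition, so only pods with an ordinal greater than or equal to 2 are updated
$ kubectl patch statefulset managed-server -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'
# JSON-format patch that points the pod template at the patched (or previous) image
$ kubectl patch statefulset managed-server --type='json' -p='[{"op":"replace","path":"/spec/template/spec/containers/0/image","value":"wls-k8s-domain-v1"}]'
# With the OnDelete strategy, delete each Managed Server pod manually so it is re-created from the new image
$ kubectl delete pod managed-server-1
$ kubectl delete pod managed-server-0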
Monitoring There are several ways you can monitor the rolling update progress of your WebLogic domain: 1. Use the kubectl command to check the pod status. For example, during a rolling update of two Managed Server pods you can watch the progress with kubectl get pod -o wide and kubectl rollout status statefulset. 2. You can use the REST API to query the Administration Server to monitor the status of the Managed Servers during the rolling update. For information about how to use the REST API to monitor WebLogic Server, see Oracle WebLogic RESTful Management Services: From Command Line to JavaFX. A sketch of such a query appears at the end of this post. 3. You can use the WebLogic Server Administration Console to monitor the status of the update. The Console shows the server instance wls-domain-ms-1 stopped while it is being updated; once its update is done, the update switches to wls-domain-ms-0. 4. A fourth way is to use the Kubernetes Dashboard. From your browser, enter the URL https://<hostname>:<nodePort>. Summary The process for applying a One-Off Patch or Patch Set Updates to WebLogic Server on Kubernetes is the same as when running in a bare metal environment. When you use a Kubernetes StatefulSet, we recommend that you create a new patched image by extending the previous version of the image and then use the update strategy (OnDelete, RollingUpdate, or "RollingUpdate + partition") that is best suited for your environment. In a future blog, we will explore the patch options available with the Kubernetes Operator. You might be able to integrate some of the manual steps shared above with the Operator to further simplify the overall WebLogic Server patching process when running on Kubernetes.
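The following is a sketch of the REST query mentioned in item 2 above. It assumes the weblogic/weblogic1 credentials and the Administration Server NodePort 30007 used in the wls-k8s-domain samples, and it reads the serverLifeCycleRuntimes collection so that each server's state is reported during the rolling update; trim the fields parameter as needed.
# Query the name and state of every server in the domain through the Administration Server
$ curl -s --user weblogic:weblogic1 -H Accept:application/json "http://<hostIP>:30007/management/weblogic/latest/domainRuntime/serverLifeCycleRuntimes?links=none&fields=name,state"
Repeating the query while the StatefulSet rolls shows each Managed Server leave and then return to the RUNNING state.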


Best Practices for Application Deployment on WebLogic Server Running on Kubernetes

Overview WebLogic Server and Kubernetes each provide a rich set of features to support application deployment. As part of the process of certifying WebLogic Server on Kubernetes, we have identified a set of best practices for deploying Java EE applications on WebLogic Server instances that run in Kubernetes and Docker environments. This blog describes those best practices. They include the general recommendations described in Deploying Applications to Oracle WebLogic Server, and also include the application deployment features provided in Kubernetes. Application Deployment Terminology  Both WebLogic Server and Kubernetes use similar terms for resources they manage, but with different meanings. For example, the notion of application or deployment has slightly different meaning, which can create confusion. The table below lists key terms that are used in this blog and how they are defined differently in WebLogic Server and Kubernetes. See Kubernetes Reference Documentation for a standardized glossary with a list of Kubernetes terminology. Table 1 Application Deployment Terminology   WebLogic Server Kubernetes Application A Java EE application (an enterprise application or Web application) or a standalone Java EE module (such as an EJB or resource adapter) that has been organized according to the Java EE specification. An application unit includes a web application, enterprise application, enterprise javaBean, resource adapter, web service, Java EE library or an optional package. An application unit may also include JDBC, JMS, WLDF modules or a client application archive. A software that is containerized and managed in a cluster environment by Kubernetes. WebLogic Server is an example of a Kubernetes application. Application Deployment A process of making a Java Enterprise Edition (Java EE) application or module available for processing client requests in WebLogic Server. A way of packaging, instantiating, running and communicating the containerized applications in a cluster environment. Kubernetes also has an API object called a Deployment that manages a replicated application. Deployment Tool weblogic.Deployer utility Administration Console WebLogic Scripting Tool (WLST) wldeploy Ant task weblogic-maven-plugin Maven plug-in WebLogic Deployment API Auto-deployment feature kubeadm kubectl minikube Helm Chart kops Cluster A WebLogic cluster consists of multiple WebLogic Server instances running simultaneously and working together to provide increased scalability and reliability. A cluster appears to clients to be a single WebLogic Server instance. The server instances that constitute a cluster can run on the same machine, or be located on different machines. You can increase a cluster's capacity by adding additional server instances to the cluster on an existing machine, or you can add machines to the cluster to host the incremental server instances. Each server instance in a cluster must run the same version of WebLogic Server. A Kubernetes cluster consists of a master node and a set of worker nodes. In a production environment these run in a distributed setup on multiple nodes. For testing purposes, all the components can run on the same node (either a physical host or a virtual machine). Within the context of this blog, the following definitions are used: The application mentioned in this page is the Java EE application.  The application deployment in this page is the Java EE application deployment on WebLogic Server.  A Kubernetes application is the software managed by Kubernetes. 
For example, a WebLogic Server. Summary of Best Practices for Application Deployment in Kubernetes In this blog, the best practices for application deployment on WebLogic Server running in Kubernetes includes several parts: Distributing Java EE application deployment files to a Kubernetes environment so the WebLogic Server containers in pods can access the deployment files. Deploying Java EE applications in a Kubernetes environment so the applications are available for the WebLogic Server containers in pods to process the client requests. Integrating Kubernetes applications with the ReadyApp framework to check the Kubernetes applications' readiness status. General Java EE Application Deployment Best Practices Overview Before drilling down into the best practices details, let’s briefly review the general Java EE application deployment best practices, which are described in Deploying Applications to Oracle WebLogic Server. The general Java EE application deployment process involves multiple parts, mainly: Preparing the Java EE application or module. See Preparing Applications and Modules for Deployment, including Best Practices for Preparing Deployment Files. Configuring the Java EE application or module for deployment. See Configuring Applications for Production Deployment, including Best Practices for Managing Application Configuration. Exporting the Java EE application or module for deployment to a new environment. See Exporting an Application for Deployment to New Environments, including Best Practices for Exporting a Deployment Configuration. Deploying the Java EE application or module. See Deploying Applications and Modules with weblogic.Deployer, including Best Practices for Deploying Applications. Redeploying the Java EE application or module. See Redeploying Applications in a Production Environment. Distributing Java EE Application Deployment Files in Kubernetes Assume the WebLogic Server instances have been deployed into Kubernetes and Docker environments. Before you deploy the Java EE applications on WebLogic Server instances, the Java EE application deployment files, for example, the EAR, WAR, RAR files,  need to be distributed to the locations that can be accessed by the WebLogic Server instances in the pods. In Kubernetes, the deployment files can be distributed by means of a Docker images, or manually by an administrator. Pre-distribution of Java EE Applications in Docker Images A Docker image can contain a pre-built WebLogic Server domain home directory that has one or more Java EE applications deployed to it. When the containers in the pods are created and started using the same Docker image, all containers should have the same Java EE applications deployed to them. If the Java EE applications in the Docker image are updated to a newer version, a new Docker image can be created on top of the current existing Docker image, as shown in Figure 1. However as newer application versions are introduced, additional layers are needed in the image, which consumes more resources, such as disk space. Consequently, having an excessive number of layers in the Docker image is not recommended. Figure 1 Pre-distribution of Java EE Application in layered Docker Images Using Volumes in a Kubernetes Cluster Application files can be shared among all the containers in all the pods by mapping the application volume directory in the pods to an external directory on the host. This makes the application files accessible to all the containers in the pods. 
When using volumes, the application files need to be copied only once to the directory on the host. There is no need to copy the files to each pod. This saves disk space and the deployment time especially for large applications. Using volumes is recommended for distributing the Java EE applications to WebLogic Server instances running in Kubernetes. Figure 2 Mounting Volumes to an External Directory As shown in Figure 2, every container in each of the three pods has an application volume directory /shared/applications. Each of these directories is mapped to the same external directory on the host: /host/apps. After the administrator puts the application file simpleApp.war in the /host/apps directory on the host, this file can then be accessed by the containers in each pod from the /shared/applications directory. Note that Kubernetes supports different volume types. For information about determining the volume type to use, creating the volume directory, determining the medium that backs it, and identifying the contents of the volume, see Volumes in the Kubernetes documentation. Best Practices for Distributing Java EE Application Deployment Files in Kubernetes Use volumes to persist and share the application files across the containers in all pods. On-disk files in a container are ephemeral. When using a pre-built WebLogic Server domain home in a Docker image, use a volume to store the domain home directory on the host. A sample WebLogic domain wls-k8s-domain that includes a pre-built WebLogic Server domain home directory is available from GitHub at https://github.com/oracle/docker-images/tree/master/OracleWebLogic/samples/wls-k8s-domain Store the application files in a volume whose location is separate from the domain home volume directory on the host. A deployment plan generated for an existing Java EE web application that is deployed to WebLogic Server can be stored in a volume as well. For more details about using the deployment plan,  see the tutorial at http://www.oracle.com/webfolder/technetwork/tutorials/obe/fmw/wls/12c/09-DeployPlan--4464/deployplan.htm. By default all processes in WebLogic Server pods are running with user ID 1000 and group ID 1000. Make sure that the proper access permissions are set to the application volume directory so that user ID 1000 or group ID 1000 has read and write access to the application volume directory. Java EE Application Deployment in Kubernetes After the application deployment files are distributed throughout the Kubernetes cluster, you have several WebLogic Server deployment tools to choose from for deploying the Java EE applications to the containers in the pods. WebLogic Server supports the following deployment tools for deploying, undeploying and redeploying the Java EE applications: WebLogic Administration Console WebLogic Scripting Tool (WLST) weblogic.Deployer utility REST API wldeploy Ant task The WebLogic Deployment API which allows you to perform deployment tasks programmatically using Java classes. The auto-deployment feature. When auto-deployment is enabled, copying an application into the /autodeploy directory of the Administration Server causes that application to be deployed automatically. Auto-deployment is intended for evaluation or testing purposes in a development environment only For more details about using these deployment tools, see Deploying Applications to Oracle WebLogic Server. These tools can also be used in Kubernetes. 
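One of the best practices above is to keep generated deployment plans on the application volume; such a plan can then be supplied at deployment time with the -plan option of weblogic.Deployer. The command below is only an illustration: the admin URL, credentials, application path, and cluster name match the wls-k8s-domain samples shown next, while the plan path is hypothetical.
# Deploy the application together with a deployment plan stored on a shared volume (plan path is illustrative)
$ java weblogic.Deployer -adminurl t3://admin-server:8001 -user weblogic -password weblogic1 -name simpleApp -targets myCluster -deploy /shared/applications/simpleApp.war -plan /shared/plans/simpleApp-plan.xml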
The following samples show multiple ways to deploy and undeploy an application simpleApp.war in a WebLogic cluster myCluster: Using WLST in a Dockerfile Using the weblogic.Deployer utility Using the REST API Note that the environment in which the deployment command is run is created based upon the sample WebLogic domain wls-k8s-domain available on GitHub at https://github.com/oracle/docker-images/tree/master/OracleWebLogic/samples/wls-k8s-domain. In this environment: A sample WLS 12.2.1.3 domain and cluster are created by extending the Oracle WebLogic developer install image and running it in Kubernetes. The WebLogic domain (for example base_domain) consists of an Administration Server and several Managed Servers running in the WebLogic cluster myCluster. Each WebLogic Server is started in a container. Each pod has one WebLogic Server container. For details about the wls-k8s-domain sample, see the GitHub page. Each pod has one domain home volume directory (for example /u01/wlsdomain). This domain home volume directory is mapped to an external directory (for example /host/domain). The sample WLS 12.2.1.3 domain is created under this external directory. Each pod can have an application volume directory (for example /shared/applications) created in the same way as the domain home volume directory. This application volume directory is mapped to an external directory (for example /host/apps). The Java EE applications can be distributed to this external directory. Sample of Using Offline WLST in a Dockerfile to Deploy a Java EE Application In this sample, a Dockerfile is used for building an application Docker image. This application Docker image extends a wls-k8s-domain image that creates the sample wls-k8s-domain domain. This Dockerfile also calls WLST with a py script to update the sample wls-k8s-domain domain configuration with a new application deployment in offline mode. # Dockerfile # Extends wls-k8s-domain FROM wls-k8s-domain # Copy the script files and call a WLST script. COPY container-scripts/* /u01/oracle/ # Run a py script to add a new application deployment into the domain configuration RUN wlst /u01/oracle/app-deploy.py The script app-deploy.py is called to deploy the application simpleApp.war using the offline WLST APIs: # app-deploy.py # Read the domain from its home directory (for example, /u01/wlsdomain/base_domain) readDomain(domainhome) # Create the application deployment # ================== cd('/') app = create('simpleApp', 'AppDeployment') app.setSourcePath('/shared/applications/simpleApp.war') app.setStagingMode('nostage') # Assign the application to the cluster # ================================= assign('AppDeployment', 'simpleApp', 'Target', 'myCluster') # Update the domain, close it, and exit # ================================= updateDomain() closeDomain() exit() The application is deployed during the application Docker image build phase. When a WebLogic Server container is started, the simpleApp application is started and is ready to service client requests. Sample of Using weblogic.Deployer to Deploy and Undeploy a Java EE Application in Kubernetes In this sample, the application simpleApp.war exists in the external directory /host/apps, which is located on the host as described in the prior section, Using Volumes in a Kubernetes Cluster.
The following commands show running the weblogic.Deployer utility in the Administration Server pod: # Find the pod id for the Admin Server pod: admin-server-1238998015-f932w > kubectl get pods NAME READY STATUS RESTARTS AGE admin-server-1238998015-f932w 1/1 Running 0 11m managed-server-0 1/1 Running 0 11m managed-server-1 1/1 Running 0 8m # Find the Admin Server service name that can be connected to from the deployment command. # Here the Admin Server service name is admin-server, which has port 8001. > kubectl get services NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE admin-server 10.102.160.123 <nodes> 8001:30007/TCP 11m kubernetes 10.96.0.1 <none> 443/TCP 39d wls-service 10.96.37.152 <nodes> 8011:30009/TCP 11m wls-subdomain None <none> 8011/TCP 11m # Execute /bin/bash in the Admin Server pod > kubectl exec -it admin-server-1238998015-f932w /bin/bash # Once in the Admin Server pod, set up the WebLogic environment, then run weblogic.Deployer # to deploy the simpleApp.war located in the /shared/applications directory to # the cluster "myCluster" ]$ cd /u01/wlsdomain/base_domain/bin ]$ . setDomainEnv.sh ]$ java weblogic.Deployer -adminurl t3://admin-server:8001 -user weblogic -password weblogic1 -name simpleApp -targets myCluster -deploy /shared/applications/simpleApp.war The following URL verifies that the Java EE application deployment to the WebLogic cluster completed successfully: # Kubernetes routes the traffic to both managed-server-0 and managed-server-1 via the wls-service port 30009. http://<hostIP>:30009/simpleApp/Scrabble.jsp The following command uses the weblogic.Deployer utility to undeploy the application. Note its similarity to the steps for deployment: # Execute /bin/bash in the Admin Server pod > kubectl exec -it admin-server-1238998015-f932w /bin/bash # Undeploy the simpleApp ]$ cd /u01/wlsdomain/base_domain/bin ]$ . setDomainEnv.sh ]$ java weblogic.Deployer -adminurl t3://admin-server:8001 -user weblogic -password weblogic1 -undeploy -name simpleApp Sample of Using REST APIs to Deploy and Undeploy a Java EE Application in Kubernetes In this sample, the application simpleApp.war has already been distributed to the host directory /host/apps. This host directory, in turn, mounts to the application volume directory /shared/applications, which is in the pod admin-server-1238998015-f932w. The following example shows executing a curl command in the pod admin-server-1238998015-f932w. This curl command sends a REST request to the Administration Server using NodePort 30007 to deploy the simpleApp to the WebLogic cluster myCluster.
# deploy simpleApp.war file to the WebLogic cluster > kubectl exec admin-server-1238998015-f932w -- curl -v --user weblogic:weblogic1 \           -H X-Requested-By:MyClient \           -H Content-Type:application/json \           -d "{ name: 'simpleApp', \                 sourcePath: '/shared/applications/simpleApp.war', \                 targets: [ { identity: [ 'clusters', 'myCluster' ] } ] }" \           -X POST http://<hostIP>:30007/management/weblogic/latest/edit/appDeployments The following command uses the REST API to undeploy the application: # undeploy simpleApp.war file from the WebLogic cluster > kubectl exec admin-server-1238998015-f932w -- curl -v --user weblogic:weblogic1 \           -H X-Requested-By:MyClient \           -H Accept:application/json \           -X DELETE http://<hostIP>:30007/management/wls/latest/deployments/application/id/simpleApp Best Practices for Deploying Java EE Applications in Kubernetes Deploy Java EE applications or modules to a WebLogic cluster, instead of individual WebLogic Server instances. This simplifies scaling the WebLogic cluster later because changes to deployment strategy are not necessary. WebLogic Server deployment tools can be used in the Kubernetes environment. When updating an application, follow the same steps as described above to distribute and deploy the application. When using a pre-built WebLogic Server domain home in a Docker image, deploying the applications to the domain automatically updates the domain configuration. However deploying applications this way results in the domain configuration in the pods to become out-of-sync with the domain configuration in the Docker image. You can avoid this synchronization issue whenever possible by including the required applications in the pre-built domain home in the Docker image. This way you can avoid extra deployment steps later on. Integrating ReadyApp Framework in Kubernetes Readiness Probe Kubernetes provides a flexible approach to configuring load balancers and frontends in a way that isolates clients from the details of how services are deployed. As part of this approach, Kubernetes performs and reacts to a readiness probe to determine when a container is ready to accept traffic. By contrast, WebLogic Server provides the ReadyApp framework, which reports whether the WebLogic Server instance startup is completed and ready to service client requests. The ReadyApp framework uses two states: READY and NOT READY. The READY state means that not only is a WebLogic Server instance in a RUNNING state, but also that all applications deployed on the WebLogic Server instance are ready to service requests. When in the NOT READY state, the WebLogic Server instance startup is incomplete and is unable to accept traffic. When starting a WebLogic Server container in a Kubernetes environment, you can use a Kubernetes readiness probe to access the ReadyApp framework on WebLogic Server. When the ReadyApp framework reports a READY state of a WebLogic Server container startup, the readiness probe notifies Kubernetes that the traffic to the WebLogic Server container may begin. The following example shows how to use the ReadyApp framework integrated in a readiness probe to determine whether a WebLogic Server container running on the port 8011 is ready to accept traffic. apiVersion: apps/v1beta1 kind: StatefulSet metadata: [...] spec:   [...]   template:     [...]     spec:       containers:         [...]         
readinessProbe:           failureThreshold: 3           httpGet:             path: /weblogic/ready             port: 8011             scheme: HTTP [...] The ReadyApp framework on WebLogic Server can be accessed from the URL http://<hostIP>:<port>/weblogic/ready. When WebLogic Server is running, this URL returns a page with either a status 200 (READY) or 503 (NOT READY). When WebLogic Server is not running, an Error 404 page appears. Similar to WebLogic Server, other Kubernetes applications can register with the ReadyApp framework and use a readiness probe to check the state of the ReadyApp framework on those applications. See Using the ReadyApp Framework for information about how to register an application with the ReadyApp framework. Best Practices for Integrating ReadyApp Framework in Kubernetes Readiness Probe We recommend registering Kubernetes applications with the ReadyApp framework and using a readinessProbe to check the status of the ReadyApp framework to determine whether the applications are ready to service requests. Kubernetes routes traffic to those applications only when the ReadyApp framework reports a READY state. Conclusion When integrating WebLogic Server in Kubernetes and Docker environments, customers can use the existing powerful WebLogic Server deployment tools to deploy their Java EE applications onto WebLogic Server instances running in Kubernetes. Customers can also use Kubernetes features to manage WebLogic Server: they can use volumes to share the application files with all the containers among all pods in a Kubernetes cluster, use a readinessProbe to monitor WebLogic Server startup state, and more. This integration not only allows customers to support flexible deployment scenarios that fit into their company's business practices, but also provides ways to quickly deploy WebLogic Server in a cloud environment, to autoscale it on the fly, and to update it seamlessly.
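As a quick sanity check of the ReadyApp endpoint described above, you can request it directly and inspect only the HTTP status code. This is a sketch that assumes the Managed Server NodePort 30009 used in the earlier samples; any address that reaches a WebLogic Server instance works.
# Returns 200 while the server and its applications are READY, 503 while NOT READY
$ curl -s -o /dev/null -w "%{http_code}\n" http://<hostIP>:30009/weblogic/ready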


WebLogic Server on Kubernetes Data Volume Usage

As part of certifying WebLogic Server on Kubernetes, we have identified best practices for sharing file data among WebLogic Server pods that are running in a Kubernetes environment. In this blog, I review the WebLogic Server services and files that are typically configured to leverage shared storage, and I provide full end-to-end samples, which you can download and run, that show mounting shared storage for a WebLogic domain that is orchestrated by Kubernetes. WebLogic Server Persistence in Volumes When running WebLogic Server on Kubernetes, refer to the blog Docker Volumes in WebLogic for information about the advantages of using data volumes. That blog also identifies the WebLogic Server artifacts that are good candidates for being persisted in those data volumes. Kubernetes Solutions In a Kubernetes environment, pods are ephemeral. To persist data, Kubernetes provides the Volume abstraction, and the PersistentVolume (PV) and PersistentVolumeClaim (PVC) API resources. Based on the official Kubernetes definitions [Kubernetes Volumes and Kubernetes Persistent Volumes and Claims], a PV is a piece of storage in the cluster that has been provisioned by an administrator, and a PVC is a request for storage by a user. Therefore, PVs and PVCs are independent entities outside of pods. They can be easily referenced by a pod for file persistence and file sharing among pods inside a Kubernetes cluster. When running WebLogic Server on Kubernetes, using PVs and PVCs to handle shared storage is recommended for the following reasons: Usually WebLogic Server instances run in pods on multiple nodes that require access to a shared PV. The life cycle of a WebLogic Server instance is not limited to a single pod. PVs and PVCs provide more control, for example, the ability to specify: access modes for concurrent read/write management, mount options provided by volume plugins, storage capacity requirements, reclaim policies for resources, and more. Use Cases of Kubernetes Volumes for WebLogic Server To see the details about the samples, or to run them locally, please download the examples and follow the steps provided below. Software Versions Host machine: Oracle Linux 7u3 UEK4 (x86-64) Kubernetes v1.7.8 Docker 17.03 CE Prepare Dependencies Build the oracle/weblogic:12.2.1.3-developer image locally based on the Dockerfile and scripts at https://github.com/oracle/docker-images/tree/master/OracleWebLogic/dockerfiles/12.2.1.3/. Download the WebLogic Kubernetes domain sample source code from https://github.com/oracle/docker-images/tree/master/OracleWebLogic/samples/wls-k8s-domain. Put the sample source code in a local folder named wls-k8s-domain. Build the WebLogic domain image locally based on the Dockerfile and scripts. $ cd wls-k8s-domain $ docker build -t wls-k8s-domain . For Use Case 2, below, prepare an NFS server and a shared directory by entering the following commands (in this example I use machine 10.232.128.232). Note that Use Case 1 uses a host path instead of NFS and does not require this step. # systemctl start rpcbind.service # systemctl start nfs.service # systemctl start nfslock.service $ mkdir -p /scratch/nfsdata $ chmod o+rw /scratch/nfsdata # echo "/scratch/nfsdata *(rw,fsid=root,no_root_squash,no_subtree_check)" >> /etc/exports By default, in the WebLogic domain wls-k8s-domain, all processes in pods that contain WebLogic Server instances run with user ID 1000 and group ID 1000.
Proper permissions need to be set to the external NFS shared directory to make sure that user ID 1000 and group ID 1000 have read and write permission to the NFS volume. To simplify the permissions management in the examples, we grant read and write permission to others to the shared directory as well. Use Case 1: Host Path Mapping at Individual Machine with a Kubernetes Volume The WebLogic domain consists of an Administration Server and multiple Managed Servers, each running inside its own pod. All pods have volumes directly mounted to a folder on the physical machine. The domain home is created in a shared folder when the Administration Server pod is first started. At runtime, all WebLogic Server instances, including the Administration Server, share the same domain home directory via a mounted volume. Note: This example runs on a single machine, or node, but this approach also works when running the WebLogic domain across multiple machines. When running on multiple machines, each WebLogic Server instance must share the same directory. In turn, the host path can refer to this directory, thus access to the volume is controlled by the underlying shared directory. Given a set of machines that are already set up with a shared directory, this approach is simpler than setting up an NFS client (although maybe not as portable). To run this example, complete the following steps: Prepare the yml file for the WebLogic Administration Server. Edit wls-admin.yml to mount the host folder /scratch/data to /u01/wlsdomain in the Administration Server pod: apiVersion: apps/v1beta1 kind: Deployment metadata: name: admin-server spec: replicas: 1 template: metadata: labels: app: admin-server spec: containers: - name: admin-server image: wls-k8s-domain imagePullPolicy: Never command: ["sh"] args: ["/u01/oracle/startadmin.sh"] readinessProbe: httpGet: path: /weblogic/ready port: 8001 initialDelaySeconds: 15 timeoutSeconds: 5 ports: - containerPort: 8001 env: - name: WLUSER valueFrom: secretKeyRef: name: wlsecret key: username - name: WLPASSWORD valueFrom: secretKeyRef: name: wlsecret key: password volumeMounts: # name must match the volume name below - name: domain-home mountPath: "/u01/wlsdomain" volumes: - name: domain-home hostPath: path: /scratch/data type: Directory Prepare the yml file for the Managed Servers. 
Edit wls-stateful.yml to mount the host folder /scratch/data to /u01/wlsdomain in the Managed Server pods: apiVersion: apps/v1beta1 # for versions before 1.6.0 use extensions/v1beta1 kind: StatefulSet metadata: name: managed-server spec: serviceName: wls-subdomain replicas: 2 template: metadata: name: ms labels: app: managed-server spec: subdomain: wls-subdomain containers: - name: managed-server image: wls-k8s-domain imagePullPolicy: Never command: ["sh"] args: ["/u01/oracle/startms.sh"] readinessProbe: httpGet: path: /weblogic/ready port: 8011 initialDelaySeconds: 15 timeoutSeconds: 5 ports: - containerPort: 8011 env: - name: JAVA_OPTIONS value: "-Dweblogic.StdoutDebugEnabled=true" - name: USER_MEM_ARGS value: "-Xms64m -Xmx256m " - name: MY_POD_IP valueFrom: fieldRef: fieldPath: status.podIP - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: DNS_DOMAIN_NAME value: "wls-subdomain" - name: WLUSER valueFrom: secretKeyRef: name: wlsecret key: username - name: WLPASSWORD valueFrom: secretKeyRef: name: wlsecret key: password volumeMounts: # name must match the volume name below - name: domain-home mountPath: "/u01/wlsdomain" volumes: - name: domain-home hostPath: path: /scratch/data type: Directory Create the Administration Server and Managed Server pods with the shared volume. These WebLogic Server instances will start from the mounted domain location. $ kubectl create -f wls-admin.yml $ kubectl create -f wls-stateful.yml Use Case 2: NFS Sharing with Kubernetes PV and PVC This example shows a WebLogic Server cluster with one Administration Server and several Managed Server instances, each server residing in a dedicated pod. All the pods have volumes mounted to a central NFS server that is located in a physical machine that the pods can reach. The first time the Administration Server pod is started, the WebLogic domain is created in the shared NFS folder. At runtime, all WebLogic Server instances, including the Administration Server, share the same domain home directory via a mounted volume by PV and PVC. In this sample we have the NFS server on host 10.232.128.232, which has a read/write export to all external hosts on /scratch/nfsdata. Prepare the PV. Edit pv.yml to make sure each WebLogic Server instance has read and write access to the NFS shared folder: kind: PersistentVolume apiVersion: v1 metadata: name: pv1 labels: app: wls-domain spec: storageClassName: manual capacity: storage: 10Gi accessModes: - ReadWriteMany persistentVolumeReclaimPolicy: Recycle # Retain, Recycle, Delete nfs: # Please use the correct NFS server host name or IP address server: 10.232.128.232 path: "/scratch/nfsdata" Prepare the PVC. Edit pvc.yml: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: wlserver-pvc-1 labels: app: wls-server spec: storageClassName: manual accessModes: - ReadWriteMany resources: requests: storage: 10Gi Kubernetes will find the matching PV for the PVC, and bind them together [Kubernetes Persistent Volumes and Claims]. Create the PV and PVC: $ kubectl create -f pv.yml $ kubectl create -f pvc.yml Then check the PVC status to make sure it binds to the PV: $ kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE wlserver-pvc-1 Bound pv1 10Gi RWX manual 7s Prepare the yml file for the Administration Server. It has a reference to the PVC wlserver-pvc-1. 
Edit wls-admin.yml to mount the NFS shared folder to /u01/wlsdomain in the WebLogic Server Administration Server pod: apiVersion: apps/v1beta1 kind: Deployment metadata: name: admin-server spec: replicas: 1 template: metadata: labels: app: admin-server spec: containers: - name: admin-server image: wls-k8s-domain imagePullPolicy: Never command: ["sh"] args: ["/u01/oracle/startadmin.sh"] readinessProbe: httpGet: path: /weblogic/ready port: 8001 initialDelaySeconds: 15 timeoutSeconds: 5 ports: - containerPort: 8001 env: - name: WLUSER valueFrom: secretKeyRef: name: wlsecret key: username - name: WLPASSWORD valueFrom: secretKeyRef: name: wlsecret key: password volumeMounts: # name must match the volume name below - name: domain-home mountPath: "/u01/wlsdomain" volumes: - name: domain-home persistentVolumeClaim: claimName: wlserver-pvc-1 Prepare the yml file for the Managed Servers. It has a reference to the PVC wlserver-pvc-1. Edit wls-stateful.yml to mount the NFS shared folder to /u01/wlsdomain in each Managed Server pod: apiVersion: apps/v1beta1 # for versions before 1.6.0 use extensions/v1beta1 kind: StatefulSet metadata: name: managed-server spec: serviceName: wls-subdomain replicas: 2 template: metadata: name: ms labels: app: managed-server spec: subdomain: wls-subdomain containers: - name: managed-server image: wls-k8s-domain imagePullPolicy: Never command: ["sh"] args: ["/u01/oracle/startms.sh"] readinessProbe: httpGet: path: /weblogic/ready port: 8011 initialDelaySeconds: 15 timeoutSeconds: 5 ports: - containerPort: 8011 env: - name: JAVA_OPTIONS value: "-Dweblogic.StdoutDebugEnabled=true" - name: USER_MEM_ARGS value: "-Xms64m -Xmx256m " - name: MY_POD_IP valueFrom: fieldRef: fieldPath: status.podIP - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: DNS_DOMAIN_NAME value: "wls-subdomain" - name: WLUSER valueFrom: secretKeyRef: name: wlsecret key: username - name: WLPASSWORD valueFrom: secretKeyRef: name: wlsecret key: password volumeMounts: - name: domain-home mountPath: "/u01/wlsdomain" volumes: - name: domain-home persistentVolumeClaim: claimName: wlserver-pvc-1 Create the Administration Server and Managed Server pods with the NFS shared volume. Each WebLogic Server instance will start from the mounted domain location: $ kubectl create -f wls-admin.yml $ kubectl create -f wls-stateful.yml Summary This blog describes the best practices of setting Kubernetes data volumes when running a WebLogic domain in a Kubernetes environment. Because Kubernetes pods are ephemeral, it is a best practice to persist the WebLogic domain to volumes, as well as files such as logs, stores, and so on. Kubernetes provides persistent volumes and persistent volume claims to simplify externalizing state and persisting important data to volumes. We provide two use cases: the first describes how to map the volume to a host machine where the Kubernetes nodes are running; and the second describes how to use an NFS shared volume. In both use cases, all WebLogic Server instances must have access to the files that are mapped to these volumes.
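If the server pods in Use Case 2 fail to start against the NFS volume, a few checks on the NFS host usually pinpoint the problem. The commands below are a sketch using the example NFS host 10.232.128.232 and export /scratch/nfsdata from this post; changing ownership to user and group ID 1000 is an alternative to the chmod o+rw used earlier.
# Re-read /etc/exports and confirm the share is visible
# exportfs -ra
# showmount -e 10.232.128.232
# Make the WebLogic user and group (ID 1000) the owner of the exported directory
# chown -R 1000:1000 /scratch/nfsdata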


Exporting Metrics from WebLogic Server

As it runs, WebLogic Server generates a rich set of metrics and runtime state information. Several thousand individual metrics are available to capture performance data, such as invocation counts, session activity, work manager threads, and so forth. These metrics are very useful for tracking activity, diagnosing problems, and ensuring sufficient resources are available. Exposed through both JMX and web services, these metrics are supported by Oracle administration tools, such as Enterprise Manager and the WebLogic Server Administration Console, as well as third-party clients.  One of those third-party clients is Prometheus. Prometheus is an open source monitoring toolkit that is commonly used in cloud environments as a framework for gathering, storing, and querying time series data. A number of exporters have been written to scrape information from various services and feed that information into a Prometheus server. Once there, this data can be retrieved using Prometheus itself or other tools that can process Prometheus data, such as Grafana. Oracle customers have been using the generic Prometheus JMX Exporter to scrape information from WebLogic Server instances, but this solution is hampered by usability issues and scalability at larger sites. Consider the following portion of an MBean tree: In this tree, ServerRuntime represents the top of the MBean tree and has several ApplicationRuntime MBeans, each of which has multiple ComponentRuntime MBeans. Some of those are of type WebAppComponentRuntime, which has multiple Servlet MBeans. We can configure the JMX Exporter as follows: jmxUrl: service:jmx:t3://@HOST@:@PORT@/jndi/weblogic.management.mbeanservers.runtime  username: system  password: gumby1234  lowercaseOutputName: false  lowercaseOutputLabelNames: false  whitelistObjectNames:    - "com.bea:ServerRuntime=*,Type=ApplicationRuntime,*"    - "com.bea:Type=WebAppComponentRuntime,*"    - "com.bea:Type=ServletRuntime,*"    rules:    - pattern: "^com.bea<ServerRuntime=.+, Name=(.+), ApplicationRuntime=(.+), Type=ServletRuntime, WebAppComponentRuntime=(.+)><>(.+): (.+)"      attrNameSnakeCase: true      name: weblogic_servlet_$4      value: $5      labels:        name: $3        app: $2        servletName: $1      - pattern: "^com.bea<ServerRuntime=(.+), Name=(.+), ApplicationRuntime=(.+), Type=WebAppComponentRuntime><>(.+): (.+)$"      attrNameSnakeCase: true      name: webapp_config_$4      value: $5      labels:        app: $3        name: $2 This selects the appropriate MBeans and allows the exporter to generate metrics such as: webapp_config_open_sessions_current_count{app="receivables",name="accounting"} 3  webapp_config_open_sessions_current_count{app="receivables",name="inventory"} 7  weblogic_servlet_invocations_total_count{app="receivables",name="accounting",servletName="Balance"} 0  weblogic_servlet_invocations_total_count{app="receivables",name="accounting",servletName="Login"} 0  weblogic_servlet_invocations_total_count{app="receivables",name="inventory",servletName="Count"} 0  weblogic_servlet_invocations_total_count{app="receivables",name="inventory",servletName="Reorder"} 0 However, this approach has challenges. The JMX Exporter can be difficult to set up because it must run as a Java agent. In addition, because JMX is built on top of RMI, and JMX over RMI/IIOP has been removed from the JRE as of Java SE 9, the exporter must be packaged with a platform-specific RMI implementation. The JMX Exporter is also somewhat processor-intensive. 
It requires a separate invocation of JMX to obtain each bean in the tree, which adds to the processing that must be done by the server. And configuring the exporter can be difficult because it relies on MBean names and regular expressions. While it is theoretically possible to select a subset of the attributes for a given MBean, in practice that adds further complexity to the regular expressions, thereby making it impractical. As a result, it is common to scrape everything and incur the transport and storage costs, and then to apply filtering only when the data is eventually viewed. The WebLogic Monitoring Exporter Along with JMX, Oracle WebLogic Server 12.2.1 and later provides a RESTful Management Interface for accessing runtime state and metrics. Included in this interface is a powerful bulk access capability that allows a client to POST a query that describes exactly what information is desired and to retrieve a single response that includes only that information. Oracle has now created the WebLogic Monitoring Exporter, which takes advantage of this interface. This exporter is implemented as a web application that is deployed to the WebLogic Server instance being monitored. Its configuration explicitly follows the MBean tree, starting below the ServerRuntime MBean. To obtain the same result as in the previous example, we could use the following: metricsNameSnakeCase: true   queries:     - applicationRuntimes:       key: name       keyName: app       componentRuntimes:         type: WebAppComponentRuntime         prefix: webapp_config_         key: name         values: [openSessionsCurrentCount, openSessionsHighCount]         servlets:           prefix: weblogic_servlet_           key: servletName This exporter can scrape the desired metrics with a single HTTP query rather than multiple JMX queries, requires no special setup, and provides an easy way to select the metrics that should be produced for an MBean, while defaulting to using all available fields. Note that the exporter does not need to specify a URL because it always connects to the server on which it is deployed, and does not specify username and password, but rather requires its clients to specify them when attempting to read the metrics. Managing the Application Because the exporter is a web application, it includes a landing page: Not only does the landing page include the link to the metrics, but it also displays the current configuration. When the app is first loaded, the configuration that’s used is the one embedded in the WAR file. However, the landing page contains a form that allows you to change the configuration by selecting a new yaml file. Only the queries from the new file are used, and we can combine queries by selecting the Append button before submitting. For example, we could add some JVM metrics: The new metrics will be reported the next time a client accesses the metrics URL. The new elements above will produce metrics such as: jvm_heap_free_current{name="myserver"} 285027752 jvm_heap_free_percent{name="myserver"} 71 jvm_heap_size_current{name="myserver"} 422051840 Metrics in a WebLogic Server Cluster In a WebLogic Server cluster, of course, it is of little value to change the metrics collected by a single server instance; because all cluster members are serving the same applications, we want them to report the same metrics. To do this, we need a way to have all the servers respond to the changes made to any one of them. 
The exporter does this by using a separate config_coordinator process to track changes. To use it, we need to add a new top-level element to the initial configuration that describes the query synchronization: query_sync:    url: http://coordinator:8099    refreshInterval: 10   This specifies the URL of the config_coordinator process, which runs in its own Docker container. When the exporter first starts, and its configuration contains this element, it will contact the coordinator to see if it already has a new configuration. Thereafter, it will do so every time either the landing page or the metrics page is queried. The optional refreshInterval element limits how often the exporter looks for a configuration update. When it finds one, it will load it immediately without requiring a server restart. When you update the configuration in an exporter that is configured to use the coordinator, the new queries are sent to the coordinator where other exporters can load them. In this fashion, an entire cluster of Managed Servers can have its metrics configurations kept in sync. Summary The WebLogic Monitoring Exporter greatly simplifies the process of exporting metrics from clusters of WebLogic Server instances in a Docker/Kubernetes environment. It does away with the need to figure out MBean names and work with regular expressions. It also allows metric labels to be defined explicitly from field names, and then automatically uses those definitions for metrics from subordinate MBeans, ensuring consistency. In our testing, we have found enormous improvements in performance using it versus the JMX Exporter. It uses less CPU and responds more quickly. In the graphs below, the green lines represent the JMX Exporter, and the yellow lines represent the WebLogic Monitoring Exporter. We expect users who wish to monitor WebLogic Server performance will gain great benefits from our efforts. See Using Prometheus and Grafana to Monitor WebLogic Server on Kubernetes for more information.
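To confirm that the exporter is serving metrics at all, you can request its metrics page directly with the WebLogic credentials it expects from its clients. This is only a sketch: /wls-exporter is the context path the exporter web application normally uses, and the host, port, and weblogic/weblogic1 credentials are placeholders for your own environment.
# Fetch the metrics page and filter for the servlet metrics configured above
$ curl -s --user weblogic:weblogic1 "http://<hostIP>:<port>/wls-exporter/metrics" | grep weblogic_servlet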


The WebLogic Server

Announcing the New WebLogic Monitoring Exporter 

Very soon we will be announcing certification of WebLogic Server on Kubernetes.  To give our users the best possible experience when running WebLogic domains in Docker/Kubernetes environments, we have developed the WebLogic Monitoring Exporter.  This new tool exposes WebLogic Server metrics that can be read and collected by monitoring tools such as Prometheus, and displayed in Grafana. We are also making the WebLogic Monitoring Exporter tool available in open source here, which will allow our community to contribute to this project and be part of enhancing it.  As it runs, WebLogic Server generates a rich set of metrics and runtime state information that provides detailed performance and diagnostic data about the servers, clusters, applications, and other resources that are running in a WebLogic domain.  The WebLogic Monitoring Exporter enables administrators of Kubernetes environments to easily monitor this data using tools like Prometheus and Grafana, tools that are commonly used for monitoring Kubernetes environments. For more information on the design and implementation of the WebLogic Monitoring Exporter, see Exporting Metrics from WebLogic Server. For more information on Using Prometheus and Grafana to Monitor WebLogic Server on Kubernetes see WebLogic on Kubernetes monitoring using Prometheus and Grafana. Stay tuned for more information about WebLogic Server certification on Kubernetes. Our intent is to enable you to run WebLogic Server in Kubernetes, to run WebLogic Server in the Kubernetes-based Oracle Container Engine, and to enable integration of WebLogic Server applications with applications developed on our Kubernetes-based Container Native Application Development Platform.


Run a WebLogic JMS Sample on Kubernetes

Overview This blog is a step-by-step guide to configuring and running a sample WebLogic JMS application in a Kubernetes cluster. First we explain how to create a WebLogic domain that has an Administration Server, and a WebLogic cluster. Next we add WebLogic JMS resources and a data source, deploy an application, and finally run the application. This application is based on a sample application named 'Classic API - Using Distributed Destination' that is included in the WebLogic Server sample applications. This application implements a scenario in which employees submit their names when they arrive, and a supervisor monitors employee arrival time. Employees choose whether to send their check-in messages to a distributed queue or a distributed topic. These destinations are configured on a cluster with two active Managed Servers. Two message-driven beans (MDBs), corresponding to these two destinations, are deployed to handle the check-in messages and store them in a database. A supervisor can then scan all of the check-in messages by querying the database. The two main approaches for automating WebLogic configuration changes are WLST and the REST API. To run the scripts, WLST or REST API, in a WebLogic domain on Kubernetes, you have two options: Running the scripts inside Kubernetes cluster pods — If you use this option,  use 'localhost', NodePort service name, or Statefulset's headless service name,  pod IP,  Cluster IP, and the internal ports. The instructions in this blog use 'localhost'. Running the scripts outside the Kubernetes cluster — If you use this option, use hostname/IP and the NodePort. In this blog we use the REST API and run the scripts within the Administration Server pod to deploy all the resources. All the resources are targeted to the whole cluster which is the recommended approach for WebLogic Server on Kubernetes because it works well when the cluster scales up or scales down. Creating the WebLogic Base Domain We use the sample WebLogic domain in GitHub to create the base domain. In this WebLogic sample you will find a Dockerfile, scripts, and yaml files to build and run the WebLogic Server instances and cluster in the WebLogic domain on Kubernetes. The sample domain contains an Administration Server named AdminServer and a WebLogic cluster with four Managed Servers named managed-server-0 through managed-server-3. We configure four Managed Servers but we start only the first two: managed-server-0 and managed-server-1.  One feature that distinguishes a JMS service from others is that it's highly stateful and most of its data needs to be kept in a persistent store, such as persistent messages, durable subscriptions, and so on. A persistent store can be a database store or a file store, and in this sample we demonstrate how to use external volumes to store this data in file stores. In this WebLogic domain we configure three persistent volumes for the following: The domain home folder – This volume is shared by all the WebLogic Server instances in the domain; that is, the Administration Server and all Managed Server instances in the WebLogic cluster. The file stores – This volume is shared by the Managed Server instances in the WebLogic cluster. A MySQL database – The use of this volume is explained later in this blog. Note that by default a domain home folder contains configuration files, log files, diagnostic files, application binaries, and the default file store files for each WebLogic Server instance in the domain. 
Custom file store files are also placed in the domain home folder by default, but we customize the configuration in this sample to place these files in a separate, dedicated persistent volume. The two persistent volumes – one for the domain home, and one for the custom file stores – are shared by multiple WebLogic Server instances. Consequently, if the Kubernetes cluster is running on more than one machine, these volumes must be on shared storage. Complete the steps in the README.md file to create and run the base domain. Wait until all WebLogic Server instances are running; that is, the Administration Server and two Managed Servers. This may take a short while because the Managed Servers are started in sequence after the Administration Server is running and the provisioning of the initial domain is complete.
$ kubectl get pod
NAME                            READY   STATUS    RESTARTS   AGE
admin-server-1238998015-kmbt9   1/1     Running   0          5m
managed-server-0                1/1     Running   0          3m
managed-server-1                1/1     Running   0          3m
Note that in the commands used in this blog you need to replace $adminPod and $mysqlPod with the actual pod names. Deploying the JMS Resources with a File Store When the domain is up and running, we can deploy the JMS resources. First, prepare a JSON data file that contains definitions for one file store, one JMS server, and one JMS module. The file will be processed by a Python script that creates the resources, one by one, using the WebLogic Server REST API. file jms1.json: {"resources": { "filestore1": { "url": "fileStores", "data": { "name": "filestore1", "directory": "/u01/filestores/filestore1", "targets": [{ "identity":["clusters", "myCluster"] }] } }, "jms1": { "url": "JMSServers", "data": { "messagesThresholdHigh": -1, "targets": [{ "identity":["clusters", "myCluster"] }], "persistentStore": [ "fileStores", "filestore1" ], "name": "jmsserver1" } }, "module": { "url": "JMSSystemResources", "data": { "name": "module1", "targets":[{ "identity": [ "clusters", "myCluster" ] }] } }, "sub1": { "url": "JMSSystemResources/module1/subDeployments", "data": { "name": "sub1", "targets":[{ "identity": [ "JMSServers", "jmsserver1" ] }] } } }} Second, prepare the JMS module file, which contains a connection factory, a distributed queue, and a distributed topic. 
file module1-jms.xml: <?xml version='1.0' encoding='UTF-8'?> <weblogic-jms xmlns="http://xmlns.oracle.com/weblogic/weblogic-jms" xmlns:sec="http://xmlns.oracle.com/weblogic/security" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:wls="http://xmlns.oracle.com/weblogic/security/wls" xsi:schemaLocation="http://xmlns.oracle.com/weblogic/weblogic-jms http://xmlns.oracle.com/weblogic/weblogic-jms/1.1/weblogic-jms.xsd"> <connection-factory name="cf1"> <default-targeting-enabled>true</default-targeting-enabled> <jndi-name>cf1</jndi-name> <transaction-params> <xa-connection-factory-enabled>true</xa-connection-factory-enabled> </transaction-params> <load-balancing-params> <load-balancing-enabled>true</load-balancing-enabled> <server-affinity-enabled>false</server-affinity-enabled> </load-balancing-params> </connection-factory> <uniform-distributed-queue name="dq1"> <sub-deployment-name>sub1</sub-deployment-name> <jndi-name>dq1</jndi-name> </uniform-distributed-queue> <uniform-distributed-topic name="dt1"> <sub-deployment-name>sub1</sub-deployment-name> <jndi-name>dt1</jndi-name> <forwarding-policy>Partitioned</forwarding-policy> </uniform-distributed-topic> </weblogic-jms> Third, copy these two files to the Administration Server pod, then run the Python script to create the JMS resources within the Administration Server pod: $ kubectl exec $adminPod -- mkdir /u01/wlsdomain/config/jms/ $ kubectl cp ./module1-jms.xml $adminPod:/u01/wlsdomain/config/jms/ $ kubectl cp ./jms1.json $adminPod:/u01/oracle/ $ kubectl exec $adminPod -- python /u01/oracle/run.py createRes /u01/oracle/jms1.json Launch the WebLogic Server Administration Console, by going to your browser and entering the URL http://<hostIP>:30007/console in the address bar, and make sure that all the JMS resources are running successfully. Deploying the Data Source Setting Up and Running MySQL Server in Kubernetes This sample stores the check-in messages in a database. So let's set up MySQL Server and get it running in Kubernetes. First, let's prepare the mysql.yml file, which defines a secret to store encrypted username and password credentials, a persistent volume claim (PVC) to store database data in an external directory, and a MySQL Server deployment and service. In the base domain, one persistent volume is reserved and available so that it can be used by the PVC that is defined in mysql.yml. 
file mysql.yml: apiVersion: v1 kind: Secret metadata: name: dbsecret type: Opaque data: username: bXlzcWw= password: bXlzcWw= rootpwd: MTIzNHF3ZXI= --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: mysql-pv-claim labels: app: mysql-server spec: storageClassName: manual accessModes: - ReadWriteOnce resources: requests: storage: 10Gi --- apiVersion: apps/v1beta1 kind: Deployment metadata: name: mysql-server spec: replicas: 1 template: metadata: labels: app: mysql-server spec: containers: - name: mysql-server image: mysql:5.7 imagePullPolicy: IfNotPresent ports: - containerPort: 3306 env: - name: MYSQL_ROOT_PASSWORD valueFrom: secretKeyRef: name: dbsecret key: rootpwd - name: MYSQL_USER valueFrom: secretKeyRef: name: dbsecret key: username - name: MYSQL_PASSWORD valueFrom: secretKeyRef: name: dbsecret key: password - name: MYSQL_DATABASE value: "wlsdb" volumeMounts: - mountPath: /var/lib/mysql name: db-volume volumes: - name: db-volume persistentVolumeClaim: claimName: mysql-pv-claim --- apiVersion: v1 kind: Service metadata: name: mysql-server labels: app: mysql-server spec: ports: - name: client port: 3306 protocol: TCP targetPort: 3306 clusterIP: None selector: app: mysql-server Next, deploy MySQL Server to the Kubernetes cluster: $ kubectl create -f mysql.yml Creating the Sample Application Table First, prepare the DDL file for the sample application table: file sampleTable.ddl: create table jms_signin ( name varchar(255) not null, time varchar(255) not null, webServer varchar(255) not null, mdbServer varchar(255) not null); Next, create the table in MySQL Server: $ kubectl exec -it $mysqlPod -- mysql -h localhost -u mysql -pmysql wlsdb < sampleTable.ddl Creating a Data Source for the WebLogic Server Domain We need to configure a data source so that the sample application can communicate with MySQL Server. First, prepare the ds1-jdbc.xml module file. 
file ds1-jdbc.xml: <?xml version='1.0' encoding='UTF-8'?> <jdbc-data-source xmlns="http://xmlns.oracle.com/weblogic/jdbc-data-source" xmlns:sec="http://xmlns.oracle.com/weblogic/security" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:wls="http://xmlns.oracle.com/weblogic/security/wls" xsi:schemaLocation="http://xmlns.oracle.com/weblogic/jdbc-data-source http://xmlns.oracle.com/weblogic/jdbc-data-source/1.0/jdbc-data-source.xsd"> <name>ds1</name> <datasource-type>GENERIC</datasource-type> <jdbc-driver-params> <url>jdbc:mysql://mysql-server:3306/wlsdb</url> <driver-name>com.mysql.jdbc.Driver</driver-name> <properties> <property> <name>user</name> <value>mysql</value> </property> </properties> <password-encrypted>mysql</password-encrypted> <use-xa-data-source-interface>true</use-xa-data-source-interface> </jdbc-driver-params> <jdbc-connection-pool-params> <capacity-increment>10</capacity-increment> <test-table-name>ACTIVE</test-table-name> </jdbc-connection-pool-params> <jdbc-data-source-params> <jndi-name>jndi/ds1</jndi-name> <algorithm-type>Load-Balancing</algorithm-type> <global-transactions-protocol>EmulateTwoPhaseCommit</global-transactions-protocol> </jdbc-data-source-params> <jdbc-xa-params> <xa-transaction-timeout>50</xa-transaction-timeout> </jdbc-xa-params> </jdbc-data-source> Then deploy the data source module to the WebLogic Server domain: $ kubectl cp ./ds1-jdbc.xml $adminPod:/u01/wlsdomain/config/jdbc/ $ kubectl exec $adminPod -- curl -v \ --user weblogic:weblogic1 \ -H X-Requested-By:MyClient \ -H Accept:application/json \ -H Content-Type:application/json \ -d '{ "name": "ds1", "descriptorFileName": "jdbc/ds1-jdbc.xml", "targets":[{ "identity":["clusters", "myCluster"] }] }' -X POST http://localhost:8001/management/weblogic/latest/edit/JDBCSystemResources Deploying the Servlet and MDB Applications First, download the two application archives: signin.war and signinmdb.jar. Enter the commands below to deploy these two applications using REST APIs within the pod running the WebLogic Administration Server. # copy the two app files to admin pod $ kubectl cp signin.war $adminPod:/u01/wlsdomain/signin.war $ kubectl cp signinmdb.jar $adminPod:/u01/wlsdomain/signinmdb.jar # deploy the two app via REST api $ kubectl exec $adminPod -- curl -v \ --user weblogic:weblogic1 \ -H X-Requested-By:MyClient \ -H Content-Type:application/json \ -d "{ name: 'webapp', sourcePath: '/u01/wlsdomain/signin.war', targets: [ { identity: [ 'clusters', 'myCluster' ] } ] }" \ -X POST http://localhost:8001/management/weblogic/latest/edit/appDeployments $ kubectl exec $adminPod -- curl -v \ --user weblogic:weblogic1 \ -H X-Requested-By:MyClient \ -H Content-Type:application/json \ -d "{ name: 'mdb', sourcePath: '/u01/wlsdomain/signinmdb.jar', targets: [ { identity: [ 'clusters', 'myCluster' ] } ] }" \ -X POST http://localhost:8001/management/weblogic/latest/edit/appDeployments Next, go to the WebLogic Server Administration Console (http://<hostIP>:30007/console) to verify the applications have been successfully deployed and running. Running the Sample Invoke the application on the Managed Server by going to a browser and entering the URL http://<hostIP>:30009/signIn/. Using a number of different browsers and machines to simulate multiple web clients, submit several unique employee names. Then check the result by entering the URL http://<hostIP>:30009/signIn/response.jsp. 
You can see that there are two different levels of load balancing taking place: HTTP requests are load balanced among Managed Servers within the cluster. Notice the entries beneath the column labeled Web Server Name. For each employee check-in, this column identifies the name of the WebLogic Server instance that contains the servlet instance that is processing the corresponding HTTP request. JMS messages that are sent to a distributed destination are load balanced among the MDB instances within the cluster. Notice the entries beneath the column labeled MDB Server Name. This column identifies the name of the WebLogic Server instance that contains the MDB instance that is processing the message. Restarting All Pods Restart the MySQL pod, the WebLogic Administration Server pod, and the WebLogic Managed Server pods. This demonstrates that the data in your external volumes is preserved independently of your pod life cycles. First, gracefully shut down the MySQL Server pod: $ kubectl exec -it $mysqlPod -- /etc/init.d/mysql stop After the MySQL Server pod is stopped, the Kubernetes control plane will restart it automatically. Next, follow the section "Restart Pods" in the README.md in order to restart all WebLogic Server pods.
$ kubectl get pod
NAME                            READY   STATUS    RESTARTS   AGE
admin-server-1238998015-kmbt9   1/1     Running   1          7d
managed-server-0                1/1     Running   1          7d
managed-server-1                1/1     Running   1          7d
mysql-server-3736789149-n2s2l   1/1     Running   1          3h
You will see that the restart count for each pod has increased from 0 to 1. After all pods are running again, access the WebLogic Server Administration Console to verify that the servers are in the RUNNING state. After the servers restart, all messages are recovered. You'll get the same results as you did prior to the restart because all data is persisted in the external data volumes and therefore can be recovered after the pods are restarted. Cleanup Enter the following command to clean up the resources used by the MySQL Server instance: $ kubectl delete -f mysql.yml Next, follow the steps in the "Cleanup" section of the README.md to remove the base domain and delete all other resources used by this example. Summary and Futures This blog demonstrated using Kubernetes as a flexible and scalable environment for hosting WebLogic Server JMS cluster deployments. We leveraged basic Kubernetes facilities to manage WebLogic Server life cycles, used file-based message persistence, and demonstrated intra-cluster JMS communication between Java EE applications. We also demonstrated that file-based JMS persistence works well when the files are externalized to a shared data volume outside the Kubernetes pods, because this persists data beyond the life cycle of the pods. In future blogs we'll explore hosting a WebLogic JMS cluster in Oracle's upcoming, fully certified, operator-based Kubernetes environment for WebLogic Server. We'll also explore using external JMS clients to communicate with WebLogic JMS services running inside a Kubernetes cluster, using database persistence instead of file persistence, and using WebLogic JMS automatic service migration to automatically migrate JMS instances from shutdown pods to running pods.
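To make the REST-based steps in this post easier to adapt, below is a minimal Python 3 sketch of a helper in the spirit of the sample's run.py: it reads a JSON file such as jms1.json and POSTs each entry under "resources" to its collection URL in the WebLogic management REST API. This is not the sample's actual run.py; the localhost:8001 URL, the weblogic/weblogic1 credentials, and the headers are simply the ones used by the curl commands above, and entries are created in the order they appear in the file, so a file store is created before the JMS server that references it.

import base64
import json
import sys
import urllib.request

# Assumed values for illustration only: the admin credentials and port used by
# the curl commands in this post. Run the script inside the Administration
# Server pod (copy it there with kubectl cp and invoke it with kubectl exec),
# just like the sample's run.py.
BASE_URL = "http://localhost:8001/management/weblogic/latest/edit/"
AUTH = base64.b64encode(b"weblogic:weblogic1").decode("ascii")

def create_resources(json_file):
    """POST each entry in the 'resources' map to its REST collection URL."""
    with open(json_file) as f:
        resources = json.load(f)["resources"]
    # Python 3.7+ preserves the JSON object order, so dependent resources
    # (for example, a JMS server that references a file store) are created
    # after the resources they depend on.
    for name, res in resources.items():
        req = urllib.request.Request(
            BASE_URL + res["url"],
            data=json.dumps(res["data"]).encode("utf-8"),
            headers={
                "Authorization": "Basic " + AUTH,
                "X-Requested-By": "MyClient",
                "Accept": "application/json",
                "Content-Type": "application/json",
            },
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            print(name, resp.status)

if __name__ == "__main__":
    create_resources(sys.argv[1])   # e.g. python3 create_res.py jms1.json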


Using Prometheus and Grafana to Monitor WebLogic Server on Kubernetes

As part of the release of our general availability (GA) version of the WebLogic Server Kubernetes Operator, the WebLogic team has created instructions that show how to create and run a WebLogic Server domain on Kubernetes.  The README.md in the project provides all the steps. In the area of monitoring and diagnostics, we have developed a new open source tool, the WebLogic Monitoring Exporter, which scrapes runtime metrics from specific WebLogic Server instances and feeds them to the Prometheus and Grafana tools. The WebLogic Monitoring Exporter is a web application that you deploy on each WebLogic Server instance that you want to monitor. The exporter uses the WebLogic Server 12.2.1.x RESTful Management Interface for accessing runtime state and metrics. For a detailed description of WebLogic Monitoring Exporter configuration and usage, see The WebLogic Monitoring Exporter. In this blog you will learn how to configure Prometheus and Grafana to monitor WebLogic Server instances that are running in Kubernetes clusters. Monitoring Using Prometheus We’ll be using the WebLogic Monitoring Exporter to scrape WebLogic Server metrics and feed them to Prometheus. Previous blog entries have described how to start and run WebLogic Server instances in Kubernetes with the WebLogic Monitoring Exporter deployed on the Managed Servers running in the cluster. To make sure that the WebLogic Monitoring Exporter is deployed and running, open http://[hostname]:30011/wls-exporter/metrics in a browser. You will be prompted for the WebLogic user credentials that are required to access the metrics data, which are weblogic/welcome1. The metrics page should show the metrics configured for the WebLogic Monitoring Exporter. To create a Prometheus instance in Kubernetes, create a Prometheus configuration file, prometheus-deployment.yaml. A sample file is provided in our sample project and may be modified as required to match your environment. The sample Prometheus configuration file specifies: weblogic/welcome1 as the user credentials; 5 seconds as the interval between updates of WebLogic Server metrics; 32000 as the external port used to access the Prometheus dashboard; and use of the namespace ‘monitoring’. You can change these values as required to reflect your specific environment and configuration. To create the namespace ‘monitoring’:     $ kubectl create -f monitoring-namespace.yaml Then create the corresponding RBAC policy to grant the required permissions to the pods (see the provided sample):     $ kubectl create -f crossnsrbac.yaml The RBAC policy in the sample uses the namespaces ‘weblogic-domain’ for the WebLogic Server domain deployment and ‘weblogic-operator’ for the WebLogic Server Kubernetes Operator. Start Prometheus to monitor the Managed Server instances:    $ kubectl create -f prometheus-deployment.yaml Verify that Prometheus is monitoring all Managed Servers by browsing to http://[hostname]:32000. Examine the Insert metric at cursor pull-down. It should list metric names based on the current configuration of the WebLogic Monitoring Exporter web application. To check that the WebLogic Monitoring Exporter is configured correctly, connect to the web page at http://[hostname]:30011/wls-exporter. The current configuration will be listed there. 
For example, below is the corresponding WebLogic Monitoring Exporter configuration YAML file:
metricsNameSnakeCase: true
queries:
- applicationRuntimes:
    key: name
    keyName: app
    componentRuntimes:
      type: WebAppComponentRuntime
      prefix: webapp_config_
      key: name
      values: [deploymentState, contextRoot, sourceInfo, openSessionsHighCount, openSessionsCurrentCount, sessionsOpenedTotalCount, sessionCookieMaxAgeSecs, sessionInvalidationIntervalSecs, sessionTimeoutSecs, singleThreadedServletPoolSize, sessionIDLength, servletReloadCheckSecs, jSPPageCheckSecs]
      servlets:
        prefix: weblogic_servlet_
        key: servletName
        values: [invocationTotalCount, reloadTotal, executionTimeAverage, poolMaxCapacity, executionTimeTotal, reloadTotalCount, executionTimeHigh, executionTimeLow]
- JVMRuntime:
    key: name
    values: [heapFreeCurrent, heapFreePercent, heapSizeCurrent, heapSizeMax, uptime, processCpuLoad]
The configuration listed above was embedded into the WebLogic Monitoring Exporter WAR file. To change or add more metrics data, simply connect to the landing page at http://[hostname]:30011/wls-exporter and use the Append or Replace buttons to load a configuration file in YAML format. For example, workmanager.yml:
metricsNameSnakeCase: true
queries:
- applicationRuntimes:
    key: name
    workManagerRuntimes:
      prefix: workmanager_
      key: applicationName
      values: [pendingRequests, completedRequests, stuckThreadCount]
By constructing Prometheus-defined queries, you can generate any data output you require to monitor and diagnose the servers, applications, and resources that are running in your WebLogic domain. For example, you can enter the following into the query box, and Prometheus will return current data from all running Managed Servers in the WebLogic cluster: weblogic_servlet_execution_time_average > 1 Prometheus also generates graphs based on the provided data. For example, if you click the Graph tab, Prometheus will generate a graph of the servlets whose average execution time exceeds the query threshold. Monitoring Using Grafana For better visual presentation, and for dashboards with multiple graphs, use Grafana. Here is an example configuration file, grafana-deployment.yaml, which can be used to start Grafana in the Kubernetes environment. To start Grafana to monitor the Managed Servers, use the following kubectl command:  $ kubectl create -f grafana-deployment.yaml Connect to Grafana at http://[hostname]:31000. Log in to the home page with the username admin and the password pass. The Grafana home page will be displayed. To connect Grafana to Prometheus, select Add Data Source and then enter the following values: Name: Prometheus; Type: Prometheus; Url: http://prometheus:9090; Access: Proxy. Select the Dashboards tab and click Import. Now we are ready to generate a dashboard to monitor WebLogic Server. Complete the following steps: Click the Grafana symbol in the upper left corner of the home page, select Dashboards, and add a new dashboard. Select Graph and pull it into the empty space. It will generate an empty graph panel. Click on the panel and select the edit option. It will open an editable panel where you can customize how the metrics graph will be presented. 
In the Graph panel, select the General tab, and enter WebLogic Servlet Execution Average Time as the title. Select the Metrics tab, then select the Prometheus option in the Panel Data Source pull-down menu. If you click in the empty Metric lookup field, all metrics configured in the WebLogic Monitoring Exporter will be pulled in, the same way as in Prometheus. Let’s enter the same query we used in the Prometheus example, weblogic_servlet_execution_time_average > 1. The generated graphs will show data for all available servlets whose average execution time exceeds the query threshold, on all Managed Servers in the cluster. Each color represents a specific pod and servlet combination. To show data for a particular pod, simply click on the corresponding legend. This removes all other pods’ data from the graph, and their legends will no longer be highlighted. To add more data, press the Shift key and click on any desired legend. To reset, click the same legend again, and all the others will be redisplayed on the graph. To customize the legend, enter the desired values in the Legend Format field. For example: {{pod_name}} :appName={{webapp}} : servletName={{servletName}} Grafana will begin to display your customized legend. If you click the graph, you can see all values for the selected time. Select the Graph → Legend tab to obtain more options for customizing the legend view. For example, you can move the placement of the legend, show the minimum, maximum, or average values, and more. By selecting the Graph → Axes tab, you can switch the units to match the metrics data; in this example it is time (milliseconds). Grafana also provides alerting tools. For example, we can configure an alert for specified conditions. In the example below, Grafana will fire an alert if the average servlet execution time is greater than 100 msec. It will also send an email to the administrator. Last, we want our graph to be refreshed every 5 seconds, the same refresh interval as the Prometheus scrape interval. We can also customize the time range for monitoring the data. To do that, click the upper right corner of the created dashboard. By default, it is configured to show metrics for the prior 6 hours up to the current time. Make the desired changes; for example, switch to refresh every 5 seconds and click Apply. When you are done, simply click the ‘save’ icon in the upper left corner of the window, and enter a name for the dashboard. Summary WebLogic Server today has a rich set of metrics that can be monitored using well-known tools such as the WebLogic Server Administration Console and the Monitoring Dashboard. These tools can be used to monitor the WebLogic Server instances, applications, and resources running in a WebLogic deployment in Kubernetes. In this container ecosystem, tools like Prometheus and Grafana offer an alternative way of exporting and monitoring the metrics from clusters of WebLogic Server instances running in Kubernetes. This approach also makes monitored data easy to collect, access, present, and customize in real time without restarting the domain. In addition, it provides a simple way to create alerts and send notifications to any interested parties. Start using it; you will love it!
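If you want to pull the same data programmatically rather than through the Prometheus or Grafana UI, the Prometheus HTTP query API can evaluate the same PromQL expression used in this post. The following Python 3 sketch assumes the NodePort 32000 configured earlier and the label names (pod_name, webapp, servletName) produced by the exporter configuration shown above; replace the host name placeholder with your own.

import json
import urllib.parse
import urllib.request

# Assumptions: Prometheus is exposed on NodePort 32000 as configured above;
# 'hostname' is a placeholder for your node's address.
PROM = "http://hostname:32000"
QUERY = "weblogic_servlet_execution_time_average > 1"

url = PROM + "/api/v1/query?" + urllib.parse.urlencode({"query": QUERY})
with urllib.request.urlopen(url) as resp:
    result = json.load(resp)["data"]["result"]

# Each series carries the exporter labels (pod_name, webapp, servletName, ...)
# plus the sampled value.
for series in result:
    labels = series["metric"]
    print(labels.get("pod_name"), labels.get("servletName"), series["value"][1])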


The WebLogic Server

How to... WebLogic Server on Kubernetes

The WebLogic Server team is working on certifying WebLogic domains being orchestrated in Kubernetes.  As part of this work we are releasing a series of blogs that answer questions our users might have, and describe best practices for running WebLogic Server on Kubernetes. These blogs cover topics such as security best practices, monitoring, logging, messaging, transactions, scaling clusters, externalizing state in volumes, patching, updating applications, and much more.  Our first blog walks you through a sample on GitHub that lets you jump right in and try it!  We will continue to update this list of blogs to make it easy for you to follow them, so stay tuned. Security Best Practices for WebLogic Server Running in Docker and Kubernetes Automatic Scaling of WebLogic Clusters on Kubernetes Let WebLogic work with Elastic Stack in Kubernetes Docker Volumes in WebLogic Exporting Metrics from WebLogic Server Using Prometheus and Grafana to Monitor WebLogic Server on Kubernetes Run a WebLogic JMS Sample on Kubernetes Run Standalone WebLogic JMS Clients on Kubernetes Best Practices for Application Deployment on WebLogic Server Running on Kubernetes Patching WebLogic Server in a Kubernetes Environment WebLogic Server on Kubernetes Data Volume Usage T3 RMI Communication for WebLogic Server Running on Kubernetes Processing the Oracle WebLogic Server Kubernetes Operator Logs using Elastic Stack Using Prometheus to Automatically Scale WebLogic Clusters on Kubernetes WebLogic Dynamic Clusters on Kubernetes How to run WebLogic clusters on the Oracle Cloud Infrastructure Container Engine for Kubernetes WebLogic Server JTA in a Kubernetes Environment Voyager/HAProxy as Load Balancer to Weblogic Domains in Kubernetes Running WebLogic on Open Shift Easily Create an OCI Container Engine for Kubernetes cluster with Terraform Installer to run WebLogic Server


The WebLogic Server

Migrating from Multi Data Source to Active GridLink - Take 2

In the original blog article on this topic at this link, I proposed that you delete the multi data source (MDS) and create a replacement Active GridLink (AGL) data source.  In the real world, the multi data source is likely referenced by other objects, such as a JDBC store, and deleting the MDS would create an invalid configuration.  Further, objects using connections from the MDS will fail during and after this re-configuration.  That implies that for this type of operation the related server needs to be shut down, the configuration updated with offline WLST, and the server restarted.  The administration console cannot be used for this type of migration.  Except for the section that describes using the console, the other information in the earlier blog article still applies to this process.  No changes should be required in the application, only in the configuration, because we preserve the JNDI name. The following is a sample of what the offline WLST script might look like.  You could parameterize it and make it more flexible in handling multiple data sources.
# java weblogic.WLST file.py
import sys, socket, os
# Application values
dsName='myds'
memberds1='ds1'
memberds2='ds2'
domain='/domaindir'
onsNodeList='host1:6200,host2:6200'
url='jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=' \
 + '(ADDRESS=(PROTOCOL=TCP)(HOST=host1)(PORT=1521))' \
 + '(ADDRESS=(PROTOCOL=TCP)(HOST=host2)(PORT=1521)))' \
 + '(CONNECT_DATA=(SERVICE_NAME=servicename)))'
user='user1'
password='password1'
readDomain(domain)
# change type from MDS to AGL
# The following is for WLS 12.1.2 and 12.1.3 if not setting
# FanEnabled true, which is not recommended
# set('ActiveGridlink','true')
# The following is for WLS 12.2.1 and later
#cd('/JDBCSystemResources/' + dsName + '/JdbcResource/' + dsName )
#set('DatasourceType', 'AGL')
# set the AGL parameters
cd('/JDBCSystemResources/' + dsName + '/JdbcResource/' + dsName)
create('myJdbcOracleParams','JDBCOracleParams')
cd('JDBCOracleParams/NO_NAME_0')
set('FanEnabled','true')
set('OnsNodeList',onsNodeList)
# Set the data source parameters
cd('/JDBCSystemResources/' + dsName + '/JdbcResource/' + dsName + '/JDBCDataSourceParams/NO_NAME_0')
set('GlobalTransactionsProtocol','None')
unSet('DataSourceList')
unSet('AlgorithmType')
# Set the driver parameters
cd('/JDBCSystemResources/' + dsName + '/JdbcResource/' + dsName)
create('myJdbcDriverParams','JDBCDriverParams')
cd('JDBCDriverParams/NO_NAME_0')
set('Url',url)
set('DriverName','oracle.jdbc.OracleDriver')
set('PasswordEncrypted',password)
create('myProps','Properties')
cd('Properties/NO_NAME_0')
create('user', 'Property')
cd('Property')
cd('user')
set('Value', user)
# Set the connection pool parameters
cd('/JDBCSystemResources/' + dsName + '/JdbcResource/' + dsName)
create('myJdbcConnectionPoolParams','JDBCConnectionPoolParams')
cd('/JDBCSystemResources/' + dsName + '/JdbcResource/' + dsName + '/JDBCConnectionPoolParams/NO_NAME_0')
set('TestTableName','SQL ISVALID')
# remove member data sources if they are not needed
cd('/')
delete(memberds1, 'JDBCSystemResource')
delete(memberds2, 'JDBCSystemResource')
updateDomain()
closeDomain()
exit()
In WLS 12.1.2 and 12.1.3, there is an explicit ActiveGridlink flag that can be used to identify an AGL datasource, even if FanEnabled is not set to true (which is not recommended) and auto-ONS is used (12.1.2 is the first release in which auto-ONS is supported). In the script above, uncomment the line to set it only if FanEnabled and OnsNodeList are not set. 
Starting in WLS 12.2.1, there is an explicit datasource type at the JDBCResource level. If that is set to GENERIC, you must re-set it using set("DatasourceType", "AGL"). In the script above, uncomment the lines to set it.  In this case, the ActiveGridlink flag is not necessary. If you are using an Oracle wallet for ONS, that needs to be added to the JDBCOracleParams object as well.  Prior to WLS 12.2.1, the ONS information needs to be explicitly specified.  In WLS 12.2.1 and later, the ONS information can be excluded and the database will try to determine the correct information.  For more complex ONS topologies, the configuration can be specified using the format described in http://docs.oracle.com/middleware/1221/wls/JDBCA/gridlink_datasources.htm#BABGJBIC.   Note: the unSet() method was not added to offline WLST until WLS 12.2.1.2.0.  There is a related patch to add this feature to WLS 12.1.3 at Patch 25695948.  For earlier releases, one option is to edit the MDS descriptor file, delete the lines for "data-source-list" and "algorithm-type", and comment out the "unSet()" calls before running the offline WLST script.  Another option is to run the following online WLST script, which does support the unSet() method.  However, the server will need to be restarted after the update and before the member data sources can be deleted.
# java weblogic.WLST file.py
import sys, socket, os
# Application values
dsName='myds'
memberds1='ds1'
memberds2='ds2'
onsNodeList='host1:6200,host2:6200'
url='jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=' \
 + '(ADDRESS=(PROTOCOL=TCP)(HOST=host1)(PORT=1521))' \
 + '(ADDRESS=(PROTOCOL=TCP)(HOST=host2)(PORT=1521)))' \
 + '(CONNECT_DATA=(SERVICE_NAME=otrade)))'
user='user1'
password='password1'
hostname='localhost'
admin='weblogic'
adminpw='welcome1'
connect(admin,adminpw,"t3://"+hostname+":7001")
edit()
startEdit()
# change type from MDS to AGL
# The following is for WLS 12.1.2 and 12.1.3 if not setting
# FanEnabled to true.  It is recommended to always set FanEnabled to true.
# cd('/JDBCSystemResources/' + dsName)
# set('ActiveGridlink','true')
# The following is for WLS 12.2.1 and later
# cd('/JDBCSystemResources/' + dsName + '/JDBCResource/' + dsName )
# set('DatasourceType', 'AGL')
# set the AGL parameters
cd('/JDBCSystemResources/' + dsName + '/JDBCResource/' + dsName + '/JDBCOracleParams/' + dsName)
set('FanEnabled','true')
set('OnsNodeList',onsNodeList)
# Set the data source parameters
cd('/JDBCSystemResources/' + dsName + '/JDBCResource/' + dsName + '/JDBCDataSourceParams/' + dsName)
set('GlobalTransactionsProtocol','None')
cmo.unSet('DataSourceList')
cmo.unSet('AlgorithmType')
# Set the driver parameters
cd('/JDBCSystemResources/' + dsName + '/JDBCResource/' + dsName + '/JDBCDriverParams/' + dsName)
set('Url',url)
set('DriverName','oracle.jdbc.OracleDriver')
set('PasswordEncrypted',password)
cd('Properties/' + dsName)
userprop=cmo.createProperty('user')
userprop.setValue(user)
# Set the connection pool parameters
cd('/JDBCSystemResources/' + dsName + '/JDBCResource/' + dsName + '/JDBCConnectionPoolParams/' + dsName)
set('TestTableName','SQL PINGDATABASE')
# cannot remove member data sources until server restarted
#cd('/')
#delete(memberds1, 'JDBCSystemResource')
#delete(memberds2, 'JDBCSystemResource')
save()
activate()
exit()
A customer was having problems setting multiple targets in a WLST script.  It's not limited to this topic, but here's how it is done. 
# These imports are needed if they are not already available in your WLST session:
from javax.management import ObjectName
from jarray import array

targetlist = []
targetlist.append(ObjectName('com.bea:Name=myserver1,Type=Server'))
targetlist.append(ObjectName('com.bea:Name=myserver2,Type=Server'))
targets = array(targetlist, ObjectName)
cd('/JDBCSystemResources/' + dsName)
set('Targets',targets)
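Finally, as suggested above, the offline script can be parameterized to migrate several multi data sources in one pass. The following offline WLST sketch shows one way to do that; the domain directory, data source names, URLs, and credentials are placeholders, and the loop body simply repeats the edits from the full offline script above for each data source (member data source cleanup is omitted here).

# Offline WLST sketch (run with: java weblogic.WLST migrate_all.py) that applies
# the same MDS-to-AGL edits as the full script above to several data sources.
# All names, URLs, and credentials below are placeholders.
domainDir   = '/domaindir'
onsNodeList = 'host1:6200,host2:6200'

# dsName -> (url, user, password)
dataSources = {
    'myds1': ('jdbc:oracle:thin:@//host1:1521/service1', 'user1', 'password1'),
    'myds2': ('jdbc:oracle:thin:@//host2:1521/service2', 'user2', 'password2'),
}

readDomain(domainDir)
for dsName in dataSources.keys():
    url, user, password = dataSources[dsName]
    base = '/JDBCSystemResources/' + dsName + '/JdbcResource/' + dsName
    # AGL parameters
    cd(base)
    create('myJdbcOracleParams', 'JDBCOracleParams')
    cd('JDBCOracleParams/NO_NAME_0')
    set('FanEnabled', 'true')
    set('OnsNodeList', onsNodeList)
    # data source parameters
    cd(base + '/JDBCDataSourceParams/NO_NAME_0')
    set('GlobalTransactionsProtocol', 'None')
    unSet('DataSourceList')
    unSet('AlgorithmType')
    # driver parameters
    cd(base)
    create('myJdbcDriverParams', 'JDBCDriverParams')
    cd('JDBCDriverParams/NO_NAME_0')
    set('Url', url)
    set('DriverName', 'oracle.jdbc.OracleDriver')
    set('PasswordEncrypted', password)
    create('myProps', 'Properties')
    cd('Properties/NO_NAME_0')
    create('user', 'Property')
    cd('Property/user')
    set('Value', user)
    # connection pool parameters
    cd(base)
    create('myJdbcConnectionPoolParams', 'JDBCConnectionPoolParams')
    cd(base + '/JDBCConnectionPoolParams/NO_NAME_0')
    set('TestTableName', 'SQL ISVALID')
    # member data sources can be deleted afterwards, as in the full script above
updateDomain()
closeDomain()
exit()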


Docker Volumes in WebLogic

Background Information In the Docker world, containers are ephemeral; they can be destroyed and replaced. After a container is destroyed, it is gone, and all the changes made to the container are gone with it. If you want to persist data independently of the container's lifecycle, you need to use volumes. Volumes are directories that exist outside of the container file system. Docker Data Volume Introduction This blog provides a generic introduction to Docker data volumes and is based on a WebLogic Server 12.2.1.3 image. You can build the image using scripts in GitHub. In this blog, this base image is used only to demonstrate the usage of data volumes; no WebLogic Server instance is actually running. Instead, the container uses the 'sleep 3600' command to keep running for an hour (3600 seconds) and then stop. Local Data Volumes Anonymous Data Volumes For an anonymous data volume, a unique name is auto-generated internally. Two ways to create anonymous data volumes are: Create or run a container with '-v /container/fs/path' in docker create or docker run Use the VOLUME instruction in Dockerfile: VOLUME ["/container/fs/path"]
$ docker run --name c1 -v /mydata -d weblogic-12.2.1.3-developer 'sleep 3600'
$ docker inspect c1 | grep Mounts -A 10
"Mounts": [ { "Name": "625672972c137c2e29b85f1e6ae70d6d4eb756002062792721a5ac0e9d0f1421", "Source": "/scratch/docker/volumes/625672972c137c2e29b85f1e6ae70d6d4eb756002062792721a5ac0e9d0f1421/_data", "Destination": "/mydata", "Driver": "local", "Mode": "", "RW": true, "Propagation": "" } ],
# now we know that the volume has a randomly generated name 625672972c137c2e29b85f1e6ae70d6d4eb756002062792721a5ac0e9d0f1421
$ docker volume inspect 625672972c137c2e29b85f1e6ae70d6d4eb756002062792721a5ac0e9d0f1421
[ { "Name": "625672972c137c2e29b85f1e6ae70d6d4eb756002062792721a5ac0e9d0f1421", "Driver": "local", "Mountpoint": "/scratch/docker/volumes/625672972c137c2e29b85f1e6ae70d6d4eb756002062792721a5ac0e9d0f1421/_data", "Labels": null, "Scope": "local" } ]
Named Data Volumes Named data volumes are available in Docker 1.9.0 and later. Two ways to create named data volumes are: Use docker volume create --name volume_name Create or run a container with '-v volume_name:/container/fs/path' in docker create or docker run
$ docker volume create --name testv1
$ docker volume inspect testv1
[ { "Name": "testv1", "Driver": "local", "Mountpoint": "/scratch/docker/volumes/testv1/_data", "Labels": {}, "Scope": "local" } ]
Mount Host Directory or File You can mount an existing host directory to a container directly.  To mount a host directory when running a container: Create or run a container with '-v /host/path:/container/path' in docker create or docker run You can mount an individual host file in the same way: Create or run a container with '-v /host/file:/container/file' in docker create or docker run Note that the mounted host directory or file is not an actual data volume managed by Docker, so it is not shown when running docker volume ls. Also, you cannot mount a host directory or file in a Dockerfile.
$ docker run --name c2 -v /home/data:/mydata -d weblogic-12.2.1.3-developer 'sleep 3600'
$ docker inspect c2 | grep Mounts -A 8
"Mounts": [ { "Source": "/home/data", "Destination": "/mydata", "Mode": "", "RW": true, "Propagation": "rprivate" } ],
Data Volume Containers Data volume containers are data-only containers. After a data volume container is created, it doesn't need to be started. Other containers can access the shared data using --volumes-from. 
# step 1: create a data volume container 'vdata' with two anonymous volumes $ docker create -v /vv/v1 -v /vv/v2 --name vdata weblogic-12.2.1.3-developer # step 2: run two containers c3 and c4 with reference to the data volume container vdata $ docker run --name c3 --volumes-from vdata -d weblogic-12.2.1.3-developer 'sleep 3600' $ docker run --name c4 --volumes-from vdata -d weblogic-12.2.1.3-developer 'sleep 3600' Data Volume Plugins Docker 1.8 and later support a volume plugin which can extend Docker with new volume drivers. You can use volume plugins to mount remote folders in a shared storage server directly, such as iSCSI, NFS, or FC. The same storage can be accessed, in the same manner, from another container running in another host. Containers in different hosts can share the same data. There are plugins available for different storage types. Refer to the Docker documentation for volume plugins: https://docs.docker.com/engine/extend/legacy_plugins/#volume-plugins.  WebLogic Persistence in Volumes When running WebLogic Server in Docker, there are basically two use cases for using data volumes: To separate data from the WebLogic Server lifecycle, so you can reuse the data even after the WebLogic Server container is destroyed and later restarted or moved. To share data among different WebLogic Server instances, so they can recover each other's data, if needed (service migration). The following WebLogic Server artifacts are candidates for using data volumes: Domain home folders Server logs Persistent file stores for JMS, JTA, and such. Application deployments Refer to the following table for the data stored by WebLogic Server subsystems. Subsystem or Service What It Stores More Information Diagnostic Service Log records, data events, and harvested metrics Understanding WLDF Configuration in Configuring and Using the Diagnostics Framework for Oracle WebLogic Server JMS Messages Persistent messages and durable subscribers Understanding the Messaging Models in Developing JMS Applications for Oracle WebLogic Server JMS Paging Store One per JMS server. Paged persistent and non-persistent messages. Main Steps for Configuring Basic JMS System Resources in Administering JMS Resources for Oracle WebLogic Server  JTA Transaction Log (TLOG) Information about committed transactions, coordinated by the server, that may not have been completed. TLOGs can be stored in the default persistent store or in a JDBC TLOG store.    Managing Transactions in Developing JTA Applications for Oracle WebLogic Server     Using a JDBC TLog Store in Developing JTA Applications for Oracle WebLogic Server  Path Service The mapping of a group of messages to a messaging resource Using the WebLogic Path Service in Administering JMS Resources for Oracle WebLogic Server Store-and-Forward (SAF) Service Agents Messages from a sending SAF agent for re-transmission to a receiving SAF agent Understanding the Store-and-Forward Service in Administering the Store-and-Forward Service for Oracle WebLogic Server  Web Services Request and response SOAP messages from an invocation of a reliable WebLogic Web Service Using Reliable SOAP Messaging in Programming Advanced Features of JAX-RPC Web Services for Oracle WebLogic Server  EJB Timer Services EJB Timer objects Understanding Enterprise JavaBeans in Developing Enterprise JavaBeans, Version 2.1, for Oracle WebLogic Server A best practice is to run each WebLogic Server instance in its own container and share domain configuration in a data volume. 
This is the basic usage scenario for data volumes in WebLogic Server. When the domain home is in an external volume, server logs are also in the external volume, by default. But, you can explicitly configure server logs to be located in a different volume because server logs may contain more sensitive data than other files in the domain home and need more permission control.  File stores for JMS and JTA etc should be located in an external volume and use shared directories. This is required for service migration to work. It’s fine for all default and custom stores in the same domain to use the same shared directory, as different instances automatically, uniquely decorate their file names. But different domains must never share the same directory location, as the file names can collide. Similarly, two running, duplicate domains must never share the same directory location. File collisions usually result in file locking errors, and may corrupt data.  File stores create a number of files for different purposes. Cache and paging files can be stored in the container file system locally. Refer to following table for detailed information about the different files and locations. Store Type Directory Configuration Store Path Not Configured   Relative Store Path   Absolute Store Path   File Name   default The directory configured on a WebLogic Server default store. See Using the Default Persistent Store. <domainRoot>/servers/<serverName>/data/store/default <domainRoot>/<relPath> <absPath> _WLS_<serverName>NNNNNN.DAT custom file The directory configured on a custom file store. See Using Custom File Stores. <domainRoot>/servers/<serverName>/data/store/<storeName> <domainRoot>/<relPath> <absPath> <storeName>NNNNNN.DAT cache The cache directory configured on a custom or default file store that has a DirectWriteWithCache synchronous write policy. See Tuning the WebLogic Persistent Store in Tuning Performance of Oracle WebLogic Server. ${java.io.tmpdir}/WLStoreCache/${domainName}/<storeUuid> <domainRoot>/<relPath> <absPath> <storeName>NNNNNN.CACHE paging The paging directory configured on a SAF agent or JMS server. See Paging Out Messages To Free Up Memory in Tuning Performance of Oracle WebLogic Server. <domainRoot>/servers/<serverName>/tmp <domainRoot>/<relPath> <absPath> <jmsServerName>NNNNNN.TMP <safAgentName>NNNNNN.TMP   In order to properly secure data in external volumes, it is an administrator's responsibility to set the appropriate permissions on those directories. To allow the WebLogic Server process to access data in a volume, the user running the container needs to have the proper permission to the volume folder.  Summary Use local data volumes: Docker 1.8.x and earlier recommends that you use data volume containers (with anonymous data volumes). Docker 1.9.0 and later recommends that you use named data volumes.  If you have multiple volumes, to be shared among multiple containers, we recommend that you use a data volume container with named data volumes. To share data among containers in different hosts, first mount the folder in a shared storage server, and then choose one volume plugin to mount it to Docker. We recommend that the WebLogic Server domain home be externalized to a data volume. The externalized domain home must be shared by the Admin server and Managed servers, each running in their own container. For high availability all Managed Servers need to read and write to the stores in the shared data volume. 
Choose the kind of data volume with the persistence of the stores, logs, and diagnostic files in mind.
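To make the file-store recommendation above concrete, here is a hedged online WLST sketch that creates a custom file store whose directory points at a path on the shared external volume and targets it to the cluster. The admin URL, credentials, store name, mount path (/u01/filestores/filestore1, borrowed from the JMS sample earlier in this series), and cluster name are placeholders to adjust for your own domain.

# Online WLST sketch: java weblogic.WLST create_store.py
# Placeholder values: admin URL/credentials, store name, volume mount path,
# and cluster name must be changed to match your domain.
connect('weblogic', 'welcome1', 't3://localhost:7001')
edit()
startEdit()

store = cmo.createFileStore('filestore1')
# Directory on the shared external volume mounted into every server's container
store.setDirectory('/u01/filestores/filestore1')
store.addTarget(getMBean('/Clusters/myCluster'))

save()
activate()
disconnect()
exit()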


The WebLogic Server

WebLogic Server in Eclipse IDE for Java EE Developers

This article describes how to integrate WebLogic Server in the latest supported version of Eclipse IDE for Java EE Developers. You need to start by getting all of the pieces - Java SE Development Kit, WebLogic Server, and Eclipse IDE. Go to http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html to download the Java SE Development Kit (neither WLS nor Eclipse comes with it).  Accept the license agreement, download the binary file, and install it on your computer.  Set your JAVA_HOME to the installation directory and add $JAVA_HOME/bin to your PATH. Next, get and install a copy of WebLogic Server (WLS).  The latest version of WLS that is currently supported in Eclipse is WLS 12.2.1.3.0. The standard WLS download page at http://www.oracle.com/technetwork/middleware/weblogic/downloads/wls-main-097127.html has 12.2.1.3.0 at the top.  If you are running on Windows, the command window in which you run the jar command needs to be running as Administrator.
unzip fmw_12.2.1.3.0_wls_Disk1_1of1.zip
java -jar fmw_12.2.1.3.0_wls.jar
You can take all of the default values.  You will need to specify an installation directory.  You might want to install the examples.  On the last page, make sure “Automatically Launch the Quickstart Configuration Wizard” is selected to create the domain.  In the Configuration Wizard, take the defaults (you may want to change the domain directory), enter a password, and click Create. Download “Eclipse IDE for Java EE Developers” from http://www.eclipse.org/downloads/eclipse-packages/ and unzip it.  The latest version that OEPE supports is Oxygen.3 (it does not currently run correctly on the 2018-12 release or later).  Change to the Eclipse installation directory and run eclipse.  Select a directory as a workspace and optionally select to use this as the default.  You can close the Welcome screen so we can get to work. If you are running behind a firewall and need to configure a proxy server to download files from an external web site, select the Preferences menu item under the Window menu, expand the General node in the preferences tree and select Network Connections, change the drop-down to Manual, and edit the http and https entries to provide the name of the proxy server and the port. Click the Window menu, select Show View, and select the Servers view.  Then click on the link “No servers are available.  Click this link to create a new server”.  Expand Oracle, select Oracle WebLogic Server Tools, and click Next.  It will then go off and get a number of files, including the Oracle Enterprise Pack for Eclipse (OEPE).  You will need to accept the OEPE license agreement to finish the installation.  The installation will continue in the background.  Click on the progress bar in the lower right corner to wait for it to complete.  Eventually it will complete and ask if you want to restart Eclipse.  Eclipse needs to restart to adopt the changes.  To pick up where you left off, click the Window menu, select Show View, select the Servers view, click on the link “No servers are available.  Click this link to create a new server”, expand Oracle, and select Oracle WebLogic Server. If you want to access the server remotely, you will need to enter the computer name; otherwise, using localhost is sufficient for local access.  Click Next. On the next screen, browse to the directory where you installed WLS 12.2.1.3.0 and then select the wlserver subdirectory (i.e., WebLogic home is always the wlserver subdirectory of the installation directory).  
Eclipse will automatically put in the value of JAVA_HOME for the “Java home” value.  Click Next. On the next screen, browse to the directory where you created the domain using the WLS Configuration Wizard (you can also click on the button to select from known domains).  Click Next and Finish. You can double-click on the new server entry that you just created to bring up a window for the server.  Click on “Open Launch configuration” to configure any options that you want while running the server and then click OK. Back in the server view, right click on the server entry and select start to boot the server. A Console window will open to display the server log output and eventually you will see that the server is in RUNNING mode. That covers the logistics of getting everything installed to run WLS in Eclipse.  Now you can create your project and start your development.  Let’s pick a Dynamic Web Project from the new options. The project will automatically be associated with the server runtime that you just set up.  For example, selecting a dynamic web project will display a window like the following where the only value to provide is the name of the project. There are many tutorials on creating projects within Eclipse, once you get the tools set up.  
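If you also want a quick command-line check that the server you started from the Servers view is really up, a short online WLST script can report its state. This is only a sketch: the admin URL and the AdminServer name are the Configuration Wizard defaults, and the password placeholder is whatever you entered when creating the domain.

# Run with: java weblogic.WLST check_server.py
# Placeholders: admin URL, user, password, and server name.
connect('weblogic', '<your-password>', 't3://localhost:7001')
state('AdminServer', 'Server')   # prints the server's current state, e.g. RUNNING
disconnect()
exit()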


The WebLogic Server

Automatic Scaling of WebLogic Clusters on Kubernetes

Elasticity (scaling up or scaling down) of a WebLogic Server cluster provides increased reliability of customer applications as well as optimization of resource usage.  Elasticity was introduced in WebLogic Server 12.2.1 and was built on the concepts of the elastic services framework and dynamic clusters:     Elasticity in WebLogic Server is achieved by either:   ·      Manually adding or removing a running server instance in a dynamic WebLogic Server cluster using the WebLogic Server Administration Console, or the WebLogic Scripting Tool (WLST). This is known as on-demand scaling.   ·      Establishing WLDF scaling policies that set the conditions under which a dynamic cluster should be scaled up or down, and actions that define the scaling operations themselves. When the conditions defined in the scaling policy occur, the corresponding scaling action is triggered automatically.   When a scaling action occurs, Managed Server instances are started and stopped through the use of WebLogic Server Node Managers.  Node Manager is a WebLogic Server utility that manages the lifecycle (startup, shutdown, and restart) of Managed Server instances.   The WebLogic Server team is investing in running WebLogic Server in Kubernetes cloud environments.  A WebLogic Server cluster can be automatically scaled by increasing (or decreasing) the number of pods based on resource metrics provided by the WebLogic Diagnostic Framework (WLDF).  We will use the sample demo from WebLogic on Kubernetes, Try It! to illustrate automatic scaling of a WebLogic Server cluster in a Kubernetes cloud environment. There are a few key differences between how elasticity works in the sample demo for a Kubernetes cloud environment versus in traditional WebLogic Server deployment environments:   1.     The sample demo uses statically-configured clusters, whereas elasticity works with dynamic clusters in a traditional deployment.  We’ll discuss elasticity of WebLogic Server clusters in a Kubernetes cloud environment in a future blog. 2.     In the sample demo, scaling actions invoke requests to the Kubernetes API server to scale pods, versus requests to Node Manager in traditional deployments.   In this blog entry, we will show you how a WebLogic Server cluster can be automatically scaled up or down in a Kubernetes environment based on metrics provided by WLDF.    WebLogic on Kubernetes Sample Demo We will use the WebLogic domain running on Kubernetes described in the following blog entry, WebLogic on Kubernetes, Try It!.        The WebLogic domain, running in a Kubernetes cluster, consists of:   1.     An Administration Server (AS) instance, running in a Docker container, in its own pod (POD 1). 2.     A webhook implementation, running in its own Docker container, in the same pod as the Administration Server (POD 1).   What is a webhook? A webhook is a lightweight HTTP server that can be configured with multiple endpoints (hooks) for executing configured commands, such as shell scripts. More information about the webhook used in the sample demo, see adnanh/webhook/.   NOTE: As mentioned in WebLogic on Kubernetes, Try It!, a prerequisite for running  WLDF initiated scaling is building and installing a Webhook Docker image  (oow-demo-webhook).     3.     A WebLogic Server cluster composed of a set of Managed Server instances in which each instance is running in a Docker container in its own pod (POD 2 to POD 6). 
WebLogic Diagnostic Framework The WebLogic Diagnostics Framework (WLDF) is a suite of services and APIs that collect and surface metrics that provide visibility into server and application performance.  To support automatic scaling of a dynamic cluster, WLDF provides the Policies and Actions component, which lets you write policy expressions for automatically executing scaling operations on a dynamic cluster. These policies monitor one or more types of WebLogic Server metrics, such as memory, idle threads, and CPU load.  When the configured threshold in a policy is met, the policy is triggered, and the corresponding scaling action is executed. For more information about WLDF and diagnostic policies and actions, see Configuring Policies and Actions in Configuring and Using the Diagnostics Framework for Oracle WebLogic Server.   Policies can be based on the following types of data: ·       Trends over time, or historical data, such as changes in average values during a specific time interval. For example, a policy can be based on average JVM heap usage above a certain threshold. ·       Runtime metrics relevant to all server instances in a cluster, not just one server instance. ·       Data from multiple services that are considered together. For example, a policy can be based on response-time metrics reported by a load balancer and message-backlog metrics from a message queue. ·       Calendar-based schedules. Policies can identify a specific calendar time, such as time of day or day of week, when a scaling action must be executed. ·       Log rules or event data rules. Automatic Scaling of a WebLogic Server Cluster in Kubernetes Here is how we can achieve automatic scaling of a WebLogic Server cluster in a Kubernetes environment using WLDF.  I’ll be discussing only the relevant configuration changes for automatic scaling. You can find instructions for setting up and running the sample demo in WebLogic on Kubernetes, Try It!.   First, I’ll quickly describe how automatic scaling of a WebLogic Server cluster in Kubernetes works.   In the sample demo, we have a WebLogic Server cluster running in a Kubernetes cluster with a one-to-one mapping of WebLogic Server Managed Server instances to Kubernetes pods. The pods are managed by a StatefulSet controller.  Like ReplicaSets and Deployments, StatefulSets are a type of replication controller that can be scaled by simply increasing or decreasing the desired replica count field.  A policy and scaling action is configured for the WebLogic Server cluster. While the WebLogic Server cluster is running, WLDF collects and monitors various runtime metrics, such as the OpenSessionsCurrentCount attribute of the WebAppComponentRuntimeMBean.  When the conditions defined in the policy occur, the policy is triggered, which causes the corresponding scaling action to be executed. For a WebLogic Server cluster running in a Kubernetes environment, the scaling action is to scale the corresponding StatefulSet by setting the desired replica count field.  In turn, this causes the StatefulSet controller to increase or decrease the number of pods (that is, the WebLogic Server Managed Server instances) to match the desired replica count.   
Because StatefulSets are managing the pods in which the Managed Server instances are running, a WebLogic Server cluster can also be scaled on-demand by using tools such as kubectl:   For example: $ kubectl scale statefulset ms --replicas=3   WLDF Policies and Actions   For information about configuring the WLDF Policies and Actions component, see Configuring Policies and Actions.  For this sample, the policy and action is configured in a WLDF diagnostic system module, whose corresponding resource descriptor file, Module-0-3905.xml, is shown below:     <?xml version='1.0' encoding='UTF-8'?> <wldf-resource xmlns="http://xmlns.oracle.com/weblogic/weblogic-diagnostics" xmlns:sec="http://xmlns.oracle.com/weblogic/security" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:wls="http://xmlns.oracle.com/weblogic/security/wls" xsi:schemaLocation="http://xmlns.oracle.com/weblogic/weblogic-diagnostics http://xmlns.oracle.com/weblogic/weblogic-diagnostics/2.0/weblogic-diagnostics.xsd">   <name>Module-0</name>   <watch-notification>     <watch>       <name>myScaleUpPolicy</name>       <enabled>true</enabled>       <rule-type>Harvester</rule-type>       <rule-expression>wls:ClusterGenericMetricRule("DockerCluster", "com.bea:Type=WebAppComponentRuntime,ApplicationRuntime=OpenSessionApp,*", "OpenSessionsCurrentCount","&gt;=",0.01,5,"1 seconds","10 seconds")       </rule-expression>       <expression-language>EL</expression-language>       <alarm-type>AutomaticReset</alarm-type>       <schedule>         <minute>*</minute>         <second>*/15</second>       </schedule>       <alarm-reset-period>60000</alarm-reset-period>       <notification>RestScaleUpAction</notification>     </watch>     <rest-notification>       <name>RestScaleUpAction</name>       <enabled>true</enabled>       <timeout>0</timeout>       <endpoint-url>http://${OPERATOR_ENDPOINT}/hooks/scale-up</endpoint-url>       <rest-invocation-method-type>PUT</rest-invocation-method-type>       <accepted-response-type>application/json</accepted-response-type>       <http-authentication-mode>None</http-authentication-mode>       <custom-notification-properties></custom-notification-properties>     </rest-notification>   </watch-notification> </wldf-resource>   The base element for defining policies and actions is <watch-notification>. Policies are defined in <watch> elements. Actions are defined in elements whose names correspond to the action type. For example, the element for a REST action is <rest-notification>.   Here are descriptions of key configuration details regarding the policies and actions that are specified in the preceding resource descriptor file.  For information about all the available action types, see Configuring Actions.   Policies:   ·      The sample demo includes a policy named myScaleUpPolicy, which has the policy expression shown below as it would appear in the WebLogic Server Administration Console:       ·      The policy expression for myScaleUpPolicy uses the smart rule, ClusterGenericMetricRule. The configuration of this smart rule can be read as:   For the cluster DockerCluster, WLDF will monitor the OpenSessionsCurrentCount attribute of the WebAppComponentRuntimeMBean for the OpenSessionApp application.  If the OpenSessionsCurrentCount is greater than or equal to 0.01 for 5 per cent of the Managed Server instances in the cluster, then the policy will be evaluated as true. 
Metrics will be collected at a sampling rate of 1 second, and the sample data will be averaged over the specified 10-second retention window. For more information about smart rules, see Smart Rule Reference.

Actions:

An action is an operation that is executed when a policy expression evaluates to true. In a traditional WebLogic Server deployment, scaling actions (scale up and scale down) are associated with policies for scaling a dynamic cluster. Elastic actions scale Managed Server instances in a dynamic cluster by interacting with Node Managers.

WLDF also supports many other types of diagnostic actions:

· Java Management Extensions (JMX)
· Java Message Service (JMS)
· Simple Network Management Protocol (SNMP)
· Simple Mail Transfer Protocol (SMTP)
· Diagnostic image capture
· REST
· WebLogic logging system
· WebLogic Scripting Tool (WLST)
· Heap dump
· Thread dump

For our sample, we use a REST action to show invoking a REST endpoint to initiate a scaling operation. We selected a REST action, instead of an elastic action, because we are not running Node Manager in the Kubernetes environment, and we’re scaling pods by using the Kubernetes API and API server. For more information about all the diagnostic actions supported in WLDF, see Configuring Actions.

· The REST action, associated with the policy myScaleUpPolicy from earlier, was configured in the Actions tab of the policy configuration pages in the WebLogic Server Administration Console.
· The REST endpoint URL to which the notification is sent is established by the <endpoint-url> element in the diagnostic system module’s resource descriptor file.

By looking at the configuration elements of the REST action, you can see that the REST invocation will send an empty PUT request to the endpoint with no authentication. If you prefer, you can also send a Basic Authentication REST request by simply setting the <http-authentication-mode> attribute to Basic.

Other WLDF resource descriptor configuration settings worth noting are:

1. The file name of a WLDF resource descriptor can be anything you like. For our sample demo, Module-0-3905.xml was generated when we used the WebLogic Server Administration Console to configure the WLDF policy and REST action.
2. In the demo WebLogic domain, the WLDF diagnostic system module was created using the container-scripts/add-app-to-domain.py script:

# Configure WLDF
# ==============
as_name = sys.argv[3]

print('Configuring WLDF system resource');
cd('/')

create('Module-0','WLDFSystemResource')
cd('/WLDFSystemResources/Module-0')
set('DescriptorFileName', 'diagnostics/Module-0-3905.xml')

cd('/')
assign('WLDFSystemResource', 'Module-0', 'Target', as_name)

In the script, you can see that:

· A WLDF diagnostic system module named Module-0 is created.
· The WLDF resource descriptor file, Module-0-3905.xml, is associated with Module-0.
· The diagnostic system module is targeted to the Administration Server, specified as as_name, which is passed in as a script argument. This diagnostic system module was targeted to the Administration Server because its policy contains the ClusterGenericMetricRule smart rule, which must be executed from the Administration Server so that it can have visibility across the entire cluster. For more information about smart rules and their targets, see Smart Rule Reference.
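Because the REST action amounts to an empty, unauthenticated PUT, you can imitate the notification by hand to confirm that the endpoint you configure is reachable before letting WLDF drive it. This is only a sketch; the host and port stand in for whatever ${OPERATOR_ENDPOINT} resolves to in your environment:

# Placeholder value; substitute the host:port of your webhook or other REST endpoint.
OPERATOR_ENDPOINT=webhook-host:9000
curl -v -X PUT -H "Accept: application/json" "http://${OPERATOR_ENDPOINT}/hooks/scale-up"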
Demo Webhook

In the sample demo, a webhook is used to receive the REST notification from WLDF and to scale the StatefulSet and, by extension, the WebLogic Server cluster. The following hook is defined in webhooks/hooks.json:

[
  {
    "id": "scale-up",
    "execute-command": "/var/scripts/scaleUpAction.sh",
    "command-working-directory": "/var/scripts",
    "response-message": "scale-up call ok\n"
  }
]

This hook, named scale-up, corresponds to the <endpoint-url> specified in the REST notification:

<endpoint-url>http://${OPERATOR_ENDPOINT}/hooks/scale-up</endpoint-url>

Notice that the endpoint URL contains the environment variable ${OPERATOR_ENDPOINT}. This environment variable will be replaced with the correct host and port of the webhook when the Administration Server is started.

When the hook endpoint is invoked, the command specified by the "execute-command" property is executed, which in this case is the shell script "/var/scripts/scaleUpAction.sh":

#!/bin/sh

echo "called" >> scaleUpAction.log

num_ms=`curl -v --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" -X GET https://kubernetes/apis/apps/v1beta1/namespaces/default/statefulsets/${MS_STATEFULSET_NAME}/status | grep -m 1 replicas | sed 's/.*\://; s/,.*$//'`

echo "current number of servers is $num_ms" >> scaleUpAction.log

new_ms=$(($num_ms + 1))

echo "new_ms is $new_ms" >> scaleUpAction.log

curl -v --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" -X PATCH -H "Content-Type: application/strategic-merge-patch+json" -d '{"spec":{"replicas":'"$new_ms"'}}' https://kubernetes/apis/apps/v1beta1/namespaces/default/statefulsets/${MS_STATEFULSET_NAME}

In the script, we are issuing requests to the Kubernetes API server REST endpoints with curl and then parsing the JSON response. The first request retrieves the current replica count for the StatefulSet. Then we scale up the StatefulSet by incrementing the replica count by one and sending a PATCH request with the new value for the replicas property in the request body.

Wrap Up

With a simple configuration to use the Policies and Actions component in WLDF, we can provide automatic scaling functionality for a statically configured WebLogic Server cluster in a Kubernetes environment. WLDF’s tight integration with WebLogic Server provides a very comprehensive set of WebLogic domain-specific (custom) metrics to be used for scaling decisions. Although we used a webhook as our REST endpoint to receive WLDF notifications, we could have just as easily implemented another Kubernetes object or service running in the Kubernetes cluster to scale the WebLogic Server cluster in our sample demo. For example, the WebLogic Server team is also investigating the Kubernetes Operator pattern for integrating WebLogic Server in a Kubernetes environment. A Kubernetes Operator is "an application-specific controller that extends the Kubernetes API to create, configure, and manage instances of complex applications". For more information on Operators, see Introducing Operators: Putting Operational Knowledge into Software. Stay tuned for future blog updates on WebLogic Server and its integration with Kubernetes. The next blog related to WebLogic Server clustering will be in the area of dynamic clusters for WebLogic Server on Kubernetes.


Let WebLogic work with Elastic Stack in Kubernetes

Over the past decade, there has been a big change in application development, distribution, and deployment. More and more popular tools have become available to meet the requirements. Some of the tools that you may want to use are provided in the Elastic Stack. In this article, we'll show you how to integrate them with WebLogic Server in Kubernetes. Note: You can find the code for this article at https://github.com/xiumliang/Weblogic-ELK.

What Is the Elastic Stack?

The Elastic Stack consists of several products: Elasticsearch, Logstash, Kibana, and others. Using the Elastic Stack, you can gain insight from your application's log data, in real time. Elasticsearch is a distributed, RESTful search and analytics engine capable of solving a growing number of use cases. As the heart of the Elastic Stack, it stores your data centrally so you can discover the expected and uncover the unexpected. Logstash is an open source, server-side data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to your favorite “stash.” Kibana lets you visualize your Elasticsearch data and navigate the Elastic Stack. It gives you the freedom to select how to shape your data. You don’t always have to know what you're looking for.

Let WebLogic Server Work with the Elastic Stack

There are several ways to use the Elastic Stack to gather and analyze WebLogic Server logs. In this article, we will introduce two of them.

Integrate the Elastic Stack with WebLogic Server by Using Shared Volumes

WebLogic Server instances put their logs into a shared volume. The volume could be an NFS or a host path. Logstash collects the logs from the volume and transfers the filtered logs to Elasticsearch. This type of integration requires a shared disk for the logs. The advantage is that the log files are stored and persisted, even after the WebLogic Server and Elastic Stack pods shut down. Also, by using a shared volume, you do not have to consider the network configuration between Logstash and Elasticsearch. You just need to deploy two pods (Elastic Stack and WebLogic Server), and use the shared volume. The disadvantage is that you need to maintain a shared volume for the pods; you must consider the disk space. In a multi-server environment, you need to arrange the logs on the shared volume so that there is no conflict between them.

Deploy a WebLogic Server Pod to Kubernetes

$ kubectl create -f k8s_weblogic.yaml

In this k8s_weblogic.yaml file, we've defined a shared volume of type 'hostPath'. When the pod starts, the WebLogic Server logs are written to the shared volume so Logstash can access them. We can change the volume type to NFS or another type supported by Kubernetes, but we must be careful about permissions. If the permissions are not correct, the logs may not be written to or read from the shared volume. We can check if the pod is deployed and started:

$ kubectl get pods

We get the following:

NAME                        READY     STATUS    RESTARTS   AGE
weblogic-1725565574-fgmsr   1/1       Running   0          31s

Deploy an Elastic Stack Pod to Kubernetes

$ kubectl create -f k8s_elk.yaml

The k8s_elk.yaml file defines the same shared volume as the k8s_weblogic.yaml file, because both the WebLogic Server and Elastic Stack pods mount it so that Logstash can read the logs. Please note that Logstash is not started when the pod starts. We need to further configure Logstash before starting it.
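As a quick check that the server inside the WebLogic pod actually came up before wiring in Logstash, you can look at the container output; the pod name is the one returned by kubectl get pods above:

$ kubectl logs weblogic-1725565574-fgmsr | tail -20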
After the Elastic Stack pod is started, we have two pods in the Kubernetes node:

NAME                        READY     STATUS    RESTARTS   AGE
weblogic-1725565574-fgmsr   1/1       Running   0          31s
elk-3823852708-zwbfg        1/1       Running   0          6m

Connect to the Pod and Verify That the Elastic Stack Processes Started

$ kubectl exec -it elk-3823852708-zwbfg /bin/bash

Run the following commands to verify that Elasticsearch has started:

$ curl -i -X GET "http://127.0.0.1:9200"
$ curl -i -X GET "http://127.0.0.1:9200/_cat/indices?v"

If Elasticsearch has started, the second command returns the list of indices.

Because Kibana is a web application, we verify Kibana by opening the following URL in a browser:

http://[NODE_IP_ADDRESS]:31711/app/kibana

We get Kibana's welcome page. The port 31711 is the node port defined in k8s_elk.yaml.

Configure Logstash

$ vim /opt/logstash/config/logstash.conf

In the logstash.conf file, the "input block" defines where Logstash gets the input logs. The "filter block" defines a simple rule for how to filter WebLogic Server logs. The "output block" transfers the filtered logs to the Elasticsearch address:port.

Start Logstash and Verify the Result

/opt/logstash/bin# ./logstash -f ../config/logstash.conf

After Logstash is started, open the browser and point to the Elasticsearch address:

http://[NODE_IP_ADDRESS]:31712/_cat/indices?v

Compared to the previous result, there is an additional line, logstash-2017.07.28, which indicates that Logstash has started and transferred logs to Elasticsearch. Also, we can try to access any WebLogic Server applications. Now the Elastic Stack can gather and process the logs.

Integrate the Elastic Stack with WebLogic Server via the Network

In this approach, WebLogic Server and the Logstash agent are deployed in one pod, and Elasticsearch and Kibana are deployed in another pod. Because Logstash and Elasticsearch are not in the same pod, Logstash has to transfer data to Elasticsearch using an external IP:port. For this type of integration, we need to configure the network for Logstash. The advantage is that we do not have to maintain a shared disk and arrange the log folders when using multiple WebLogic Server instances. The disadvantage is that we must add a Logstash instance for each WebLogic Server pod so that the logs can be collected.

Deploy Elasticsearch and Kibana to Kubernetes

$ kubectl create -f k8s_ek.yaml

The k8s_ek.yaml file is similar to the k8s_elk.yaml file. They use the same image. The difference is that k8s_ek.yaml sets the environment variable LOGSTASH_START=0, which indicates that Logstash does not start when the container starts up. Also, k8s_ek.yaml does not define a port for Logstash. The Logstash port will be defined in the same pod as WebLogic Server. We can verify the EK startup with:

http://[NODE_IP_ADDRESS]:31712/_cat/indices?v

Generate the Logstash Configuration with the EK Pod IP Address

$ kubectl describe pod ek-3905776065-4rmdx

We get the following information:

Name:           ek-3905776065-4rmdx
Namespace:      liangz
Node:           [NODE_HOST_NAME]/10.245.252.214
Start Time:     Thu, 02 Aug 2017 14:37:19 +0800
Labels:         k8s-app=ek
                pod-template-hash=3905776065
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"liangz-cn","name":"ek-3905776065","uid":"09a30990-7296-11e7-bd24-0021f6e6a769","a...
Status:         Running
IP:             10.34.0.5

The IP address of the ek pod is [10.34.0.5]. We need to define this IP address in the Logstash.conf file.
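Instead of reading the address out of the kubectl describe output, you can also query the pod IP directly; the pod name below is the one from this walkthrough:

$ kubectl get pod ek-3905776065-4rmdx -o jsonpath='{.status.podIP}'
10.34.0.5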
Create the Logstash.conf file in the shared volume where it needs to be located:

input {
  file {
    path => "/shared-logs/*.log*"
    start_position => beginning
  }
}
filter {
  grok {
    match => [ "message", "<%{DATA:log_timestamp}> <%{WORD:log_level}> <%{WORD:thread}> <%{HOSTNAME:hostname}> <%{HOSTNAME:servername}> <%{DATA:timer}> <<%{DATA:kernel}>> <> <%{DATA:uuid}> <%{NUMBER:timestamp}> <%{DATA:misc}> <%{DATA:log_number}> <%{DATA:log_message}>" ]
  }
}
output {
  elasticsearch {
    hosts => ["10.34.0.5:9200"]
  }
}

We will define two VolumeMounts in the Logstash-WebLogic Server pod:

· Log path /shared-logs, for the WebLogic Server instance, which shares its logs with Logstash in the same pod.
· Conf path /shared-conf, for Logstash, which reads the Logstash.conf file.

The Logstash.conf file sets the input file path to /shared-logs. It also connects to Elasticsearch at "10.34.0.5:9200", the address we discovered previously.

Deploy the Logstash and WebLogic Server Pod to Kubernetes

$ kubectl create -f k8s_logstash_weblogic.yaml

In this k8s_logstash_weblogic.yaml, we add two images (WebLogic Server and Logstash). They share WebLogic Server logs with a pod-level shared volume, "shared-logs". This is a benefit of defining WebLogic Server and Logstash together. We do not need an NFS. If we want to deploy the pod to more nodes, we just need to modify the replicas value. All the new pods will have their own pod-level shared volume. We do not have to consider a possible conflict between the logs.

$ kubectl get pods

NAME                          READY     STATUS    RESTARTS   AGE
ek-3905776065-4rmdx           1/1       Running   0          6m
logstash-wls-38554443-n366v   2/2       Running   0          14s

Verify the Result

Open the following URL:

http://[NODE_IP_ADDRESS]:31712/_cat/indices?v

The first line shows us that Logstash has collected the logs and transferred them to Elasticsearch.
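If you prefer the command line to a browser, the same check, plus a quick query against the fields produced by the grok filter, can be done with curl. The node port and the log_level field are the ones used above; the query value is only an example:

$ curl "http://[NODE_IP_ADDRESS]:31712/_cat/indices?v"
$ curl "http://[NODE_IP_ADDRESS]:31712/logstash-*/_search?q=log_level:Error&pretty"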



WebLogic Server and Java SE 9

The latest WebLogic Server release 12.2.1.3.0 is now available as of August 30, 2017 and you can download it at http://www.oracle.com/technetwork/middleware/fusion-middleware/downloads/index.html. See https://blogs.oracle.com/weblogicserver/weblogic-server-12213-is-available for more information on the new release. Java SE 9 became available on September 21, 2017 and it’s available at www.oracle.com/javadownload. Details about the features included in this release can be found on the OpenJDK JDK 9 page: http://openjdk.java.net/projects/jdk9/. While we have been working on WLS support for JDK 9 for 3.5 years, it was clear early on that the schedules would not align. Although not certified, you should get pretty far running 12.2.1.3 on JDK 9.

Start by installing JDK 9 somewhere convenient and setting your environment appropriately:

export JAVA_HOME=/dir/jdk-9
export PATH="$JAVA_HOME/bin:$PATH"

Then install the new WLS release and ignore the following warnings:

jar xf fmw_12.2.1.3.0_wls_Disk1_1of1.zip fmw_12.2.1.3.0_wls.jar
java -jar fmw_12.2.1.3.0_wls.jar

WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.sun.xml.bind.v2.runtime.reflect.opt.Injector (file:/home/user/OraInstall2017-09-11_11-27-52AM/oracle_common/modules/com.sun.xml.bind.jaxb-impl.jar) to method java.lang.ClassLoader.defineClass(java.lang.String,byte[],int,int)
WARNING: Please consider reporting this to the maintainers of com.sun.xml.bind.v2.runtime.reflect.opt.Injector
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release

Problem: This JDK version was not certified at the time it was made generally available. It may have been certified following general availability.
Recommendation: Check the Supported System Configurations Guide (http://www.oracle.com/technetwork/middleware/ias/downloads/fusion-certification-100350.html) for further details. Press "Next" if you wish to continue.
Expected result: 1.8.0_131
Actual result: 9

The illegal reflective access warnings are coming from code that has been updated to work on JDK 9, but the logic checks whether the JDK 8 approach works first. The warnings will go away when the default is (or you run explicitly with) --illegal-access=deny. You will see these benign warnings from the following known list:

org.python.core.PyJavaClass
com.oracle.classloader.PolicyClassLoader$3
weblogic.utils.StackTraceUtilsClient
com.sun.xml.bind.v2.runtime.reflect.opt.Injector
com.sun.xml.ws.model.Injector
net.sf.cglib.core.ReflectUtils$
com.oracle.common.internal.net.InterruptibleChannels
com.oracle.common.internal.net.WrapperSelector
com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.ClusterService
com.tangosol.util.ClassHelper
weblogic.utils.io.ObjectStreamClass

If you see other warnings, they might need to be reported to the owner.

There are a lot of behavior changes in JDK 9. We have done as much as possible to hide them. In particular, command lines should continue to run without any changes. You can set the environment using the standard

. wlserver/server/bin/setWLSEnv.sh

The most popular commands are:

java weblogic.Server – creates a domain in an empty directory and starts the server
java weblogic.WLST myscript.py – runs a WLST script using Jython

WLS 12.2.1.3.0 now provides the Java EE classes that are hidden by default in JDK 9.
Do not use the JDK9 command line option --add-modules java.se.ee or add any of the individual modules.  WLS is not doing anything to make use of or integrate with the new JDK9 module features.  There are many changes in JDK9 and it’s likely that you will require some changes in your application.  See https://docs.oracle.com/javase/9/migrate/toc.htm as a good starting place to review potential changes. Having said all of that, JDK9 is not a Long Term Support release.  Support ends in March 2018 when 18.3 is released and the next Long Term Support release isn't until 18.9 in September 2018.  That means that WebLogic Server won't be certified on JDK9 and the next release to be certified won't be until after September 2018.  Customer support won't be taking bug reports on JDK9.  This article indicates progress on the JDK upgrade front and gives you a hint that you can try playing with WLS and JDK9 starting with release 12.2.1.3.0 (and not earlier releases). Remember that JDK 8 will continue to be supported until 2025. Notes: The WLS installer (GUI or CLI) must be launched from a bash (or bash compatible like ksh) shell. WLS will not boot on JDK 10 or later, due to libraries not recognizing the new version number.
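Putting the pieces above together, a minimal smoke test of WLS 12.2.1.3 on JDK 9 might look like the following; the installation path is only an example:

export JAVA_HOME=/dir/jdk-9
export PATH="$JAVA_HOME/bin:$PATH"
java -version                          # should report version 9
. /your/wls/home/wlserver/server/bin/setWLSEnv.sh
mkdir /tmp/jdk9-domain && cd /tmp/jdk9-domain
java weblogic.Server                   # creates a domain in the empty directory and starts the server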



Customize your ZDT Patching Workflows Using “Hooks”

In the 12.2.1.3.0 release of WebLogic Server, ZDT Patching offers a cool new feature that allows you to extend the logic of your existing patching workflows by adding user-defined scripts that can be “hooked” at predefined locations (known as extension points). These user-defined scripts, known as extensions, are executed in conjunction with the workflow. With the custom hooks feature, you can have your workflow perform any additional task that is specific to a business need but is not appropriate to include in the base patching workflow. You can add checks on each node in the workflow to ensure that there is enough disk space for the rollout of the Oracle home, or have your workflow perform admin tasks such as deploying or redeploying additional applications, or define custom logic to send out email notifications about the status of the upgrade, and so on. This feature provides seven extension points for multitenant and non-multitenant workflows where you can insert your custom scripts and resources.

Implement custom hooks to modify any patching workflow in five simple steps:

1. From the list of available predefined extension points, determine the point in the workflow where you want to implement the extension (which is nothing but your extended logic). Extension points are available at different stages of the workflow. For instance, ep_OnlineBeforeUpdate can be used to execute any logic before the patching operation starts on each node. This is typically the point where prerequisite checks can be performed. You can find the complete list of extension points available for multitenant and non-multitenant workflows at About Extension Points in the Administering Zero Downtime Patching Workflows guide.
2. Create extension scripts to define your custom logic (a generic example appears at the end of this post). The scripts can be UNIX-specific shell scripts or Windows-specific batch scripts. Optionally, your scripts can use the predefined environment variables that this feature provides.
3. Specify the extensions in the extensionConfiguration.json file. However, this feature allows more than one way of specifying extensions to give you the flexibility to override or customize parameters at different levels. Learn more about these options at Specifying Extensions to Modify the Workflow in the Administering Zero Downtime Patching Workflows guide.
4. Once you have created the extensionConfiguration.json file and defined your custom logic in extension scripts, you must package them into a JAR file that has a specific directory structure. You can find the directory structure of the extension JAR file in our docs. At this point, you must remember to place the extension JAR on all nodes, similar to how you place the patched Oracle home on all nodes before the rollout.
5. Configure the workflow using either WLST or the WebLogic Server Administration Console, and specify the name of the JAR file that you created for the update.

You will find complete details about available extension points and how to use the custom hooks feature in Modifying Workflows Using Custom Hooks in the Administering Zero Downtime Patching Workflows guide.
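As an illustration of the kind of prerequisite check you might hook at ep_OnlineBeforeUpdate, here is a generic disk-space test. It is not taken from the product samples; the path and threshold are placeholders, and the JSON wiring and JAR packaging are described in the documentation referenced above:

#!/bin/sh
# Hypothetical extension script: fail the patching step on this node if the
# Oracle home file system has less than 5 GB free. Path and threshold are examples.
REQUIRED_KB=$((5 * 1024 * 1024))
AVAILABLE_KB=$(df -Pk /u01/oracle | awk 'NR==2 {print $4}')
if [ "$AVAILABLE_KB" -lt "$REQUIRED_KB" ]; then
  echo "Not enough disk space for the Oracle home rollout: ${AVAILABLE_KB} KB available" >&2
  exit 1
fi
echo "Disk space check passed: ${AVAILABLE_KB} KB available"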



Security Best Practices for WebLogic Server Running in Docker and Kubernetes

Overview The WebLogic Server (WLS) team is investing in new integration capabilities for running WLS in Kubernetes and Docker cloud environments. As part of this effort, we have identified best practices for securing Docker and Kubernetes environments when running WebLogic Server. These best practices are in addition to the general WebLogic Server recommendations found in the Oracle® Fusion Middleware Securing a Production Environment for Oracle WebLogic Server 12c documentation.     Ensuring the Security of WebLogic Server Running in Your Docker Environment References to Docker Security Resources These recommendations are based on documentation and white papers from a variety of sources. These include: Docker Security –  https://docs.docker.com/engine/security/security/ Center for Internet Security (CIS) Docker Benchmarks - https://www.cisecurity.org/benchmark/docker/ CIS Linux Benchmarks – https://www.cisecurity.org/benchmark/oracle_linux/ NCC Group: Hardening Linux Containers - https://www.nccgroup.trust/us/our-research/understanding-and-hardening-linux-containers/ Seccomp profiles - https://docs.docker.com/engine/security/seccomp/ Best Practices for Writing Docker Files:  https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/ Understanding Docker Security and Best Practices: https://blog.docker.com/2015/05/understanding-docker-security-and-best-practices/   Summary of Recommendations The recommendations to secure your production environment are: Validate your Docker configuration using the CIS Docker benchmark. This can be done manually or automatically using a 3rd party tool. Validate your Host Operating System configuration using the appropriate CIS Operating System benchmark. Evaluate additional host hardening recommendations not already covered by the CIS Benchmarks.  Evaluate additional container runtime recommendations not already covered by the CIS Benchmarks. These are described in more detail in the following sections. Validate Your Docker Configuration Using the Center for Internet Security (CIS) Docker Benchmark The Center for Internet Security (CIS) produces a benchmark for both Docker Community Edition and multiple Docker EE versions. The latest benchmark is for Docker EE 1.13 and new benchmarks are added after new Docker EE versions are released. These benchmarks contain ~100 detailed recommendations for securely configuring Docker. The recommendations apply to both the host and the Docker components and are organized around the following topics: Host configuration Docker daemon configuration Docker daemon configuration files Container Images and Build File Container Runtime Docker Security Operations Docker Swarm Configuration For more information, refer to detailed benchmark document  - https://www.cisecurity.org/benchmark/docker/ You should validate your configuration against each CIS Docker Benchmark recommendation either manually or via an automated tool. You can find a list of CIS partner benchmark tools that can validate your Docker configuration on http://www.cisecurity.org. 
These include (in alphabetical order):

· Cavirin: https://www.cisecurity.org/partner/cavirin/
· Docker Bench for Security: https://github.com/docker/docker-bench-security
· Qualys: https://www.cisecurity.org/partner/qualys/
· Symantec: https://www.cisecurity.org/partner/symantec/
· Tenable: https://www.cisecurity.org/partner/tenable/
· Tripwire: https://www.cisecurity.org/partner/tripwire/
· TwistLock: https://www.twistlock.com
· VMWare: https://blogs.vmware.com/security/2015/05/vmware-releases-security-compliance-solution-docker-containers.html

Note: the CIS Benchmarks require a license to use commercially. For more information, refer to https://www.cisecurity.org/cis-securesuite/cis-securesuite-membership-terms-of-use/.

Validate Your Host Operating System (OS) Using the CIS Benchmark

The Center for Internet Security (CIS) also produces benchmarks for various operating systems, including different Linux flavors, AIX, Solaris, Microsoft Windows, OS X, and others. These benchmarks contain a set of detailed recommendations for securely configuring your host OS. For example, the CIS Oracle Linux 7 Benchmark contains over 200 recommendations and over 300 pages of instructions. The recommendations apply to all aspects of Linux configuration. For more information, refer to the detailed benchmark documents at https://www.cisecurity.org/cis-benchmarks/.

You should validate your configuration against the appropriate CIS Operating System Benchmark either manually or via an automated tool. You can find a list of CIS partner benchmark tools that can validate your operating system configuration on http://www.cisecurity.org. For example, for CIS Oracle Linux 7, this includes (in alphabetical order):

· NNT: https://www.cisecurity.org/partner/nnt/
· Qualys: https://www.cisecurity.org/partner/qualys/
· Tenable: https://www.cisecurity.org/partner/tenable/
· Tripwire: https://www.cisecurity.org/partner/tripwire/

Additional Host Hardening Recommendations

Beyond the CIS benchmarks, there are additional host hardening recommendations and information that should be considered. These include:

· Use grsecurity and PaX.
· NCC Group host hardening recommendations.

Grsecurity and PaX

The grsecurity project provides various patches to the Linux kernel that enhance a system's overall security. This includes address space protection, enhanced auditing, and process control. PaX flags data memory such as the stack as non-executable and program memory as non-writable. PaX also provides address space layout randomization. Grsecurity and PaX can be run on the kernel used for Docker without requiring changes in the Docker configuration. The security features will apply to the entire host and therefore to all containers. You may want to investigate grsecurity and PaX to determine if they can be used in your production environment. For more information, refer to http://grsecurity.net.

NCC Group Host Hardening Recommendations

The NCC Group white paper (Understanding and Hardening Linux Containers) contains additional recommendations for hardening Linux containers. There is overlap between these recommendations and the CIS Docker Benchmarks. The recommendations in the two sections, 10.1 General Container Recommendations and 10.3 Docker Specific Recommendations, include additional host hardening items. This includes:

· Keep the kernel as up to date as possible.
· Typical sysctl hardening should be applied.
· Isolate storage for containers and ensure appropriate security.
· Control device access and limit resource usage using control groups (cgroups).

You may want to investigate these recommendations further; for more information, refer to the NCC Group paper Understanding and Hardening Linux Containers.

Additional Container Runtime Recommendations

Beyond the CIS Docker recommendations, there are additional container runtime recommendations that should be considered. These include:

· Use a custom seccomp profile, either created manually or with help from a tool such as Docker Slim.
· Utilize the Docker Security-Enhanced Linux (SELinux) profile.
· Specify additional restricted Linux kernel capabilities when starting Docker.
· Improve isolation and reduce the attack surface by running only Docker on the host server.
· Apply additional container hardening based on the NCC Group recommendations.

Run with a Custom seccomp Profile

Secure computing mode (seccomp) is a Linux kernel feature that can be used to restrict the actions available within the container. This feature is available only if Docker has been built with seccomp and the kernel is configured with CONFIG_SECCOMP enabled. Oracle Linux 7 supports seccomp, and Docker runs with a seccomp profile by default. The default seccomp profile provides a good default for running containers with seccomp and disables around 44 system calls out of 300+. It is not recommended to change the default seccomp profile, but you can run with a custom profile using the --security-opt seccomp=/path/to/seccomp/profile.json option. For more information on the default seccomp profile, refer to https://docs.docker.com/engine/security/seccomp/#passing-a-profile-for-a-container. If the default seccomp profile is not sufficient for your Docker production environment, then you can optionally create a custom profile and run with it. A tool such as Docker Slim (https://github.com/docker-slim/docker-slim) may be useful in generating a custom seccomp profile via static and dynamic analysis.

Run with SELinux

Security-Enhanced Linux (SELinux) is a Linux kernel security module that provides a mechanism for supporting access control security policies, including mandatory access controls (MAC). You can enable SELinux when starting the Docker container. To enable SELinux, first ensure that SELinux is installed:

yum install selinux-policy

then enable the docker SELinux module:

semodule -v -e docker

and then specify that SELinux is enabled when starting Docker:

# vi /etc/sysconfig/docker
OPTIONS='--selinux-enabled --group=docker -g /scratch/docker'

Run with Restricted Capabilities

Docker runs containers with a restricted set of Linux kernel capabilities. You can specify additional capabilities to remove based on the requirements of your environment. For more information, refer to https://docs.docker.com/engine/security/security/#linux-kernel-capabilities.

Run Only Docker on the Server

In order to ensure isolation of resources and reduce the attack surface, it is recommended to run only Docker on the server. Other services should be moved to Docker containers.

NCC Group Docker Container Hardening Recommendations

The NCC Group white paper Understanding and Hardening Linux Containers contains additional recommendations for hardening Linux containers. There is overlap between these recommendations and those listed in prior sections and in the CIS Benchmarks.
The recommendations in the two sections, 10.1 General Container Recommendations and 10.3 Docker Specific Recommendations, include additional Docker container hardening items. This includes:

· Control device access and limit resource usage using control groups (cgroups).
· Isolate containers based on trust and ownership.
· Have one application per container if feasible.
· Use layer two and layer three firewall rules to limit container-to-host and guest-to-guest communication.
· Use Docker container auditing tools such as Clair, drydock, and Project Nautilus.

For more information on these recommendations, refer to the NCC Group paper Understanding and Hardening Linux Containers.

Ensuring the Security of WebLogic Server Running in Your Kubernetes Production Environment

References to Kubernetes Security Resources

These recommendations are based on documentation and whitepapers from a variety of sources. These include:

· Security Best Practices for Kubernetes Deployment: http://blog.kubernetes.io/2016/08/security-best-practices-kubernetes-deployment.html
· CIS Kubernetes Benchmarks: https://www.cisecurity.org/benchmark/kubernetes/
· Kubelet Authentication and Authorization: https://kubernetes.io/docs/admin/kubelet-authentication-authorization/
· RBAC Authorization: https://kubernetes.io/docs/admin/authorization/rbac/
· Pod Security Policies RBAC: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#working-with-rbac
· Auditing: https://kubernetes.io/docs/tasks/debug-application-cluster/audit/
· etcd Security Model: https://coreos.com/etcd/docs/latest/op-guide/security.html

Summary of Recommendations

The recommendations to secure your environment are:

· Validate your Kubernetes configuration using the CIS Kubernetes benchmark. This can be done manually or automatically using a 3rd party tool.
· Evaluate additional Kubernetes runtime recommendations not already covered by the CIS Benchmarks.

These are described in more detail in the following sections.

Validate Your Kubernetes Configuration Using the CIS Kubernetes Benchmark

The Center for Internet Security (CIS) produces a benchmark for Kubernetes. This benchmark contains a set of detailed recommendations (roughly 250 pages) for securely configuring Kubernetes. The recommendations apply to the various Kubernetes components and are organized around the following topics:

· Master Node Security Configuration, including API Server, Scheduler, Controller Manager, configuration files, and etcd.
· Worker Node Security Configuration, including Kubelet and configuration files.
· Federated Deployments, including Federation API Server and Federation Controller Manager.

For more information, refer to https://www.cisecurity.org/benchmark/kubernetes/. You should validate your configuration against each CIS Kubernetes Benchmark recommendation either manually or via an automated tool. You can find a list of CIS partner benchmark tools that can validate your Kubernetes configuration on http://www.cisecurity.org. These include (in alphabetical order):

· Kube Bench - https://github.com/aquasecurity/kube-bench
· NeuVector - http://neuvector.com/blog/open-source-kubernetes-cis-benchmark-tool-for-security/
· Twistlock - www.twistlock.com

Note: the CIS Benchmarks require a license to use commercially. For more information, refer to https://www.cisecurity.org/cis-securesuite/cis-securesuite-membership-terms-of-use/.

Additional Container Runtime Recommendations

Beyond the CIS Kubernetes recommendations, there are additional container runtime recommendations that should be considered.
Images

Images should be free of vulnerabilities and should be retrieved from either a trusted registry or a private registry. Image scanning should be performed to ensure there are no security vulnerabilities. You should use third-party tools to perform the CVE scanning, and you should integrate these into the build process. For more information, refer to http://blog.kubernetes.io/2016/08/security-best-practices-kubernetes-deployment.html.

Security Context and Pod Security Policy

Security policies can be set at the pod or container level via the security context. The security context can be used to:

· Make the container run as a non-root user.
· Control the capabilities used in the container.
· Make the root file system read-only.
· Prevent containers that run as the root user from being admitted to the pod.

For more information, refer to https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ and http://blog.kubernetes.io/2016/08/security-best-practices-kubernetes-deployment.html.

Secure Access to Nodes

You should avoid using SSH access to Kubernetes nodes, reducing the risk of unauthorized access to host resources. Use kubectl exec instead of SSH. If debugging of issues is required, create a separate staging environment that allows SSH access.

Separate Resources

A Kubernetes namespace allows you to partition created resources into logically named groups. Resources created in one namespace can be hidden from other namespaces. You can create additional namespaces and attach resources and users to them. You can utilize resource quotas attached to a namespace for memory, CPU, and pods. For more information, refer to http://blog.kubernetes.io/2016/08/security-best-practices-kubernetes-deployment.html.

Manage Secrets

A secret contains sensitive data such as a password, token, or key. Secrets can be mounted as data volumes or be exposed as environment variables to be used by a container in a pod. They can also be used by other parts of the system, without being directly exposed to the pod. You should:

· Manage user and pod access to secrets.
· Store secrets securely.
· Avoid passing sensitive data through plain files or environment variables outside of the Secret mechanism.

For more information, refer to https://kubernetes.io/docs/concepts/configuration/secret/.

Networking Segmentation

Segment the network so different applications do not run on the same Kubernetes cluster. This reduces the risk of one compromised application attacking a neighboring application. Network segmentation ensures that containers cannot communicate with other containers unless authorized. For more information, refer to Network Segmentation in http://blog.kubernetes.io/2016/08/security-best-practices-kubernetes-deployment.html.
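To make a couple of these recommendations concrete, the commands below create a dedicated namespace with a resource quota and use kubectl exec in place of SSH; the namespace, quota values, and pod name are all illustrative:

$ kubectl create namespace weblogic-prod
$ kubectl create quota wls-quota -n weblogic-prod --hard=cpu=8,memory=16Gi,pods=10
$ kubectl exec -it -n weblogic-prod weblogic-pod-0 -- /bin/bash    # shell into a pod instead of SSH to the node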



How to Use Java EL to Write WLDF Policy Expressions

WLDF provides specialized functions, called Smart Rules, that encapsulate complex logic for looking at metric trends in servers and clusters over a recent time interval. If these prove insufficient, you have the option to write policy expressions directly using the beans and functions provided by WLDF and the Java Expression Language (Java EL). Java EL is the recommended language for creating policy expressions in Oracle WebLogic Server 12c. Java EL has many powerful capabilities built into it, but they can make it more complex to work with. To make it easier, WLDF provides a set of EL extensions consisting of beans and functions that you can use in your policy expressions to access WebLogic data and events directly. You can write simple or complex policy expressions using the beans and functions; however, you must have good programming skills and experience using Java EL.

For example, a relatively simple policy expression to check whether the average HeapFreePercent over a 5-minute window is less than 20 can be written as:

wls:extract("wls.runtime.serverRuntime.JVMRuntime.heapFreePercent", "30s", "5m").tableAverages().stream().anyMatch(hfp -> hfp < 20)

A more complex policy expression to check the average value of the attribute PendingUserRequestCount across all servers in "cluster1" over a 2-minute interval, and trigger if 75% of the nodes exceed an average of 100 pending requests, can be written as:

wls:extract(wls.domainRuntime.query({"cluster1"},"com.bea:Type=ThreadPoolRuntime,*", "PendingUserRequestCount"), "30s", "2m").tableAverages().stream().percentMatch(pendingCount -> pendingCount > 100) > 0.75

For more information and examples of using Java EL in policy expressions, see Creating Complex Policy Expressions Using WLDF Java EL Extensions in Configuring and Using the Diagnostics Framework for Oracle WebLogic Server.


WebLogic Server 12.2.1.3 is Available

We are pleased to announce that Oracle WebLogic Server 12.2.1.3.0 is now available for download on the Oracle Technology Network (OTN), and will be available on Oracle Software Delivery Cloud (OSDC) soon. WebLogic Server 12.2.1.3.0 is also referred to as Patch Set 3 (PS3) for WebLogic Server 12.2.1.0.0, originally released in October 2015, and is being delivered as part of the overall Oracle Fusion Middleware 12.2.1.3.0 release.

WebLogic Server 12.2.1 Patch Sets include product maintenance and bug fixes for issues found since the initial 12.2.1 release, and we generally encourage WebLogic Server 12.2.1 users to adopt the latest Patch Set releases as they become available. PS3 contains all of the features supported in WebLogic Server 12.2.1.0.0, 12.2.1.1.0 (PS1), and 12.2.1.2.0 (PS2). Users running on prior 12.2.1 releases should consider moving to PS3 as soon as possible, and as makes sense within the context of their project plans. Users planning or considering upgrades from prior versions of WebLogic Server to WebLogic Server 12.2.1 should retarget their migration plans, where possible and where it makes sense, to PS3. The Fusion Middleware 12.2.1.3.0 supported configurations matrix has been updated here.

Although WebLogic Server Patch Sets are intended primarily as maintenance vehicles, these releases also include a limited set of new feature capabilities that deliver incremental value without disrupting customers. These features are summarized in the WebLogic Server 12.2.1.3 documentation under What's New?. We will be describing some of these capabilities in detail in future blogs, but I'd like to mention two new security-related features here. Secured Production Mode is a new configuration option to help secure production environments. As indicated in our security documentation, Secured Production Mode helps apply secure WebLogic configuration settings by default, or warns users if settings that should typically be used in secured environments are not being used. Oracle Identity Cloud Integrator is a new, optionally configurable WebLogic Server security provider that allows you to use the Oracle Identity Cloud Service. Users running WebLogic Server 12.2.1.3, either on premises or in Oracle Cloud, can now use Identity Cloud Integrator to support integrated authentication and identity assertion with the same Identity Cloud Service used by Oracle Cloud PaaS and SaaS Services. We hope these features help you respond to evolving security requirements, and we've implemented these features such that they will not affect you unless you choose to enable them.

You should expect the Oracle Java Cloud Service, and other Oracle Cloud Services using WebLogic Server, to support 12.2.1.3 in the future. We are also updating our Docker images to support 12.2.1.3, and we hope to have more to say about WebLogic Server Docker support at Oracle Open World. And as noted in my prior post, we're working on a new WebLogic Server version that will support the upcoming Java EE 8 release. So please start targeting your WebLogic Server 12.2.1 projects to WebLogic Server 12.2.1.3 to take advantage of the latest release with the latest features and maintenance. We hope to have more updates for you soon!



WebLogic Server and Opening Up Java EE

Oracle has just announced that it is exploring moving Java Enterprise Edition (Java EE) technologies to an open source foundation, following the delivery of Java EE 8.  The intention is to adopt more agile processes, implement more flexible licensing, and change the governance process to better respond to changing industry and technology demands.   We will keep you updated on developments in this area.  WebLogic Server users may be wondering what this announcement may mean for them, because WebLogic Server supports Java EE standards.  The short answer is that there is no immediate impact.   We will continue to support existing WebLogic Server releases, deliver Oracle Cloud services based on WebLogic Server, and deliver new releases of WebLogic Server in the future. Some WebLogic Server customers are using older product versions, either 10.3.X (Java EE 5) or 12.1.X (Java EE 6).   We will continue to support these customers, and they have an upgrade path to newer WebLogic Server and Java EE versions. Some WebLogic Server customers have adopted WebLogic Server 12.2.1.X, (Java EE 7), with differentiated capabilities we have discussed in earlier blogs.   We’re expecting to release a new 12.2.1.X patch set release, WebLogic Server 12.2.1.3, in the near future.   Stay posted for more information on this.   We will continue to leverage WebLogic Server in Oracle Cloud through the Java Cloud Service, and other PaaS and SaaS offerings.    We are also investing in new integration capabilities for running WebLogic Server in Kubernetes/Docker cloud environments.  Finally, we are planning a new release of WebLogic Server for next calendar year (CY2018) that will support the new capabilities in Java EE 8, including HTTP/2 support, JSON processing and REST support improvements.  See the Aquarium blog for more information on new Java EE 8 capabilities.  In summary, there’s a lot for WebLogic Server customers to leverage going forward, and we have a strong track record of supporting WebLogic Server customers. As to what happens to potential future releases based on future evolutions of Java EE technologies beyond Java EE 8, that will be dependent on the exploration that we as a community are about to begin, and hopefully to a robust community-driven evolution of these technologies with Oracle’s support. Stay tuned for more updates on this topic.    Safe Harbor Statement The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle’s products remains at the sole discretion of Oracle.



Using REST to Create an AGL Data Source

A recent question was raised by a customer on how to create an Active GridLink (AGL) data source using the RESTful APIs in WebLogic Server (WLS). First, you can't do it with the APIs provided in WLS release 12.1.3. New APIs were provided starting in WLS 12.2.1 that provide much more complete functionality. These APIs mirror the MBeans and are more like using WLST.

The following shell script creates an Active GridLink data source using minimal parameters. You can add more parameters as necessary. It explicitly sets the data source type, which was new in WLS 12.2.1. It uses the long-format URL, which is required for AGL. It sets up the SQL query using "ISVALID" to be used for test-connections-on-reserve, which is recommended. It assumes that auto-ONS is used, so no ONS node list is specified. FAN-enabled must be explicitly set.

c="curl -v --user weblogic:welcome1 -H X-Requested-By:MyClient -H Accept:application/json -H Content-Type:application/json"
localhost=localhost
editurl=http://${localhost}:7001/management/weblogic/latest/edit
name="JDBCGridLinkDataSource"

$c -d "{}" \
 -X POST "${editurl}/changeManager/startEdit"

$c -d "{
    'name': '${name}',
    'targets': [ { identity: [ 'servers', 'myserver' ] } ],
}" \
-X POST "${editurl}/JDBCSystemResources?saveChanges=false"

$c -d "{
    'name': '${name}',
    'datasourceType': 'AGL',
}" \
-X POST "${editurl}/JDBCSystemResources/${name}/JDBCResource"

$c -d "{
        'JNDINames': [ 'jndiName' ]
}" \
-X POST "${editurl}/JDBCSystemResources/${name}/JDBCResource/JDBCDataSourceParams"

$c -d "{
        'password': 'dbpassword',
        'driverName': 'oracle.jdbc.OracleDriver',
        'url': 'jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dbhost)(PORT=dbport))(CONNECT_DATA=(SERVICE_NAME=dbservice)))',
}" \
-X POST "${editurl}/JDBCSystemResources/${name}/JDBCResource/JDBCDriverParams"

$c -d "{
        name: 'user',
        value: 'dbuser'
}" \
-X POST "${editurl}/JDBCSystemResources/${name}/JDBCResource/JDBCDriverParams/properties/properties"

$c -d "{
        'testTableName': 'SQL ISVALID'
}" \
-X POST "${editurl}/JDBCSystemResources/${name}/JDBCResource/JDBCConnectionPoolParams"

$c -d "{
        'fanEnabled': true
}" \
-X POST "${editurl}/JDBCSystemResources/${name}/JDBCResource/JDBCOracleParams"

$c -d "{}" \
 -X POST "${editurl}/changeManager/activate"
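After the activate call returns, a quick way to confirm the data source was created is to read it back from the same edit tree, reusing the credentials and variables defined at the top of the script:

$c -X GET "${editurl}/JDBCSystemResources/${name}"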



Oracle Database 12.2 Feature Support with WebLogic Server

It's finally available - you can download the Oracle 12.2 database! Integration of WebLogic Server (WLS) with Oracle 12.2 has been in progress for two years. This article provides information on how Oracle 12.2 database features are supported in WebLogic Server releases.

Using Older Drivers with the 12.2 Database Server

The simplest integration of WebLogic Server with a 12.2 database is to use the Oracle driver jar files included in your WebLogic Server installation. There are no known problems or upgrade issues when using 11.2.0.3, 11.2.0.4, 12.1.0.1, or 12.1.0.2 drivers with a 12.2 database. See the Oracle JDBC FAQ for more information on driver support and features of the Oracle 12.2 database.

Using the Oracle 12.2 Drivers with the 12.2 Database Server

To use many of the new 12.2 database features, it is necessary to use the 12.2 database driver jar files. Note that Oracle 12.2 database driver jar files are compiled for JDK 8. The earliest release of WLS that supports JDK 8 is WLS 12.1.3. The Oracle 12.2 database driver jar files cannot work with earlier versions of WLS. In earlier versions of WLS you can use the drivers that come with the WLS installation to connect to the 12.2 DB, as explained above. This article does not apply to Fusion Middleware (FMW) deployments of WLS. It’s likely that the next released version of FMW, 12.2.1.3, will ship and support the Oracle 12.2 database driver jar files out of the box.

Required Oracle 12.2 Driver Files

The 12.2 Oracle database jar files are not shipped with WLS 12.1.3, 12.2.1, 12.2.1.1, and 12.2.1.2. This section lists the files required to use an Oracle 12.2 driver with these releases of WebLogic Server. These files are installed under the 12.2 database $ORACLE_HOME directory.

Note: These jar files must be added at the head of the CLASSPATH used for running WebLogic Server. They must come before all of the 12.1.0.2 client jar files.

Select one of the following ojdbc files (note that these have "8" in the name instead of "7" from the earlier release). The _g jar files are used for debugging and are required if you want to enable driver-level logging. If you are using FMW, you must use the "dms" version of the jar file. WLS uses the non-"dms" version of the jar by default.

jdbc/lib/ojdbc8.jar
jdbc/lib/ojdbc8_g.jar
jdbc/lib/ojdbc8dms.jar
jdbc/lib/ojdbc8dms_g.jar

The following table lists additional required driver files:

File                                      Description
jdbc/lib/simplefan.jar                    Fast Application Notification (new)
ucp/lib/ucp.jar                           Universal Connection Pool
opmn/lib/ons.jar                          Oracle Network Server client
jlib/orai18n.jar                          Internationalization support
jlib/orai18n-mapping.jar                  Internationalization support
jlib/orai18n-collation.jar                Internationalization support
jlib/oraclepki.jar                        Oracle Wallet support
jlib/osdt_cert.jar                        Oracle Wallet support
jlib/osdt_core.jar                        Oracle Wallet support
rdbms/jlib/aqapi.jar                      AQ JMS support
lib/xmlparserv2_sans_jaxp_services.jar    SQLXML support
rdbms/jlib/xdb.jar                        SQLXML support

Download Oracle 12.2 Database Files

If you want to run one of these releases with the 12.2 jar files, Oracle recommends that you do a custom install of the Oracle Database client kit for a minimal installation. Select the Database entry from http://www.oracle.com/technetwork/index.html. Under Oracle Database 12.2 Release 1, select the "See All" link for your OS platform. For a minimal install, under the Oracle Database 12.2 Release 1 Client heading, select the proper zip file and download it. Unzip the file and run the installer.
Select Custom, then select the Oracle JDBC/Thin interfaces, Oracle Net listener, and Oracle Advanced Security check boxes. You can also use an Administrator package client installation or a full database installation to get the jar files. The jar files are identical on all platforms.

Update the WebLogic Server CLASSPATH or PRE_CLASSPATH

To use an Oracle 12.2 database and Oracle 12.2 JDBC driver, you must update the CLASSPATH in your WebLogic Server environment. Prepend the required files specified in Required Oracle 12.2 Driver Files listed above to the CLASSPATH (before the 12.1.0.2 driver jar files). If you are using startWebLogic.sh, you also need to set the PRE_CLASSPATH. The following code sample outlines a simple shell script that updates the CLASSPATH of your WebLogic environment. Make sure ORACLE_HOME is set appropriately (e.g., something like /somedir/app/myid/product/12.2.0/client_1).

#!/bin/sh
# source this file to add the new 12.2 jar files at the beginning of the CLASSPATH
case "`uname`" in
*CYGWIN*)
  SEP=";"
  ;;
Windows_NT)
  SEP=";"
  ;;
*)
  SEP=":"
  ;;
esac
dir=${ORACLE_HOME:?}
# We need one of the following
# jdbc/lib/ojdbc8.jar
# jdbc/lib/ojdbc8_g.jar
# jdbc/lib/ojdbc8dms.jar
# jdbc/lib/ojdbc8dms_g.jar
if [ "$1" = "" ]
then
  ojdbc=ojdbc8.jar
else
  ojdbc="$1"
fi
case "$ojdbc" in
ojdbc8.jar|ojdbc8_g.jar|ojdbc8dms.jar|ojdbc8dms_g.jar)
  ojdbc=jdbc/lib/$ojdbc
  ;;
*)
  echo "Invalid argument - must be ojdbc8.jar|ojdbc8_g.jar|ojdbc8dms.jar|ojdbc8dms_g.jar"
  exit 1
  ;;
esac
CLASSPATH="${dir}/${ojdbc}${SEP}$CLASSPATH"
CLASSPATH="${dir}/jdbc/lib/simplefan.jar${SEP}$CLASSPATH"
CLASSPATH="${dir}/ucp/lib/ucp.jar${SEP}$CLASSPATH"
CLASSPATH="${dir}/opmn/lib/ons.jar${SEP}$CLASSPATH"
CLASSPATH="${dir}/jlib/orai18n.jar${SEP}$CLASSPATH"
CLASSPATH="${dir}/jlib/orai18n-mapping.jar${SEP}$CLASSPATH"
CLASSPATH="${dir}/jlib/oraclepki.jar${SEP}$CLASSPATH"
CLASSPATH="${dir}/jlib/osdt_cert.jar${SEP}$CLASSPATH"
CLASSPATH="${dir}/jlib/osdt_core.jar${SEP}$CLASSPATH"
CLASSPATH="${dir}/rdbms/jlib/aqapi.jar${SEP}$CLASSPATH"
CLASSPATH="${dir}/lib/xmlparserv2_sans_jaxp_services.jar${SEP}$CLASSPATH"
CLASSPATH="${dir}/jlib/orai18n-collation.jar${SEP}$CLASSPATH"
CLASSPATH="${dir}/rdbms/jlib/xdb.jar${SEP}$CLASSPATH"

For example, save this script in your environment with the name setdb122_jars.sh. Then source the script with ojdbc8.jar:

. ./setdb122_jars.sh ojdbc8.jar
export PRE_CLASSPATH="$CLASSPATH"

WebLogic Server Integration with Oracle Database 12.2

Several Oracle Database 12.2 features have been in WLS for many releases, just waiting for the release of the new database version to start working. The following table lists these features. All of these features require the 12.2 driver jar files and a 12.2 database server.

Feature                                      WLS Release Introduced    Oracle Database Release
JDBC 4.2 support                             12.1.3                    12.2
Service Switching                            12.2.1                    12.2
XA Replay Driver                             12.2.1                    12.2
Gradual Draining                             12.2.1.2.0                12.1 with 12.2 enhancements
UCP MT Shared Pool support                   12.2.1.1.0                12.2
AGL Support for URL with @alias or @ldap     12.2.1.2.0                12.2
Sharding APIs                                Not directly available in a WLS data source, but usable via the WLS UCP native data source type added in 12.2.1    12.2

You should expect other articles to describe these features in more detail and additional integration in future WLS releases.
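Before starting the server, it can be worth double-checking which driver build is actually first on the classpath. One way that does not involve WebLogic at all is to read the jar manifest; the path assumes the client installation described above:

$ unzip -p ${ORACLE_HOME}/jdbc/lib/ojdbc8.jar META-INF/MANIFEST.MF | grep -i version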


The Oracle Container Registry has gone live!

We are pleased to announce that the Oracle Container Registry is now available. The Container Registry is designed to provide simple access to Oracle products for use in Docker containers. The Oracle WebLogic Server 12.2.1.1 and 12.2.1.2 images are now available on the Oracle Container Registry. Currently, access to the Oracle Container Registry is limited to customers in the United States, United Kingdom and Australia.

How do I login to the Oracle Container Registry?

Point your browser at https://container-registry.oracle.com. If this is the first time you're visiting the Container Registry, you will need to associate your existing Oracle SSO credentials or create a new account. Click the "Register" button and select either "I Already Have an Oracle Single Sign On Account" to associate your existing account or "I Don't Have an Oracle Single Sign On Account" to create a new account. Once you have an account, click the login button to log into the Container Registry. You will be prompted to read and accept the license agreement. Note that acceptance of the license agreement is required to download images using the Docker command-line tool and that acceptance only persists for eight (8) hours. After accepting the license, you can browse the available business areas and images to review which images you'd like to pull from the registry using the Docker client.

Pull the WebLogic Server images

The Oracle WebLogic Server images in the registry are install/empty-domain images for WebLogic Server 12.2.1.1 and 12.2.1.2. For every version of WebLogic Server there are two install images: one created with the generic installer and one with the quick installer. To pull the image from the registry, run the command:

# docker pull container-registry.oracle.com/middleware/weblogic

Get Started

To create an empty domain with an Admin Server running, you simply call:

# docker run -d container-registry.oracle.com/middleware/weblogic:12.2.1.2

The WebLogic Server image will invoke createAndStartEmptyDomain.sh as the default CMD, and the Admin Server will be running on port 7001. When running multiple containers, map port 7001 to a different port on the host:

# docker run -d -p 7001:7001 container-registry.oracle.com/middleware/weblogic:12.2.1.2

To run a second container on port 7002:

# docker run -d -p 7002:7001 container-registry.oracle.com/middleware/weblogic:12.2.1.2

Now you can access the AdminServer Web Console at http://localhost:7001/console.

Customize your WebLogic Server Domains

You might want to customize your own WebLogic Server domain by extending this image. The best way to create your own domain is by writing your own Dockerfiles and using the WebLogic Scripting Tool (WLST) to create clusters, data sources, JMS servers, and security realms, and to deploy applications. In your Dockerfile you will extend the WebLogic Server image with the FROM container-registry.oracle.com/middleware/weblogic:12.2.1.2 directive. We provide a variety of examples (Dockerfiles, shell scripts, and WLST scripts) on GitHub that create domains, configure resources, deploy applications, and use a load balancer.


Configuring Datasource Fatal Error Codes

There are well-known error codes on JDBC operations that can always be interpreted as the database shutting down, already down, or a configuration problem. In this case, we don't want to keep the connection around, because we know that subsequent operations will fail and they might hang or take a long time to complete. These error codes can be configured in the datasource configuration using the "fatal-error-codes" value on the Connection Pool Parameters. The value is a comma-separated list of error codes. If a SQLException is seen on a JDBC operation and sqlException.getErrorCode() matches one of the configured codes, the connection will be closed instead of being returned to the connection pool. Note that the earlier OC4J application server closed all connections in the pool when one of these errors occurred on any connection. In the WLS implementation, we chose to close only the connection that got the fatal error. This also allows you to add error codes that are specific to a single connection going bad, in addition to the database being unavailable.

The following error codes are pre-configured and cannot be disabled. You can provide additional error codes for these or other drivers on individual datasources.

Oracle Thin Driver: 3113, 3114, 1033, 1034, 1089, 1090, 17002
WebLogic or IBM DB2 driver: -4498, -4499, -1776, -30108, -30081, -30080, -6036, -1229, -1224, -1035, -1034, -1015, -924, -923, -906, -518, -514, 58004
WebLogic or IBM Informix driver: -79735, -79716, -43207, -27002, -25580, -4499, -908, -710, 43012

The following is a WLST script to add a fatal error code string to an existing datasource.

# java weblogic.WLST fatalerrorcodes.py
import sys, socket, os
hostname = socket.gethostname()
datasource="JDBC GridLink Data Source-0"
connect("weblogic","welcome1","t3://"+hostname+":7001")
edit()
startEdit()
cd("/JDBCSystemResources/" + datasource)
targets=get("Targets")
set("Targets",jarray.array([], ObjectName))
save()
activate()
startEdit()
cd("/JDBCSystemResources/" + datasource + "/JDBCResource/" + datasource + "/JDBCConnectionPoolParams/" + datasource)
set("FatalErrorCodes","1111,2222")
save()
activate()
startEdit()
cd("/JDBCSystemResources/" + datasource)
set("Targets", targets)
save()
activate()

As an experiment, I tried the same thing with REST.
localhost=localhost
editurl=http://${localhost}:7001/management/weblogic/latest/edit
name="JDBC%20GridLink%20Data%20Source%2D0"

curl -v \
--user weblogic:welcome1 \
-H X-Requested-By:MyClient \
-H Accept:application/json \
-X GET "${editurl}/JDBCSystemResources/${name}/JDBCResource/JDBCConnectionPoolParams?links=none"

curl -v \
--user weblogic:welcome1 \
-H X-Requested-By:MyClient \
-H Accept:application/json \
-H Content-Type:application/json \
-d "{}" \
-X POST "${editurl}/changeManager/startEdit"

curl -v \
--user weblogic:welcome1 \
-H X-Requested-By:MyClient \
-H Accept:application/json \
-H Content-Type:application/json \
-d "{ targets: []}" \
-X POST "${editurl}/JDBCSystemResources/${name}"

curl -v \
--user weblogic:welcome1 \
-H X-Requested-By:MyClient \
-H Accept:application/json \
-H Content-Type:application/json \
-d "{ fatalErrorCodes: '1111,2222'}" \
-X POST "${editurl}/JDBCSystemResources/${name}/JDBCResource/JDBCConnectionPoolParams"

curl -v \
--user weblogic:welcome1 \
-H X-Requested-By:MyClient \
-H Accept:application/json \
-H Content-Type:application/json \
-d "{ targets: [ { identity: [ servers,'myserver' ] } ]}" \
-X POST "${editurl}/JDBCSystemResources/${name}"

curl -v \
--user weblogic:welcome1 \
-H X-Requested-By:MyClient \
-H Accept:application/json \
-H Content-Type:application/json \
-d "{}" \
-X POST "${editurl}/changeManager/activate"

curl -v \
--user weblogic:welcome1 \
-H X-Requested-By:MyClient \
-H Accept:application/json \
-X GET "${editurl}/JDBCSystemResources/${name}?links=none"
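To make the behavior concrete, the sketch below shows, in plain JDBC terms, the kind of decision the pool makes: compare SQLException.getErrorCode() against the configured fatal codes and, on a match, close the physical connection rather than reuse it. This is an illustrative approximation of the concept only, not the actual WLS pool code; the class and method names are made up for the example, and the code set shown matches the Oracle Thin driver defaults listed above.

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Illustration only: mimics the "fatal error code" decision a pool makes.
public class FatalErrorCodeCheck {
    private static final Set<Integer> FATAL_CODES =
            new HashSet<>(Arrays.asList(3113, 3114, 1033, 1034, 1089, 1090, 17002));

    // Returns true if the connection should be discarded instead of pooled.
    static boolean isFatal(SQLException e) {
        return FATAL_CODES.contains(e.getErrorCode());
    }

    static void runStatement(Connection conn, String sql) throws SQLException {
        try (Statement stmt = conn.createStatement()) {
            stmt.execute(sql);
        } catch (SQLException e) {
            if (isFatal(e)) {
                conn.close();   // do not return a doomed connection to the pool
            }
            throw e;
        }
    }
}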


AGL Datasource Support for URL with @alias or @LDAP

The Oracle driver has the ability to use an @alias string in the connection string URL so that information like the host, port, and service name can be kept in an external tnsnames.ora file that is shared across many datasources. My perception is that this has grown in popularity in recent years because it makes management of the connection information easier (one place per computer). To centralize the information further, it's possible to use an @LDAP format in the URL to get the connection information from a Lightweight Directory Access Protocol (LDAP) server. See the Database JDBC Developer's Guide, https://docs.oracle.com/database/121/JJDBC/urls.htm#JJDBC28267, for more information.

While this format of URL was supported for Generic and Multi Data Sources, it was not supported for Active GridLink (AGL) datasources. An AGL datasource URL was required to have a (SERVICE_NAME=value) as part of the long-format URL. Starting in WebLogic Server 12.2.1.2.0 (AKA PS2), the URL may also use an @alias or @ldap format. The short format without an @alias or @LDAP is still not supported and will generate an error (and not work). It is highly recommended that you use a database service name in the stored alias or LDAP entry; do not use a SID. To optimize your AGL performance, the alias or LDAP store should contain a long-format URL that uses features like load balancing, retry count and delay, etc.

ALIAS Example:

1. Create a tnsnames.ora file with
tns_entry=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=RAC-scan-address)(PORT=port))(CONNECT_DATA=(SERVICE_NAME=service)))
Normally, it is created in $ORACLE_HOME/network/admin.
2. Create your WLS datasource descriptor using a URL like "jdbc:oracle:thin:/@tns_entry".
3. Add the following system property to the WebLogic Server command line:
-Doracle.net.tns_admin=$ORACLE_HOME/network/admin

LDAP Example:

1. Create your WLS datasource descriptor for LDAP or LDAPS using a URL like "jdbc:oracle:thin:@ldap://ldap.example.com:7777/sales,cn=OracleContext,dc=com".

JDBC Driver Requirement

Here's the catch. You need to use a smarter ucp.jar file to support this functionality. There are two options:

- Get a WLS patch to the 12.1.0.2 ucp.jar file based on Bug 23190035 - UCP DOESN'T SUPPORT ALIAS URL FOR RAC CLUSTER
- Wait to run on an Oracle Database 12.2 ucp.jar file. I'll be writing a blog about that when it's available.
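Outside of WLS, the same @alias resolution can be exercised with a plain JDBC client, which is a convenient way to validate a tnsnames.ora entry before wiring it into an AGL datasource. The following is a minimal sketch under the assumptions of the example above: it uses the hypothetical tns_entry alias, sets oracle.net.tns_admin programmatically instead of on the command line, and uses a user/password pair (placeholders) rather than the wallet-style /@ form.

import java.sql.Connection;
import java.sql.DriverManager;

// Quick standalone check that an @alias URL resolves via tnsnames.ora.
// Assumes the tns_entry alias shown above exists in $ORACLE_HOME/network/admin.
public class AliasUrlCheck {
    public static void main(String[] args) throws Exception {
        // Equivalent to passing -Doracle.net.tns_admin=... on the command line
        System.setProperty("oracle.net.tns_admin",
                System.getenv("ORACLE_HOME") + "/network/admin");

        String url = "jdbc:oracle:thin:@tns_entry";   // alias from tnsnames.ora
        try (Connection conn = DriverManager.getConnection(url, "scott", "tiger")) {
            System.out.println("Connected via alias: " + !conn.isClosed());
        }
    }
}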


WebLogic Server 12.2.1.2 Datasource Gradual Draining

In October 2015, we delivered Oracle WebLogic Server 12.2.1 as part of the overall Oracle Fusion Middleware 12.2.1 release, followed by the first patch set release, 12.2.1.1. This week, the second patch set, 12.2.1.2, is available. New WebLogic Server 12.2.1.2 installers are now posted on the Oracle Technology Network and Oracle Software Delivery Cloud, and new documentation has been made available. There are a couple of new datasource features hidden there. One of them is called "gradual draining."

When planned maintenance occurs on an Oracle RAC configuration, a planned-down service event is processed by an Active GridLink data source using that database. By default, all unreserved connections in the pool are closed, and borrowed connections are closed when they are returned to the pool. This can cause uneven performance because:

- New connections need to be created on the alternative instances.
- A logon storm on the other instances can occur.

It is desirable to gradually drain connections instead of closing them all immediately. The application can define the length of the draining period during which connections are closed. It is configured using the weblogic.jdbc.drainTimeout value in the connection properties for the datasource. As usual, it can be set in the console, EM, or WLST. The following figure shows the administration console.

The result is that connections are closed in a step-wise fashion every 5 seconds. If the application is actively using connections, then they will be created on the alternative instances at a similar rate. The following figure shows a perfect demonstration of draining and creating new connections over a 60-second period using a sample application that generates constant load. Without gradual draining, the current capacity on the down instance would drop off immediately, similar to the LBA percentages, and connections would be created on the alternative instance as quickly as possible.

There are quite a few details about the interaction with the RAC service life cycle, datasource suspension and shutdown, connection gravitation, etc. For more details, see Gradual Draining in Administering JDBC Data Sources for Oracle WebLogic Server. Like several other areas in WLS datasource support, this feature will be automatically enhanced when running with the Oracle Database 12.2 driver and server. More about that when the 12.2 release ships.


Uploading AppToCloud Export Files to Oracle Storage Cloud Service

Before you can provision a new JCS instance using AppToCloud, the files that are generated from the on-premise healtcheck and export operations need to be uploaded to a container on the Oracle Storage Cloud.  There are several options available to perform this task.   Using a2c-export options Probably the simplest option is to perform the upload task as part of the operation of the a2c-export utility.   The a2c-export utility provides a mechanism to automatically upload the generated files as part of its normal operation when the relevant parameters are passed that indicate the Oracle Storage Cloud container to use and a username that has the privileges to perform the task.     Usage: a2c-export.sh [-help] -oh <oracle-home> -domainDir <domain-dir>             -archiveFile <archive-file> [-clusterToExport <cluster-name>]             [-clusterNonClusteredServers <cluster-name>] [-force]             [-cloudStorageContainer <cs-container>]             [-cloudStorageUser <cs-user>]      Below is an example of the output from an execution of a2c-export that shows how to use the upload option:     $ ./oracle_jcs_app2cloud/bin/a2c-export.sh \     -oh /Users/sbutton/Desktop/AppToCloudDemo/wls_1213 \     -domainDir /Users/sbutton/Desktop/AppToCloudDemo/wls_1213/user_projects/domains/demo_domain \     -archiveFile /tmp/demo_domain_export/demo_domain.zip  \     -cloudStorageContainer Storage-paas123/a2csc \     -cloudStorageUser fred.bloggs@demo.com      JDK version is 1.8.0_60-b27   A2C_HOME is /Users/sbutton/Desktop/AppToCloudDemo/oracle_jcs_app2cloud   /Library/Java/JavaVirtualMachines/jdk1.8.0_60.jdk/Contents/Home/bin/java -Xmx512m -DUseSunHttpHandler=true -cp /Users/sbutton/Desktop/AppToCloudDemo/oracle_jcs_app2cloud/jcs_a2c/modules/features/jcsa2c_lib.jar -Djava.util.logging.config.class=oracle.jcs.lifecycle.util.JCSLifecycleLoggingConfig oracle.jcs.lifecycle.discovery.AppToCloudExport -oh /Users/sbutton/Desktop/AppToCloudDemo/wls_1213 -domainDir /Users/sbutton/Desktop/AppToCloudDemo/wls_1213/user_projects/domains/demo_domain -archiveFile /tmp/demo_domain_export/demo_domain.zip -cloudStorageContainer Storage-paas123/a2csc -cloudStorageUser fred.bloggs@demo.com      The a2c-export program will write its log to /Users/sbutton/Desktop/AppToCloudDemo/oracle_jcs_app2cloud/logs/jcsa2c-export.log   Enter Storage Cloud password:    ####<07/09/2016 3:11:07 PM> <INFO> <AppToCloudExport> <getModel> <JCSLCM-02005> <Creating new model for domain /Users/sbutton/Desktop/AppToCloudDemo/wls_1213/user_projects/domains/demo_domain>   ####<07/09/2016 3:11:07 PM> <INFO> <EnvironmentModelBuilder> <populateOrRefreshFromEnvironment> <FMWPLATFRM-08552> <Try to discover a WebLogic Domain in offline mode>   ####<07/09/2016 3:11:16 PM> <INFO> <EnvironmentModelBuilder> <populateOrRefreshFromEnvironment> <FMWPLATFRM-08550> <End of the Environment discovery>   ####<07/09/2016 3:11:16 PM> <WARNING> <ModelNotYetImplementedFeaturesScrubber> <transform> <JCSLCM-00579> <Export for Security configuration is not currently implemented and must be manually configured on the target domain.>   ####<07/09/2016 3:11:16 PM> <INFO> <AppToCloudExport> <archiveApplications> <JCSLCM-02003> <Adding application to the archive: ConferencePlanner from /Users/sbutton/Desktop/AppToCloudDemo/ConferencePlanner.war>   ####<07/09/2016 3:11:17 PM> <INFO> <AppToCloudExport> <run> <JCSLCM-02009> <Successfully exported model and artifacts to /tmp/demo_domain_export/demo_domain.zip. 
Overrides file written to /tmp/demo_domain_export/demo_domain.json>   ####<07/09/2016 3:11:17 PM> <INFO> <AppToCloudExport> <run> <JCSLCM-02028> <Uploading override file to cloud storage from /tmp/demo_domain_export/demo_domain.json>   ####<07/09/2016 3:11:22 PM> <INFO> <AppToCloudExport> <run> <JCSLCM-02028> <Uploading archive file to cloud storage from /tmp/demo_domain_export/demo_domain.zip>   ####<07/09/2016 3:11:29 PM> <INFO> <AppToCloudExport> <run> <JCSLCM-02009> <Successfully exported model and artifacts to https://paas123.storage.oraclecloud.com. Overrides file written to Storage-paas12c/a2csc/demo_domain.json>      Activity Log for EXPORT      Informational Messages:        1. JCSLCM-02030: Uploaded override file to Oracle Cloud Storage container Storage-paas123/a2csc     2. JCSLCM-02030: Uploaded archive file to Oracle Cloud Storage container Storage-paas123/a2csc      Features Not Yet Implemented Messages:        1. JCSLCM-00579: Export for Security configuration is not currently implemented and must be manually configured on the target domain.      An HTML version of this report can be found at /tmp/demo_domain_export/reports/demo_domain-export-activityreport.html      Successfully exported model and artifacts to https://paas109.storage.oraclecloud.com. Overrides file written to Storage-paas109/fubar/demo_domain.json      a2c-export completed successfully (exit code = 0)      Using the Oracle Storage Cloud command line tool:   Another relatively easy approach to performing the upload task is to use the Oracle Cloud Storage command line utility.  This enables to you upload files directly from your local environment without needing to understand and use the REST API.   Download the command line interface utility from: http://www.oracle.com/technetwork/topics/cloud/downloads/index.html#cli and follow the instructions on how to extract it.  The upload client utility is packaged as an executable JAR file, requiring JRE 7+ to execute.   The mandatory parameters the utility requires are shown below:     $ java -jar uploadcli.jar -help   Version 2.0.0   ----Required Parameters----   -url <url>                  Oracle Storage Cloud Service REST endpoint.                               You can get this URL from Oracle Cloud My Services.   -user <user>                User name for the Oracle Storage Cloud Service account.   -container <name>           Oracle Storage Cloud Service container for the uploaded file(s).   <FILENAME>                  File to upload to Oracle Storage Cloud Service.                               Specify '.' to upload all files in current directory.                               Specify directory name to upload all files in the directory.                               For multi-file uploads, separate file names with ','.                               This MUST be the last parameter specified.     To perform the upload of the generated files, simply run the utility and provide the relevant parameter values.   Below is an example of uploading the demo_domain generated files to an Oracle Storage Cloud container.     $  java -jar /tmp/uploadcli.jar -url https://paas123.storage.oraclecloud.com/v1/Storage-paas123 \     -user steve.button@oracle.com \     -container a2csc \     demo_domain_export/demo_domain.json,demo_domain_export/demo_domain.zip       Enter your password: **********   INFO:Authenticating to service ...   INFO:Uploading File : /Users/sbutton/Desktop/AppToCloudDemo/Exports/demo_domain_export/demo_domain.json ...   
INFO:File [ demo_domain.json ] uploaded successfully! - Data Transfer Rate: 2 KB/s    INFO:Uploading File : /Users/sbutton/Desktop/AppToCloudDemo/Exports/demo_domain_export/demo_domain.zip ...   INFO:File [ demo_domain.zip ] uploaded successfully! - Data Transfer Rate: 665 KB/s    INFO:Files Uploaded: 2   INFO:Files Skipped : 0   INFO:Files Failed  : 0     Next Steps   Once the generated files have been uploaded to Oracle Storage Cloud, the JCS provisioning process can be used to provision a new JCS instance that will be representative of the original on-premise domain.


Using AppToCloud to Migrate an On-Premise Domain to the Oracle Cloud

Part One - On-Premise Migration   Moving your WebLogic Server domains to the Oracle Cloud just got a whole shebang easier (#!)   With the introduction of the AppToCloud Tooling in Oracle Java Cloud Service 16.3.5 you can now simply and easily migrate a configured on-premise domain to  an equivalent Java Cloud Service instance in the Oracle Cloud, complete with the same set of configured settings, resources and deployed applications.   A key component of the AppToCloud landscape is the on-premise tooling which is responsible for inspecting a domain to check it's suitability for moving to the Oracle Cloud and then creating an export file containing a model of the domain topology (cluster with managed servers), local settings such as CLASSPATH entries and VM arguments, the set of configured WebLogic Server services such as data sources and deployments units such as Java EE applications and shared-libraries.   In this first of several blogs, I will provide an overview of how the AppToCloud on-premise tooling is used to create an export of a domain that is ready to be uploaded to the Oracle Cloud to then be provisioned as an Oracle Java Cloud Service instance.   Download and Install Tooling   AppToCloud on-premise tooling is used to inspect and export a domain.  The tooling needs to be downloaded from Oracle and installed on the server where the source domain is located.   Download the AppToCloud a2c-zip-installer.zip file from the Oracle Cloud Downloads page:  http://www.oracle.com/technetwork/topics/cloud/downloads/index.html   Copy the asc-zip-installer.zip file onto the machine hosting the on-premise installation and domain and unzip into a relevant directory.     $ unzip a2c-zip-installer.zip   Archive:  a2c-zip-installer.zip     inflating: oracle_jcs_app2cloud/jcs_a2c/modules/features/model-api.jar       inflating: oracle_jcs_app2cloud/bin/a2c-healthcheck.sh       inflating: oracle_jcs_app2cloud/jcs_a2c/modules/wlst.jar       inflating: oracle_jcs_app2cloud/jcs_a2c/modules/healthcheck.jar       inflating: oracle_jcs_app2cloud/jcs_a2c/modules/commons-lang-2.6.jar       ...     inflating: oracle_jcs_app2cloud/oracle_common/modules/com.fasterxml.jackson.core.jackson-databind_2.7.1.jar       inflating: oracle_jcs_app2cloud/bin/a2c-export.cmd       inflating: oracle_jcs_app2cloud/bin/a2c-export.sh       inflating: oracle_jcs_app2cloud/jcs_a2c/modules/jcsprecheck-api.jar       inflating: oracle_jcs_app2cloud/jcs_a2c/modules/jcsprecheck-impl.jar       inflating: oracle_jcs_app2cloud/oracle_common/modules/fmwplatform/common/envspec.jar       Verify the installation is successful by executing one of the utilities, such as the a2c-export utility, and inspecting the help text.     $ ./oracle_jcs_app2cloud/bin/a2c-export.sh -help   JDK version is 1.8.0_60-b27   A2C_HOME is /tmp/oracle_jcs_app2cloud   /Library/Java/JavaVirtualMachines/jdk1.8.0_60.jdk/Contents/Home/bin/java -Xmx512m -DUseSunHttpHandler=true -cp /tmp/oracle_jcs_app2cloud/jcs_a2c/modules/features/jcsa2c_lib.jar -Djava.util.logging.config.class=oracle.jcs.lifecycle.util.JCSLifecycleLoggingConfig oracle.jcs.lifecycle.discovery.AppToCloudExport -help   The a2c-export program will write its log to /private/tmp/oracle_jcs_app2cloud/logs/jcsa2c-export.log      Usage: a2c-export.sh [-help] -oh <oracle-home> -domainDir <domain-dir>             -archiveFile <archive-file> [-clusterToExport <cluster-name>]             [-clusterNonClusteredServers <cluster-name>] [-force]             ...     
Step 1: Run a healthcheck on the source domain   The first step to performing an AppToCloud migration is to perform a healthcheck of the on-premise domain using the a2c-healthcheck utility.  The purpose of the healthcheck is to connect to the specified on-premise domain, inspect its contents, generate a report for any issues it discovers that may prevent the migration from being successful and finally, store the results in a directory for the export utility to use.   The healtcheck is run as an online operation.  This requires that the  AdminServer of the specified domain must be running and the appropriate connection details must be supplied as parameters to the healthcheck utility.  For security considerations the use of the password as a parameter should be be avoided as the healthcheck utility will securely prompt for the password when needed.     $ ./oracle_jcs_app2cloud/bin/a2c-healthcheck.sh \     -oh /Users/sbutton/Desktop/AppToCloudDemo/wls_1213 \     -adminUrl t3://localhost:7001 \     -adminUser weblogic \     -outputDir /tmp/demo_domain_export      JDK version is 1.8.0_60-b27   A2C_HOME is /Users/sbutton/Desktop/AppToCloudDemo/oracle_jcs_app2cloud   /Library/Java/JavaVirtualMachines/jdk1.8.0_60.jdk/Contents/Home/bin/java -Xmx512m -cp /Users/sbutton/Desktop/AppToCloudDemo/oracle_jcs_app2cloud/jcs_a2c/modules/features/jcsa2c_lib.jar -Djava.util.logging.config.class=oracle.jcs.lifecycle.util.JCSLifecycleLoggingConfig oracle.jcs.lifecycle.healthcheck.AppToCloudHealthCheck -oh /Users/sbutton/Desktop/AppToCloudDemo/wls_1213 -adminUrl t3://localhost:7001 -outputDir /tmp/demo_domain_export -adminUser weblogic   The a2c-healthcheck program will write its log to /Users/sbutton/Desktop/AppToCloudDemo/oracle_jcs_app2cloud/logs/jcsa2c-healthcheck.log   Enter password: ***********   Checking Domain Health   Connecting to domain      Connected to the domain demo_domain      Checking Java Configuration   ...   checking server runtime : conference_server_one   ...   checking server runtime : AdminServer   ...   checking server runtime : conference_server_two   Done Checking Java Configuration   Checking Servers Health      Done checking Servers Health   Checking Applications Health   Checking ConferencePlanner   Done Checking Applications Health   Checking Datasource Health   Done Checking Datasource Health   Done Checking Domain Health      Activity Log for HEALTHCHECK      Informational Messages:        1. JCSLCM-04037: Healthcheck Completed      An HTML version of this report can be found at /tmp/demo_domain_export/reports/demo_domain-healthcheck-activityreport.html      Output archive saved as /tmp/demo_domain_export/demo_domain.zip.  You can use this archive for the a2c-export tool.     Any findings from the healthcheck utility are reported as messages, including any items that need attention or that aren't supported with the current version of AppToCloud.  A static report is also generated that can be viewed after the execution of the utility showing the details of the on-premise domain and any messages generated from the healthcheck.   Note: it is mandatory to perform a healtcheck on the on-premise domain before an export operation can be performed.  The export operation requires the output from the healthcheck to perform its tasks.   
Step 2: Export the source domain   Once a successful healthcheck has been performed on the on-premise domain, it is then ready to be exported into a form that can be uploaded to Oracle Cloud and used in the provisioning process for new Oracle Java Cloud Service instances.   Besides the path to the on-premise domain and location of the WebLogic Server installation, the export utility uses the output from the healthcheck operation to drive the export operation, storing the final output in the same file.     $ ./oracle_jcs_app2cloud/bin/a2c-export.sh \     -oh /Users/sbutton/Desktop/AppToCloudDemo/wls_1213 \     -domainDir /Users/sbutton/Desktop/AppToCloudDemo/wls_1213/user_projects/domains/demo_domain \      -archiveFile /tmp/demo_domain_export/demo_domain.zip      JDK version is 1.8.0_60-b27   A2C_HOME is /Users/sbutton/Desktop/AppToCloudDemo/oracle_jcs_app2cloud   /Library/Java/JavaVirtualMachines/jdk1.8.0_60.jdk/Contents/Home/bin/java -Xmx512m -DUseSunHttpHandler=true -cp /Users/sbutton/Desktop/AppToCloudDemo/oracle_jcs_app2cloud/jcs_a2c/modules/features/jcsa2c_lib.jar -Djava.util.logging.config.class=oracle.jcs.lifecycle.util.JCSLifecycleLoggingConfig oracle.jcs.lifecycle.discovery.AppToCloudExport -oh /Users/sbutton/Desktop/AppToCloudDemo/wls_1213 -domainDir /Users/sbutton/Desktop/AppToCloudDemo/wls_1213/user_projects/domains/demo_domain -archiveFile /tmp/demo_domain_export/demo_domain.zip   The a2c-export program will write its log to /Users/sbutton/Desktop/AppToCloudDemo/oracle_jcs_app2cloud/logs/jcsa2c-export.log   ####<31/08/2016 12:33:12 PM> <INFO> <AppToCloudExport> <getModel> <JCSLCM-02005> <Creating new model for domain /Users/sbutton/Desktop/AppToCloudDemo/wls_1213/user_projects/domains/demo_domain>   ####<31/08/2016 12:33:12 PM> <INFO> <EnvironmentModelBuilder> <populateOrRefreshFromEnvironment> <FMWPLATFRM-08552> <Try to discover a WebLogic Domain in offline mode>   ####<31/08/2016 12:33:21 PM> <INFO> <EnvironmentModelBuilder> <populateOrRefreshFromEnvironment> <FMWPLATFRM-08550> <End of the Environment discovery>   ####<31/08/2016 12:33:21 PM> <WARNING> <ModelNotYetImplementedFeaturesScrubber> <transform> <JCSLCM-00579> <Export for Security configuration is not currently implemented and must be manually configured on the target domain.>   ####<31/08/2016 12:33:21 PM> <INFO> <AppToCloudExport> <archiveApplications> <JCSLCM-02003> <Adding application to the archive: ConferencePlanner from /Users/sbutton/Desktop/AppToCloudDemo/ConferencePlanner.war>   ####<31/08/2016 12:33:22 PM> <INFO> <AppToCloudExport> <run> <JCSLCM-02009> <Successfully exported model and artifacts to /tmp/demo_domain_export/demo_domain.zip. Overrides file written to /tmp/demo_domain_export/demo_domain.json>      Activity Log for EXPORT      Features Not Yet Implemented Messages:        1. JCSLCM-00579: Export for Security configuration is not currently implemented and must be manually configured on the target domain.      An HTML version of this report can be found at /tmp/demo_domain_export/reports/demo_domain-export-activityreport.html      Successfully exported model and artifacts to /tmp/demo_domain_export/demo_domain.zip. Overrides file written to /tmp/demo_domain_export/demo_domain.json     Again messages from the execution of the export operation  are reported the console, including any items that need further attention or that aren't supported with the current version of AppToCloud.  A static report is also generated that can be viewed after the execution of the export of the on-premise domain. 
The export utility also generates an overrides file which externalizes all of the major settings that were extracted from the on-premise domain.  This file can be  modified locally to change the values of any of the provided settings and supplied with the domain export to the Oracle Java Cloud Service provisioning process.  Any modified settings in the overrides file will be used in place of the original values stored in the domain export when the new instance is provisioned.     {     "model" : {       "databases" : [ {         "id" : "demo_domain-database",         "jdbcConnectInfos" : [ {           "id" : "demo_domain-database-jdbc-0",           "url" : "jdbc:oracle:thin:@localhost:1521:xe",           "driverName" : "oracle.jdbc.xa.client.OracleXADataSource",           "xa" : true         } ]       } ],       "domains" : [ {         "id" : "demo_domain-domain",         "name" : "demo_domain",         "domainProfile" : {           "name" : "demo_domain",           "servers" : [ {             "id" : "AdminServer",             "isAdminServer" : "true"           }, {             "id" : "conference_server_one",             "isAdminServer" : "false"           }, {             "id" : "conference_server_two",             "isAdminServer" : "false"           } ],           "clusters" : [ {             "id" : "conference_cluster",             "serverRefs" : [ "conference_server_one", "conference_server_two" ]           } ]         },         "serverBindings" : [ {           "id" : "demo_domain-domain/AdminServer",           "serverRef" : "AdminServer",           "name" : "AdminServer"         }, {           "id" : "demo_domain-domain/conference_server_one",           "serverRef" : "conference_server_one",           "name" : "conference_server_one"         }, {           "id" : "demo_domain-domain/conference_server_two",           "serverRef" : "conference_server_two",           "name" : "conference_server_two"         } ],         "clusterBindings" : [ {           "clusterRef" : "conference_cluster",           "name" : "conference_cluster"         } ],         "dataSourceBindings" : [ {           "id" : "Conference Planner DataSource",           "dataSourceName" : "Conference Planner DataSource",           "dataSourceType" : "Generic",           "genericDataSourceBinding" : {             "jdbcConnectInfoRef" : "demo_domain-database-jdbc-0",             "credentialRef" : "jdbc/conference"           }         } ]       } ]     },     "extraInfo" : {       "domainVersion" : "12.1.3.0.0",       "a2cClientVersion" : "0.7.6",       "a2cClientCompatibilityVersion" : "1.0",       "a2cArchiveLocation" : {         "url" : "file:/tmp/demo_domain_export/demo_domain.zip"       },       "jvmInfos" : [ {         "serverId" : "AdminServer",         "maxHeapSize" : "512m"       }, {         "serverId" : "conference_server_one",         "maxHeapSize" : "512m"       }, {         "serverId" : "conference_server_two",         "maxHeapSize" : "512m"       } ],       "activityLog" : {         "healthCheck" : {           "infoMessages" : [ {             "component" : { },             "message" : "Healthcheck Completed"           } ]         },         "export" : {           "notYetSupportedMessages" : [ {             "component" : { },             "message" : "Export for Security configuration is not currently implemented and must be manually configured on the target domain."           
} ]
        }
      }
    }
  }

Uploading to the Oracle Cloud

When the JCS provisioning process for an AppToCloud migration commences, it will load the on-premise domain export and overrides file from an Oracle Storage Cloud Service container. The a2c-export tool can automatically upload the generated archive and overrides file to the Oracle Storage Cloud Service as part of the export, by specifying additional parameters that identify the cloud storage container to use.

./oracle_jcs_app2cloud/bin/a2c-export.sh \
    -oh /Users/sbutton/Desktop/AppToCloudDemo/wls_1213 \
    -domainDir /Users/sbutton/Desktop/AppToCloudDemo/wls_1213/user_projects/domains/demo_domain \
    -archiveFile /tmp/demo_domain_export/demo_domain.zip \
    -cloudStorageContainer "Storage-StorageEval01admin" \
    -cloudStorageUser "StorageEval01admin.Storageadmin"

If you chose not to use the a2c-export tool to upload the archive and overrides files to the Oracle Storage Cloud Service, then you will need to perform the upload using its REST API.

Next Steps

At this point the work with the on-premise domain is complete. The next step after uploading the archive and overrides files to the Oracle Storage Cloud Service is to go to the Java Cloud Service console to provision a new instance using the AppToCloud option. A Web UI will walk you through the steps, gathering the required information to create the service. Once the information is provided, the Oracle Java Cloud Service provisioning process will commence.


Introducing AppToCloud

> Typical Workflow for Migrating Applications to Oracle Java Cloud Service   Oracle’s AppToCloud infrastructure enables you to quickly migrate existing Java applications and their supporting Oracle WebLogic Server resources to Oracle Java Cloud Service. The process consists of several tasks that fall into two main categories, On-Premises and Cloud:   On-Premises   The on-premises tasks involve generating an archive of your existing Oracle WebLogic Server environment and applications and importing it into Oracle Cloud.   Verify the prerequisites: ensure that your existing Oracle WebLogic Server domain meets the requirements of the AppToCloud tools. Install the tools: download and install the AppToCloud command line tools on the on-premises machine hosting your domain’s Administration Server. Perform a health check: use the AppToCloud command line tools to validate your on-premises Oracle WebLogic Server domain and applications. This process ensures that your domain and its applications are in a healthy state. These tools also identify any WebLogic Server features in your domain that the AppToCloud framework cannot automatically migrate to Oracle Java Cloud Service.    Note: This step is mandatory. It cannot be skipped.   Export the domain to Oracle Cloud: use the AppToCloud command line tools to capture your on-premises WebLogic Server domain and applications as a collection of files. These files are uploaded by the tool to a storage container that you have previously created in Oracle Storage Cloud Service.  The domain export files can also be manually uploaded to an Oracle Storage Cloud Service container using its REST API. Migrate the databases to Oracle Cloud: use standard Oracle database tools to move existing relational schemas to one or more database deployments in Oracle Database Cloud - Database as a Service. Create an Oracle Java Cloud Service service instance: create a service instance and select the AppToCloud option. As part of the creation process, you provide the location of the AppToCloud artifacts on cloud storage. Import your applications into the service instance: after the Oracle Java Cloud Service service instance is running, import the AppToCloud artifacts.  Oracle Java Cloud Service updates the service instance with the same resources and applications as your exported source environment. Note: The import operation can only be performed on a new and unmodified service instance. Do not perform any scaling operations, modify the domain configuration or otherwise change the service instance prior to this step.   Recreate resources if necessary: some Oracle WebLogic Server features are not currently supported by the AppToCloud tools. These features must be configured manually after provisioning your Oracle Java Cloud Service instance.  Use the same Oracle tools to perform these modifications that you originally used to configure the source environment.   WebLogic Server Administration Console Fusion Middleware Control WebLogic Scripting Tool (WLST)


WebLogic Server 12.2.1.1.0 - Domain to Partition Conversion Tool (DPCT) Updates

WebLogic Server 12.2.1.1.0 - Domain to Partition Conversion Tool (DPCT) Updates

The Domain to Partition Conversion Tool (DPCT) provides assistance with the process of migrating an existing domain from WebLogic Server releases 10.3.6, 12.1.2, 12.1.3 or 12.2.1 to a partition in a WebLogic Server 12.2.1 domain. The DPCT process consists of two independent but related operations: the first involves inspecting an existing domain and exporting it into an archive that captures the relevant configuration and binary files; the second is to use one of several import partition options available with WebLogic Server 12.2.1 to import the contents of the exported domain and create a new partition. The new partition will contain the configuration resources and application deployments from the source domain.

With the release of WebLogic Server 12.2.1.1.0, several updates and changes have been made to DPCT to further improve its functionality. The updated documentation covering the new features, bug fixes and known limitations is here: https://docs.oracle.com/middleware/12211/wls/WLSMT/config_dpct.htm#WLSMT1695

Key Updates

a) Distribution of the DPCT tooling with the WebLogic Server 12.2.1.1.0 installation: initially the DPCT tooling was distributed as a separate zip file only available for download from OTN. With the 12.2.1.1.0 release, the DPCT tooling is provided as part of the base product installation as $ORACLE_HOME/wlserver/common/dpct/D-PCT-12.2.1.1.0.zip. This file can be copied from the 12.2.1.1.0 installation to the servers where the source domain is present and extracted for use. The DPCT tooling is also still available for download from OTN: http://www.oracle.com/technetwork/middleware/weblogic/downloads/wls-for-dev-1703574.html

b) No patch required: previously, use of DPCT required a patch to be applied to the target 12.2.1 installation in order to import an archive generated by the DPCT tooling. This requirement has been resolved.

c) Improved platform support: several small issues relating to the use of the DPCT tooling on Windows have been resolved.

d) Improved reporting: a new report file is generated for each domain that is exported, listing the details of the source domain as well as each of the configuration resources and deployments that were captured in the exported archive. Any resources that could not be exported are also noted.

e) JSON overrides file formatting: the generated JSON file, which serves as an overrides mechanism allowing target environment customizations to be specified on import, is now formatted correctly to make it clearer and easier to change.

f) Additional resources in the JSON overrides file: to better support customization on the target domain, additional resources such as JDBC System Resources, SAF Agents, Mail Sessions and JDBC Stores are now expressed as configurable objects in the generated JSON file.

g) Inclusion of new export-domain scripts: the scripts used to run the DPCT tooling have been reworked and included as new (additional) scripts. The new scripts are named export-domain.[cmd|sh], provide clearer help text, and make use of named parameters for providing input values to the script. The previous scripts are provided for backwards compatibility and continue to work, but it is recommended that the new scripts be used where possible.
Usage detail for the export-domain script:

Usage: export-domain.sh -oh {ORACLE_HOME} -domainDir {WL_DOMAIN_HOME}
       [-keyFile {KEYFILE}] [-toolJarFile {TOOL_JAR}] [-appNames {APP_NAMES}]
       [-includeAppBits {INCLUDE_APP_BITS}] [-wlh {WL_HOME}]
where:
       {ORACLE_HOME} : the MW_HOME where WebLogic is installed
       {WL_DOMAIN_HOME} : the source WebLogic domain path
       {KEYFILE} : an optional user-provided file containing a clear-text passphrase used to encrypt exported attributes written to the archive, default: None
       {TOOL_JAR} : file path to the com.oracle.weblogic.management.tools.migration.jar file. Optional if the jar is in the same directory location as export-domain.sh
       {APP_NAMES} : an optional list of application names to export
       {WL_HOME} : an optional parameter giving the path of the WebLogic Server installation for version 10.3.6. Used only when the WebLogic Server 10.3.6 release is installed under a directory other than {ORACLE_HOME}/wlserver_10.3

Enhanced Cluster Topology and JMS Support

In addition to the items listed above, some restructuring of the export and import operations has enabled DPCT to better support a number of key WebLogic Server areas. When inspecting the source domain and generating the export archive, DPCT now enables the targeting of the resources and deployments to appropriate Servers and Clusters in the target domain. For every Server and Cluster in the source domain, a corresponding resource-group object is created in the generated JSON file, with each resource group targeted to a dedicated Virtual Target, which in turn can be targeted to a Server or Cluster on the target domain. All application deployments and resources targeted to a particular WebLogic Server instance or cluster in the source domain correspond to a resource group in the target domain. This change also supports the situation where the target domain has differently named Cluster and Server resources than the source domain, by allowing the target to be specified in the JSON overrides file so that it can be mapped appropriately to the new environment.

A number of the previous limitations around the exporting of JMS configurations for both single-server and cluster topologies have been addressed, enabling common JMS use cases to be supported with DPCT migrations. The documentation contains the list of existing known limitations.


Connection Initialization Callback on WLS Datasource

WebLogic Server 12.2.1.1 is now available. You can see the blog article announcing it at Oracle WebLogic Server 12.2.1.1 is Now Available. One of the WLS datasource features that appeared quite a while ago but is not mentioned much is the ability to define a callback that is called during connection initialization. The original intent of this callback was to provide a mechanism to be used with the Application Continuity (AC) feature. It allows the application to ensure that the same initialization of the connection can be done when it is reserved and also later on if the connection is replayed. In the latter case, the original connection has some type of "recoverable" error and is closed, a new connection is reserved under the covers, and all of the operations that were done on the original connection are replayed on the new connection. The callback allows the connection to be re-initialized with whatever state is needed by the application.

The concept of having a callback that initializes all connections, without scattering this processing all over the application software wherever getConnection() is called, is very useful even without replay being involved. In fact, since the callback can be configured in the datasource descriptor, which I recommend, there is no change to the application except to write the callback itself.

Here's the history of support for this feature, assuming that the connection initialization callback is configured.

WLS 10.3.6 - It is only called on an Active GridLink datasource when running with the replay driver (replay was only supported with AGL).
WLS 12.1.1, 12.1.2, and 12.1.3 - It is called if used with the replay driver and any datasource type (replay support was added to GENERIC datasources).
WLS 12.2.1 - It is called with any Oracle driver and any datasource type.
WLS 12.2.1.1 - It is called with any driver and any datasource type. Why limit the goodness to just the Oracle driver?

The callback can be configured in the application by registering it on the datasource in the Java code. You need to ensure that you only do this once per datasource. I think it's much easier to register it in the datasource configuration.

Here's a sample callback.

package demo;

import oracle.ucp.jdbc.ConnectionInitializationCallback;

public class MyConnectionInitializationCallback implements ConnectionInitializationCallback {

  public MyConnectionInitializationCallback() {
  }

  public void initialize(java.sql.Connection connection) throws java.sql.SQLException {
    // Re-set the state for the connection, if necessary
  }
}

This is a simple Jython script, using as many defaults as possible, that just shows registering the callback.
import sys, socket
hostname = socket.gethostname()
connect("weblogic","welcome1","t3://"+hostname+":7001")
edit()
dsname='myds'
jndiName='myds'
server='myserver'
cd('Servers/'+server)
target=cmo
cd('../..')
startEdit()
jdbcSR = create(dsname, 'JDBCSystemResource')
jdbcResource = jdbcSR.getJDBCResource()
jdbcResource.setName(dsname)
dsParams = jdbcResource.getJDBCDataSourceParams()
dsParams.addJNDIName(jndiName)
driverParams = jdbcResource.getJDBCDriverParams()
driverParams.setUrl('jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=dbhost)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=otrade)))')
driverParams.setDriverName('oracle.jdbc.OracleDriver')
driverParams.setPassword('tiger')
driverProperties = driverParams.getProperties()
userprop = driverProperties.createProperty('user')
userprop.setValue('scott')
oracleParams = jdbcResource.getJDBCOracleParams()
oracleParams.setConnectionInitializationCallback('demo.MyConnectionInitializationCallback')  # register the callback
jdbcSR.addTarget(target)
save()
activate(block='true')

Here are a few observations. First, to register the callback using the configuration, the class must be in your classpath. It will need to be in the server classpath anyway to run, but it needs to get there earlier for configuration. Second, because of the history of this feature, it's contained in the Oracle parameters instead of the Connection parameters; there isn't much we can do about that. In the WLS 12.2.1.1 administration console, the entry can be seen and configured in the Advanced parameters of the Connection Pool tab, as shown in the following figure (in addition to the Oracle tab). Finally, note that the interface is a Universal Connection Pool (UCP) interface, so this callback can be shared with your UCP application (all driver types are supported starting in Database 12.1.0.2).

This feature is documented in the Application Continuity section of the Administration Guide. See http://docs.oracle.com/middleware/12211/wls/JDBCA/ds_oracledriver.htm#CCHFJDHF.

You might be disappointed that I didn't actually do anything in the callback. I'll use this callback again in my next blog to show how it's used in another new WLS 12.2.1.1 feature.
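For completeness, here is a minimal sketch of the application side: once the callback is registered in the datasource configuration, the application simply looks up the datasource and gets connections as usual, and the configured callback runs whenever a connection is initialized or replayed. The JNDI name myds matches the script above; the rest is standard JNDI/JDBC and is not tied to any particular WLS API.

import java.sql.Connection;
import java.sql.SQLException;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

// Application code is unchanged: the configured callback fires on connection
// initialization (and again on replay) without any explicit call from here.
public class UseCallbackDataSource {
    public static Connection getConnection() throws NamingException, SQLException {
        InitialContext ctx = new InitialContext();
        DataSource ds = (DataSource) ctx.lookup("myds");   // JNDI name from the WLST script
        return ds.getConnection();
    }
}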


WebLogic Server Continuous Availability in 12.2.1.1

We have made enhancements to the Continuous Availability offering in WebLogic 12.2.1.1 in the areas of Zero Downtime Patching, Cross-Site Transaction Recovery, Coherence Federated Caching, and Coherence Persistence. We have also enhanced the documentation to provide design considerations for the multi-data-center Maximum Availability Architectures (MAA) that are supported for WebLogic Server Continuous Availability.

Zero Downtime Patching Enhancements

Enhancements in Zero Downtime Patching support updating applications running in a multitenant partition without affecting other partitions that run in the same cluster. Coherence applications can now be updated while maintaining high availability of the Coherence data during the rollout process. We have also removed the dependency on Node Manager to upgrade the WebLogic Administration Server.

- Multitenancy support: application updates can use partition shutdown instead of server shutdowns; you can update an application in a partition on a server without affecting other partitions; and you can update an application referenced by a ResourceGroupTemplate.
- Coherence support: the user can supply a minimum safety mode for rollout to a Coherence cluster.
- Removed the Administration Server dependency on Node Manager: the Administration Server no longer needs to be started by Node Manager.

Cross-Site Transaction Recovery

We introduced a "site leasing" mechanism to do automatic recovery when there is a site failure or mid-tier failure. With site leasing we provide a more robust mechanism to fail over and fail back transaction recovery without imposing dependencies on the TLog that affect the health of the servers hosting the Transaction Manager. Every server in a site updates its lease. When the lease expires for all servers running in a cluster in Site 1, servers running in a cluster in a remote site assume ownership of the TLogs and recover the transactions, while still continuing their own transaction work. To learn more, please read Active-Active XA Transaction Recovery.

Coherence Federated Caching and Coherence Persistence Administration Enhancements

We have enhanced the WebLogic Server Administration Console to make it easier to configure Coherence Federated Caching and Coherence Persistence.

- Coherence Federated Caching: added the ability to set up Federation with basic active/active and active/passive configurations using the Administration Console, eliminating the need to use configuration files.
- Coherence Persistence: added a Persistence tab in the Administration Console that provides the ability to configure Persistence-related settings that apply to all services.

Documentation

In WebLogic Server 12.2.1.1 we have enhanced the document Continuous Availability for Oracle WebLogic Server to include a new chapter, "Design Considerations for Continuous Availability"; see http://docs.oracle.com/middleware/12211/wls/WLCAG/weblogic_ca_best.htm#WLCAG145. This new chapter provides design considerations and best practices for the components of your multi-data-center environments. In addition to the general best practices recommended for all Continuous Availability MAA architectures, we provide specific advice for each of the Continuous Availability supported topologies and describe how the features can be used in these topologies to provide maximum high availability and disaster recovery.


Oracle WebLogic Server 12.2.1.1 is Now Available

Last October, we delivered Oracle WebLogic Server 12.2.1 as part of the overall Oracle Fusion Middleware 12.2.1 Release.   As noted previously on this blog, WebLogic Server 12.2.1 delivers compelling new feature capabilities in the areas of Multitenancy, Continuous Availability, and Developer Productivity and Portability to Cloud.   Today, we are releasing WebLogic Server 12.2.1.1, which is the first patch set release for WebLogic Server and Fusion Middleware 12.2.1.   New WebLogic Server 12.2.1.1 installers are now posted on the Oracle Technology Network and Oracle Software Delivery Cloud, and new documentation has been made available.  WebLogic Server 12.2.1.1 contains all the new features in WebLogic Server 12.2.1, and also includes an integrated, cumulative set of fixes and a small number of targeted, non-disruptive enhancements.    For customers who have just begun evaluating WebLogic Server 12cR2, or are planning evaluation and adoption, we recommend that you adopt WebLogic Server 12.2.1.1 so that you can benefit from the maintenance and enhancements that have been included.   For customers who are already running in production on WebLogic Server 12.2.1, you can continue to do so, though we will encourage adoption of WebLogic Server 12.2.1 patch sets. The enhancements are primarily in the following areas: Multitenancy - Improvements to Resource Consumption Management, partition security management, REST management, and Fusion Middleware Control, all targeted at multitenancy manageability and usability. Continuous Availability - New documented best practices for multi data center deployments, and product improvements to Zero Downtime Patching capabilities. Developer Productivity and Portability to the Cloud - The Domain to Partition Conversion Tool (D-PCT), which enables you to convert an existing domain to a WebLogic Server 12.2.1 partition, has been integrated into 12.2.1.1 with improved functionality.   So it's now easier to migrate domains and applications to WebLogic Server partitions, including partitions running in the Oracle Java Cloud Service.  We will provide additional updates on the capabilities described above, but everything is ready for you to get started using WebLogic Server 12.2.1.1 today.   Try it out and give us your feedback!


Using SQLXML Data Type with Application Continuity

When I first wrote an article about changing Oracle concrete classes to interfaces to work with Application Continuity (AC) (https://blogs.oracle.com/WebLogicServer/entry/using_oracle_jdbc_type_interfaces), I left out one type: oracle.sql.OPAQUE is replaced with oracle.jdbc.OracleOpaque. There isn't a lot that you can do with this opaque type. While the original class had a lot of conversion methods, the new Oracle type interfaces have only methods that are considered significant or not available with the standard JDBC APIs. The new interface only has a method to get the value as an Object and two meta-information methods to get the metadata and the type name. Unlike the other Oracle type interfaces (oracle.jdbc.OracleStruct extends java.sql.Struct and oracle.jdbc.OracleArray extends java.sql.Array), oracle.jdbc.OracleOpaque does not extend a JDBC interface.

There is one related, very common use case that needs to be changed to work with AC. Early uses of SQLXML made use of the following XDB API:

SQLXML sqlXml = oracle.xdb.XMLType.createXML(
    ((oracle.jdbc.OracleResultSet)resultSet).getOPAQUE("issue"));

oracle.xdb.XMLType extends oracle.sql.OPAQUE and its use will disable AC replay. This must be replaced with the standard JDBC API:

SQLXML sqlXml = resultSet.getSQLXML("issue");

If you try to do a "new oracle.xdb.XMLType(connection, string)" when running with the replay datasource, you will get a ClassCastException. Since XMLType doesn't work with the replay datasource and the oracle.xdb package uses XMLType extensively, this package is no longer usable for AC replay.

The APIs for SQLXML are documented at https://docs.oracle.com/javase/7/docs/api/java/sql/SQLXML.html. The javadoc shows APIs to work with DOM, SAX, StAX, XSLT, and XPath. Take a look at the sample program at //cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/f4a5b21d-66fa-4885-92bf-c4e81c06d916/File/e57d46dd27d26fbd6aeeb884445dd5b3/xmlsample.txt. The sample uses StAX to store the information and DOM to get it. By default, it uses the replay datasource and it does not use XDB.

You can run with replay debugging by doing something like the following. Create a file named /tmp/config.txt with the following text:

java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter
handlers = java.util.logging.FileHandler
java.util.logging.FileHandler.pattern = /tmp/replay.log
oracle.jdbc.internal.replay.level = FINEST

Change your WLS CLASSPATH (or one with the Oracle client jar files) to put ojdbc7_g.jar at the front (to replace ojdbc7.jar) and add the current directory. Compile the program (after renaming .txt to .java) and run it using

java -Djava.util.logging.config.file=/tmp/config.txt XmlSample

The output replay log is in /tmp/replay.log. With the defaults in the sample program, you won't see replay disabled in the log. If you change the program to set useXdb to true, you will see that replay is disabled. The log will have "DISABLE REPLAY in preForMethodWithConcreteClass(getOPAQUE)" and "Entering disableReplayInternal".

This sample can be used to test other sequences of operations to see if they are safe for replay. Alternatively, you can use orachk to do a static analysis of the class. See https://blogs.oracle.com/WebLogicServer/entry/using_orachk_to_clean_up for more information. If you run orachk on this program, you will get this failure:

FAILED - [XmlSample][[MethodCall] desc=(Ljava/lang/String;)Loracle/sql/OPAQUE; method name=getOPAQUE, lineno=105]
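If you cannot download the sample above, the following is a minimal sketch (not the original xmlsample.txt program) of the same replay-safe pattern using only the standard java.sql.SQLXML API, with StAX on the write path and DOM on the read path. The ISSUES table, its columns, and how the replay datasource is obtained are assumptions for illustration only:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLXML;
import javax.sql.DataSource;
import javax.xml.stream.XMLStreamWriter;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stax.StAXResult;
import org.w3c.dom.Document;

public class SqlXmlReplaySafe {

    // ds should be the replay (AC) datasource, obtained however your application
    // normally gets it (for example, a JNDI lookup on the server). The ISSUES
    // table and its columns are placeholders for this sketch.
    static void writeAndRead(DataSource ds) throws Exception {
        try (Connection conn = ds.getConnection()) {
            // Write path: build the XML value with StAX through java.sql.SQLXML.
            SQLXML outXml = conn.createSQLXML();
            XMLStreamWriter writer = outXml.setResult(StAXResult.class).getXMLStreamWriter();
            writer.writeStartDocument();
            writer.writeStartElement("issue");
            writer.writeCharacters("replay-safe sample");
            writer.writeEndElement();
            writer.writeEndDocument();
            writer.close();
            try (PreparedStatement ins =
                     conn.prepareStatement("INSERT INTO issues (id, issue) VALUES (?, ?)")) {
                ins.setInt(1, 1);
                ins.setSQLXML(2, outXml);          // no oracle.xdb.XMLType involved
                ins.executeUpdate();
            }
            outXml.free();

            // Read path: fetch the column as SQLXML and materialize it as a DOM document.
            try (PreparedStatement sel =
                     conn.prepareStatement("SELECT issue FROM issues WHERE id = ?")) {
                sel.setInt(1, 1);
                try (ResultSet rs = sel.executeQuery()) {
                    if (rs.next()) {
                        SQLXML inXml = rs.getSQLXML("issue");   // replaces getOPAQUE(...)
                        Document doc = (Document) inXml.getSource(DOMSource.class).getNode();
                        System.out.println("Root element: " + doc.getDocumentElement().getNodeName());
                        inXml.free();
                    }
                }
            }
        }
    }
}

Because nothing in this sketch touches the oracle.xdb or oracle.sql packages, running it against the replay datasource should not disable replay.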


Testing WLS and ONS Configuration

Introduction

Oracle Notification Service (ONS) is installed and configured as part of the Oracle Clusterware installation. All nodes participating in the cluster are automatically registered with ONS during Oracle Clusterware installation. The configuration file is located on each node in $ORACLE_HOME/opmn/conf/ons.config. See the Oracle documentation for further information. This article focuses on the client side.

Oracle RAC Fast Application Notification (FAN) events are available starting in database 11.2, which is the minimum database release required for WLS Active GridLink. FAN events are notifications sent by a cluster running Oracle RAC to inform the subscribers about configuration changes within the cluster. The supported FAN events are service up, service down, node down, and load balancing advisories (LBA).

fanWatcher Program

You can optionally test your ONS configuration independent of running WLS. This tests the connection from the ONS client to the ONS server, but not the configuration of your RAC services. See https://blogs.oracle.com/WebLogicServer/entry/fanwatcher_sample_program for details on how to get, compile, and run the fanWatcher program. I'm assuming that you have WLS 10.3.6 or later installed and that you have your CLASSPATH set appropriately. You would run the test program using something like

java fanWatcher "nodes=rac1:6200,rac2:6200" database/event/service

If you are using the database 12.1.0.2 client jar files, you can handle more complex configurations with multiple clusters, for example Data Guard, with something like

java fanWatcher "nodes.1=site1.rac1:6200,site1.rac2:6200
nodes.2=site2.rac1:6200,site2.rac2:6200" database/event/service

Note that a newline is used to separate multiple node lists. You can also test with a wallet file and password if the ONS server is configured to use SSL communications. Once this program is running, you should minimally see occasional LBA notifications. If you start or stop a service, you should see an associated event.

Auto ONS

It's possible to run without specifying the ONS information by using a feature called auto-ONS. The auto-ONS feature cannot be used if you are running with:
- an 11g driver or 11g database. Auto-ONS depends on protocol flowing between the driver and the database server, and this support was added in 12c.
- a WLS release earlier than 12.1.3. Auto-ONS is supported starting in WLS 12.1.3.
- an Oracle wallet with SSL communications. Configuration of the wallet requires also configuring the ONS information.
- a complicated ONS topology. In general, auto-ONS can figure out what you need, but there are cases where you need to specify it precisely. In WLS 12.2.1, an extension of the ONS configuration allows for specifying the exact topology using a property node list. See http://docs.oracle.com/middleware/1221/wls/JDBCA/gridlink_datasources.htm#BABGJBIC for more information.

If you have some configurations that use an 11g driver or database and some that run with a 12c driver/database, you may want to just specify the ONS information all of the time instead of using the auto-ONS simplification. The fanWatcher link above indicates how to test fanWatcher using auto-ONS.

WLS ONS Configuration and Testing

The next step is to ensure that you have the end-to-end configuration running. That includes the database service for which events will be generated and the AGL datasource that processes the events for the corresponding service. On the server side, the database service must be configured with RCLB enabled.
RCLB is enabled for a service if the service GOAL (not CLB_GOAL) is set to either SERVICE_TIME or THROUGHPUT. See the Oracle documentation for further information on using srvctl to set this when creating the service.

On the WLS side, the key pieces are the URL and the ONS configuration. The URL is configured using the long format with the service name specified. The URL can use an Oracle Single Client Access Name (SCAN) address, for example,

jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=scanname)(PORT=scanport))(CONNECT_DATA=(SERVICE_NAME=myservice)))

or multiple non-SCAN addresses with LOAD_BALANCE=on, for example,

jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(LOAD_BALANCE=ON)(ADDRESS=(PROTOCOL=TCP)(HOST=host1)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=host2)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=myservice)))

Defining the URL is a complex topic - see the Oracle documentation for more information.

As described above, the ONS configuration can be implicit using auto-ONS or explicit. The trade-offs and restrictions are also described above. The format of the explicit ONS information is described at http://docs.oracle.com/middleware/1221/wls/JDBCA/gridlink_datasources.htm#BABGJBIC. If you create the datasource using the administration console with explicit ONS configuration, there is a button you can click to test the ONS configuration. This test does a simple handshake with the ONS server.

Of course, the first real test of your ONS configuration with WLS is deploying the datasource, either when starting the server or when targeting the datasource on a running server. In the administration console, you can look at the AGL runtime monitoring page for ONS, especially if using auto-ONS, to see the ONS configuration. You can look at the page for instances and check the affinity flag and instance weight attributes that are updated on LBA events. If you stop a service using something like

srvctl stop service -db beadev -i beadev2 -s otrade

that should also show up on this page with the weight and capacity going to 0. If you look at the server log (for example, servers/myserver/logs/myserver.log) you should see a message tracking the outage like the following:

....<Info> <JDBC> ... <Datasource JDBC Data Source-0 for service otrade received a service down event for instance [beadev2].>

If you want to see more information, like the LBA events, you can enable JDBCRAC debugging using -Dweblogic.debug.DebugJDBCRAC=true. For example,

...<JDBCRAC> ... lbaEventOccurred() event=service=otrade, database=beadev, event=VERSION=1.0 database=beadev service=otrade { {instance=beadev1 percent=50 flag=GOOD aff=FALSE}{instance=beadev2 percent=50 flag=UNKNOWN aff=FALSE} }

There will be a lot of debug output with this setting, so it is not recommended for production.
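If you want a quick programmatic sanity check in addition to fanWatcher and the console test button, a small standalone client can borrow a few connections from the deployed AGL datasource and print which RAC instance served each one; with RCLB working, the distribution should follow the load balancing advisories. This is a sketch only: the JNDI name (jdbc/otradeDS), the t3 URL, and the assumption that the WebLogic client classes are on the classpath are all placeholders, not part of the product configuration described above.

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class AglSmokeTest {
    public static void main(String[] args) throws Exception {
        // Remote JNDI lookup of the AGL datasource; adjust the URL and name for your domain.
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        env.put(Context.PROVIDER_URL, "t3://myserver:7001");
        InitialContext ctx = new InitialContext(env);
        DataSource ds = (DataSource) ctx.lookup("jdbc/otradeDS");

        // Borrow several connections and report the RAC instance behind each one.
        for (int i = 0; i < 10; i++) {
            try (Connection conn = ds.getConnection();
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(
                     "SELECT sys_context('USERENV','INSTANCE_NAME') FROM dual")) {
                if (rs.next()) {
                    System.out.println("Connection " + i + " served by instance " + rs.getString(1));
                }
            }
        }
        ctx.close();
    }
}

Stopping a service instance with srvctl while this loop runs should make the corresponding instance disappear from the output, matching the service down event you see in the server log.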


Migrating from Generic Data Source to Active GridLink

Earlier, I wrote an article about how to migrate from a Multi Data Source (MDS) for RAC connectivity to Active GridLink (AGL). This is needed to move from the older datasource technology to the newer technology, both supporting Oracle RAC. That information is now in the public documentation set at http://docs.oracle.com/middleware/1221/wls/JDBCA/gridlink_datasources.htm#JDBCA690.

There are also many customers that are growing up from a standalone database to an Oracle RAC cluster. In this case, it's a migration from a GENERIC datasource to an AGL datasource. This migration is pretty simple.

No changes should be required to your applications. A standard application looks up the datasource in JNDI and uses it to get connections. The JNDI name won't change. The only changes necessary should be to your configuration, and the necessary information is generally provided by your database administrator. The information needed is the new URL and, optionally, the configuration of Oracle Notification Service (ONS) on the RAC cluster. The latter is only needed if you are running with:
- an 11g driver or 11g database. Auto-ONS depends on protocol flowing between the driver and the database server, and this support was added in 12c.
- a WLS release earlier than 12.1.3. Auto-ONS is supported starting in WLS 12.1.3.
- an Oracle wallet with SSL communications. Configuration of the wallet requires also configuring the ONS information.
- a complicated ONS topology. In general, auto-ONS can figure out what you need, but there are cases where you need to specify it precisely. In WLS 12.2.1, an extension of the ONS configuration allows for specifying the exact topology using a property node list. See http://docs.oracle.com/middleware/1221/wls/JDBCA/gridlink_datasources.htm#BABGJBIC for more information.

The URL and ONS attributes are configurable but not dynamic. That means that the datasource will need to be shut down and restarted after the change. The simplest way to do this is to untarget the datasource, make the changes, and then re-target the datasource.

The recommended approach to migrate from a GENERIC to an AGL datasource is to use WLST. The URL must be changed in the JDBCDriverParams object. The new JDBCOracleParams object (it generally doesn't exist for a GENERIC datasource) needs to have FAN enabled set to true and, optionally, the ONS information set. The following is a sample WLST script with the new values hard-coded. You could parameterize it and make it more flexible in handling multiple datasources. If you are using an Oracle wallet for ONS, that needs to be added to the JDBCOracleParams object as well.
# java weblogic.WLST file.py
import sys, socket, os
hostname = socket.gethostname()
datasource="JDBC Data Source-0"
connect("weblogic","welcome1","t3://"+hostname+":7001")
edit()
startEdit()
cd("/JDBCSystemResources/" + datasource )
targets=get("Targets")
set("Targets",jarray.array([], ObjectName))
save()
activate()
startEdit()
cd("/JDBCSystemResources/" + datasource + "/JDBCResource/" + datasource + "/JDBCDriverParams/" + datasource )
set("Url","jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(" + "ADDRESS=(PROTOCOL=TCP)(HOST=dbhost)(PORT=1521)))" + "(CONNECT_DATA=(SERVICE_NAME=otrade)))")
cd("/JDBCSystemResources/" + datasource + "/JDBCResource/" + datasource + "/JDBCOracleParams/" + datasource )
set("FanEnabled","true")
set("OnsNodeList","dbhost:6200")
# The following is for WLS 12.1.2 and 12.1.3 if not setting FanEnabled to true, which is not recommended
#set("ActiveGridlink","true")
# The following is for WLS 12.2.1 and later
#cd("/JDBCSystemResources/" + datasource + "/JDBCResource/" + datasource )
#set("DatasourceType", "AGL")
save()
activate()
startEdit()
cd("/JDBCSystemResources/" + datasource )
set("Targets", targets)
save()
activate()

In WLS 12.1.2 and 12.1.3, there is an explicit ActiveGridlink flag that can be used to identify an AGL datasource, even if FanEnabled is not set to true (which is not recommended) and auto-ONS is used (12.1.2 is the first release in which auto-ONS is supported). In the script above, uncomment the line to set it only if FanEnabled and OnsNodeList are not set.

Starting in WLS 12.2.1, there is an explicit datasource type at the JDBCResource level. If that is set to GENERIC, you must re-set it using set("DatasourceType", "AGL"). In the script above, uncomment the lines to set it. In this case, the ActiveGridlink flag is not necessary.

In the administration console, the database type is read-only and there is no mechanism to change the database type. You can try to get around this by setting the URL, the FAN Enabled box, and the ONS information. However, in 12.2.1 there is no way to re-set the Datasource Type in the console, and that value overrides all others.


New WebLogic Server Running on Docker in Multi-Host Environments

Oracle WebLogic Server 12.2.1 is now certified to run on Docker 1.9 containers. As part of this certification, you can create Oracle WebLogic Server 12.2.1 clusters that span multiple physical hosts. Containers running across multiple hosts are built as an extension of the existing Oracle WebLogic 12.2.1 install images built with Dockerfiles, domain images built with Dockerfiles, and existing Oracle Linux images. To help you with this, we have posted scripts on GitHub as examples for you to get started.

The certification for WebLogic Server 12.2.1 on Docker 1.9 covers the following combinations of Oracle WebLogic Server, JDK, Linux, and Docker versions, which you can use when building your Docker images:
- WLS Version: 12.2.1
- JDK Version: 8
- Host OS: Oracle Linux 6 (UEK 4) or Oracle Linux 7
- Docker Version: 1.9 or higher

Please read the earlier blog Oracle WebLogic 12.2.1 Running on Docker Containers for details on Oracle WebLogic Server 12.1.3 and Oracle WebLogic 12.2.1 certification on other versions of Docker. We support Oracle WebLogic Server in certified Docker containers running on other Linux host operating systems that have kernel 4 or later and that support Docker containers; please read our support statement. For additional details on the most current Oracle WebLogic Server supported configurations, please refer to the Oracle Fusion Middleware Certification Pages.

The scripts that support the multi-host environment on GitHub are based on the latest versions of Docker Networking, Swarm, and Docker Compose. Each Docker Machine participates in the Swarm, which is networked by a Docker overlay network. The WebLogic Administration Server container as well as the WebLogic Managed Server containers run on different VMs in the Swarm and are able to communicate with each other. The Dockerfiles and scripts we have provided enable users to create clustered and non-clustered Oracle WebLogic Server domain configurations, including both development and production, running on a single host operating system or VM or on multiple ones. Each server in the resulting domain configurations runs in its own Docker container and is capable of communicating as required with other servers. When these containers run in a WebLogic cluster, all HA properties of the WebLogic cluster are supported, such as in-memory session replication, HTTP load balancing, and service and server migration.

Please check the new WebLogic on Docker Multi Host Workshop on GitHub. This workshop takes you step by step through building a WebLogic Server domain on Docker in a multi-host environment. After the WebLogic domain has been started, an Apache Plugin web tier container is started in the Swarm; the Apache Plugin load balances invocations to an application deployed to a WebLogic cluster. This project takes advantage of the following tools: Docker Machine, Docker Swarm, Docker Overlay Network, Docker Compose, Docker Registry, and Consul. Very easily and quickly, using the sample Dockerfiles and scripts, you can set up your environment running on Docker. Try it out and enjoy!

On YouTube we have a video that shows you how to create a WLS domain/cluster in a multi-host environment. For documentation on how to use these Dockerfiles and scripts, see the whitepaper on OTN. We hope you will try running the different configurations of WebLogic Server on Docker containers, and look forward to hearing any feedback you might have.


WebLogic Server 12.2.1: Elastic Cluster Scaling

WebLogic Server 12.2.1 added support for the elastic scaling of dynamic clusters: http://docs.oracle.com/middleware/1221/wls/ELAST/overview.htm#ELAST529

Elasticity allows you to scale a dynamic cluster based on either of the following:
- Manually adding or removing a running dynamic server instance from an active dynamic cluster. This is called on-demand scaling. You can perform on-demand scaling using the Fusion Middleware component of Enterprise Manager, the WebLogic Server Administration Console, or the WebLogic Scripting Tool (WLST).
- Establishing policies that set the conditions under which a dynamic cluster should be scaled up or down, and actions that define the scaling operations themselves. When the conditions defined in the scaling policy occur, the corresponding scaling action is triggered automatically.

To see this in action, a set of video demonstrations has been added to the youtube.com/OracleWebLogic channel showing the various elastic scaling options available:
- WebLogic Server 12.2.1 Elastic Cluster Scaling with WLST: https://www.youtube.com/watch?v=6PHYfVd9Oh4
- WebLogic Server 12.2.1 Elastic Cluster Scaling with the WebLogic Console: https://www.youtube.com/watch?v=HkG0Uw14Dak
- WebLogic Server 12.2.1 Automated Elastic Cluster Scaling: https://www.youtube.com/watch?v=6b7dySBC-mk


WebLogic on Docker Containers Series, Part 3: Creating a Domain Image

You already know how to quickly get started with WebLogic on Docker. You also learned in more detail how to build an installation Docker image of WebLogic and Oracle JDK. This time, you will learn how to create a WebLogic domain image for Docker containers.

We are publishing some interesting samples of Docker images on GitHub so that WebLogic customers and users can get a good idea of what is possible (although not everything in there may be officially supported as of this moment, like multihost) and can experiment and learn more about Docker itself. This blog post focuses on the 1221-domain sample, but make sure to subscribe to this blog or follow me on Twitter for future posts that will look into the other samples. I will also assume that you have the docker-images repository checked out and updated on your computer (with commit 4c36ef9f99c98), and of course that you have Docker installed and properly working. Now moving on.

WebLogic Domains

WebLogic uses the domain concept for its infrastructure. This is the first thing a developer or administrator must create in order to be able to run a WebLogic Server. There are many ways to create a WebLogic Server domain: using the Configuration Wizard, using WLST, or even bootstrapping the weblogic.Server class. Since we are using Docker and we want to automate everything, we create the domain with WLST.

TL;DR: Building the Domain Image in the 1221-domain sample

First things first: make sure you have the image oracle/weblogic:12.2.1-developer already created. If not, check Part 2 of this series to learn how. Now go into folder samples/1221-domain and run the following commands:

$ pwd
~/docker-images/OracleWebLogic/samples/1221-domain
$ docker build -t 1221-domain --build-arg ADMIN_PASSWORD=welcome1 .
[...]
$ docker images
REPOSITORY              TAG                     IMAGE ID            CREATED             SIZE
1221-domain             latest                  327a95a2fbc8        2 days ago          1.195 GB
oracle/weblogic         12.2.1-developer        b793273b4c9b        2 days ago          1.194 GB
oraclelinux             latest                  4d457431af34        10 weeks ago        205.9 MB

This is what you will end up having in your environment.

Understanding the sample WLST domain creation script

Customers and users are always welcome to come up with their own scripts and automation process to create WebLogic domains (whether for Docker or not), but we shared some examples here to make things easier for them. The 1221-domain sample has a subfolder named container-scripts that holds a set of handy scripts to create and run a domain image. The most important script, though, is the create-wls-domain.py WLST script. This file is executed when docker build is called, as you can see in the Dockerfile. In this sample, you learn how to read variables in the create-wls-domain.py script with default values that may be defined in the Dockerfile.

The script defined in this sample requires a set of information in order to create a domain. Mainly, you need to provide:
- Domain name: by default 'base_domain'
- Admin port: although WLS defaults to 7001 when installed, this script defaults to 8001 if nothing is provided
- Admin password: no default. Must be provided during build with --build-arg ADMIN_PASSWORD=<your password>
- Cluster name: defaults to 'DockerCluster'

Note About Clustering

This sample shows how to define a cluster named with whatever is in $CLUSTER_NAME (defaults to DockerCluster) to demonstrate the scalability of WebLogic on Docker containers.
You can see how the cluster is created in the WLST file.

Back to the domain creation. How do you read variables in WLST with default values? Pretty simple:

domain_name = os.environ.get("DOMAIN_NAME", "base_domain")
admin_port = int(os.environ.get("ADMIN_PORT", "8001"))
admin_pass = os.environ.get("ADMIN_PASSWORD")
cluster_name = os.environ.get("CLUSTER_NAME", "DockerCluster")

These variables can be defined as part of your Dockerfile, or even passed as arguments during build if you are using Docker 1.10 with the new ARG command, as the ADMIN_PASSWORD example shows.

ARG ADMIN_PASSWORD
ENV DOMAIN_NAME="base_domain" \
    ADMIN_PORT="8001" \
    ADMIN_HOST="wlsadmin" \
    NM_PORT="5556" \
    MS_PORT="7001" \
    CLUSTER_NAME="DockerCluster" \

Other variables are defined here (NM_PORT, MS_PORT, ADMIN_HOST), but I'll explain them later in a future post. Meanwhile, let's continue.

The next step in creating a domain image is that you may want to reuse a domain template. In the sample script, we used the default template for new domains, wlst.jar, but again, if you are working on your own set of domains, feel free to use any template you may already have. Next we tell WLST to configure the AdminServer to listen on all addresses available (in the container), and to listen on the port in $ADMIN_PORT. The 'weblogic' admin user needs a password that you had to provide with --build-arg (or define directly inside the Dockerfile) in $ADMIN_PASSWORD, and we set that in the script.

For the sake of providing some examples, we also define a JMS server in the script, but we only target it to the AdminServer. If you want to target it to the cluster, you will have to tweak your own script. The script also sets this domain to Production Mode.

We set some Node Manager options, since we will be using NM as Per Domain (see the docs for more details). Remember that each instance of this image (a container) has the same "filesystem", so it is as if you had copied the domain to different servers. If you are an experienced WebLogic administrator, you will quickly understand. If not, please comment and I'll share some links. This is important to be able to run Managed Servers inside containers based on this image. I'll get back to this in the future when covering a clustered WebLogic environment on Docker containers.

There are a couple of other things we could have done in this script, such as:
- Create and define a data source
- Create, define, and deploy applications (this is demonstrated as part of the 1221-appdeploy sample)
- Anything else you can do with WLST in offline mode (remember that the domain does not exist yet and thus is not running)

But surely you will quickly find out how to do these for your own domains. Now that you have a domain Docker image 1221-domain, you are able to start it with:

$ docker run -ti 1221-domain

Now have some fun tweaking your own WLST scripts for domain creation in Docker.


Now Available: Domain to Partition Conversion Tool (DPCT)

We are pleased to announce that a new utility has just been published to help with the process of converting existing WebLogic Server domains into WebLogic Server 12.2.1 partitions. The Domain to Partition Conversion Tool (DPCT) provides a utility that inspects a specified source domain and produces an archive containing the resources, deployed applications, and other settings. This can then be used with the importPartition operation provided in WebLogic Server 12.2.1 to create a new partition that represents the original source domain. An external overrides file is generated (in JSON format) that can be modified to adjust the targets and names used for the relevant artifacts when they are created in the partition.

DPCT supports WebLogic Server 10.3.6, 12.1.1, 12.1.2, and 12.1.3 source domains and makes the conversion to WebLogic Server 12.2.1 partitions a straightforward process.

DPCT is available for download from OTN: http://www.oracle.com/technetwork/middleware/weblogic/downloads/wls-main-097127.html

** Note: there is also a corresponding patch (opatch) posted alongside the DPCT download that needs to be downloaded and applied to the target installation of WebLogic Server 12.2.1 to support the import operation **

The README contains more details and examples of using the tool: http://download.oracle.com/otn/nt/middleware/12c/1221/wls1221_D-PCT-README.txt

A video demonstration of using DPCT to convert a WebLogic Server 12.1.3 domain with a deployed application into a WebLogic Server 12.2.1 partition is also available on our YouTube channel: https://youtu.be/D1vQJrFfz9Q


ZDT Technical Topic: How are Those Sessions Kept Alive Anyway?

By now you have probably read documentation or previous blog posts about how Zero Downtime Patching provides a convenient, automated method of updating a WebLogic domain in a rolling fashion. By automating the process, Zero Downtime Patching greatly saves time and eliminates potential human errors from the repetitive course of procedure. In addition, there are also some special features around replicated HTTP sessions that make sure end users do not lose their session at any point during the rollout process. Let's explore the technical details around maintaining session state during Zero Downtime Patching.

One of the key aspects of the WLS replicated session persistence contract is that the session may be maintained within the cluster even in the rare situation where a server crashes. However, the session persistence contract cannot guarantee that sessions will be maintained when more than a single server goes down in a short time period. This is because the session has a single copy replicated to some secondary server within the cluster. The session is only replicated when the client makes a request to update the session, so that the client's cookie can store a reference to the secondary server. Thus, if the primary server were to go down, and then the secondary server were to go down before the session could be updated by a subsequent client request, the session would be lost. The rolling nature of Zero Downtime Patching fits this pattern, and thus extra care must be taken to avoid losing sessions. Administrators may have already observed that it is very easy to lose sessions by restarting one server at a time through the cluster.

Before we go into the technical details of how Zero Downtime Patching prevents the loss of sessions, it is important to note that the entire methodology relies on Oracle Traffic Director for load balancing, dynamic discovery, health checks, and session failover handling. In addition to this setup, three key features are used by Zero Downtime Patching directly to prevent the loss of sessions:

1. Preemptive Session Replication - Session data is preemptively propagated to another server in the cluster during graceful shutdown when necessary. To get even more detailed on this, let's examine the scenario where the ZDT rollout has shut down the server holding the HTTP session, and the next step is to shut down the server holding the replica. In that case, WebLogic can detect during shutdown that the session will be lost, as there is no backup copy within the cluster. So the ZDT rollout can ensure that WebLogic Server replicates that session to another server within the cluster. The illustration below shows the problematic scenario where the server, s1, holding the primary copy of the session is shut down, followed by the shutdown of the server, s2, holding the secondary or replica copy. The ZDT orchestration signals that s2 should preemptively replicate any single session copies before shutting down. Thus there is always a copy available within the cluster.

2. Session State Query Protocol - Because WebLogic Server relies on the association of an HTTP session with a primary server and a secondary server, it is not sufficient to simply have the session somewhere in the cluster. There is also a need to be able to find the session when the client request lands on an arbitrary server within the cluster. The ZDT rollout enables WebLogic Server instances to query other servers in the cluster for specific sessions if they don't have their own copy.
The diagram above shows that an incoming request to a server without the session can trigger a query; once the session is found within the cluster, it can be fetched so that the request can be served on the server, s4.

3. Orphaned Session Cleanup - Once we combine the ability to preemptively replicate session instances and the ability to fetch sessions from within the cluster, we must also take a more active approach to cleaning up instances that are fetched. Historically, WebLogic Server hasn't had to worry much about orphaned sessions. Front-end load balancers and web servers have been required to honor the session's server affinity. And in the rare case that a request would land on a server that did not contain the primary or secondary, the session would be fetched from the primary or secondary server, and the orphaned copy would be left to be cleaned up upon timeout or at other regular intervals. It was assumed that because the pointer to the session changed, the actual stored reference would never be used again. However, the ZDT rollout repeatedly presents the scenario where a session must be found within the cluster and fetched from the server that holds it. Not only can the number of session instances proliferate - all with various versions of the same session - the cluster is now queried for the copy, and we must not find any stale copies, only the current replica of the session. The illustration above shows the cleanup action after s4 has fetched the session data to serve the incoming request. It launches the cleanup request to s3 to ensure no stale data is left within the cluster.

Summary: Now, during ZDT patching we can shut down server1 and expect that any lone session copies will be propagated to server2 without the client's knowledge. When the client does send another request, WLS will be able to handle that request and query the cluster to find the session data. The data will be fetched and used on the server handling the request. The orphaned copy will be cleaned up, and the server handling the request will go through the process of choosing its preferred secondary server to store the replica.

For more information about Zero Downtime Patching, view the documentation (http://docs.oracle.com/middleware/1221/wls/WLZDT/configuring_patching.htm#WLZDT166)

References
https://docs.oracle.com/cd/E24329_01/web.1211/e24425/failover.htm#CLUST205
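A practical footnote for application teams: the machinery described above is internal to WebLogic Server, and the main thing an application has to do to benefit from it is keep its session attributes serializable (with in-memory replication enabled in weblogic.xml, for example via persistent-store-type set to replicated_if_clustered). The following is a minimal sketch only; the servlet path and attribute names are made up for illustration and are not part of the ZDT feature:

import java.io.IOException;
import java.io.Serializable;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

// Hypothetical servlet used only to illustrate replication-friendly session state.
@WebServlet("/cart")
public class CartServlet extends HttpServlet {

    // Session attributes must be Serializable so the replication service can copy
    // them to the secondary server (and so ZDT can preserve them across a rollout).
    public static class Cart implements Serializable {
        private static final long serialVersionUID = 1L;
        private int items;
        public void add() { items++; }
        public int size() { return items; }
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        HttpSession session = req.getSession(true);
        Cart cart = (Cart) session.getAttribute("cart");
        if (cart == null) {
            cart = new Cart();
        }
        cart.add();
        // Calling setAttribute marks the session as changed so the update is
        // replicated on this request rather than only when the session is created.
        session.setAttribute("cart", cart);
        resp.getWriter().println("Items in cart: " + cart.size());
    }
}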


ZDT Rollouts and Singletons

WebLogic Server offers messaging, transaction, and other system services to facilitate building enterprise-grade applications. Typically, services can be either clustered or singleton. Clustered services are deployed identically to each server in a cluster to provide increased scalability and reliability. The session state of one clustered server is replicated on another server in the cluster. In contrast, singleton services run on only one server in a cluster at any given point of time so as to offer a specific quality of service (QOS), but most importantly to preserve data consistency. Singleton services can be JMS-related, JTA-related, or user-defined. In highly available (HA) environments, it is important for all services to be up and running even during patch upgrades.

The new WebLogic Zero Downtime Patching (a.k.a. ZDT patching) feature introduces a fully automated rolling upgrade solution to perform upgrades such that deployed applications continue to function and are available for end users even during the upgrade process. ZDT patching supports rolling out Oracle Home, Java Home, and also updating applications. Check out these blogs or view the documentation for more information on ZDT.

During ZDT rollouts, servers are restarted in a rolling manner. Taking down a server would bring down the singleton service(s), causing service disruptions; services will not be available until the server starts back up. The actual downtime varies depending on server startup time and the number and type of applications deployed. Hence, to ensure that singleton services do not introduce a single point of failure for dependent applications in the cluster, the ZDT rollout process automatically performs migrations.

Some highlights of how the ZDT rollout handles singletons:
- It can be applied to all types of rollout (rolloutOracleHome, rolloutJavaHome, rollingRestart, rolloutUpdate, etc.)
- It uses JSON file based migration options for fine-grained control of service migrations during a rollout. These can be specified in WLST or the console.
- It supports service migrations (JMS or JTA) as well as whole server migration (WSM)
- It performs automatic fail back if needed

Terms, Acronyms and Abbreviations
- Singletons: Services that are hosted only on one server in a cluster.
- Migratable Target (MT): A special target that provides a way to group services that should move together. It contains a list of candidate servers with only one active server at a given time.
- Source Server: Server instance where services are migrated "from".
- Destination Server: Server instance where services are migrated "to".
- Automatic Service Migration (ASM): Process of moving an affected subsystem service from one server instance to another running server instance.
- Whole Server Migration (WSM): Process of moving an entire server instance from one physical machine to another.
- Fail back: Relocating services back to their original hosting, or "home", server.

Assumptions

During a ZDT rollout, servers are shut down gracefully and then started back up. To start with, administrators should be well aware of the implications of restarting managed servers. An arbitrary application may or may not be tolerant of a restart, regardless of whether service migration is set up or not.
- It may have non-persistent state
- It may or may not be tolerant of runtime client exceptions
- It may be impacted by the duration of the restart

When a server is shut down gracefully, client connections are closed, clients consequently get exceptions, and the JMS server is removed from candidate lists for load balancing and JMS message routing decisions. Most of the time, such client exceptions are transient - a retry will be redirected to a different JMS server, or even to the original JMS server after it has been migrated. But some exceptions will not be transient, and they will instead continue being thrown on each client retry until a particular JMS server instance comes back up. Though the server does some level of quiescing during shutdown, it doesn't prevent all errors in the JMS client or elsewhere.

With respect to JTA, when a server is shutting down gracefully, an application wouldn't generate any new transaction requests for that particular server. For the EJB/RMI path, the cluster-aware stubs detect server connection failures and redirect the request to a secondary server. It is assumed that applications are designed to handle exceptions during a transaction.

If whole server migration (WSM) is configured in the environment, one should be aware that it usually takes a longer time (when compared to service migrations) to make services available, since the entire server instance needs to boot on new hardware. Note: In general, whole server migration is preferred for basic use due to its relative simplicity, but automatic service migration becomes attractive when faster failover times and advanced control over the service migration are desirable.

JMS

The WebLogic JMS subsystem is robust and high-performance and is often used in conjunction with other APIs to build an enterprise application. Smooth functioning of the applications largely depends on how the application is designed (being resilient to failures, using certain patterns or features) and also on how the JMS subsystem is tuned. In WebLogic JMS, a message is only available if the JMS server hosting the destination is running. If a message is in a central persistent store, the only JMS server that can access the message is the server that originally stored the message. HA is normally accomplished using one or all of the following:
- Distributed destinations: The queue and topic members of a distributed destination are usually distributed across multiple servers within a cluster, with each member belonging to a separate JMS server. Applications that use distributed destinations are more highly available than applications that use simple destinations, because WebLogic JMS provides load balancing and failover for member destinations of a distributed destination within a cluster.
- Store-and-Forward: JMS modules utilize the SAF service to enable local JMS message producers to reliably send messages to remote queues or topics. If the destination is not available at the moment the messages are sent, either because of network problems or system failures, then the messages are saved on a local server instance and are forwarded to the remote destination once it becomes available.
- HA Servers/Services: JMS servers can be automatically restarted and/or migrated using either Whole Server Migration or Automatic Service Migration.

JTA

A production environment designed for high availability would most likely ensure that the JTA service (as well as other services) doesn't act as a single point of failure. The WebLogic transaction manager is designed to recover from system crashes with minimal user intervention.
The transaction manager makes every effort to resolve transaction branches that are prepared by resource managers with a commit or roll back, even after multiple crashes or crashes during recovery. It also attempts to recover transactions on system startup by parsing all transaction log records for incomplete transactions and completing them. However, in preparation for maintenance-type operations like ZDT rollouts, JTA services may be configured for migration. JTA migration is needed since in-flight transactions can hold locks on the underlying resources. If the transaction manager is not available to recover these transactions, resources may hold on to these locks as long as pending transactions are not resolved with a commit or rollback (potentially for long periods of time), causing errors on new transactions and making it difficult for applications to function properly.

More on Service Migrations

Service-level migration in WebLogic Server is the process of moving pinned services from one server instance to a different server instance that is available within the cluster. Service migration is controlled by a logical migratable target, which serves as a grouping of services that is hosted on only one physical server in a cluster. You can select a migratable target in place of a server or cluster when targeting certain pinned services. The migration framework provides tools and infrastructure for configuring and migrating targets, and, in the case of automatic service migration, it leverages WebLogic Server's health monitoring subsystem to monitor the health of services hosted by a migratable target.

The migration policy options are:
- Manual Only (default): Automatic service migration is disabled for this target.
- Failure Recovery: Pinned services deployed to this target initially start only on the preferred server, and migrate only if the cluster master determines that the preferred server has failed.
- Exactly Once: Pinned services deployed to this target initially start on a candidate server if the preferred one is unavailable, and migrate if the host server fails or is gracefully shut down.

ZDT Migration Strategy and Options

For ZDT rollouts, "exactly-once" type services are not of concern, since the migration subsystem automatically handles these services. It is the failure-recovery type services that are the main concern: these services will not migrate if the server is gracefully shut down. Since the duration of the restart may vary, these services need to be migrated so that end users are not affected. Similarly, if the user has configured services to be migrated manually, such services are automatically migrated on the administrator's behalf during a rollout. ZDT rollouts can handle both JMS and JTA service migration.

Caveats:
1. The transaction manager is not assigned to a migratable target like other pinned services; instead, JTA ASM is a per-server setting. This is because the transaction manager has no direct dependencies on other pinned resources, in contrast with services such as JMS.
2. For user-defined singletons, the ZDT rollout doesn't need to take any specific action, since they are automatically configured as "exactly-once".

The administrator can specify the exact migration action on a per-server basis via the migration properties file passed as an option to any of the rollout commands. The migration options specified in the migration properties file are validated against what is configured in the system, and the required migrations are initiated accordingly to mitigate downtime.
As an optimization, the workflow generates the order in which rollouts happen across the servers so as to prevent unnecessary migrations between patched and unpatched servers. ZDT rollouts also support whole server migration (WSM) if the server(s) are configured for it.

Here is the list of all the migration options:
- jms: All JMS-related services running on the current hosting server are migrated to the destination server
- jta: The JTA service is migrated from the current hosting server to the destination server
- all: Both JMS and JTA services on the current hosting server are migrated to the destination server
- server: The whole server instance is migrated to the destination machine
- none: No migrations happen for singleton services running on the current server

You will observe that these migration options are very similar to the WLST migrate command options.

Sample Migration Sequence

The following picture demonstrates a typical rollout sequence involving service migrations. Here, the JMS and JTA singleton services are represented by two types of migratable targets configured for each server. Persistent stores and the TLOG should be accessible from all servers in the cluster. The administrator has control over specifying how the migrations should happen across the servers in the cluster. The next section describes the control knobs for fine-grained control of migrations during a rollout.

ZDT Migration Properties

How the migrations take place for any of the rollouts is specified in a migration properties file that is passed as an option to the rollout command. The migration properties file is a JSON file consisting of four main properties:
- source: Denotes the source server (name), which is the current hosting server for the singleton(s)
- destination: Denotes the destination server (name) where the singleton service will be migrated to. This can also be a machine name in the case of server migration
- migrationType: Acceptable types are "jms", "jta", "all", "server", and "none", as described in the previous section
- failback: Indicates whether automatic failback of the service to the original hosting server should happen or not

Here is an example migration properties file:

{"migrations":[
# Migrate all JMS migratable targets on server1 to server2. Perform a fail back
# if the operation fails.
  { "source":"server1", "destination":"server2", "migrationType":"jms", "failback":"true" },
# Migrate only JTA services from server1 to server3. Note that JTA migration
# does not support the failback option, as it is not needed.
  { "source":"server1", "destination":"server3", "migrationType":"jta" },
# Disable all migrations from server2
  { "source":"server2", "migrationType":"none" },
# Migrate all services (for example, JTA and JMS) from server3 to server1 with
# no failback
  { "source":"server3", "destination":"server1", "migrationType":"all" },
# Use Whole Server Migration to migrate server4 to the node named machine5 with
# no failback
  { "source":"server4", "destination":"machine5", "migrationType":"server" }
]}

If migrationType is "none", it implies that services running on this server will not be migrated, and it also means no failback is needed. If singleton services are detected and the administrator hasn't passed in a migration properties file, the rollout command will fail. If no migrations are needed, the administrator should explicitly state that via the migration properties (i.e., migrationType="none") for each of the servers.
If migrationType is "server", destination should point to a node manager machine name, and WSM will be triggered for that server instance. The default value of failback is false (no failback if the option is not specified). For a specific server, either ASM or WSM can be applied, but not both. Since the JTA subsystem supports automatic failback of the JTA service, failback is not a valid option for the JTA service. Each of the above-mentioned validation checks happens as part of the prerequisites check before any rollout.

ZDT Rollout Examples

The following examples illustrate the usage of the migration properties option.

A sample migrationProperties.json file:

{"migrations":[
  {"source":"m1","destination":"m2","migrationType":"jms","failback":"true"}
]}

Passing migration options to rolloutOracleHome:

rolloutOracleHome('myDomain', '/pathto/patchedOracleHome.jar', '/pathto/unpatchedOracleHomeBackup/', options='migrationProperties=/pathto/migrationProperties.json')

Passing migration options to rolloutApplications:

rolloutApplications('myDomain', applicationProperties='/pathto/applicationProperties.json', options='migrationProperties=/pathto/migrationProperties.json')

Passing migration options to rolloutJavaHome:

rolloutJavaHome('myDomain', javaHome='/pathto/JavaHome1.8.0_60', options='migrationProperties=/pathto/migrationProperties.json')

Passing migration options to rolloutUpdate:

rolloutUpdate('myDomain', '/pathto/patchedOracleHome.jar', '/pathto/unpatchedOracleHomeBackup/', false, options='migrationProperties=/pathto/migrationProperties.json')

Passing migration options to rollingRestart:

rollingRestart('myDomain', options='migrationProperties=/pathto/migrationProperties.json')

References
- WLS ZDT
- WLS JMS Best Practices
- Whitepaper on WLS ASM (a good reference, though quite old)
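One closing note for application developers, following on from the Assumptions and JMS sections above: clients of a migrating JMS server should treat the exceptions they see during a graceful shutdown or migration as transient and retry. The following is a minimal sketch of such a retry loop using the standard JMS API; the provider URL, JNDI names, retry count, and backoff are placeholders and not part of the ZDT feature itself:

import java.util.Hashtable;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class ResilientJmsSender {

    // Sends one text message, retrying on the transient failures that can occur
    // while a JMS server is being gracefully shut down or migrated during a rollout.
    public static void send(String text) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        env.put(Context.PROVIDER_URL, "t3://host1:7001,host2:7001");   // cluster address - placeholder

        Exception lastFailure = null;
        for (int attempt = 1; attempt <= 5; attempt++) {
            InitialContext ctx = null;
            try {
                ctx = new InitialContext(env);
                // JNDI names below are placeholders for a connection factory and a
                // distributed queue targeted to the cluster.
                ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/MyConnectionFactory");
                Destination queue = (Destination) ctx.lookup("jms/MyDistributedQueue");
                Connection connection = cf.createConnection();
                try {
                    Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                    MessageProducer producer = session.createProducer(queue);
                    producer.send(session.createTextMessage(text));
                    return;                       // success
                } finally {
                    connection.close();
                }
            } catch (JMSException | NamingException e) {
                lastFailure = e;                  // likely transient during a rolling restart
                Thread.sleep(2000L * attempt);    // simple backoff, then retry
            } finally {
                if (ctx != null) {
                    try { ctx.close(); } catch (NamingException ignore) { }
                }
            }
        }
        throw lastFailure;                        // still failing - not a transient outage
    }
}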


WebLogic on Docker Containers Series, Part 2

In my previous post, the first part of this series, I showed you how to quickly get started with WebLogic on Docker. You learned how to create a base Docker image with WebLogic and Oracle JDK installed, and then how to create a second image that contains a configured WebLogic domain. Today's post will break down and explain what happens behind the scenes of that process.

Note: for the sake of history, and to keep this blog post useful in the future, I will refer to commit 7741161 of the docker-images GitHub project, and version 12.2.1 of WebLogic.

Walking through the build process of a WebLogic base image

A base image of WebLogic means an image that contains only the software installed with minimum configuration, to be further extended and customized. It may be based on a Red Hat base Docker image, but preferably we recommend you use the Oracle Linux base image. Samples of how to build a base image are presented in the dockerfiles folder. Files for WebLogic versions 12.1.3 and 12.2.1 are maintained there, as well as for two kinds of distributions: Developer and Generic. Other versions and distributions may be added in the future.

Differences between Developer and Generic distributions

There aren't many differences between them, except these (extracted from the README.txt file inside the Quick Installer for Developers):

WHAT IS NOT INCLUDED IN THE QUICK INSTALLER
- Native JNI libraries for unsupported platforms.
- Samples, non-English console help (can be added by using the WLS supplemental Quick Install)
- Oracle Configuration Manager (OCM) is not included in the Quick Installer
- SCA is not included in the Quick Installer

Also, the Quick Installer for Developers is compressed using pack200, an optimized compression tool for Java classes and JAR files, to reduce the download size of the installer. Besides these differences, the two distributions work perfectly fine for Java EE development and deployment.

Building the Developer distribution base image

Although we provide a handy shell script to help you in this process, what really matters lies inside the 12.2.1 folder and the Dockerfile.developer file. That recipe does a COPY of two packages: the RPM of the JDK, and the WebLogic Quick Installer. These files must be present. We've put the .download files there as placeholders to remind you of the need to download them. The same approach applies to the Generic distribution.

The installation of the JDK uses the rpm tool, which enables us to run Java inside the base image - a very obvious requirement. After the JDK is installed, we proceed with the installation of WebLogic by simply calling "java -jar", and later we clean up yum.

An important observation is the use of /dev/urandom in the Dockerfile. WebLogic requires some level of entropy for random bits that are generated during install, as well as during domain creation. It is up to customers to decide whether they want to use /dev/random or /dev/urandom. Please configure this as desired.

You can build this image in two ways.

Using the buildDockerImage.sh script, indicating that you want the developer distribution [-d] and version 12.2.1 [-v 12.2.1]:

$ pwd
~/docker-images/OracleWebLogic/dockerfiles
$ sh buildDockerImage.sh -d -v 12.2.1

Or manually calling docker build:

$ cd 12.2.1
$ docker build -t oracle/weblogic:12.2.1-dev -f Dockerfile.developer .
Either of these calls results in the following:

REPOSITORY          TAG            IMAGE ID         CREATED          VIRTUAL SIZE
oracle/weblogic     12.2.1-dev     99a470dd2110     15 secs ago      1.748 GB
oraclelinux         7              bea04efc3319     5 weeks ago      206 MB
oraclelinux         latest         bea04efc3319     5 weeks ago      206 MB

As you may know by now, this image contains only WebLogic and the JDK installed, and thus is not meant to be executed, only to be extended.

Building the Generic distribution base image

Most of what you've learned above applies to the Generic distribution. The difference is that you must download, obviously, the Generic installer. The installation process is also a little bit different, since it uses the silent install mode, with the environment definition coming from install.file and oraInst.loc. To build this image you either:

Call the buildDockerImage.sh script, indicating that you want the Generic distribution [-g] and version 12.2.1 [-v 12.2.1]:

$ pwd
~/docker-images/OracleWebLogic/dockerfiles
$ sh buildDockerImage.sh -g -v 12.2.1

Or manually call docker build:

$ cd 12.2.1
$ docker build -t oracle/weblogic:12.2.1 -f Dockerfile.generic .

Now you have two images you can extend from, either the Developer or the Generic base image:

REPOSITORY          TAG            IMAGE ID         CREATED          VIRTUAL SIZE
oracle/weblogic     12.2.1         ea03630ee95d     18 secs ago      3.289 GB
oracle/weblogic     12.2.1-dev     99a470dd2110     2 mins ago       1.748 GB
oraclelinux         7              bea04efc3319     5 weeks ago      206 MB
oraclelinux         latest         bea04efc3319     5 weeks ago      206 MB

Note how the Generic image is larger than the Developer image. That's because the Developer distribution contains less inside, as described earlier. It will be up to Dev and Ops teams to decide which one to use, and how to build them. In the next post, I will walk you through the process of building the 1221-domain sample image. If you have any questions, feel free to comment, or tweet.


WebLogic on Docker Containers Series, Part 1

WebLogic 12.2.1 is certified to run Java EE 7 applications, supports Java SE 8 (since 12.1.3), and can be deployed on top of Docker containers. It also supports multitenancy through the use of partitions in the domain, enabling you to add another level of density to your environment. Undeniably, WebLogic is a great option for Java EE based deployments that both developers and operations will benefit from. Even Adam Bien, Java EE Rockstar, has agreed with that. But you are here to play with WebLogic and Docker, so first, check these links about the certification and support:
- [Whitepaper] Oracle WebLogic Server on Docker Containers
- [Blog] WebLogic 12.2.1 Running on Docker Containers
- Dockerfiles, Scripts, and Samples on GitHub
- Docker Support Statement on MOS Doc.ID 2017645.1
- Oracle Fusion Middleware Certification Pages

Understanding WebLogic on Docker

We recommend our customers and users build their own image containing WebLogic and Oracle JDK installed without any domain configured, and perhaps a second image containing a basic domain. This is to guarantee easier reuse between DevOps teams. Let me describe an example: Ops provide a base WebLogic image to the Dev team, either with or without a pre-configured domain and a set of predefined shell scripts, and Devs perform domain configuration and application deployment. Then Ops get a new image back and just run containers out of that image. It is a good approach, but customers are certainly free to think outside the box here and figure out what works best for them.

TL;DR

Alright, alright... Do the following:

1. Download the docker-images master.zip file of the repository directly and drop it somewhere.
$ unzip master.zip && mv docker-images-master docker-images

2. Download WebLogic 12.2.1 for Developers and the specific Oracle JDK 8 version indicated in Checksum.developer. Put them inside the dockerfiles/12.2.1 folder. You will see placeholders there (*.download files).

3. Build the installation image:
$ cd docker-images/OracleWebLogic/dockerfiles
$ sh buildDockerImage.sh -d -v 12.2.1

4. Build the WebLogic domain image:
$ cd ../samples/1221-domain
$ docker build -t 1221-domain .

5. Run WebLogic from a Docker container:
$ docker run -d -p 8001:8001 1221-domain

6. Access the Admin Console from your browser: http://localhost:8001/console

Note that these steps are for your first image build only. Customers are encouraged to run a Docker Registry on their internal network and store these images there, just as they probably already do with Oracle software installers on some intranet FTP server. Important! Do not share binaries (whether packed as a Docker image or not).

* Follow this series if you want to learn more about WebLogic on Docker. But please do read the entire post... :-)

Creating your first WebLogic Docker image

The very first step to get started is to check out the docker-images project from GitHub:

$ git clone --depth=1 https://github.com/oracle/docker-images.git

If you don't have or don't want to install the Git client, you can download the ZIP file containing the repository and extract it. Use your browser, or some CLI tool.

Another thing to know before building your image is that WebLogic comes in two flavors: the Developer distribution, which is smaller, and the Generic distribution, for use in any environment. For the Developer distribution, you have to download the two files indicated inside Checksum.developer.
If you want to build the Generic distribution instead of the Developer one, see the file Checksum.generic for further instructions; the short version is that you need two files again (or just one if you have already downloaded the JDK), and the same instructions apply. The two files for the Developer distribution are:

Oracle WebLogic 12.2.1 Quick Installer for Developers (211MB)
Oracle JDK 8u65 Linux x64 RPM (or the latest version indicated in the Checksum file)

The next step is to go back to the terminal and use the handy shell script buildDockerImage.sh, which performs some checks (such as the checksum) and selects the proper Dockerfile (either .developer or .generic) for the specific version you want, although I do recommend you start with 12.2.1 from now on.

$ cd docker-images/OracleWebLogic/dockerfiles
$ sh buildDockerImage.sh -d -v 12.2.1

You may notice that it takes some time to copy files during the installation process. That's because WebLogic for Developers is compressed with pack200 to keep the download small. But after you build this image, you can easily create any domain image on top of it, and you can also share your customized image using docker save/load. The next step is to create a WebLogic domain.

Creating the WebLogic Domain Image

So far you have an image that is based on Oracle Linux 7 and has WebLogic 12.2.1 for Developers and the Oracle JDK installed. To run WebLogic, you must have a domain. Luckily WebLogic is mature enough to be very handy for DevOps work: it supports a scripting tool called WLST (you guessed it: the WebLogic Scripting Tool), based on Jython (Python for Java), that lets you script any task you'd otherwise perform through a wizard or the web interface, from installation to configuration to management to monitoring. I've shared some samples in the GitHub project and I'll cover a couple of them in this series on WebLogic on Docker, but for now let's just create a basic, empty WebLogic domain. Go to the samples folder, enter the 1221-domain folder, and simply run:

$ cd docker-images/OracleWebLogic/samples/1221-domain
$ docker build -t 1221-domain .

This process is very fast, at least for this sample. The time may vary if the WLST script that creates your domain performs more tasks. See the sample create-wls-domain.py for some ideas.

Starting WebLogic on Docker

You now have the image 1221-domain ready to be used. All you need to do is call:

$ docker run -ti -p 8001:8001 1221-domain

And now you can access the Admin Console at http://localhost:8001/console.

Frequently Asked Questions, Part 1

- Can I write my own Dockerfiles and WLST scripts to install and create WebLogic?
A: Absolutely! That is the entire idea of sharing these scripts. They are excellent pointers on what can be done, and how, but customers and users are free to come up with their own files. And if you have some interesting approach to share, please send it to me at bruno dot borges at oracle dot com.

- Why is the WebLogic Admin Server on port 8001 instead of the default 7001?
A: Well, it is a sample, meant to show what configurations you can make. The environment variable ADMIN_PORT, as well as other settings in samples/1221-domain/Dockerfile, is picked up by the create-wls-domain.py script while creating the domain. The WLST script will even use defaults if these variables are not defined. Again, it's a sample.

- What if I need to patch the WebLogic install?
A: You do that by defining a new Dockerfile and applying the patch as part of the build process, creating a new base image version. Then you recreate your domain image so that it extends the new patched base image.
You may also want to simply extend your existing domain image, apply the patch, and use that one; or you can modify your image by applying the patch in an existing container and then committing the container to a new image. There are different ways to do this, but applying the patch to a live container is definitely not one of them: it is a good idea to keep containers as disposable as possible, and you should always have an image from which you can create new, patched containers.

- What if I want a WebLogic cluster with Node Manager and Managed Servers?
A: That works too. I'll cover it in this series.

- Can I build a Docker image with a deployed artifact?
A: Yes. More on that in upcoming posts of this series.

- Can I have a load balancer in front of a swarm of Docker containers?
A: Yes. That will also be covered as part of this series.

I hope you are excited to learn more about WebLogic on Docker. Please follow this blog and my Twitter account for upcoming posts.


Java Rock Star Adam Bien Impressed by WebLogic 12.2.1

It is not an exaggeration to say Adam Bien is pretty close to a "household name" in the Java world. Adam is a long-time Java enthusiast, author of quite a few popular books, Java Community Process (JCP) expert, Oracle ACE Director, official Oracle Java Champion, and JavaOne conference Rock Star award winner. Adam most recently won the JCP Member of the Year award. His blog is among the most popular for Java developers. Adam recently took WebLogic 12.2.1 for a spin and was impressed. Being a developer (not unlike myself), he focused on the full Java EE 7 support in WebLogic 12.2.1. He reported his findings to Java developers on his blog. He commented on fast startup, low memory footprint, fast deployments, excellent NetBeans integration, and solid Java EE 7 compliance. You can read Adam's full write-up here. None of this, of course, is incidental. WebLogic is a mature product with an extremely large deployment base. With those strengths often comes the challenge of usability. Nonetheless, many folks who haven't kept up to date with WebLogic's evolution don't realize that usability and performance have long been a core focus. That is why folks like Adam are often pleasantly surprised when they take an objective, fresh look at WebLogic. You can of course give WebLogic 12.2.1 a try yourself here. There is no need to pay anything just to try it out, as you can use a free OTN developer license (this is what Adam used, as per the instructions in his post). You can also use an official Docker image here. Solid Java EE support is, of course, only the tip of the iceberg of what WebLogic offers. As you are aware, WebLogic offers a depth and breadth of proven features geared towards mission-critical, 24x7 operational environments that few other servers come close to. One of the best ways for anyone to observe this is to take a quick glance at the latest WebLogic documentation.


Even Applications can be Updated with ZDT Patching

Zero Downtime Patching provides a convenient way to update production applications on WebLogic Server without incurring any application downtime or loss of session data for your end users. This feature may be especially useful for users who want to update multiple applications at the same time, or for those who cannot take advantage of the Production Redeployment feature because of its limitations or restrictions. Now there is a convenient alternative to complex application patching methods. The rollout is based on the process and mechanism for automating rollouts across a domain while allowing applications to continue to service requests. In addition to the reliable automation, Zero Downtime Patching combines the Oracle Traffic Director (OTD) load balancer and WebLogic Server to provide advanced techniques for preserving active sessions, and even for handling incompatible session state, during the patching process. To roll out an application update, follow these three steps.

1. Produce a copy of the updated application(s), then test and verify it. Note that the administrator is responsible for making sure the updated application sources are distributed to the appropriate nodes. For stage mode, the updated application source needs to be available on the file system of the Admin Server so it can distribute the application source. For nostage and external stage mode, the updated application source needs to be available on the file system of each node.

2. Create a JSON-formatted file with the details of any applications that need to be updated during the rollout.

{"applications":[
  { "applicationName":"ScrabbleStage",
    "patchedLocation":"/scratch/wlsconfg/Domains/Domain1221/applications/ScrabbleStagev2.war",
    "backupLocation":"/scratch/wlsconfg/Domains/Domain1221/applications/ScrabbleStagev1.war" },
  { "applicationName":"ScrabbleNoStage",
    "patchedLocation":"/scratch/wlsconfg/Domains/Domain1221/applications/ScrabbleNoStagev2.war",
    "backupLocation":"/scratch/wlsconfg/Domains/Domain1221/applications/ScrabbleNoStagev1.war" },
  { "applicationName":"ScrabbleExternalStage",
    "patchedLocation":"/scratch/wlsconfg/Domains/Domain1221/applications/ScrabbleExternalStagev2.war",
    "backupLocation":"/scratch/wlsconfg/Domains/Domain1221/applications/ScrabbleExternalStagev1.war" }
]}

3. Run the application rollout using a WLST command like this one:

rolloutApplications("Cluster1", "/pathTo/applicationRolloutProperties")

The Admin Server starts the rollout, which coordinates the rolling restart of each node in the cluster named "Cluster1". While the servers are shut down, the original application source is moved to the specified backup location and the new application source is copied into place. Each server in turn is then started in admin mode. While the server is in admin mode, the application redeploy command is called for that specific server, causing it to reload the new source. The server is then resumed to its original running state and serves the updated application. For more information about updating applications with Zero Downtime Patching, view the documentation.
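If you prefer to generate the rollout descriptor instead of writing it by hand, the Java EE 7 javax.json API can produce the same structure. This is only a convenience sketch, not part of the feature itself; it prints the same JSON shown above (only the first application entry is included for brevity), and writing the file manually works just as well.

import javax.json.Json;
import javax.json.JsonObject;

public class RolloutDescriptorBuilder {
    public static void main(String[] args) {
        // Builds the same structure as the hand-written JSON above;
        // only the ScrabbleStage entry is included here for brevity.
        JsonObject rollout = Json.createObjectBuilder()
            .add("applications", Json.createArrayBuilder()
                .add(Json.createObjectBuilder()
                    .add("applicationName", "ScrabbleStage")
                    .add("patchedLocation",
                         "/scratch/wlsconfg/Domains/Domain1221/applications/ScrabbleStagev2.war")
                    .add("backupLocation",
                         "/scratch/wlsconfg/Domains/Domain1221/applications/ScrabbleStagev1.war")))
            .build();

        // Write this output to the file that is passed to rolloutApplications().
        System.out.println(rollout);
    }
}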


WLS 12.2.1 launch - Servlet 3.1 new features

Introduction

The WLS 12.2.1 release supports the new features of the Servlet 3.1 specification. Servlet 3.1 is a major version of the Servlet specification. It mainly introduces non-blocking IO and HTTP protocol upgrade into the servlet container for use in modern web application development. Non-blocking IO helps address the ever-increasing demand for web container scalability by increasing the number of connections that can be handled simultaneously by the web container. Non-blocking IO in the servlet container allows developers to read data as it becomes available, or write data when it is possible to do so. This version also introduces several minor changes for security and functional enhancements.

1 Upgrade Processing

1.1 Description

In HTTP/1.1, the Upgrade general header allows the client to specify the additional communication protocols that it supports and would like to use. If the server finds it appropriate to switch protocols, the new protocol is used in subsequent communication. The servlet container provides an HTTP upgrade mechanism; however, the servlet container itself has no knowledge of the upgraded protocol. The protocol processing is encapsulated in the HttpUpgradeHandler, and data reading and writing between the servlet container and the HttpUpgradeHandler is done in byte streams. When an upgrade request is received, the servlet can invoke the HttpServletRequest.upgrade method, which starts the upgrade process. This method instantiates the given HttpUpgradeHandler class. The returned HttpUpgradeHandler instance may be further customized. The application prepares and sends an appropriate response to the client. After exiting the service method of the servlet, the servlet container completes the processing of all filters and marks the connection to be handled by the HttpUpgradeHandler. It then calls the HttpUpgradeHandler's init method, passing a WebConnection to allow the protocol handler access to the data streams. The servlet filters only process the initial HTTP request and response; they are not involved in subsequent communications, that is, they are not invoked once the request has been upgraded. The HttpUpgradeHandler may use non-blocking IO to consume and produce messages. The developer is responsible for thread-safe access to the ServletInputStream and ServletOutputStream while processing the HTTP upgrade. When the upgrade processing is done, HttpUpgradeHandler.destroy is invoked.

1.2 Example

In this example, the client sends the request to the server. The server accepts the request, sends back the response, then invokes the HttpUpgradeHandler.init() method and continues the communication using a dummy protocol. The client shows the request and response headers during the handshake process.

Client: the client initiates the HTTP upgrade request.

@WebServlet(name = "ClientTest", urlPatterns = {"/"})
public class ClientTest extends HttpServlet {
  protected void processRequest(HttpServletRequest request, HttpServletResponse response)
      throws ServletException, IOException {
    String reqStr = "POST " + contextRoot + "/ServerTest HTTP/1.1" + CRLF;
    ...
    reqStr += "Upgrade: Dummy Protocol" + CRLF;
    // Create socket connection to ServerTest
    s = new Socket(host, port);
    input = s.getInputStream();
    output = s.getOutputStream();
    // Send request header with data
    output.write(reqStr.getBytes());
    output.flush();
  }
}

The header Upgrade: Dummy Protocol is an HTTP/1.1 header field set to Dummy Protocol in this example.
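For readers who want to exercise the handshake outside a servlet, here is a minimal standalone client sketch. The host, port, and context root (/upgrade-sample) are placeholder assumptions you would replace with your deployment's actual values. It sends the same Upgrade: Dummy Protocol request over a raw socket and prints the server's handshake response, which should start with HTTP/1.1 101 Switching Protocols when the upgrade is accepted.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.Socket;

public class DummyUpgradeClient {
    private static final String CRLF = "\r\n";

    public static void main(String[] args) throws Exception {
        String host = "localhost";               // assumption: server host
        int port = 7001;                          // assumption: server listen port
        String contextRoot = "/upgrade-sample";   // assumption: application context root

        String request = "POST " + contextRoot + "/ServerTest HTTP/1.1" + CRLF
                + "Host: " + host + ":" + port + CRLF
                + "Upgrade: Dummy Protocol" + CRLF
                + "Connection: Upgrade" + CRLF
                + CRLF;

        try (Socket socket = new Socket(host, port)) {
            OutputStream out = socket.getOutputStream();
            out.write(request.getBytes("US-ASCII"));
            out.flush();

            // Print whatever the server sends back during the handshake.
            BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}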
The server decides whether to accept the protocol upgrade request. The server, ServerTest.java, checks the Upgrade field in the request header. When it accepts the upgrade request, the server instantiates ProtocolUpgradeHandler, which is the implementation of HttpUpgradeHandler. If the server does not support the upgrade protocol specified by the client, it sends back an error response (a 400 status in this example).

@WebServlet(name="ServerTest", urlPatterns={"/ServerTest"})
public class ServerTest extends HttpServlet {
  protected void processRequest(HttpServletRequest request, HttpServletResponse response)
      throws ServletException, IOException {
    // Checking request header
    if ("Dummy Protocol".equals(request.getHeader("Upgrade"))) {
      ...
      ProtocolUpgradeHandler handler = request.upgrade(ProtocolUpgradeHandler.class);
    } else {
      response.setStatus(400);
      ...
    }
  }
  ...
}

ProtocolUpgradeHandler is the implementation of HttpUpgradeHandler, which processes the upgrade request and switches the communication protocol. The server checks the value of the Upgrade header to determine whether it supports that protocol. Once the server accepts the request, it must use the Upgrade header field within a 101 (Switching Protocols) response to indicate which protocol(s) are being switched to.

Implementation of HttpUpgradeHandler

public class ProtocolUpgradeHandler implements HttpUpgradeHandler {
  @Override
  public void init(WebConnection wc) {
    this.wc = wc;
    try {
      ServletOutputStream output = wc.getOutputStream();
      ServletInputStream input = wc.getInputStream();
      Calendar calendar = Calendar.getInstance();
      DateFormat dateFormat = new SimpleDateFormat("MM/dd/yyyy HH:mm:ss");
      // Reading the data into byte array
      input.read(echoData);
      // Setting new protocol header
      String resStr = "Dummy Protocol/1.0 " + CRLF;
      resStr += "Server: Glassfish/ServerTest" + CRLF;
      resStr += "Content-Type: text/html" + CRLF;
      resStr += "Connection: Upgrade" + CRLF;
      resStr += "Date: " + dateFormat.format(calendar.getTime()) + CRLF;
      resStr += CRLF;
      // Appending data with new protocol
      resStr += new String(echoData) + CRLF;
      // Sending back to client
      ...
      output.write(resStr.getBytes());
      output.flush();
    } catch (IOException ex) {
      Logger.getLogger(ProtocolUpgradeHandler.class.getName()).log(Level.SEVERE, null, ex);
    }
    ...
  }

  @Override
  public void destroy() {
    ...
    try {
      wc.close();
    } catch (Exception ex) {
      Logger.getLogger(ProtocolUpgradeHandler.class.getName()).log(Level.SEVERE, "Failed to close connection", ex);
    }
    ...
  }
}

The init() method sets up the new protocol headers; the new protocol is used for subsequent communications. This example uses a dummy protocol. The destroy() method is invoked when the upgrade process is done. This example shows the handshake process of the protocol upgrade; after the handshake, subsequent communications use the new protocol. This mechanism only applies to upgrading application-layer protocols on top of the existing transport-layer connection. The feature is most useful to Java EE platform providers.

2 Non-blocking IO

2.1 Description

Non-blocking request processing in the web container helps address the ever-increasing demand for web container scalability by increasing the number of connections that can be handled simultaneously. Non-blocking IO in the servlet container allows developers to read data as it becomes available, or write data when it is possible to do so. Non-blocking IO only works with asynchronous request processing in servlets and filters, and with upgrade processing.
Otherwise, an IllegalStateException must be thrown when ServletInputStream's setReadListener is invoked.

2.2 Non-Blocking Read Example

Servlet

In ServerServlet, the server receives the request, starts asynchronous processing of the request, and registers a ReadListener.

@WebServlet(name = "ServerServlet", urlPatterns = {"/server"}, asyncSupported = true)
public class ServerServlet extends HttpServlet {
  .....
  protected void service(HttpServletRequest request, HttpServletResponse response)
      throws ServletException, IOException {
    response.setContentType("text/html;charset=UTF-8");
    // async read
    final AsyncContext context = request.startAsync();
    final ServletInputStream input = request.getInputStream();
    final ServletOutputStream output = response.getOutputStream();
    input.setReadListener(new ReadListenerImpl(input, output, context));
  }

Note: Non-blocking I/O only works with asynchronous request processing in servlets and filters, or with an upgrade handler. See the Servlet 3.1 specification for more details.

Read Listener Implementation

public class ReadListenerImpl implements ReadListener {
  private ServletInputStream input;
  private ServletOutputStream output;
  private AsyncContext context;
  private StringBuilder sb = new StringBuilder();

  public ReadListenerImpl(ServletInputStream input, ServletOutputStream output, AsyncContext context) {
    this.input = input;
    this.output = output;
    this.context = context;
  }

  /**
   * do when data is available to be read.
   */
  @Override
  public void onDataAvailable() throws IOException {
    while (input.isReady()) {
      sb.append((char) input.read());
    }
  }

  /**
   * do when all the data has been read.
   */
  @Override
  public void onAllDataRead() throws IOException {
    try {
      output.println("ServerServlet has received '" + sb.toString() + "'.");
      output.flush();
    } catch (Exception e) {
      e.printStackTrace();
    } finally {
      context.complete();
    }
  }

  /**
   * do when error occurs.
   */
  @Override
  public void onError(Throwable t) {
    context.complete();
    t.printStackTrace();
  }
}

The onDataAvailable() method is invoked when data is available to be read from the request input stream. The container subsequently invokes the read() method if and only if isReady() returns true. The onAllDataRead() method is invoked when all the data from the request has been read. The onError(Throwable t) method is invoked if any error or exception occurs while processing the request. The isReady() method returns true if the underlying data stream is not blocked; at that point, the container invokes the onDataAvailable() method. Users can customize the constructor to handle different parameters; usually the parameters are ServletInputStream, ServletOutputStream, or AsyncContext. This sample uses all of them to implement the ReadListener interface.

2.3 Non-Blocking Write Example

Servlet

In ServerServlet.java, after receiving a request, the servlet starts asynchronous request processing and registers a WriteListener.
protected void processRequest(HttpServletRequest request, HttpServletResponse response)
    throws ServletException, IOException {
  response.setContentType("text/html;charset=UTF-8");
  // async write
  final AsyncContext context = request.startAsync();
  final ServletOutputStream output = response.getOutputStream();
  output.setWriteListener(new WriteListenerImpl(output, context));
}

Write Listener Implementation

public class WriteListenerImpl implements WriteListener {
  private ServletOutputStream output;
  private AsyncContext context;

  public WriteListenerImpl(ServletOutputStream output, AsyncContext context) {
    this.context = context;
    this.output = output;
  }

  /**
   * do when the data is available to be written
   */
  @Override
  public void onWritePossible() throws IOException {
    if (output.isReady()) {
      output.println("<p>Server is sending back 5 hello...</p>");
      output.flush();
    }
    for (int i = 1; i <= 5 && output.isReady(); i++) {
      output.println("<p>Hello " + i + ".</p>");
      output.println("<p>Sleep 3 seconds simulating data blocking.<p>");
      output.flush();
      // sleep on purpose
      try {
        Thread.sleep(3000);
      } catch (InterruptedException e) {
        // ignore
      }
    }
    output.println("<p>Sending completes.</p>");
    output.flush();
    context.complete();
  }

  /**
   * do when error occurs.
   */
  @Override
  public void onError(Throwable t) {
    context.complete();
    t.printStackTrace();
  }
}

The onWritePossible() method is invoked when data can be written to the response stream. The container invokes the write methods if and only if isReady() returns true. The onError(Throwable t) method is invoked if any error or exception occurs while writing to the response. The isReady() method returns true if the underlying data stream is not blocked; at that point, the container invokes the onWritePossible() method.

4 Session ID Change

4.1 Description

The Servlet 3.1 specification adds a new interface and method for avoiding session fixation. The WebLogic servlet container implements session ID change processing for this security reason.

4.2 Session ID Change Example

In this example application, the SessionIDChangeListener class implements the HttpSessionIdListener interface and overrides the sessionIdChanged method, which receives a notification whenever the session ID of a session is changed. The servlet changes the value of the session ID by invoking javax.servlet.http.HttpServletRequest.changeSessionId().

Servlet

@WebServlet(name = "SessionIDChangeServlet", urlPatterns = {"/SessionIDChangeServlet"})
public class SessionIDChangeServlet extends HttpServlet {
  protected void processRequest(HttpServletRequest request, HttpServletResponse response)
      throws ServletException, IOException {
    response.setContentType("text/html;charset=UTF-8");
    PrintWriter out = response.getWriter();
    HttpSession session = request.getSession(true);
    try {
      StringBuilder sb = new StringBuilder();
      sb.append("<h3>Servlet SessionIDChangeTest at " + request.getContextPath() + "</h3><br/>");
      sb.append("<p>The current session id is: &nbsp;&nbsp;" + session.getId() + "</p>");
      /* Call changeSessionId() method. */
      request.changeSessionId();
      sb.append("<p>The current session id has been changed, now it is: &nbsp;&nbsp;" + session.getId() + "</p>");
      request.setAttribute("message", sb.toString());
      request.getRequestDispatcher("response.jsp").forward(request, response);
    } finally {
      out.close();
    }
  }
  ....
}

The servlet gets a session object from the request; a session ID is generated at that time. After request.changeSessionId() is called, a new session ID is generated to replace the old one on the session object.
HttpSessionIdListener Implementation

@WebListener
public class SessionIDChangeListener implements HttpSessionIdListener {
  @Override
  public void sessionIdChanged(HttpSessionEvent event, String oldSessionId) {
    System.out.println("[Servlet session-id-change example] Session ID " + oldSessionId + " has been changed");
  }
}

The implementation's sessionIdChanged method is triggered whenever request.changeSessionId() is called.
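To show where changeSessionId() typically fits in an application, here is a minimal sketch of a login servlet that renews the session ID right after authentication, the standard defense against session fixation. The /login URL, request parameter names, and the /home redirect target are assumptions made up for this sketch; they are not part of the sample above.

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(name = "LoginServlet", urlPatterns = {"/login"})
public class LoginServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // Authenticate against the container's configured security realm.
        request.login(request.getParameter("user"), request.getParameter("password"));

        // Renew the session ID so that an ID fixed before login cannot be reused;
        // any registered HttpSessionIdListener (such as SessionIDChangeListener above)
        // is notified of the change.
        request.getSession(true);
        request.changeSessionId();

        response.sendRedirect(request.getContextPath() + "/home");
    }
}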


Introducing WLS JMS Multi-tenancy

Introduction Multi-tenancy (MT) is the main theme of the WebLogic Server 12.2.1 release. It enhances the Oracle Platform for Software-as-a-Service (SaaS) and Platform-as-a-Service (PaaS) use cases. The main benefits of WebLogic multi-tenancy are increased density, tenant isolation, and simplified cloud configuration and management. This article introduces multi-tenancy support for WebLogic JMS, the messaging component in WebLogic Server.    Key MT Concepts Some of you may have already learned from other blogs (for example Tim’s blog about Domain Partitions for Multi Tenancy) about some of the key concepts in WebLogic MT. But for the benefit of a broader audience, here is a quick review of those concepts before we get into JMS specifics. WebLogic Multi-tenancy introduces the concepts of domain partition (also known as partition), resource group (RG), and resource group template (RGT).   A Partition is conceptually a slice of a WebLogic domain, where resources and applications for different tenants can be configured and deployed in isolation on the same WebLogic server or in the same cluster. This improves overall density. Partitions define the isolation boundaries for JNDI, security, runtime MBeans, application persistent data, work managers and logging.  Furthermore, Partitions running on the same server instance have their own lifecycle, for example, a partition can be shut down at any time without impacting other partitions. A Resource Group is simply a collection of functionally related resources and applications. A RG can be targeted and managed independently of other resource groups in the same partition. Resource groups can be defined not only inside a partition, but also at the domain level. As with partitions, RGs in the same partition (or at the domain level) that are running on the same server instance have their own lifecycle. A Resource Group Template provides a templating mechanism to reduce the administrative overhead of configuring WebLogic resources and applications for SaaS use cases where the same resources and applications need to run in multiple partitions. It offers a configure-once-and-use-everywhere capability, where a common set of configuration artifacts can be specified in a RGT, and can then be referenced from RGs in different partitions. A RGT is not targetable, and resources in a RGT will not deploy unless the RGT is referenced by a deployed RG. Note that the resources and applications configured or deployed in a partition (directly inside RGs or via RGs referencing a RGT) are scoped to that partition. Understanding JMS Resources in MT In a similar way to other WebLogic configuration artifacts, JMS resources such as JMS servers, SAF agents, path services, persistent stores, messaging bridges, JMS system modules, app-deployment JMS modules, Java EE 7 resource definition modules, and JMS applications can all now be configured and deployed in a RG, either directly or via RGTs, as well as in the ‘classic’ way, which is always directly at the domain level. Note that it is perfectly acceptable to combine both partition and ‘classic’ configuration together in the same domain.    Resources and applications in different partitions are isolated from one another. For example, you can configure a JMS destination with the same JNDI name in multiple partitions running in the same cluster, and these destinations will be managed via independent runtime MBean instances, and can be independently secured via partition-specific security realms. 
In addition to non-persistent state, the persistent data (for example, persistent messages and durable subscriptions) in such JMS destinations is also isolated.

Configuring JMS Resources in MT

JMS resources configured in a multi-tenant environment differ from a traditional non-MT JMS configuration in a few ways. Partition-scoped JMS resources are embedded in a resource group in a partition (alternatively, they can be embedded in a Resource Group Template, which is in turn referenced by a Resource Group). In addition, resources in a resource group are never individually targeted. Instead, the whole resource group is targeted via a virtual target, which is itself targeted in the normal way. If a RG is targeted to a virtual target that is in turn targeted to a WL cluster, all JMS resources and applications in the RG will also be targeted to that cluster. As we will see later, a virtual target not only provides the targeting information of a RG, it also defines the access point of a partition. For more information about resource group targeting and virtual targets, check out Joe's blog about Partition Targeting and Virtual Targets. You might have noticed that I did not discuss configuring individual JMS resources for each server in a WL cluster, nor did I mention configuring "migratable targets" to add high availability. I have good news for you! Neither is needed, or even supported, in MT. They have been replaced with greatly enhanced WebLogic JMS cluster-targeting and HA support; my colleague Kathiravan blogs about it in 12.2.1 WebLogic JMS: Dynamic Scalability and High Availability Made Simple. Although system-level JMS resources (such as JMS servers, SAF agents, persistent stores, messaging bridges, path services, and JMS modules) are scoped differently in a MT configuration, their respective attributes are specified in exactly the same way as in a non-MT configuration. Various validation and targeting rules are enforced to ensure that WebLogic MT JMS configuration is isolated, self-contained, and easy to manage. One basic, high-level rule in configuring JMS in MT is that a JMS configuration artifact may only reference other configuration artifacts that are in the same scope. For example, a resource-group-scoped JMS server can only reference a persistent store that is also defined in the same resource group. These rules are enforced by configuration validation checks and by errors and warnings that are logged at runtime.

Accessing JMS Resources in MT

A JMS application designed for multi-tenancy accesses JMS resources in the same way as 'classic' JMS applications, by looking up JMS resources in a JNDI name space. The difference is that in a MT environment, a WebLogic JNDI InitialContext is associated with a particular scope (i.e. the domain or a partition) when it is created. A MT application can have multiple JNDI contexts that refer to the same WebLogic cluster but are scoped to different partitions. An initial context, once created, sticks to its scope until it is closed. This means that all JNDI operations using a partition-scoped JNDI context instance are performed against the partition-specific area of the JNDI space. The scope of a JNDI context is determined by the "provider URL" supplied when the initial context is created.
Once an application successfully establishes a partition-scoped JNDI initial context, it can use this context to look up JMS connection factories and destinations in the same way as in a non-MT environment, except that now the application can only access partition-scoped JMS resources. Let us look at some specific use cases and see how an application can establish an initial context to a particular partition in each of them.

Use Case 1 - Local Intra-partition Access

When a Java EE application needs to access a JMS destination in its local partition in the same cluster (or on the same non-clustered managed server), the application can just create an initial context without supplying a provider URL.

Example 1: Null Provider URL

Context ctx = new InitialContext();
Object cf = ctx.lookup("jms/mycf1");
Object dest = ctx.lookup("jms/myqueue1");

This initial context will be scoped to the partition in which the application is deployed.

Use Case 2 - Local Inter-partition Access

If a Java EE application needs to access a JMS destination (or other resource) in a different partition than the partition to which it is deployed, and the partition is in the same cluster (or on the same managed server), then it can use either a partition-scoped JNDI name or a provider URL with the "local:" protocol.

Using Partition-Scoped JNDI Names

A JNDI name can be decorated with a namespace prefix to indicate its scope.

Example 2.1: given the partition configuration in the examples above, the following code can be used to access a JMS destination that is configured in "partition1".

Context ctx = new InitialContext();
Object cf = ctx.lookup("partition:partition1/jms/mycf1");
Object dest = ctx.lookup("partition:partition1/jms/myqueue1");

Similarly, a Java EE application in a partition can access a domain-level JNDI resource in the same cluster using a partition-scoped initial context with the "domain:" namespace prefix, for example "domain:jms/mycf2".

Using a Provider URL with the "local:" Protocol

Alternatively, one can specify a "local:" provider URL when creating an initial context to a specific partition.

Example 2.2: given the partition configuration in the examples above, the following code can be used to access a JMS destination that is configured in "partition1".

Hashtable<String, String> env = new Hashtable<>();
env.put(Context.PROVIDER_URL, "local://?partitionName=partition1");
env.put(Context.SECURITY_PRINCIPAL, "weblogic");
env.put(Context.SECURITY_CREDENTIALS, "welcome1");
Context ctx = new InitialContext(env);
Object cf = ctx.lookup("jms/mycf1");
Object dest = ctx.lookup("jms/myqueue1");

The initial context will be associated with "partition1" for its lifetime. Similarly, a Java EE application in a partition can access a domain-level JNDI resource in the same cluster using "local://?partitionName=DOMAIN" as the provider URL.

Use Case 3 - General Partition Access

A third way for a Java EE application or client to access a JMS destination in a partition is to use a "partition URL". A partition URL is intended to be used when the JMS destination is in a remote cluster (or on a remote non-clustered managed server). A typical "partition URL" is t3://hostname:port, or t3://host:port/URI-prefix. Partition URLs may only be used by Java EE applications or clients using WLS 12.2.1 or later (older versions should use dedicated partition ports: see below).
Example 3: given the partition configuration in the examples above, the following code can be used to access a JMS destination that is configured in "partition1". Note that "/partition1" in the provider URL below is the uri-prefix configured in the VirtualTarget for partition1.

Hashtable<String, String> env = new Hashtable<>();
env.put(Context.PROVIDER_URL, "t3://abcdef00:7001/partition1");
env.put(Context.SECURITY_PRINCIPAL, "weblogic");
env.put(Context.SECURITY_CREDENTIALS, "welcome1");
Context ctx = new InitialContext(env);
Object cf = ctx.lookup("jms/mycf1");
Object dest = ctx.lookup("jms/myqueue1");

Although it is not a best practice, a "partition URL" can also be used to access another partition in the same JVM/cluster. (A complete send sketch that reuses these names appears at the end of this post.)

Use Case 4 – Dedicated Partition Ports

One last option is to set up dedicated ports for each partition; configuring these is described in Joe's blog about Partition Targeting and Virtual Targets. Configuring dedicated partition ports enables applications that use 'classic' URLs to access a partition, and is mainly intended to let clients and applications running on releases older than 12.2.1 access partitions in a 12.2.1 or later domain. Such older clients and applications do not support the use of a host name and URI prefix to access a partition; an attempt to use them from an older client will simply fail or may silently access the domain-level JNDI name space.

What's next?

I hope this article helps you to understand the basics of JMS MT! It is time to start exploring this new and exciting capability. You can find more information about messaging in MT in the Configuring Messaging chapter of Oracle® Fusion Middleware Using WebLogic Server Multitenant.
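As referenced above, here is a minimal sketch that rounds out Example 3: it looks up the partition-scoped connection factory and queue with the same provider URL and credentials, then sends a message using the JMS 2.0 simplified API. The URL and credentials are the placeholders from the examples above; substitute your own environment's values.

import java.util.Hashtable;
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.jms.JMSContext;
import javax.naming.Context;
import javax.naming.InitialContext;

public class PartitionJmsSender {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        // Provider URL and credentials from Example 3 (placeholders for your environment).
        // If not already supplied via jndi.properties, also set Context.INITIAL_CONTEXT_FACTORY
        // to weblogic.jndi.WLInitialContextFactory.
        env.put(Context.PROVIDER_URL, "t3://abcdef00:7001/partition1");
        env.put(Context.SECURITY_PRINCIPAL, "weblogic");
        env.put(Context.SECURITY_CREDENTIALS, "welcome1");
        Context ctx = new InitialContext(env);

        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/mycf1");
        Destination queue = (Destination) ctx.lookup("jms/myqueue1");

        // JMS 2.0 simplified API: the message goes to the partition-scoped destination.
        try (JMSContext jms = cf.createContext()) {
            jms.createProducer().send(queue, "hello from a partition-aware client");
        }
        ctx.close();
    }
}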


New EJB 3.2 feature - Modernized JCA-based Message-Driven Bean

WebLogic Server 12.2.1 is a fully compatible implementation of the Java EE 7 specification. One of the big improvements in the EJB container in this release of WebLogic Server is that a message-driven bean can implement a listener interface with no methods. When such a no-methods listener interface is used, all non-static public methods of the bean class (and of the bean class's superclasses, except java.lang.Object) are exposed as message listener methods. Let's develop a sample step by step. The sample application assumes that an e-commerce website sends buy and sell events to the JMS queues buyQueue and sellQueue, respectively, when a product is bought or sold. The connector listens on the queues and executes the message-driven bean's non-static public methods to persist the records carried by the events to a persistent store.

1. Define a no-methods message listener interface

In our sample, the message listener interface NoMethodsListenerIntf has no methods in it.

List 1 - No-methods message listener interface

public interface NoMethodsListenerIntf {
}

2. Define the bean class

In the message-driven bean class there are two non-static public methods, productBought and productSold, so they are both exposed as message listener methods. When the connector gets a product-sold event from sellQueue, it invokes the message-driven bean's productSold method, and likewise for a product-bought event. We annotate the productSold and productBought methods with @EventMonitor, indicating that they are the target methods the connector should execute. These two methods persist the records to a database or other persistent store. You can define more non-static public methods, but which ones are executed is up to the connector itself.

List 2 - Message-Driven Bean

@MessageDriven(activationConfig = {
  @ActivationConfigProperty(propertyName = "resourceAdapterJndiName", propertyValue = "eis/TradeEventConnector")
})
public class TradeEventProcessingMDB implements NoMethodsListenerIntf {

  @EventMonitor(type = "Retailer")
  public void productSold(long retailerUUID, long productId) {
    System.out.println("Retailer [" + retailerUUID + "], product [" + productId + "] has been sold!");
    // persist to database
  }

  @EventMonitor(type = "Customer")
  public void productBought(long customerId, long productId) {
    System.out.println("Customer [" + customerId + "] has bought product [" + productId + "]!");
    // persist to database
  }
}

The EventMonitor annotation is defined as below:

List 3 - EventMonitor annotation

@Target({ ElementType.METHOD })
@Retention(RetentionPolicy.RUNTIME)
public @interface EventMonitor {
  public String type();
}

When this message-driven bean is deployed onto WebLogic Server, the EJB container detects that it is an EJB 3.2 compatible message-driven bean. If you do not specify a value for resourceAdapterJndiName, WebLogic Server tries to locate a suitable connector resource, for example a connector that declares support for the same no-methods message listener interface (in the current application, or a server-wide connector that is globally accessible). If a suitable connector is found and associated with the message-driven bean, the connector can retrieve the bean class definition and analyze it.

3. Develop a connector to associate with the message-driven bean

In the connector application, we retrieve the bean class definition via the getEndpointClass() method of MessageEndpointFactory and then inspect each method to see whether it is annotated with @EventMonitor.
After that, we create a javax.jms.MessageListener with the target method of the bean class to listen on the event queues.

List 4 - trade event connector

@Connector(
    description = "This is a sample resource adapter",
    eisType = "Trade Event Connector",
    vendorName = "Oracle WLS",
    version = "1.0")
public class TradeEventConnector implements ResourceAdapter, Serializable {
  // jms related resources
  ......
  private static final String CALLBACK_METHOD_TYPE_RETAILER = "Retailer";
  private static final String CALLBACK_METHOD_TYPE_CUSTOMER = "Customer";

  @Override
  public void endpointActivation(MessageEndpointFactory mef, ActivationSpec activationSpec)
      throws ResourceException {
    try {
      Class<?> beanClass = mef.getEndpointClass(); // retrieve bean class definition
      ......
      jmsContextForSellingEvent = ...; // create jms context
      jmsContextForBuyingEvent = ...;
      jmsConsumerForSellingEvent = jmsContextForSellingEvent.createConsumer(sellingEventQueue);
      jmsConsumerForBuyingEvent = jmsContextForBuyingEvent.createConsumer(buyingEventQueue);
      jmsConsumerForSellingEvent.setMessageListener(createTradeEventListener(mef, beanClass, CALLBACK_METHOD_TYPE_RETAILER));
      jmsConsumerForBuyingEvent.setMessageListener(createTradeEventListener(mef, beanClass, CALLBACK_METHOD_TYPE_CUSTOMER));
      jmsContextForSellingEvent.start();
      jmsContextForBuyingEvent.start();
    } catch (Exception e) {
      throw new ResourceException(e);
    }
  }

  private MessageListener createTradeEventListener(MessageEndpointFactory mef, Class<?> beanClass, String callbackType) {
    for (Method m : beanClass.getMethods()) {
      if (m.isAnnotationPresent(EventMonitor.class)) {
        EventMonitor eventMonitorAnno = m.getAnnotation(EventMonitor.class);
        if (callbackType.equals(eventMonitorAnno.type())) {
          return new JmsMessageEventListener(mef, m);
        }
      }
    }
    return null;
  }

  @Override
  public void endpointDeactivation(MessageEndpointFactory mef, ActivationSpec spec) {
    // deactivate connector
  }
  ......
}

The associated activation spec for the connector is defined as below:

List 5 - the activation spec

@Activation(
    messageListeners = {NoMethodsListenerIntf.class}
)
public class TradeEventSpec implements ActivationSpec, Serializable {
  ......
}

4. Develop a message listener to listen on the event queue

When the message listener's onMessage() is invoked, we create a message endpoint via the MessageEndpointFactory and invoke the target method on this message endpoint.
List 6 - JMS message listener

public class JmsMessageEventListener implements MessageListener {
  private MessageEndpointFactory endpointFactory;
  private Method targetMethod;

  public JmsMessageEventListener(MessageEndpointFactory mef, Method executeTargetMethod) {
    this.endpointFactory = mef;
    this.targetMethod = executeTargetMethod;
  }

  @Override
  public void onMessage(Message message) {
    MessageEndpoint endpoint = null;
    String msgText = null;
    try {
      if (message instanceof TextMessage) {
        msgText = ((TextMessage) message).getText();
      } else {
        msgText = message.toString();
      }
      long uid = Long.parseLong(msgText.substring(0, msgText.indexOf(",")));
      long pid = Long.parseLong(msgText.substring(msgText.indexOf(",") + 1));
      endpoint = endpointFactory.createEndpoint(null);
      endpoint.beforeDelivery(targetMethod);
      targetMethod.invoke(endpoint, new Object[]{uid, pid});
      endpoint.afterDelivery();
    } catch (Exception e) {
      // log exception
      System.err.println("Error when processing message: " + e.getMessage());
    } finally {
      if (endpoint != null) {
        endpoint.release();
      }
    }
  }
}

5. Verify the application

We assume that each event consists of two numbers separated by a comma, for example 328365,87265. The first number is the customer or retailer ID, and the second is the product ID. If you now send such events to the event queues, you'll find that they are persisted by the message-driven bean.
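A small client that produces such events for step 5 might look like the following sketch. The connection factory and queue JNDI names (jms/tradeCF, jms/sellQueue) are assumptions made for illustration; substitute the names actually configured for the buy and sell queues in your domain.

import javax.annotation.Resource;
import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.Queue;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = {"/sendSellEvent"})
public class TradeEventSender extends HttpServlet {

    @Resource(lookup = "jms/tradeCF")   // hypothetical connection factory JNDI name
    private ConnectionFactory connectionFactory;

    @Resource(lookup = "jms/sellQueue") // hypothetical JNDI name for the sell-event queue
    private Queue sellQueue;

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        // Event syntax from step 5: "<retailerId>,<productId>"
        try (JMSContext context = connectionFactory.createContext()) {
            context.createProducer().send(sellQueue, "328365,87265");
        }
    }
}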


WLS JNDI Multitenancy

The most important feature introduced in WebLogic Server 12.2.1 is multi-tenancy. Before WLS 12.2.1, one WLS domain was used by one tenant. Since WLS 12.2.1, a WLS domain can be divided into multiple partitions so that tenants can use different partitions of one WLS domain; multiple tenants can then share one WLS domain without influencing each other. Isolation of resources between partitions is therefore key, and since JNDI is a common way to access these resources, the main goal for JNDI in WLS 12.2.1 is to isolate JNDI resources.

Before WLS 12.2.1 there was only one global JNDI tree per WLS domain. It is difficult to support multiple partitions with a single global JNDI tree, because each partition requires a unique, isolated namespace. For example, multiple partitions might use the same JNDI name to bind and look up their own JNDI resources, which would result in a NameAlreadyBoundException. To isolate JNDI resources in different partitions, since WLS 12.2.1 every partition has its own global JNDI tree, so a tenant can operate on JNDI resources in one partition without name conflicts with another partition. The application-scoped JNDI tree is visible only inside the application, so it is isolated naturally; there is no change for the application-scoped JNDI tree in WLS 12.2.1. Let us see how to access JNDI resources in a partition.

Access JNDI resources in a partition

To access JNDI resources in a partition, we need to add partition information to the provider URL property when creating the InitialContext.

Runtime environment:
    Managed servers:   ms1, ms2
    Cluster:           contains managed servers ms1 and ms2
    Virtual targets:   VT1 targeted to managed server ms1, VT2 targeted to the cluster
    Partitions:        Partition1 has available target VT1, Partition2 has available target VT2

We add partition1 information to the properties when creating the InitialContext in order to access JNDI resources in partition1.

    Hashtable<String, String> env = new Hashtable<>();
    env.put(Context.PROVIDER_URL, "t3://ms1:7001/partition1");
    env.put(Context.SECURITY_PRINCIPAL, "weblogic");
    env.put(Context.SECURITY_CREDENTIALS, "welcome1");
    Context ctx = new InitialContext(env);
    Object c = ctx.lookup("jdbc/ds1");

Partition2 runs in the cluster, so we can use the cluster address format in the properties when creating the InitialContext.

    Hashtable<String, String> env = new Hashtable<>();
    env.put(Context.PROVIDER_URL, "t3://ms1:7001,ms2:7003/partition2");
    env.put(Context.SECURITY_PRINCIPAL, "weblogic");
    env.put(Context.SECURITY_CREDENTIALS, "welcome1");
    Context ctx = new InitialContext(env);
    Object c = ctx.lookup("jdbc/ds1");

In WebLogic, we can create a Foreign JNDI provider to link to JNDI resources in another server. In WLS 12.2.1, we can also use a Foreign JNDI provider to link to JNDI resources in a specified partition by adding partition information to the configuration. This partition information, including the provider URL, user, and password, is used to create the JNDI context. The following is an example of a Foreign JNDI provider configuration in partition1; this provider links to partition2 (a lookup sketch that uses this link follows at the end of this post).
<foreign-jndi-provider-override>
  <name>jndi_provider_rgt</name>
  <initial-context-factory>weblogic.jndi.WLInitialContextFactory</initial-context-factory>
  <provider-url>t3://ms1:7001,ms2:7003/partition2</provider-url>
  <password-encrypted>{AES}6pyJXtrS5m/r4pwFT2EXQRsxUOu2n3YEcKJEvZzxZ7M=</password-encrypted>
  <user>weblogic</user>
  <foreign-jndi-link>
    <name>link_rgt_2</name>
    <local-jndi-name>partition_Name</local-jndi-name>
    <remote-jndi-name>weblogic.partitionName</remote-jndi-name>
  </foreign-jndi-link>
</foreign-jndi-provider-override>

Stickiness of JNDI Context

When a JNDI context is created, it is associated with a specified partition. All subsequent JNDI operations are then done within the associated partition's JNDI tree, not within the current partition's tree, and this association remains even if the context is used by a different thread than the one that created it. If the provider URL property is set in the environment when the JNDI context is created, the partition specified in the provider URL is associated; otherwise the JNDI context is associated with the current partition.

Life cycle of the Partition JNDI service

Before WLS 12.2.1, the JNDI service life cycle was the same as that of the WebLogic server. In WLS 12.2.1, every partition owns its own global JNDI tree, so the JNDI service life cycle is the same as that of the partition: as soon as the partition starts up, the JNDI service of that partition is available, and when the partition shuts down, the JNDI service of that partition is destroyed.
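As referenced above, here is a minimal sketch of how an application deployed in partition1 could use the foreign JNDI link from that configuration. It assumes the code runs in partition1, so the default InitialContext is scoped to partition1; looking up the link's local-jndi-name is resolved by the provider against partition2's JNDI tree, where the example configuration maps it to weblogic.partitionName.

import javax.naming.Context;
import javax.naming.InitialContext;

public class ForeignJndiLinkCheck {
    public static void lookupRemotePartitionName() throws Exception {
        // No provider URL: the context is scoped to the partition this code runs in (partition1).
        Context ctx = new InitialContext();
        try {
            // "partition_Name" is the local-jndi-name of the foreign JNDI link above;
            // the provider resolves it against partition2, so this should print the value
            // bound at weblogic.partitionName in partition2.
            Object remotePartitionName = ctx.lookup("partition_Name");
            System.out.println("Foreign link resolved to: " + remotePartitionName);
        } finally {
            ctx.close();
        }
    }
}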


Monitoring FAN Events

fanWatcher is a sample program that prints Oracle Notification Service (ONS) Fast Application Notification (FAN) event information. These events provide information regarding load balancing, and service and instance up and down events. This information is automatically processed by WebLogic Server Active GridLink and UCP on the mid-tier. For more information about FAN events, see this link. The program described here is an enhancement of the earlier program described in that white paper. It can be modified as desired to monitor events and help diagnose configuration problems. The code is available at this link. To run this Java application, you need to be set up to run a JDK and you need ons.jar and ojdbcN.jar in the CLASSPATH. The CLASSPATH is set differently depending on whether you are running on the database server or on the mid-tier with WebLogic Server or UCP. Make sure to use the correct path separator for CLASSPATH on your platform (';' for Windows, ':' otherwise). The general format for the command line is:

java fanWatcher config_type [eventtype ...]

Event Type Subscription

The event type sets up the subscriber to only return limited events. You can run without specifying the event type to see what types of events are returned. When you specify an event name on the command line, the program sets up the subscriber to do a simple match on the event: if the specified pattern occurs anywhere in a notification's header, then the comparison statement evaluates true. The most basic pattern match is an empty string (not null), which matches all notifications. The pattern is enclosed in double quotes (required) and prefixed with "%" to be case insensitive. Event processing is more complete than shown in this sample. The subscription string is generally composed of one or more comparison statements, each logically related to another with the boolean operators '|' for an OR relationship or '&' for an AND relationship. Parentheses are used to group these comparison statements, and the '!' operator placed before an opening parenthesis negates the evaluated value within. Each individual comparison statement must be enclosed within double quotes ('"'), and can take one of two basic forms: "pattern" or "name=value". A "pattern" is a simple string match of the notification header: if the specified "pattern" occurs anywhere in a notification's header, then the comparison statement evaluates true. The most basic pattern match is an empty string (not NULL), which matches all notifications. The "name=value" format compares the named ONS notification header or property against the specified value; if the values match, the comparison statement evaluates true. If the specified header or property name does not exist in the notification, the comparison statement evaluates false. A comparison statement is interpreted as case insensitive when a percent character ('%') is placed before the opening quote. Note that for "name=value" comparison statements, only the value is treated as case insensitive with this option: the name lookup is always case sensitive. A comparison statement is interpreted as a regular expression when a dollar sign character ('$') is placed before the opening quote. Standard POSIX regular expressions are supported. To specify a regular expression that is also case insensitive, place the dollar sign and percent sign together and in that order ("$%") before the opening quote.
A special-case subscription string composed of only the exclamation point character ('!') signifies that the subscription will not match any notifications. You might want to modify the event to select on a specific service by using something like %"eventType=database/event/servicemetrics/<serviceName>".

Running with Database Server 10.2 or later

This approach runs on the database server and connects directly to the local ONS daemon available in the Grid Infrastructure cluster. The fanWatcher utility must be run as a user that has the privilege to access $CRS_HOME/opmn/conf/ons.config, which is used by the ONS daemon at startup and read by this program. The configuration type on the command line is set to "crs".

# CRS_HOME should be set for your Grid infrastructure
echo $CRS_HOME
CRS_HOME=/mypath/scratch/12.1.0/grid/
CLASSPATH="$CRS_HOME/jdbc/lib/ojdbc6.jar:$CRS_HOME/opmn/lib/ons.jar:."
export CLASSPATH
javac fanWatcher.java
java -Doracle.ons.oraclehome=$CRS_HOME fanWatcher crs

Running with WLS 10.3.6 or later using an explicit node list

There are two ways to run in a client environment: with an explicit node list, and using auto-ONS. You need the ojdbcN.jar and ons.jar that are available when configured for WLS; if you are set up to run with UCP directly, these should also be in your CLASSPATH. The first approach works with the Oracle driver and database 11 and later (SCAN support came in later versions of Oracle, including the 11.2.0.3 jar files that shipped with WLS 10.3.6).

# Set the WLS environment using wlserver*/server/bin/setWLSEnv
CLASSPATH="$CLASSPATH:."   # add local directory for sample program
export CLASSPATH
javac fanWatcher.java
java fanWatcher "nodes=rac1:6200,rac2:6200" database/event/service

The node list is a string of one or more values of the form name=value, separated by a newline character (\n). There are two supported formats for the node list. The first format is available for all versions of ONS. The following names may be specified.

nodes – This is required. The format is one or more host:port pairs separated by a comma.
walletfile – Oracle wallet file used for SSL communication with the ONS server.
walletpassword – Password to open the Oracle wallet file.

The second format is available starting in database 12.2.0.2. It supports more complicated topologies with multiple clusters and node lists. It has the following names.

nodes.id – This value is a list of nodes representing a unique topology of remote ONS servers. id specifies a unique identifier for the node list; duplicate entries are ignored. The nodes configured in any list must not include any nodes configured in any other list for the same client, or duplicate notifications will be sent and delivered. The list format is a comma-separated list of ONS daemon listen address and port pairs, each separated by a colon.
maxconnections.id –